Large-scale group decision-making based on Pythagorean linguistic preference relations using experts clustering and consensus measure with non-cooperative behavior analysis of clusters
The linguistic preference relation (LPR) is a powerful tool for representing the qualitative side of uncertain and imprecise information, allowing experts in group decision-making (GDM) to express their opinions with linguistic variables (LVs). However, an LV implicitly assumes a membership degree of one, so the experts' non-membership and hesitation degrees cannot be expressed. Pythagorean linguistic numbers/values (PLNs/PLVs) are a novel way to address this issue. This paper studies a GDM problem involving a large number of experts, known as large-scale GDM (LSGDM), based on Pythagorean linguistic preference relations (PLPRs) together with a consensus model. Sometimes experts refuse to modify their opinions to reach consensus, so a consensus model must manage the experts' opinions together with their non-cooperative behaviors (NCBs), while ensuring that the credibility information is properly adjusted. The proposed model uses the grey clustering method to divide experts with similar evaluations into subgroups, and the experts' evaluations within each cluster are then aggregated. A cluster consensus index (CCI) and a group consensus index (GCI) are introduced to measure the consensus level among the clusters. A mechanism for managing the NCBs of the clusters is then provided, which contains two parts: (1) an NCB degree defined from the CCI and GCI to identify non-cooperative clusters, and (2) a weight-punishment mechanism applied to those clusters to improve consensus. Finally, an example illustrates the usefulness of the proposed approach.
Introduction
Decision problems in which the data are provided by a very large number of decision-makers (DMs) or experts are known as large-scale group decision-making (LSGDM) problems; selecting the best option from a set of feasible alternatives in this setting is a widespread human activity. Many real-life decisions concern the interests of a large number of experts. For example, emergency events often have a massive impact on the public interest, and emergency management usually requires the participation of many DMs from different professional backgrounds; the reform of teacher-appointment systems in universities is another example. The studies on LSGDM problems to date can be classified from three angles: clustering approaches [22,57], the consensus reaching process (CRP) [8,41-43,49-51,55], and decision-making methods with various types of preferences [23,32,33,49,56]. These works have contributed significantly to the development of LSGDM, but in most cases the DMs are assumed to give quantitative judgments. Some scholars [11-14] note that LSGDM problems may be too difficult for experts to deliver quantitative judgments. In this situation, Zadeh's linguistic variables (LVs) [54] are an appropriate tool for DMs to represent qualitative judgments. However, Wang and Li [44] pointed out that the membership degree of a linguistic assessment value is one, so the non-membership and hesitation degrees of DMs cannot be expressed. For example, a DM comparing two alternatives may express the opinion with an LV such as "good" without being entirely sure of this assessment; perhaps he/she is 75% certain and 8% confused. For this situation, Mandal et al. [38] proposed a new type of preference relation called the Pythagorean linguistic preference relation (PLPR), which expresses the preferred and non-preferred degrees of LVs based on Yager's Pythagorean fuzzy sets (PFSs) [53]. Interested researchers are referred to the related studies in [4,5,9,34,37].
Existing studies on LSGDM problems mostly combine a clustering method with a CRP. The objective of clustering is to divide a large number of experts into a few subgroups. The CRP aims to reach a final decision that satisfies most of the experts, instead of giving some of them the impression that their opinions are taken lightly [12,15,48]. Different consensus methods have been proposed for LSGDM problems [6,45-47]. Within CRPs, several researchers have addressed the behaviors of experts or clusters, which are of different types: (1) non-cooperative behaviors [7,8,42,43,50]; (2) self-confidence behaviors [27,28,30,31]; (3) overconfidence behaviors [29]; and (4) personalized individual semantics behaviors [20,21].
In this paper, we consider the non-cooperative behaviors of experts or clusters in LSGDM problems in which the experts express their opinions through PLPRs. The contributions of the proposed model are summarized as follows: (I) We develop a new clustering algorithm based on the grey clustering method and a similarity degree between two experts, where the experts use PLPVs to compare two alternatives at a time; the expert clustering procedure is given in Algorithm 1. (II) Following the majority policy, we obtain the weight of each cluster and of each expert within a cluster, aggregate the experts' opinions in each cluster with the Pythagorean linguistic weighted averaging (PLWA) operator, and then obtain the weight collective PLPR of all cluster PLPRs. A cluster consensus index (CCI) of each cluster is defined through a deviation measure between the cluster's PLPR and the weight collective PLPR, and the group consensus index (GCI) is the weighted sum of the CCIs over all clusters. (III) If an acceptable consensus is not achieved, a CRP becomes necessary.
In the CRP, the behavior of the clusters is assessed through the non-cooperative behavior degree (NCBD) of each cluster, which is obtained from the cluster's CCI and the GCI. An identification rule for detecting non-cooperative behavior clusters is designed, and a feedback mechanism is then applied to these clusters: the weights of non-cooperative clusters are updated through a feedback parameter and the values of the NCBD. (IV) When an acceptable consensus is achieved, the weight collective PLPR is accepted and the selection process is applied to it. The selection is carried out through the row arithmetic average values of the weight collective PLPR and the lower indices of their expected values.
The rest of this paper is organized as follows. The next section presents notations and basic concepts of the linguistic set, the Pythagorean linguistic set (PLS), and the PLPR. The following section describes the problem and the proposed consensus framework for the LSGDM problem based on PLPRs. The subsequent section presents the consensus approach for LSGDM with PLPRs: the method of expert clustering, the consensus measure of clusters, the non-cooperative behavior management mechanism, and the selection process. A numerical example is then presented to show the feasibility and validity of this study, and the last section concludes the paper.
Preliminaries
In this section, some basic knowledge of the linguistic set, the PLS, and the PLPR is recalled.
Notations
To facilitate the comprehension of the paper, Table 1 lists the main notations. The principal symbols are: p_ij^k, the PLPV provided by expert e_k (k ∈ M); P^k, the PLPR given by expert e_k; ϕ_k, the weight of expert e_k; z, the number of clusters (1 ≤ z ≤ m); τ, the iteration counter; C_ẑ, the cluster ẑ (ẑ ∈ Z); w_ẑ^(τ), the weight of cluster C_ẑ at iteration τ; o_ẑ, the number of experts belonging to cluster C_ẑ; P_ẑ, the PLPR of cluster C_ẑ; and P_c, the weight collective PLPR (decision matrix).
Linguistic set
To represent qualitative judgments, LVs [54] are a feasible and powerful tool for the experts. A linguistic term s_i ∈ S is a possible value of an LV. We recall the basic operational laws on linguistic terms [12]: (1) the set is ordered: s_i > s_j if and only if i > j; (2) negation operator: neg(s_i) = s_{-i}.
Xu [52] defined the continuous linguistic term set, denoted by S̄, as an extension of the above discrete linguistic term set S for processing linguistic information. Let s_α ∈ S̄. If s_α ∈ S, then it is called an original linguistic term; otherwise, it is a virtual linguistic term [52].
Operational laws [52] are available for any LVs s_α, s_β ∈ S̄ and λ, λ_1, λ_2 ∈ [0, 1], and for any s_α ∈ S̄ the lower index can be read off as I(s_α) = α.
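The explicit operation laws were lost in extraction. The sketch below illustrates the standard virtual linguistic term operations usually associated with Xu [52], namely ordering, negation, addition and scalar multiplication, together with I(s_α) = α; treating these as the laws intended here is an assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LinguisticTerm:
    """A (possibly virtual) linguistic term s_alpha; its lower index is alpha."""
    alpha: float

    def neg(self) -> "LinguisticTerm":
        # negation operator: neg(s_i) = s_{-i}
        return LinguisticTerm(-self.alpha)

    def __add__(self, other: "LinguisticTerm") -> "LinguisticTerm":
        # assumed standard law: s_a (+) s_b = s_{a+b}
        return LinguisticTerm(self.alpha + other.alpha)

    def scale(self, lam: float) -> "LinguisticTerm":
        # assumed standard law: lambda * s_a = s_{lambda*a}
        return LinguisticTerm(lam * self.alpha)

    def __gt__(self, other: "LinguisticTerm") -> bool:
        # the set is ordered: s_i > s_j iff i > j
        return self.alpha > other.alpha

s3, s_minus1 = LinguisticTerm(3), LinguisticTerm(-1)
print(s3 > s_minus1)          # True
print(s3.neg().alpha)         # -3
print((s3 + s_minus1).alpha)  # 2
print(s3.scale(0.5).alpha)    # 1.5
```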
PLS and PLPR
Definition 1 [38] A PLS P in X for S is defined through a linguistic term s_θ(x) ∈ S together with μ_P : X → [0, 1] and ν_P : X → [0, 1], which represent the membership and non-membership degrees of the element x to s_θ(x), respectively, under the condition 0 ≤ μ_P²(x) + ν_P²(x) ≤ 1 for all x ∈ X. The hesitancy degree of x to s_θ(x) is π_P(x) = (1 − μ_P²(x) − ν_P²(x))^(1/2), ∀x ∈ X, as for any PFS. In the special case where X is a singleton set, the PLS P reduces to ⟨s_θ(x), (μ_P(x), ν_P(x))⟩, which is called a Pythagorean linguistic value (PLV) or Pythagorean linguistic number (PLN). For example, for the PLN p = ⟨s_3, (0.9, 0.2)⟩, the membership degree of s_3 is 0.9, the non-membership degree of s_3 is 0.2, and the degree of hesitancy is √0.15.
Definition 2 [38] The expected value of any PLN p = ⟨s_θ(p), (μ(p), ν(p))⟩ is denoted by E(p) and is the linguistic term with lower index θ(p)(μ²(p) + 1 − ν²(p))/2. For example, for the PLN p = ⟨s_3, (0.9, 0.2)⟩, the expected value E(p) can be calculated accordingly, and by expression (1) the lower index of E(p) is I(E(p)) = 2.655.
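The following sketch encodes a PLN and reproduces the worked example above; the closed form I(E(p)) = θ(p)(μ² + 1 − ν²)/2 is inferred from that example rather than copied from [38].

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class PLN:
    """Pythagorean linguistic number <s_theta, (mu, nu)> with mu^2 + nu^2 <= 1."""
    theta: float  # lower index of the linguistic term s_theta
    mu: float     # membership (preferred) degree
    nu: float     # non-membership (non-preferred) degree

    def hesitancy(self) -> float:
        # pi = sqrt(1 - mu^2 - nu^2)
        return math.sqrt(1.0 - self.mu**2 - self.nu**2)

    def expected_index(self) -> float:
        # I(E(p)); closed form inferred from the worked example <s_3,(0.9,0.2)> -> 2.655
        return self.theta * (self.mu**2 + 1.0 - self.nu**2) / 2.0

p = PLN(theta=3, mu=0.9, nu=0.2)
print(round(p.hesitancy(), 3))       # 0.387, i.e. sqrt(0.15)
print(round(p.expected_index(), 3))  # 2.655
```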
Definition 3 [38] The score function of a PLN p = ⟨s_θ(p), (μ(p), ν(p))⟩ and the corresponding accuracy function are both defined in terms of I(E(p)), the lower index of the expected value E(p) of the PLN p.
Definition 4 [38] gives the rules for comparing any two PLNs p = ⟨s_θ(p), (μ(p), ν(p))⟩ and q = ⟨s_θ(q), (μ(q), ν(q))⟩ on the basis of their score and accuracy functions. Furthermore, let p_i (i = 1, 2, ..., n) be a collection of PLNs. The value aggregated by the PLWA operator is again a PLN, where w = (w_1, w_2, ..., w_n)^T is the weight vector of the p_i with w_i > 0 and Σ_{i=1}^{n} w_i = 1.
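Since the explicit score, accuracy and PLWA expressions of [38] are not reproduced above, the sketch below compares and aggregates PLNs through the expected-value lower index only; this is a simplification of Definition 4 and of the PLWA operator, not their exact forms.

```python
import numpy as np

def expected_index(theta, mu, nu):
    # I(E(p)), form inferred from the paper's worked example
    return theta * (mu**2 + 1.0 - nu**2) / 2.0

def compare(p, q):
    """Return 1, -1 or 0 when comparing two PLNs (theta, mu, nu).

    Stand-in for Definition 4: the score and accuracy functions are both
    built on I(E(p)), so the expected index is used here as the single key.
    """
    ip, iq = expected_index(*p), expected_index(*q)
    return (ip > iq) - (ip < iq)

def plwa_index(plns, weights):
    """Weighted-average lower index of aggregated PLNs (simplified PLWA).

    Assumption: only the expected-value lower index of the aggregate is
    needed downstream, so the full operator on (theta, mu, nu) is not modelled.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(sum(wi * expected_index(*p) for wi, p in zip(w, plns)))

p, q = (3, 0.9, 0.2), (3, 0.7, 0.4)
print(compare(p, q))                         # 1: p is preferred over q
print(round(plwa_index([p, q], [0.6, 0.4]), 3))
```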
Definition 5 [38] A PLPR P on the set X for the set S is a matrix P = (p_ij)_{n×n} whose entries p_ij = ⟨s_θij, (μ_ij, ν_ij)⟩, i, j ∈ {1, 2, ..., n}, satisfy suitable reciprocity conditions, where μ_ij and ν_ij are the preferred and non-preferred degrees for the linguistic term s_θij ∈ S of the alternative x_i over the alternative x_j. In Definition 5, each p_ij = ⟨s_θij, (μ_ij, ν_ij)⟩ is called a PLPV or PLPN and denotes the Pythagorean linguistic preference of the alternative x_i over the alternative x_j. Similarly to fuzzy preference relations [40], the preferred and non-preferred degrees of the alternative x_j over the alternative x_i can be obtained from the corresponding PLPV. Throughout, P^k = (p_ij^k)_{n×n} (k = 1, 2, ..., m) denote the m PLPRs given by the experts e_k, for all i, j = 1, 2, ..., n and k = 1, 2, ..., m.
Framework of consensus for the LSGDM problem based on PLPR
This section designs a framework to solve the CRP for the LSGDM problem based on PLPRs.
Description of problem
Consider a large number of experts E = {e_1, e_2, ..., e_m}, each of whom provides an opinion over the alternatives in X. Preference relations are a well-established tool for comparing two alternatives at a time in GDM problems, and in this paper the experts express their decisions over X using PLPRs. The problem consists of ranking the alternatives, or selecting the best option, from the received information. In particular, the procedure for reaching a consensus among all individuals is analyzed in the following.
Proposed framework
Before an optimal preference order can be derived in a GDM problem, two processes are necessary: a CRP and an appropriate selection process. In the CRP, before making a decision, the DMs reach a mutual agreement in the expectation of obtaining a more widely acceptable group solution. In a real LSGDM problem, the decision may affect entire groups or societies; hence various professionals, possibly from different countries and with different interests, are invited as DMs. For this reason, a number of studies on LSGDM (discussed in the Introduction) take the experts' behaviors into account. This paper examines the non-cooperative behaviors of experts or clusters in the CRP. Non-cooperative behaviors may be classified as follows: (i) a DM or expert who insists that his/her evaluation is correct but has no personal interests; (ii) an expert who realizes that his/her assessment is accurate but may have interests of his/her own; and (iii) an independent expert who insists that his/her evaluation is novel. In this work we focus on handling the first type of non-cooperative behavior; the other two types are left for future research. Existing studies [50] handle non-cooperative behaviors in two directions: one is weight punishment, i.e., weight modification in which the experts' weights are decreased to obtain a higher consensus, and the other is the adjustment of the experts' opinions towards the group evaluation.
Our CRP is shown in Fig. 1 and contains four blocks: a clustering method, consensus measures, the detection and management of non-cooperative behaviors, and a selection process.
(1) The clustering method is displayed in the first block. It is used to obtain clusters from the experts' opinions and is a central part of our approach; the obtained clusters are then managed in the following blocks. Popular clustering algorithms include the fuzzy clustering algorithm [2], the fuzzy c-means algorithm [3], the grey clustering algorithm [26], and the K-means algorithm [36]. The grey clustering method is effective and broadly applied in LSGDM problems [8,55], and in this paper we apply it to cluster the experts. This method is discussed in "Method of the expert clustering" (see Algorithm 1). (2) The consensus measure of the obtained clusters is displayed in the second block. After the clusters are obtained, the weight of each cluster and of each expert within a cluster are calculated. The experts' opinions in each cluster are then aggregated, and the weight collective PLPR of all clusters is obtained. A deviation measure over PLPRs is defined and used to obtain the CCI of each cluster, from which the GCI is computed. The second block is discussed in detail in the "Consensus measure of clusters" section. (3) The detection and management of non-cooperative behavior is displayed in the third block. Non-cooperative behaviors are detected and managed by changing the cluster weights; identification rules and a modification process are the techniques used in this management mechanism. The third block is discussed in the "Non-cooperative behavior management mechanism" section.
(4) The selection process is displayed in the fourth block. Once the final weight collective PLPR is obtained, the row arithmetic average values are calculated, and the lower index of each alternative is derived from them. The detailed process is discussed in the "Selection process" section.
Proposed consensus approach
The CRP is iterative over a number of discussion rounds. The iteration-based approach is mainly concerned with the consensus measure and with the detection and management of non-cooperative behaviors. In such an iterative process, the detection and management of non-cooperative behaviors is an integral part, responsible for supervising and guiding the experts in each cluster through the iterations. We first present the grey clustering method, and then discuss the consensus measures, the non-cooperative behavior management, and the selection process.
Method of the expert clustering
Here we consider the clustering of experts according to their opinions. Clustering is a machine learning technique widely applied in the data mining and machine learning communities [1], with notable applications in information retrieval, object segmentation, object recognition, etc. [10,17]. In LSGDM problems, clustering aims to shrink a large number of opinions by finding subgroups of experts with similar opinions, thereby improving the efficiency of the CRP. Several types of clustering methods are considered in [16-18].
The grey clustering method is one of the effective tools broadly applied to LSGDM problems. It performs opinion clustering based on a similarity measure between experts. Let SM = (e_kh)_{m×m} be a similarity matrix over the experts in E, where e_kh ∈ [0, 1] indicates the similarity degree between experts e_k and e_h (k, h ∈ M). The grey clustering algorithm [26] relies on two rules governed by two parameters: (1) a first parameter ζ ∈ [0, 1] is used to find the neighbors of an expert, i.e., if e_kh ≥ ζ, then expert e_k is a direct neighbor of expert e_h; (2) a second parameter ξ ∈ [0, 1] is used to judge whether an expert belongs to a cluster. For a cluster C_ẑ and an expert e_k to be classified, if the proportion of the neighbors of e_k in C_ẑ is larger than or equal to ξ, then e_k is classified into cluster C_ẑ.
In this paper, expert clustering is carried out with the grey clustering method. In human decision-making, non-cooperative behaviors are driven by psychological factors and reflect the experts' mental perception of their own opinions. Different experts usually hold different degrees of belief in their judgments, because of differences in knowledge, experience, risk attitude, or interests among alliances or experts. Thus, in some situations the experts' behaviors are reflected in the decision-making process and have an essential impact on the obtained result [27]. Managing the non-cooperative behaviors of experts is therefore a crucial part of the CRP for the LSGDM problem based on PLPRs. For that reason, and to achieve more consistent clustering results, we propose to cluster the experts by the similarity of their PLPVs. The associated definitions for expert clustering based on PLPRs are as follows. Definition 6 Let P^k = (p_ij^k)_{n×n} and P^h = (p_ij^h)_{n×n} be two PLPRs provided by the experts e_k and e_h, respectively. The deviation measure d(P^k, P^h) between P^k and P^h is defined over the differences of the corresponding preference values. Definition 7 Let P^k = (p_ij^k)_{n×n} and P^h = (p_ij^h)_{n×n} be two PLPRs provided by the experts e_k and e_h; their degree of similarity ρ_kh is defined from d(P^k, P^h). Obviously, 0 ≤ ρ_kh ≤ 1. The closer ρ_kh is to 1, the more similar P^k is to P^h, while the closer ρ_kh is to 0, the more distant P^k is from P^h.
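Expressions (10) and (11) were not recoverable here. The sketch below assumes a normalized Euclidean deviation over the expected-value lower indices of the preference values and the mapping ρ_kh = 1/(1 + d); this satisfies 0 ≤ ρ_kh ≤ 1 but is only one possible choice, not necessarily the paper's.

```python
import numpy as np

def deviation(Pk_idx: np.ndarray, Ph_idx: np.ndarray) -> float:
    """Deviation between two PLPRs represented by matrices of I(E(p_ij)).

    Assumed form (expression (10) was lost): a normalized Euclidean distance
    over the upper-triangular comparisons.
    """
    n = Pk_idx.shape[0]
    iu = np.triu_indices(n, k=1)
    diff = Pk_idx[iu] - Ph_idx[iu]
    return float(np.sqrt(2.0 / (n * (n - 1)) * np.sum(diff**2)))

def similarity(Pk_idx: np.ndarray, Ph_idx: np.ndarray) -> float:
    """Similarity degree rho_kh in [0, 1]; 1 means identical opinions.

    Assumed mapping from deviation to similarity: rho = 1 / (1 + d).
    """
    return 1.0 / (1.0 + deviation(Pk_idx, Ph_idx))

A = np.array([[0.0, 2.655, 1.2], [-2.655, 0.0, 0.9], [-1.2, -0.9, 0.0]])
B = np.array([[0.0, 2.100, 1.0], [-2.100, 0.0, 1.1], [-1.0, -1.1, 0.0]])
print(round(similarity(A, B), 3))
```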
The detailed expert clustering for LSGDM based on PLPRs is described in Algorithm 1: Step 1: Use expression (10) to compute the deviation d(P^k, P^h) between the experts e_k and e_h. Step 2: Use expression (11) to calculate the similarity ρ_kh between e_k and e_h; if ρ_kh ≥ ζ (ζ ∈ [0, 1]), then e_k is a neighbor of e_h and they can be arranged into one class. Step 3: The expert e_k is assigned to a cluster if the ratio of experts in that cluster who are neighbors of e_k is greater than or equal to the parameter ξ ∈ [0, 1].
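A minimal greedy sketch of Algorithm 1, assuming a precomputed similarity matrix (for instance from the similarity sketch above); the authors' exact grey clustering implementation may differ.

```python
import numpy as np

def grey_cluster(sim: np.ndarray, zeta: float, xi: float):
    """Greedy grey-style clustering of experts from a similarity matrix.

    sim[k, h] = rho_kh; zeta decides neighbourhood, xi decides whether an
    expert joins an existing cluster (proportion of that cluster's members
    that are neighbours of the expert). A simplified sketch, not the
    authors' exact implementation.
    """
    m = sim.shape[0]
    neighbours = [set(np.flatnonzero(sim[k] >= zeta)) - {k} for k in range(m)]
    clusters: list[set[int]] = []
    for k in range(m):
        placed = False
        for cl in clusters:
            if len(cl & neighbours[k]) / len(cl) >= xi:
                cl.add(k)
                placed = True
                break
        if not placed:
            clusters.append({k})
    return clusters

rng = np.random.default_rng(0)
S = rng.uniform(0.6, 1.0, size=(6, 6))
S = (S + S.T) / 2
np.fill_diagonal(S, 1.0)
print(grey_cluster(S, zeta=0.86, xi=0.7))
```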
Consensus measure of clusters
Once a large number of experts has been classified according to Algorithm 1, we adopt the following facts:
(1) All the experts in the same cluster have similar preference information, so their weights are allotted equally. (2) According to the majority policy, clusters that contain a larger number of experts should be allowed higher weights.
Two formulas, expressions (12) and (13), give the weights of the cluster C_ẑ and of each expert e_k ∈ C_ẑ. To aggregate the PLPRs given by the experts in cluster C_ẑ, we use Theorem 1 and expression (13); the aggregated PLPR P_ẑ = (p_ij,ẑ)_{n×n} of the cluster C_ẑ is then obtained as expression (14). Similarly, Theorem 2 and expression (14) are used to collect all cluster PLPRs into the weight collective PLPR, denoted by P_c = (p_ij,c)_{n×n} and given by expression (15). To obtain the GCI of the LSGDM based on PLPRs, we provide the following definition. Definition 8 Let P_ẑ = (p_ij,ẑ)_{n×n} be the PLPR of a cluster C_ẑ and P_c = (p_ij,c)_{n×n} be the weight collective PLPR. The deviation measure between P_ẑ and P_c is defined from the squared differences (I(E(p_ij,ẑ)) − I(E(p_ij,c)))², raised to the power 1/2.
Definition 9 Let d(P_ẑ, P_c) be the deviation measure between the cluster C_ẑ and the weight collective PLPR P_c; the cluster consensus index (CCI) of C_ẑ is then defined from this deviation (expression (17)). Using expression (17), the GCI can be calculated as the weighted sum of the CCIs of all clusters (expression (18)). Obviously, 0 ≤ GCI ≤ 1. If GCI = 1, there is no deviation between the clusters' evaluations, and the consensus degree among the clusters is higher the closer GCI is to 1. A consensus threshold σ ∈ [0, 1] is usually predefined to judge whether an adequate consensus degree has been achieved among the experts. In this paper, the consensus is acceptable if GCI ≥ σ; otherwise, a feedback mechanism is activated to reach an acceptable consensus.
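Expressions (12) and (16)-(18) were not recoverable. The sketch below assumes majority-policy cluster weights proportional to cluster size, a weight-averaged collective matrix of expected-value lower indices, and CCI = 1/(1 + deviation); the GCI as the weighted sum of the CCIs follows the description given earlier in the paper.

```python
import numpy as np

def cluster_weights(sizes):
    """Majority policy: a cluster's weight is assumed proportional to the
    number of experts it contains (expression (12) was not recoverable)."""
    sizes = np.asarray(sizes, dtype=float)
    return sizes / sizes.sum()

def consensus_indices(cluster_mats, weights):
    """CCI per cluster and the GCI.

    cluster_mats: list of n x n matrices of expected-value lower indices,
    one aggregated PLPR per cluster. Assumed forms: the collective matrix is
    the weight-averaged matrix; CCI = 1 / (1 + deviation from it);
    GCI = sum of weight * CCI (the latter is stated in the paper).
    """
    mats = np.stack(cluster_mats)                 # shape (z, n, n)
    w = np.asarray(weights, dtype=float)
    collective = np.tensordot(w, mats, axes=1)    # weight-collective matrix
    n = collective.shape[0]
    iu = np.triu_indices(n, k=1)
    dev = np.sqrt(2.0 / (n * (n - 1)) *
                  ((mats[:, iu[0], iu[1]] - collective[iu]) ** 2).sum(axis=1))
    cci = 1.0 / (1.0 + dev)                       # assumed mapping into [0, 1]
    gci = float(np.dot(w, cci))
    return cci, gci, collective

sizes = [8, 7, 5]
w = cluster_weights(sizes)
mats = [np.array([[0, 2.1], [-2.1, 0]]),
        np.array([[0, 2.6], [-2.6, 0]]),
        np.array([[0, 1.5], [-1.5, 0]])]
cci, gci, collective = consensus_indices(mats, w)
print(np.round(cci, 3), round(gci, 3))
```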
Non-cooperative behavior management mechanism
After the large number of expert opinions has been shrunk by clustering, it is a natural phenomenon that some clusters display a non-cooperative attitude towards opinion adjustment. This is challenging to manage, yet the decision result is obtained through the CRP. Motivated by the studies in [42,50], we discuss three aspects for handling the non-cooperative behavior of clusters: an identification rule, discussion and interaction, and proper adjustment.
Non-cooperative behavior clusters identification
In LSGDM problems based on PLPRs it is difficult to guarantee that every expert expresses completely similar PLPVs, because a significant number of experts is required and each expert has different knowledge, experience, or preference interests. We therefore propose to identify a non-cooperative behavior cluster C_ẑ by measuring its NCBD, defined as follows. Definition 10 Let CCI(C_ẑ) be the CCI of the cluster C_ẑ and GCI be the group consensus index of the LSGDM based on PLPRs. The NCBD of C_ẑ is defined from CCI(C_ẑ) and GCI (expression (19)). According to Definition 10, the higher the value of NCBD(C_ẑ), the higher the degree of non-cooperative behavior of the cluster C_ẑ.
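Expression (19) was not recoverable; one simple form consistent with Definition 10 takes the NCBD as the positive shortfall of a cluster's CCI below the GCI. This is an assumption, not the paper's formula.

```python
def ncbd(cci_z: float, gci: float) -> float:
    """Non-cooperative behaviour degree of a cluster.

    Assumed form: the positive gap between the group consensus index and the
    cluster's own consensus index, so that a larger value signals stronger
    non-cooperative behaviour.
    """
    return max(0.0, gci - cci_z)

print(round(ncbd(0.68, 0.78), 2))  # 0.1 -> candidate non-cooperative cluster
print(round(ncbd(0.97, 0.78), 2))  # 0.0 -> cooperative
```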
Interaction and discussion
If a non-cooperative behavior cluster C_ẑ is confirmed, a more in-depth analysis can be carried out between C_ẑ and the other clusters. Two guidelines apply: (1) the weights of non-cooperative behavior clusters are reduced, i.e., if a cluster C_ẑ is identified as non-cooperative, its weight should be reduced to limit its adverse effect on the final decision; (2) the higher the non-cooperative behavior degree of the cluster C_ẑ, the more its weight should be reduced, because non-cooperative behavior harms the LSGDM based on PLPRs.
Proper adjustment
The weight of cluster C_ẑ is updated according to Definition 10 by the rule in expression (20), where δ ∈ [0, 1] is a feedback parameter controlling the effect of the NCBD of cluster C_ẑ on its weight: the larger the value of δ, the smaller the adjustment of the cluster weight. In addition to updating the weights, an adjustment of the assessment information of the non-cooperative behavior clusters is proposed to improve the CRP.
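Expression (20) was not recoverable; the update below is one assumed form having the stated property that a larger feedback parameter δ yields a smaller weight adjustment.

```python
import numpy as np

def update_weights(weights, ncbds, delta: float):
    """Punish non-cooperative clusters by shrinking their weights.

    Assumed form: w <- w * (1 - (1 - delta) * NCBD), followed by
    renormalisation. With a larger delta the adjustment of a cluster's
    weight is smaller, as stated in the text.
    """
    w = np.asarray(weights, dtype=float)
    d = np.asarray(ncbds, dtype=float)
    w = w * (1.0 - (1.0 - delta) * d)
    return w / w.sum()

print(np.round(update_weights([0.4, 0.35, 0.25], [0.0, 0.10, 0.17], 0.5), 3))
```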
Find the position (i_ς, j_ς) of the maximal element at iteration τ, then return P_ẑ to cluster C_ẑ to construct a new PLPR (p_ij,ẑ)_{n×n} and adjust the corresponding non-cooperative behavior levels. Adjustment rules specify which of the two preference values p_{i_ς j_ς,ẑ}^(τ) is to be modified in cluster C_ẑ, depending on the deviations |I(E(p_{i_ς j_ς,ẑ}^(τ))) − I(E(p_{i_ς j_ς,c}^(τ)))|; if the deviations of the two preference values are equal, either one is chosen at random for modification in cluster C_ẑ.
Selection process
Once an acceptable consensus is obtained, we are ready to rank the alternatives. First, the row arithmetic average values of the weight collective PLPR P_c = (p_ij,c)_{n×n} are obtained (expression (22)). Based on expression (22), the following definition is used for choosing the best alternatives.
Definition 11 Let X = {x_1, x_2, ..., x_n} be the set of alternatives and P_c = (p_ij,c)_{n×n} be the weight collective PLPR of the LSGDM based on PLPRs. The lower index I(x_i) of each alternative x_i is then defined from P_c (expression (23)), for i ∈ {1, 2, ..., n}.
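A sketch of the selection process on the matrix of expected-value lower indices of the accepted collective PLPR; the exact expressions (22) and (23) are not reproduced, and ranking by descending row average is assumed.

```python
import numpy as np

def rank_alternatives(collective_idx: np.ndarray):
    """Selection process sketch.

    collective_idx: n x n matrix of expected-value lower indices of the
    accepted weight-collective PLPR. The lower index of each alternative is
    taken as the row arithmetic average; alternatives are ranked by
    descending index.
    """
    row_avg = collective_idx.mean(axis=1)
    order = np.argsort(-row_avg)            # best alternative first
    return row_avg, [f"x{i + 1}" for i in order]

C = np.array([[0.0, 1.8, 2.1],
              [-1.8, 0.0, 0.6],
              [-2.1, -0.6, 0.0]])
row_avg, ranking = rank_alternatives(C)
print(np.round(row_avg, 3), ranking)
```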
The detailed process for ranking the alternatives in LSGDM based on PLPRs is depicted in Algorithm 2.
Algorithm 2
Finding the ranking of the alternatives.
Input:
The individual PLPRs P^k = (p_ij^k)_{n×n} and the related parameters σ, δ ∈ [0, 1].
Output: Ranking of the alternatives.
1: Classify the experts with similar opinions into different groups using Algorithm 1; assume the large number of experts is divided into z clusters, denoted C_ẑ (ẑ = 1, 2, ..., z). 2: Use expressions (12) and (13) to initially compute the weight w_ẑ of each cluster C_ẑ and the weight of each expert e_k ∈ C_ẑ. 3: Set τ = 0. Use expressions (14) and (15) to compute the PLPR of each cluster and the weight collective PLPR. 4: Compute CCI(C_ẑ^(τ)) for each cluster and the GCI; if GCI ≥ σ, go to step 8. 5: Compute NCBD(C_ẑ^(τ)) using expression (19), and obtain max_ẑ{NCBD(C_ẑ^(τ))} to find the corresponding non-cooperative behavior clusters. 6: Use expression (21) to adjust the corresponding non-cooperative behavior levels. 7: Use expression (20) to modify the weights of the clusters; set τ = τ + 1 and go to step 3. 8: Once an acceptable consensus is achieved, take the final weight collective PLPR P_c^(τ) = (p_ij,c^(τ))_{n×n}. 9: Use expression (22) to find the row arithmetic average values of the weight collective PLPR. 10: Use expression (23) to compute the lower index of the accepted weight collective PLPR for each alternative. 11: Rank the alternatives using Definition 4.
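A self-contained sketch of the iterative part of Algorithm 2 (steps 3-8), modelling only the weight-punishment branch of the feedback mechanism and reusing the assumed forms from the sketches above; the opinion-adjustment rule of expression (21) is omitted.

```python
import numpy as np

def consensus_loop(cluster_mats, sizes, sigma=0.975, delta=0.5, max_rounds=50):
    """Iterative CRP sketch.

    cluster_mats: one n x n matrix of expected-value lower indices per
    cluster (the aggregated PLPR of that cluster); sizes: experts per cluster.
    The CCI/NCBD/weight formulas follow the assumed forms sketched earlier.
    """
    mats = np.stack([np.asarray(m, float) for m in cluster_mats])
    w = np.asarray(sizes, float) / float(sum(sizes))
    n = mats.shape[1]
    iu = np.triu_indices(n, k=1)
    for _ in range(max_rounds):
        collective = np.tensordot(w, mats, axes=1)
        dev = np.sqrt(2.0 / (n * (n - 1)) *
                      ((mats[:, iu[0], iu[1]] - collective[iu]) ** 2).sum(axis=1))
        cci = 1.0 / (1.0 + dev)
        gci = float(w @ cci)
        if gci >= sigma:                        # acceptable consensus reached
            break
        ncbd = np.maximum(0.0, gci - cci)       # non-cooperative behaviour degrees
        worst = int(np.argmax(ncbd))            # cluster punished this round
        w[worst] *= 1.0 - (1.0 - delta) * ncbd[worst]
        w = w / w.sum()
    row_avg = collective.mean(axis=1)
    ranking = [f"x{i + 1}" for i in np.argsort(-row_avg)]
    return gci, ranking

mats = [[[0, 2.1, 1.0], [-2.1, 0, 0.4], [-1.0, -0.4, 0]],
        [[0, 2.6, 1.2], [-2.6, 0, 0.5], [-1.2, -0.5, 0]],
        [[0, -0.5, 0.2], [0.5, 0, -0.3], [-0.2, 0.3, 0]]]
gci, ranking = consensus_loop(mats, sizes=[8, 7, 5], sigma=0.8)
print(round(gci, 3), ranking)
```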
An illustrative numerical example and comparative analysis
In this section, we offer a numerical example to show the validity of the proposed consensus model for LSGDM based on PLPRs and compare it with existing studies.
An example
Let X = {x_1, x_2, ..., x_5} be a set of alternatives and E = {e_1, e_2, ..., e_20} be the set of 20 experts invited to make decisions. All 20 experts individually make pairwise comparisons of the alternatives in X and give their opinions using PLPVs over the predefined linguistic term set S = {s_-5: extremely bad; s_-4: very bad; s_-3: bad; s_-2: relatively bad; s_-1: a little bad; s_0: fair; s_1: a little good; s_2: relatively good; s_3: good; s_4: very good; s_5: extremely good}. The detailed preference information is shown in Table 2.
According to the proposed Algorithm 2, the steps are as follows. Step 1: Algorithm 1 is applied to cluster the experts. Assume the two parameters are ζ = 0.86, the similarity-degree threshold among the experts, and ξ = 0.7, the similarity threshold between an expert and a cluster. The clustering results are shown in Table 3.
Steps 5, 6 & 7: Using expression (19), we compute the NCBD of each cluster C_ẑ^(0); the cluster C_1^(0) is identified as the non-cooperative behavior cluster. Therefore, the first consensus iteration is implemented on the cluster C_1^(0). The detailed results of the consensus rounds are shown in Table 6. The trend chart of the GCI and the feedback parameter for each round is shown in Fig. 2 for σ = 0.975. Fig. 3 displays the corresponding weight updates and the trend of the NCBD of the clusters over the different rounds, when σ = 0.975.
Step 8: From Table 6, we see that after the 18th iteration the GCI is acceptable. Using expression (15), we calculate the weight collective PLPR P_c^(18) = (p_ij,c^(18))_{5×5}, shown in expression (25). The row arithmetic average values of P_c^(18) and the lower index I(x_i) of each alternative are computed using expressions (22) and (23), respectively; the detailed results are shown in Table 7. From Table 7 and Definition 4, the alternative ranking is x_1 ≻ x_3 ≻ x_2 ≻ x_4 ≻ x_5, so the best option for this LSGDM problem based on PLPRs is x_1. The trend chart of I(x_i) for the alternatives x_i (1 ≤ i ≤ 5) over the rounds, with acceptable consensus threshold σ = 0.975, is shown in Fig. 3.
Discussion with comparative analysis
In general, a comparative analysis can be made from two points of view: comparison of GDM techniques on a numerical example, and comparison of the characteristics of GDM techniques. PLPR is a new type of preference relation, so there is no previous study on LSGDM based on PLPRs; therefore, no numerical comparison of GDM techniques is made here. We only provide a comparison of characteristics with intuitionistic linguistic sets (ILSs) [44], intuitionistic linguistic preference relations (ILPRs) [39], PLSs [35,38], and PLPRs [38], shown in Table 8.
Conclusions
In this paper, we have studied the LSGDM problem in a fuzzified linguistic context and proposed an approach to the LSGDM problem based on PLPRs. We focus on a non-cooperative-behavior-based CRP, with a degree of non-cooperative behavior for the clusters and a feedback process, for the LSGDM problem based on PLPRs. All experts express their opinions using PLPRs. In the CRP, the non-cooperative behavior degrees of the clusters are adjusted dynamically while preference values are revised, so that all clusters eventually achieve a consensus level. The non-cooperative behavior degree is used to assign the weights of the clusters, and cluster modification is then performed by the feedback process. An example demonstrates the effectiveness of the proposed method. Incomplete PLPRs are not considered in this LSGDM problem and will be studied in future work.
"Computer Science",
"Mathematics",
"Linguistics"
] |
Research into acetone removal from air by biofiltration using a biofilter with straight structure plates
The biological air treatment method is based on the biological destruction of organic compounds by certain cultures of microorganisms. The method is simple and may be applied in many branches of industry. The main element of biological air treatment devices is the filter charge. Tests were carried out using a new-generation laboratory air purifier with a plate structure, called a biofilter. The biofilter has a special packing-material humidification system that does not require additional energy input. To extend the packing material's durability, it was composed of thermally treated birch fibre, and pollutant (acetone) biodegradation occurred on this thermally treated wood fibre. According to the performed tests and the obtained results, the biodestruction process was highly efficient: when acetone-laden air was passed through the biofilter's packing material at a rate of 0.08 m s⁻¹, the efficiency of the biofiltration process ranged from 70% to 90%. The species of bacteria capable of removing acetone vapour from the air, i.e. Bacillus (B. cereus, B. subtilis), Pseudomonas (P. aeruginosa, P. putida), Staphylococcus (S. aureus) and Rhodococcus sp., were identified during the biofiltration process. Their amount in the biological packing material changed from 1.6 × 10⁷ to 3.7 × 10¹¹ CFU g⁻¹.
Introduction
Atmospheric emissions of volatile organic and inorganic compounds (acetone, xylene, ammonia, etc.) are relatively smaller than those of gaseous pollutants, such as carbon monoxide, carbon dioxide, nitrogen oxides or sulphur dioxide. Volatile organic compounds (VOCs), however, have greater influence on human beings and on the natural environment.
[1-3] The release of VOCs into the environment from industrial facilities, e.g. foundries, rubber production, the pharmaceutical and chemical industry, and paint production plants, increases air pollution and the likelihood of smog formation. [4] In order to minimize the release of these pollutants into the environment, it is necessary to apply the most efficient means possible. One such means is the application of air purification techniques. The optimum cleaning method is selected taking into account the suitability, efficiency and cost-effectiveness of the purification technique.
Considering the above-mentioned criteria (suitability, efficiency and cost-effectiveness), currently the most attractive cleaning method is the treatment of volatile organic and inorganic compounds with biofilters. VOC treatment using a biofilter is based on the biofiltration technique. Biofiltration is a method for the degradation of VOCs using certain cultures of microorganisms, for example the bacteria Pseudomonas fluorescens and Alcaligenes xylosoxidans. [5] The bacterium Pseudomonas putida has been used to remove VOCs from the air, and a cleaning efficiency of 90% was achieved in those tests. [6] Alcaligenes, Acinetobacter, Burkholderia, Pseudomonas, Xanthobacter and Hyphomicrobium bacteria are suitable for the removal of VOCs from the air. The amount of bacteria in a biological packing material should range between 10⁸ and 10¹⁰ CFU g⁻¹. [7] One population of microbes suffices to degrade the VOCs. [8] The adaptation of microorganisms to a biological packing material may take from several days to several weeks. [9,10] Typically, the amount of bacteria in a biological packing material can range between 10⁶ and 10¹⁰ CFU g⁻¹, and that of micromycetes between 10³ and 10⁶ CFU g⁻¹. [11] The bacteria Corynebacterium and Rhodococcus are also suitable for VOC degradation. [12] Experimental tests performed by Chan and Chang [13] with a VOC (acetone) at different pollutant concentrations (from 0.12 to 0.71 g m⁻³) show a maximum pollutant removal capacity of 95%.
Zhang and Pierce [14] also used the bacterium Rhodococcus to degrade VOCs, and the results of their research showed a treatment efficiency of about 90%. Italian and Tunisian scientists likewise used the bacterium Rhodococcus for VOC removal and reported that this bacterium is capable of treating VOCs with a treatment efficiency between 81% and 100%. [15] For microorganisms to develop and remove VOCs from a polluted airflow, it is necessary to ensure favourable conditions for their growth and spreading on the biological packing material.
The physical factors that most influence the growth and propagation of microorganisms are humidity and temperature. [16,17] Water is the most important medium in which the metabolism of organisms takes place; furthermore, all of the chemical reactions that occur in living microorganisms require water.
Since the packing material's humidity changes during VOC removal from the air, it is necessary to control the humidification system in the biofilter. While flowing through the packing material, the air becomes saturated with vapour, takes up humidity and reduces the moisture content of the packing material. At the same time, the process of biodegradation converts organic compounds into carbon dioxide (CO₂) and water (H₂O), thus partially restoring the humidity content; the decomposition of 1 kg of hydrocarbon produces 1.5 kg of water. Generally, this amount of water is insufficient for packing material humidification, and therefore the packing material must be humidified additionally. To ensure an efficient process of pollutant biodegradation, the packing material's humidity has to reach 40%-70%. [18-20] To achieve an efficient performance of the biofilter, the humidity of the packing material has to reach 55%, while its porosity has to be 80%. The humidity of the packing material depends on its type and on the humidification system used in the biofilter. In addition, the efficiency of the biofiltration process is highly influenced by the air temperature in the biofilter, as it determines the microorganisms' development and activity. The air temperature in biofilters has to be distributed evenly within the volume of the biological packing material; in some biofilters this is achieved by installing air supply ducts across the entire area of the biological packing material, which requires greater financial input. [21] Another important factor necessary for efficient pollutant degradation is the time of contact between the biological packing material and the pollutant: the longer the contact time, the more efficient the biodestruction process. [22] The contact time depends on the thickness and porosity of the biological packing material. When the packing material in a filter is thicker but has lower porosity, the contact time is longer, but the aerodynamic resistance is higher.
An equally important factor is the pH of the medium. The transport mechanisms, reactions and growth rates of microorganism cells, as well as the destruction of some substances and the synthesis of others into new compounds, depend on the acidity of the medium, which is defined by the pH value. Most microorganisms tolerate a deviation of ±1 to 2 pH units from the optimum value. [23] When the reaction of the medium changes, the activity of heterotrophic enzymes also changes. Neutral or weakly alkaline or acidic media, with a hydrogen ion concentration corresponding to pH between 6 and 8, are used for biological air treatment. [24] Most media used to destroy volatile organic compounds have a neutral hydrogen ion concentration (pH = 7).
The aim of the research was to analyse the efficiency of the biofiltration process by supplying acetone vapour at different concentrations to a biofilter with plates of straight internal structure.
Materials and methods
Experimental tests were carried out using a biological air purifier, a biofilter. Figure 1 shows a chart of the biofilter with straight plates.
Biofilter operation principle
Polluted air is supplied to the biofilter ( Figure 1) via a polluted air duct (1) which is 100 mm in diameter. A ventilator (3) in the polluted air supply duct creates an airflow through the biofilter. The polluted air supply duct has a valve (2), which regulates the airflow velocity and, at the same time, the flow rate of the supplied air. Then, the polluted airflow enters a biofilter cartridge (16). The biofilter cartridge is packed with a packing material made of porous plates (Figure 1(d)). The airflow is evenly distributed by a perforated plate (15) over the entire volume of the packing material. The polluted air flows between the porous plates that are submersed in a liquid medium and arranged at 6 mm distance from each other. Having passed through the biofilter cartridge (16) with the packing material, a clean airflow enters a clean air duct (13), which is 100 mm in diameter, and is released into the environment. The cartridge is attached to the device by fixation elements (7). Sampling holes (6) are made in the polluted and clean air ducts. Airflow rate, temperature and pollutant concentrations supplied to and discharged from the biofilter are measured at these places. An excess of biomass is discharged from the biofilter via a biomass release valve (10). The required temperature of the supplied airflow is maintained by a channel air heater (17) with a thermal regulator (4) and a sensor (5). The biomedium temperature is maintained using a biomedium heating element (14) with a temperature sensor (8). A tank (9) with controllable valves (10,12) and supply hose (11) is used to supply a solution saturated with biogenes to the biofilter.
The main element of the biofilter is the cartridge made of straight polymer plates onto which the packing material, i.e. wood fibre, is applied. The cartridge dimensions are 900 × 200 × 200 mm. The plates, which are arranged at a distance of 4 ± 0.2 mm from each other, produce a capillary humidification effect: as a result, the water is able to ascend through the pores of the wood fibre when the space between the plates is small. In this research, the capillary system of packing material humidification was installed in the biofilter. Such a humidification system has an advantage over other systems in terms of significantly lower energy costs; furthermore, it operates at full capacity even when the power supply is discontinued.
In order to extend the durability of the birch fibre, it must undergo thermal processing. The birch fibre is obtained by thermal treatment of birch sawdust in a steam explosion reactor at a pressure of 3.2 × 10⁶ Pa and a temperature of 235 °C. Changing the chemical structure of the wood in this way prevents the birch fibre from rotting in a humid medium, which extends the durability of the biofilter's packing material. The material was selected for its internal structure, which was determined by electron microscopy. Scanning electron microscopy was performed using a field-emission scanning electron microscope JEOL JSM-7600F, with magnification from ×25 to ×100,000, an electron acceleration voltage from 0.1 to 30 kV and an image resolution of up to 5120 × 3840 pixels. (Figure 1. Chart of an air cleaning biofilter having a straight plate internal structure and a capillary system for packing material humidification: side view (a), view from above (b), view of the biofilter cartridge (c), composition of the plate (d). Note: 1 - polluted air duct; 2 - valve; 3 - ventilator; 4 - thermal regulator; 5 - sensor; 6 - sampling holes; 7 - fixation elements; 8 - temperature sensor; 9 - tank; 10 - controllable valves; 11 - supply hose; 12 - controllable valves; 13 - clean air duct; 14 - biomedium heating element; 15 - perforated plate; 16 - biofilter cartridge; 17 - channel air heater; WL - water level, WB - wood fibre, NWCM - non-woven caulking material, LPP - linear polymer plate.)
Determining biomedium's pH and temperature
The required pH and temperature of the solution saturated with biogenic elements were maintained using KH₂PO₄ and K₂HPO₄ buffer solutions. [17] The values of pH and temperature were recorded daily. The porous plates of the biofilter cartridge were submersed into the solution saturated with biogenic elements (the medium). The composition of the solution used for the tests is presented in Table 1.
The amounts of the biogenic elements used in the medium were selected after surveying the data presented by other researchers. [22,25-29]
Determining air humidity and temperature in the biofilter
Air temperature in the biofilter was measured with a TESTO 400. This device can measure airflow velocity, rate, temperature, pressure and humidity. Airflow humidity and temperature were measured in three sections of the biofilter between the plates, i.e. at 150 mm from the airflow inlet (at 12 points), at 500 mm from the airflow inlet (12 points) and at 150 mm from the airflow outlet (12 points).
Determining biological packing material's moisture content
The moisture of the biological packing material in the biofilter was determined with a moisture meter MO290. The parameter was measured at four points of each section on every day of the experiment.
Activating the biological packing material and determining the efficiency of the biodestruction process
During the conducted tests, an airflow polluted with acetone vapour was passed through the biofilter's packing material. The airflow rate between the plates reached, on average, 0.08 m s⁻¹. This rate was measured every day with the airflow meter Testo 400 with a thermocouple; the accuracy of the thermocouple measurement is ±0.01 m s⁻¹ when the airflow rate ranges between 0.01 and 2 m³ s⁻¹.
The activation of the biological packing material lasted for 10 days. During the activation process, air polluted with acetone vapour was supplied to the biofilter. The initial pollutant concentration reached 0.256 g m⁻³. The pollutant was supplied to the device four times a day for 15 min each time. Each following day, the concentration of the organic compound was increased by 0.020 ± 0.005 g m⁻³, and the duration of acetone vapour supply was extended to 1 h. After the activation was completed, biofiltration efficiency tests were carried out for another five days using air polluted with acetone vapour.
The efficiency of the pollutant decontamination was calculated by determining the pollutant concentration before entering the cleaning device and after leaving it.
Determining the amount and composition of microorganisms in the biological packing material
A piece weighing 1 g was taken from each sample and placed into a flask containing 90 mL of 0.8% NaCl. In order to compare different samples with each other, calculations were made after the material in question was dried to constant weight, and the amount of microorganisms in 1 g of dry weight of the biofilter's material was calculated.
Micromycetes were grown on a medium agarized on beer mash. The cultures were incubated in Petri dishes for 5-7 days at a temperature of +28 °C.
Yeasts were grown in Sabouraud agar nutrient media with chloramphenicol (Liofilchem, Italy) and on Rose Bengal CAF agar (Liofilchem, Italy). The cultures were incubated in Petri dishes for 3-4 days at a temperature of +28 °C.
Yeasts were identified using the identification system Api 20 C AUX (bioMérieux, France).
Nutrient agar (NA), selective agarized cetrimide (Pseudomonas) agar and agarized Bacillus cereus media were prepared for growing bacteria from the analysed samples. The following bacterial suspensions were prepared: 1:10, 1:100, 1:1000, 1:10,000, 1:100,000, 1:1,000,000, 1:10,000,000, 1:100,000,000, 1:1,000,000,000 and 1:10,000,000,000. An amount of 0.1 mL of each suspension was poured onto the surface of the medium in Petri dishes and spread with a spatula. The cultures were incubated for 2-4 days at a temperature of +28 °C. The grown bacteria were identified according to their morphological, biochemical and physical properties and compared with the available reports; the descriptions of bacteria from Bergey's Manual of Systematic Bacteriology were used. [37-39]
Analytical methods
The value of pH was determined according to the standard LST ISO 10523. A Mettler Toledo pH meter was used to determine the pH and temperature; the instrument's measurement range is from 0 to 14 and its error is ±0.01. Airflow humidity and temperature were determined with the TESTO 400; the range of airflow humidity measurement is from 0% to 100% with an accuracy of 0.1%, and the airflow temperature measurement range is from 200 to 800 °C. Packing material humidity was determined with a humidity meter (Extech Instruments, model MO290); the material humidity measurement ranges from 0% to 99.9% (error ±0.1%). The pollutant concentration was measured with the instrument MiniRae 2000, whose measurement limits range from 0 to 7.00 g m⁻³; the accuracy of measurement at pollutant concentrations from 0 to 0.100 g m⁻³ is 0.0001 g m⁻³, while above 0.100 g m⁻³ it is 0.001 g m⁻³. The washing method (serial dilution method) was used to separate microorganisms and calculate their amount.
Operating conditions
Studies were performed on the acetone inlet loading rate (ILR), the corresponding acetone removal efficiency (RE) and the elimination capacity (EC). These three parameters are defined as RE (%) = 100 × (C_I − C_0)/C_I, ILR = Q·C_I/V and EC = Q·(C_I − C_0)/V, where C_I is the inlet acetone concentration of the biofilter (g m⁻³), C_0 is the outlet acetone concentration of the biofilter (g m⁻³), ILR is the inlet loading rate (g m⁻³ h⁻¹), V is the volume of the filter bed (m³) and Q is the gas flow rate (m³ h⁻¹). All of these parameters were studied under the operating conditions summarized in Table 2.
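The helper below computes the three performance measures with the conventional biofiltration formulas matching the quoted units; the numerical inputs in the usage line are illustrative assumptions only (the bed volume follows from the stated cartridge dimensions, and the flow is roughly consistent with the 11 s residence time quoted later).

```python
def biofilter_performance(c_in, c_out, q, v):
    """Standard biofiltration performance measures.

    c_in, c_out: acetone concentrations at the inlet/outlet (g m^-3);
    q: gas flow rate (m^3 h^-1); v: filter-bed volume (m^3).
    """
    re = (c_in - c_out) / c_in * 100.0    # removal efficiency, %
    ilr = q * c_in / v                    # inlet loading rate, g m^-3 h^-1
    ec = q * (c_in - c_out) / v           # elimination capacity, g m^-3 h^-1
    return re, ilr, ec

# illustrative values: 0.30 g m^-3 inlet, 90% removed, 0.036 m^3 bed, ~11.8 m^3 h^-1
print(biofilter_performance(0.30, 0.03, q=11.8, v=0.036))
```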
Results and discussion
The whole period of the experiments (15 days) was divided into five stages (A, B, C, D and E). During Stage A, the biomedium was activated with acetone vapour by increasing the pollutant concentration by 0.020 ± 0.005 g m⁻³ each day from Day 1 until Day 10 of the experiment. After Day 10, the efficiency of the biofiltration process was analysed (Stages B, C, D and E) (Table 3) by increasing the pollutant concentration in the supplied air while maintaining a steady flow of air to the biofilter.
The biological packing material is one of the main elements of the biofilter. Before undertaking any investigations, it is very important to analyse the structure of the packing material, which determines the physical properties of the biomedium on which the efficiency of the biofiltration process depends. The most important properties of the biomedium are porosity and capillarity. The greater the porosity of the material, the better the adsorption of pollutants from the polluted airflow [40], and the higher the porosity, the more efficient the capillarity. The better these parameters, the more efficient the pollutant biodegradation. Thermally treated birch fibre also has an uneven surface (Figure 2(a)), a porous structure (Figure 2(b)) and, consequently, a larger specific surface area.
As seen from the picture, the birch fibre appears to be composed of many small 'straws' arranged parallel to each other, which are 15-30 mm thick. Wood fibre is necessary in the packing material so that the microorganisms in the biomedium can assimilate organic carbon from it.
Figures 3-7 show the results of the experimental tests during which air polluted with acetone vapour was supplied to the biofilter. The initial pH of the medium was 6.72 (Figure 3) and depended on the chemical substances in the medium. During the experiment, the pH value of the medium increased and reached 7.27 on Day 15. The temperature of the nutrient medium in the biofilter was 30.2 °C. Taking into account the recommended pH limits for VOC degradation, the optimum pH level was therefore ensured during this experiment. Many other scientists also maintain that the optimum pH for bacterial growth should be about 7. [41] Comparing the results of our tests with the results obtained by American scientists who investigated a biofilter removing VOCs from an airflow, it can be stated that we obtained similar results, differing by a mere 4%. [42] For the purpose of ensuring optimum conditions for the biofiltration process, airflow humidity and temperature were measured (Figure 4).
Biological air purification efficiency greatly depends on a humidification system installed in biofilters.
[43-45] In our research, a self-humidification system based on the capillary humidification of the packing material was used in the biofilter. The depth to which the porous plates were soaked was 50 mm, while the overall height of a plate was 200 mm. The porous structure of the plates and the size of the spaces (4 ± 0.2 mm) between adjacent plates produce the capillary humidification effect: the solution (biomedium) spontaneously ascends and humidifies the wood fibre. Therefore, this self-humidification system does not use additional energy and ensures appropriate humidification of the packing material even when the technological process is interrupted during repair work or when the power supply is discontinued for other reasons. At the beginning of the experiment the humidity of the biological packing material was 63.1%; it rose to 63.7% on Day 4 and afterwards remained steady at, on average, 63.8%. For ammonia removal, the Taiwanese scientist Chung [46] used a compost-packed biofilter; the results showed that at 40%-46% humidity of the packing material the air purification efficiency was high, at 98%. It can therefore be stated that during our experiments the humidity of the packing material was also sufficient. Since the recommended packing material humidity of 40%-70% was ensured, it had no adverse effect on the microorganisms in the biological packing material.
Temperature is the most important factor responsible for the rate of the microorganisms' development and the intensity of biochemical reactions, and different groups of microorganisms are adapted to living at different temperatures. To achieve a high cleaning efficiency, the Taiwanese scientists Chang and Lu [27] recommend maintaining an air humidity of 85%-95% and a temperature of 25-35 °C in the biofilter. According to many scientists, no change in the rate of pollutant degradation is observed at temperatures of 20-30 °C. [47] However, where the temperature of the supplied airflow is higher than 40 °C, microorganisms may die, unless the microorganisms incubated in the biological packing material are thermophilic. [48] When the temperature of the discharged air increases or decreases, both the temperature of the supplied airflow and the heat emitted by microorganisms during pollutant degradation can be important; consequently, an increase or decrease in the amount of microorganisms in the biological packing material can result in changes in the air temperature. [49] The most efficient biodegradation of toluene in a polluted airflow was achieved at an airflow temperature of 30-35 °C. [50] As seen from Figure 4, during our experiments the airflow humidity in the biofilter between the straight plates varied in the range between 76% and 80%, and the temperature was between 26 and 27 °C.
The main factor responsible for the device's efficiency is its capability to remove pollutants (acetone) from a supplied airflow with the help of microorganisms.
Prior to operating biological air purifiers, the biological packing material inside them undergoes biological activation. The packing material is activated when air containing organic pollutants is passed through it. [16] A packing material is considered to be biologically activated when it is covered by a thin layer (5-30 mm thick) of biofilm, which contains microorganisms.
In this case, the packing material was activated until Day 10 of the experiment. The packing material was activated by passing acetone-polluted air through it, gradually increasing the acetone concentration and recording the biofilter's cleaning efficiency.
The initial pollutant concentration was 0.0256 g m⁻³, while at the end of the experiment the concentration reached 0.997 g m⁻³ (Figure 5). As the figure shows, the biofilter's cleaning efficiency gradually increased until Day 11 of the experiment and approached 90.3%; on that day, the concentration of acetone vapour was 0.295 g m⁻³. Later, as the pollutant concentration in the supplied air increased, the biofilter's cleaning efficiency decreased by 7% every day until Day 15 of the experiment, when the efficiency of acetone vapour removal was 69.8%. The air cleaning efficiency decreased because the amount of bacteria in the biological packing material decreased from 3.7 × 10¹¹ to 7.6 × 10¹⁰ CFU g⁻¹: since there was an excess of pollutant, the microorganisms were incapable of degrading such an amount, and their death was observed as a decrease in their number.
Empty bed residence time (EBRT) has a major impact on air cleaning efficiency. [51] Most experimental tests show that when the residence time increases, the VOC removal efficiency improves. [52,53] In order to achieve the longest possible residence time, it is recommended to increase the volume of the filtering layer. The residence time also depends on the level of biological decomposition. [49] During our tests, the residence time was 11 s.
When acetone vapour was supplied to the biofilter in Stages B, C, D and E, the pollutant elimination capacity ranged between 33 and 87 g m⁻³ h⁻¹. In Stage A, the biofilter's elimination capacity was from 0.6 to 28 g m⁻³ h⁻¹.
At the beginning of the experiment, yeasts dominated and only one or two fungi colonies grew. However, after 10 days both the amount of fungi and the variety of their species increased. In addition to Paecilomyces variotii, which are distinguished by high-level sporulation, yeast fungi of the genera Aureobasidium and Geotrichum also developed well.
It has been determined that during volatile substance (acetone) filtration, the yeast Rhodotorula mucilaginosa dominated. The yeast amount ranged from 0.13 × 10⁸ CFU g⁻¹ at the beginning of the experiment to 0.45 × 10⁸ CFU g⁻¹ at its end. The analysis of the amount of bacteria on birch fibre showed that their amount grew from 0.16 × 10⁸ CFU g⁻¹ on Day 5 to 0.83 × 10⁸ CFU g⁻¹ on Day 13 (Figure 6).
Bacteria of the dominating genera and species were determined during the research. The largest amount of the determined bacteria belonged to the genera Bacillus (B. cereus, B. subtilis), Pseudomonas (P. aeruginosa, P. putida), Staphylococcus (S. aureus) and Rhodococcus sp. (Figure 7).
Conclusions
When wood fibre was used for the biofilter's packing material, the optimum parameters ensuring efficient work of the microorganisms were maintained. The packing material's humidity reached 63.7% ± 1%, the airflow temperature was 26.6 ± 2 °C, the air humidity was 78.1% ± 5%, the medium pH was 7.1 ± 0.6 and the medium temperature was 30.3 ± 0.1 °C.
The efficiency of removing acetone vapour from the air was between 70% and 90%. The highest air cleaning efficiency of 90.3% was achieved at a 0.3 g m⁻³ concentration of pollutant supplied to the biofilter at a rate of 0.08 m s⁻¹.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
The project is funded by the European Social Fund. The project was supported and co-funded by the European Union and the Republic of Lithuania [grant number VP1-3.1-SMM-10-V-02-015]. | 6,311.2 | 2015-02-03T00:00:00.000 | [
"Engineering",
"Biology"
] |
Methods for simultaneously identifying coherent local clusters with smooth global patterns in gene expression profiles
Background The hierarchical clustering tree (HCT) with a dendrogram [1] and the singular value decomposition (SVD) with a dimension-reduced representative map [2] are popular methods for two-way sorting the gene-by-array matrix map employed in gene expression profiling. While HCT dendrograms tend to optimize local coherent clustering patterns, SVD leading eigenvectors usually identify better global grouping and transitional structures. Results This study proposes a flipping mechanism for a conventional agglomerative HCT using a rank-two ellipse (R2E, an improved SVD algorithm for sorting purpose) seriation by Chen [3] as an external reference. While HCTs always produce permutations with good local behaviour, the rank-two ellipse seriation gives the best global grouping patterns and smooth transitional trends. The resulting algorithm automatically integrates the desirable properties of each method so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends. Conclusion We demonstrate, through four examples, that the proposed method not only possesses better numerical and statistical properties, it also provides more meaningful biomedical insights than other sorting algorithms. We suggest that sorted proximity matrices for genes and arrays, in addition to the gene-by-array expression matrix, can greatly aid in the search for comprehensive understanding of gene expression structures. Software for the proposed methods can be obtained at .
Background
Matrix visualization [4], for example the Cluster and TreeView package [5], is an important exploratory data analysis tool in the study of microarray gene expression profiles. The visual patterns of genes (rows) and arrays (columns) in the permuted gene-by-array expression profile matrix are useful for clustering purposes. The hierarchical clustering tree and the singular value decomposition are the two main methods for identifying suitable gene/array permutations. This section briefly reviews the advantages and disadvantages of the two techniques using the fibroblast to serum gene expression data [1,6].
The branching structure of a dendrogram plays an important role in identifying permutations of genes and arrays by its arrangement of intermediate nodes. For a given HCT with n terminal nodes (genes or arrays), there are n−1 intermediate nodes. Each of these intermediate nodes can be flipped independently, resulting in 2^(n−1) possible orderings of the terminal nodes from the same dendrogram built on the identical proximity matrix. Bar-Joseph et al. [7] provided a detailed discussion of the HCT intermediate node flipping phenomenon. The problem was first formulated by Gruvaeus and Wainer [8]: to order the leaves of a binary HCT, when two ordered branches are merged, the new branch is formed by placing the similar endpoints of the joining branches adjacent to each other. Many different heuristic ordering methods [1,9,10] have also been suggested for solving this problem. Bar-Joseph et al. [7] presented a fast optimal leaf ordering for the hierarchical clustering algorithm that maximizes the sum of the similarities of adjacent leaves in the Travelling Salesman sense [11], and we refer to this approach as the optimal tree method. Bar-Joseph et al. [12] proposed a heuristic algorithm for constructing k-ary trees by extending and improving the optimal leaf ordering algorithm in [7].
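As a brief illustration of the flipping ambiguity and of optimal leaf ordering, the following sketch uses SciPy's built-in routines on synthetic data; it is not the implementation used in this study.

```python
# The same dendrogram admits 2^(n-1) leaf orders; optimal leaf ordering picks
# the one that minimizes the total dissimilarity between adjacent leaves.
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list, optimal_leaf_ordering
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 12))        # toy gene-by-array matrix

d = pdist(X, metric="correlation")   # 1 - Pearson correlation between genes
Z = linkage(d, method="average")     # conventional agglomerative HCT

default_order = leaves_list(Z)                             # one arbitrary flip choice
optimal_order = leaves_list(optimal_leaf_ordering(Z, d))   # Bar-Joseph-style order

print("default order:", default_order)
print("optimal order:", optimal_order)
```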
Singular value decomposition (SVD) and Rank-two ellipse seriation (R2E)
For identifying smooth transitional expression patterns and more global grouping structures, people turn to dimension reduction techniques, such as singular value decomposition, for help [2,13,14]. Alter et al. [2] laid down the mathematics of SVD for analyzing gene expression profiles and proposed the concept of eigenarrays and eigengenes as representative linear combinations of original arrays and genes. They further suggested sorting the arrays and genes according to their relative positions on the subspaces spanned by the two leading eigenarrays and eigengenes.
Chen [3] introduced a sorting algorithm called rank-two ellipse (R2E) seriation which improves the SVD method by extracting the elliptical structure of the converging sequence of iteratively formed correlation matrices using the eigenvalue decomposition. Figure 1b displays the resulting matrix visualization of the human fibroblasts expression profile sorted by the R2E algorithm. We see that the R2E sorted correlation matrix identifies a very smooth transitional pattern. More advantages of the R2E method over the SVD method will be discussed in the Methods section.
The proposed rank-two ellipse seriation-guided hierarchical clustering tree (HCT_R2E)
We propose to guide the flipping mechanism of a conventional agglomerative HCT using the rank-two ellipse (R2E) seriation of Chen [3] as an external reference. The resulting algorithm automatically integrates the desirable properties of HCT and R2E so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends.
The R2E-guided HCT with the corresponding permuted matrices can be seen in Figure 1c. The permuted correlation and gene expression matrices in Figure 1c resemble the corresponding matrices in Figure 1b extremely well, meaning that the coherent local structure (clusters) identified by the HCT architecture and the smooth global transitional pattern explored by the R2E algorithm do not necessarily conflict with each other. An important note here is that the dendrogram (hierarchical tree) architecture (merging steps) in Figure 1c (with R2E guide) is identical to that of Figure 1a (without R2E guide). The only thing different is the flipping mechanism of intermediate nodes.
Global trend and the Robinson matrix
Figure 1. Matrix visualization for expression profiles map with corresponding pair-wise correlation map for Fibroblast to serum data [1].
Arrays with a time series nature are usually not permuted, so as to preserve the time-to-time local structure and the overall global time trend. The local pattern and the global trend usually do not co-exist well in a given matrix unless a Robinson form [15] can be permuted from the matrix. A Robinson matrix, R = [r_ij], is a symmetric matrix such that r_ij ≤ r_ik if j < k < i and r_ij ≥ r_ik if i < j < k. The basic property of a Robinson matrix is monotonicity as one proceeds from the main-diagonal elements to all four margins of the given matrix.
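A minimal check of the Robinson property just defined, written for a symmetric similarity-type matrix (a sketch; for a distance-type matrix the inequalities are reversed):

```python
import numpy as np

def is_robinson(R, tol=0.0):
    """True if the symmetric similarity matrix R is in Robinson form:
    r_ij <= r_ik if j < k < i and r_ij >= r_ik if i < j < k, i.e. values
    never decrease toward the diagonal and never increase away from it."""
    n = R.shape[0]
    for i in range(n):
        left = R[i, :i]          # r_ij for j < i
        right = R[i, i + 1:]     # r_ij for j > i
        if np.any(np.diff(left) < -tol):   # must be non-decreasing toward the diagonal
            return False
        if np.any(np.diff(right) > tol):   # must be non-increasing away from the diagonal
            return False
    return True
```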
Results
Three additional real data sets, together with the fibroblast to serum gene expression data, are analyzed to demonstrate the performance of the proposed method. The first one is the annotated subset cell cycle data from [16]; the second is the severe acute respiratory syndrome coronavirus (SARS-CoV) studied in [17]; the transition metal study in [18] is the final example. The same eight sorting algorithms (SVD with one eigenvector (SVD1), SVD with two eigenvectors (SVD2), self-organizing maps (SOM) [19], rank-two ellipse (R2E), HCT with random flips (HCT_RAM), optimal tree (HCT_OPT), SOM-guided tree (HCT_SOM), and R2E-guided tree (HCT_R2E)) are tested for all data sets. We only summarize the results of two HCT and two non-HCT algorithms: SVD2, R2E, HCT_OPT, and HCT_R2E. (Please see Additional file 1 for detailed comparison of all eight sorting algorithms.)
Fibroblast to serum data
For the gene expression data matrix of 517 genes observed in 12 arrays from the time series of fibroblasts to serum in [1], we plot the GAR loss scores and the RGAR loss scores in Figures 3ab without redrawing the permuted matrix visualizations.
Results
The GAR curves (window-size ranges from 1 to 516) for the four sorting algorithms plotted in Figure 3a produce the following observations:
• the R2E (smooth green line) clearly outperforms (lowest GAR scores) the other three methods;
• the HCT_OPT algorithm has poor global (large window-size) performance;
• the proposed HCT_R2E method outperforms HCT_OPT, and is nearly as good as the SVD2 algorithm in the global sense.
We plot in Figure 3b the relative generalized anti-Robinson (RGAR) loss scores for better comparison of local behaviours among the four methods, and observe the following:
• both HCT algorithms (curves with dots) outperform the two non-HCT algorithms (smooth curves) in the small window-size area (1 ≤ w ≤ 50);
• the optimal hierarchical clustering tree, HCT_OPT, has the best performance among the four HCTs for the smallest window-size area (1 ≤ w ≤ 35);
• the proposed HCT_R2E method actually scores best for a small period in the middle range (35 ≤ w ≤ 75);
• the R2E algorithm dominates the competition from w = 100 on.
Without the visualization of the two smooth transitional patterns for up- and down-regulated genes in Figure 1b, the HCT in Figure 1a suggests many gene clusters with very coherent expression profiles, but with no knowledge of the possible embedded smooth transitional patterns. The proposed HCT_R2E method automatically integrates the coherent local property of HCT and the smooth global trend of R2E to provide users with the improved Figure 1c. The visualization of the expression profile and correlation matrices in Figure 1c allows users, in such a time series expression experiment, to simultaneously explore the local behaviour of genes that function closely together on a small time scale and the more complicated global relationships over larger time intervals.
Figure 3. Generalized anti-Robinson (GAR) loss scores for Fibroblast to serum data [1].
Yeast cell cycle data
These data are a subset of the original 6240 genes expressed at 17 time points used in Cho et al. [16]. We selected the 145 genes that have been biologically characterized and assigned to five different cell cycle phases (early G1, late G1, S, G2, and M). Expression at one abnormal time point was removed from the data set (as suggested by [20]) resulting in our gene expression profile of 145 genes at 16 time points.
Results
In addition to lower intermediate to global GAR and RGAR loss scores (see Additional file 1 for details), the permutation identified by the proposed HCT_R2E method also possesses more meaningful biological implications than the other algorithms. The cell cycle phase diagrams for the three seriation algorithms (SVD2, HCT_OPT, and HCT_R2E) are shown in Figure 4, where the identical inner circle represents the 145 genes sorted with the known cell cycle phase information. The outer circle for each algorithm is rotated to its best position among all 145 possible rotations according to the following criteria: the simple match score computes the proportion of correct (against known phase information) matches for all 145 gene positions, ranging from 0 (worst) to 1 (best); the weighted match score assigns weights of (2, 1, 0) to genes that deviate from the known phase by (0, 1, 2) phase groups, and is also scaled to 0 (worst) to 1 (best); the total deviation score sums the deviations (by number of genes) of all 145 genes to the boundaries of their known phases. Both the simple match and weighted match are gain scores (the higher the better) while the total deviation is a loss score (the lower the better).
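A sketch of the two gain scores described above, assuming the five phases are coded as integers 0-4 around the cycle and that a gene's deviation is the shortest circular distance between its assigned and annotated phase (the exact conventions of the original scoring are not given here):

```python
import numpy as np

N_PHASES = 5   # early G1, late G1, S, G2, M

def phase_deviation(assigned, annotated):
    """Shortest circular distance (0, 1, or 2 phase groups) between two phases."""
    d = abs(assigned - annotated) % N_PHASES
    return min(d, N_PHASES - d)

def simple_match(assigned, annotated):
    """Proportion of genes placed exactly in their annotated phase (0..1)."""
    hits = sum(a == k for a, k in zip(assigned, annotated))
    return hits / len(annotated)

def weighted_match(assigned, annotated):
    """Weights (2, 1, 0) for deviations of (0, 1, 2) phase groups, scaled to 0..1."""
    weight = {0: 2, 1: 1}    # a deviation of 2 contributes 0
    total = sum(weight.get(phase_deviation(a, k), 0)
                for a, k in zip(assigned, annotated))
    return total / (2 * len(annotated))

assigned  = np.array([0, 1, 1, 2, 4, 3])
annotated = np.array([0, 1, 2, 2, 0, 3])
print(simple_match(assigned, annotated), weighted_match(assigned, annotated))
```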
From Table 1 we see that the proposed HCT_R2E algorithm outperforms the other seven algorithms in all three matching scores. Through visualization, the cell cycle diagrams sorted by the three algorithms can be roughly separated into three classes:
• SVD2 performed rather poorly;
• the HCT_OPT permutation showed better correlation to the known phases than SVD2;
• HCT_R2E arranged the 145 genes at positions very close to their annotated phase positions.
Although the HCT_R2E algorithm aligned the 145 genes close to their known phases, several genes deviated far away from their annotated cell cycle phases, as can be seen from the cell cycle diagram in Figure 4c. We further examined the phase annotations provided by another yeast cell cycle study of Spellman et al. [21]; the cross-annotated phase labels for both studies are listed in Additional file 2.
Figure 4. Cell cycle phase diagrams for Yeast cell cycle data [16].
The 15 genes with the largest deviations from their annotated phase groups sorted by the proposed HCT_R2E algorithm are bold-faced. From the corresponding annotated phases of [21], in the last column, we see that the Spellman et al. [21] annotated phases for these 15 genes either fit better into the overall cell cycle pattern (e.g., YKL067W from S to G1, and YEL017W from early G1 to S/G2), or their phase conditions are not annotated (7 out of 15). This result further implies that the proposed algorithm can be applied either to verify known biological conditions or to explore unknown phenomena.
Severe acute respiratory syndrome coronavirus (SARS-CoV) data
In the severe acute respiratory syndrome (SARS) study of Lee et al. [17], the expression profiles of 52 signature genes are used to explore the between-sample severity pattern from normal controls to acute SARS patients. A Euclidean distance matrix among 55 samples (11 acute SARS (AS) patients, 33 recovering SARS (RS) patients, and 11 normal control (NC) subjects) using these 52 genes is computed to identify a potential order that could reflect the severity structure of the disease. There are three major differences between this SARS example and the yeast cell cycle data analysis. These are not time series gene expression data; the focus is on the between-sample structure instead of the gene set; and the proximity measure adopted is the between-sample Euclidean distance instead of the correlation coefficient.
Results
The same eight algorithms are used to sort the Euclidean distance matrices for the 55 samples, but only the results of three methods, HCT_OPT, R2E, and HCT_R2E, are displayed. The corresponding expression profile matrices with related HCT dendrograms and the sorted colour bands for sample identities are displayed in Figure 5. We observe the following:
• there is a clear uni-dimensional Robinson pattern for this SARS Euclidean matrix;
• the HCT_OPT (Figure 5a) algorithm presented a rather coherent local structure;
• the R2E (Figure 5b) sorted samples identify colour bands that exhibit a clear blue (NC) to yellow (RS) to red (AS) severity structure of the disease;
• the Euclidean matrix sorted by the proposed HCT_R2E (Figure 5c) method displays very coherent local relationships, as well as extremely good global structure; its identity colour band has a coherent within sample-subtype pattern.
We have summarized the numerical comparisons (GAR, RGAR) for the eight sorting algorithms in Additional file 1.
In [17], the R2E permuted sample rank of SARS severity was identified to be significantly correlated with the clinical pulmonary infection score (CPIS) and other clinical factors. The severity rank of samples was also found to be highly correlated with the suppression of the human genes. The correlations obtained with the different sorting algorithms are summarized in Table 2, and we note the following:
• the proposed HCT_R2E algorithm has the highest correlation with the number of days after the onset of disease, while the R2E method comes next;
• the proposed HCT_R2E algorithm has the highest correlation with CPIS among all eight sorting methods, while the SVD1 and HCT_OPT algorithms share second place.
From these comparisons we observe a significant advantage of the proposed R2E-guided hierarchical clustering tree in searching for meaningful biomedical information and correlation, such that researchers can further propose more precise hypotheses and conduct more accurate experiments.
Transition metal stress data
Kaur et al. [18] exposed Halobacterium NRC-1 for five hours to at least three concentrations of each of the six transition metals. In Figure 5 of [18], using 468 genes that changed significantly in at least two conditions out of a total of 19, an HCT and a correspondence analysis (CA, [22]) are carried out (we only obtained 444 genes using identical selection criteria). Their HCT permutation for the 19 metal conditions does not correlate well with the pattern displayed in their CA plot for the conditions. Our task here is to guide the flips of HCT intermediate nodes by the R2E algorithm with the hope that the resulting permutation does not contradict that of the CA analysis.
Results
The CA plot is reconstructed in Figure 6a. Information for the 444 genes is not displayed for better illustration of the 19 metal conditions (Kaur et al. [18] did not specify the proximity measure and linkage type used in their study). The optimal HCT and the proposed elliptical seriation-guided HCT with their permuted Euclidean matrices are displayed in Figures 6b and 6c. Although HCT_OPT does identify good local clusters for the metal groups, the overall permutation does not correlate well with the linear trend from the CA analysis. The HCT_R2E permutation not only correlates with the linear trend of transition metal groups very well, it also sorts the within-metal group concentration levels precisely following the orders in the CA analysis in Figure 6a.
This study illustrates well that the proposed HCT_R2E method is capable of providing permutations with both good global and local properties, although the optimal HCT still outputs better local orders numerically. The accompanying distance matrix map clearly indicates the Zn(0.005) and Cu(0.7) conditions, in addition to the Ni [II] conditions, deviate from the main linear trend of these transition metals and the Robinson pattern.
Discussion and Conclusion
When analyzing gene expression profile data sets, researchers usually apply a hierarchical clustering tree (HCT) to search for coherent local clusters and the singular value decomposition (SVD) to identify smooth global trends. Users of HCT dendrograms would identify only local clusters without knowing the existence of global structure that might accompany cell cycle-regulated experiments, dosage level studies, or subtypes of tumours. Applications of SVD on the other hand may overlook the importance of local behaviour.
While the optimal HCT [7] always produces permutations with best local behaviour, the rank-two ellipse seriation [3] gives the best global grouping patterns and smooth transitional trends. The proposed hierarchical clustering tree guided by rank-two ellipse seriation (HCT_R2E) nicely integrates these two extremes and provides users both coherent local clusters and smooth global patterns for gene expression profile studies.
In four data analyses, the proposed HCT_R2E algorithm not only exhibits outstanding numerical (statistical) performance, it also provides us better insights into the biomedical information embedded in these high dimensional data structures. Visualization of sorted proximity matrices in addition to the visualization of the expression profile matrices also greatly enhances the overall comprehension of the association structures of arrays and genes.
Applicability and limitation
As was illustrated in the two time series data sets, the proposed rank-two ellipse-guided hierarchical clustering (HCT_R2E) is very powerful in identifying smooth time series patterns. The SARS data and the transition metal data, on the other hand, showed that the proposed method can also be used to search for potential global grouping structure for genes, and for arrays, embedded in the given gene expression profiles.
Figure 6. Transition metal stress data [18]: visualization of differential gene expression profiles of Halobacterium NRC-1 exposed for five hours to at least three concentrations of each of the six transition metals.
When the underlying clustering pattern is a clear disjoint one, the rank-two ellipse seriation method is only capable of identifying the global between-cluster pattern, not the within-cluster relationship. The optimal tree method gives better permutations than the proposed method in such circumstances. The R2E algorithm (and the HCT_R2E method) is computationally more time consuming than other methods. It takes a personal computer (Celeron (R) 3.2 GHz CPU with 512 MB RAM) running C++ on Windows XP about (0.09 sec, 9.09 sec, and 2.71 hr) to obtain the R2E permutations for proximity matrices with (50, 500, 5000) rows/columns. The computational complexity of R2E is of order n³. The computing speed is much slower in the current Java version of the GAP package, although we are implementing a much faster algorithm now. We have also developed a prototype PC cluster system for performing the proposed methods for very large proximity matrices that will be released after it has been fully tested.
Methods
Various concepts have been proposed for rearranging objects in statistical graphs in order to display information structure more effectively. Chen [3] proposed the concept of "relativity of a statistical graph" for placing similar (different) objects at closer (distant) positions in a statistical graph. The local property optimized by the aforementioned HCT techniques realizes only half of the relativity concept when it places similar objects in closer proximity without the necessity of distancing distinct objects.
Rank-two ellipse seriation
Chen [3] introduced a sorting algorithm called rank-two ellipse (R2E) seriation that extracts the elliptical structure at the rank-two iteration of the converging sequence of iteratively formed correlation matrices. R2E improves SVD in identifying even smoother global permutations. There are two advantages of the R2E method over the SVD method in the sorting of arrays and genes in expression profile matrices. The first is that users do not need to choose the number of leading components; the R2E method always summarizes the embedding variation structure into the final two eigenvectors of the rank-two correlation matrix. With a uni-dimensional underlying structure, the two eigenvectors form a half-ellipse pattern for sorting purposes. The second advantage is that it can be applied to any given proximity matrix, be it correlation, covariance, Euclidean distance, or other proximity matrix for genes and arrays.
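A rough sketch of this idea on synthetic data (iterate the correlation operation until the matrix is essentially rank two, then order objects by their angular position on the resulting ellipse); this is a simplified reconstruction for illustration, not Chen's reference implementation:

```python
import numpy as np

def r2e_order(data, max_iter=100, tol=1e-10):
    """Approximate rank-two ellipse seriation of the objects (rows) in `data`.

    The correlation matrix is iteratively replaced by the correlation matrix of
    its own rows; the sequence converges to an (essentially) rank-two matrix.
    The two leading eigenvectors then trace an ellipse, and objects are ordered
    by their angle on it."""
    C = np.corrcoef(data)
    for _ in range(max_iter):
        eigvals = np.linalg.eigvalsh(C)            # ascending eigenvalues
        if np.sum(np.abs(eigvals[:-2])) < tol * np.sum(np.abs(eigvals)):
            break                                  # effectively rank two: stop
        C = np.corrcoef(C)
    _, vecs = np.linalg.eigh(C)
    u, v = vecs[:, -1], vecs[:, -2]                # two leading eigenvectors
    return np.argsort(np.arctan2(v, u))            # order by angular position

rng = np.random.default_rng(1)
print(r2e_order(rng.normal(size=(30, 10))))
```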
Proximity matrix visualization
Although both the dendrogram of an HCT and representative genes (arrays) of an SVD are generated from given proximity matrices, researchers usually do not pay much attention to the sorted proximity matrices.
Comparing the permuted gene-by-gene correlation matrices in Figures 1a and 1c, we see that the HCT forms many blocks along the main diagonal of the correlation matrix while the rank-two method identifies two smooth transitional patterns for up- and down-regulated genes. Without the visualization of the correlation matrix in Figure 1a, HCT suggests many gene clusters with very coherent expression profiles, but with no knowledge of the possible embedded smooth transitional patterns. In light of both correlation matrices in Figures 1a and 1c, one can see that the gene clusters are actually formed only because of the constraints imposed by the HCT dendrogram branching structure; the within-cluster coherent expression profiles are correctly identified, but the between-cluster contrasting patterns may not be applicable.
In addition to the visualization of permuted expression profile matrices, we want to emphasize the importance of visualization of sorted proximity matrices for comparing the differences in permutations that result from various sorting algorithms.
Integration of local clustering patterns and global grouping structures
Local coherent gene clusters with very similar expression profiles may represent groups of genes that are co-regulated by certain transcription factors or activated by identical binding sites. Global clustering patterns and smooth transitional trends on the other hand, could signal some biological processes at a higher-level control, such as metabolite pathways or the cell-cycle operation. It is necessary to develop clustering and visualization methods that can simultaneously explore local behaviours as well as global grouping effects of gene expression profiles.
This study proposes to guide the flipping mechanism of a conventional agglomerative HCT with the rank-two ellipse (R2E) seriation as an external reference, as sketched below. The standard working procedure of the proposed algorithm for gene clustering is illustrated as steps 0–5 in Figure 7.
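A compact sketch of the guiding idea, assumed for illustration (not the authors' code): the tree topology is kept fixed, and at each intermediate node the two branches are oriented so that the resulting leaf order follows the external R2E reference as closely as possible, here greedily by mean reference rank.

```python
import numpy as np
from scipy.cluster.hierarchy import to_tree

def reference_guided_order(Z, reference_order):
    """Flip intermediate nodes of a SciPy linkage matrix Z so that the leaf
    order follows an external reference seriation (e.g. from R2E).  Only the
    orientation of each of the n-1 internal nodes is chosen; the merging steps
    of the dendrogram are untouched."""
    rank = {leaf: r for r, leaf in enumerate(reference_order)}

    def order(node):
        if node.is_leaf():
            return [node.id]
        left = order(node.get_left())
        right = order(node.get_right())
        # greedy choice: put the branch with the smaller mean reference rank first
        if np.mean([rank[i] for i in left]) <= np.mean([rank[i] for i in right]):
            return left + right
        return right + left

    return order(to_tree(Z))
```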
Generalized anti-Robinson criteria
[Figure 7. Proposed R2E-guided HCT procedure for gene clustering: the algorithm for constructing the R2E-guided HCT for gene permutation using the between-array correlation matrix in Figure 2. Step 0: given a gene-by-array expression profile matrix, M_{g×p}, with g genes and p arrays, the process can be applied for either gene or array grouping and sorting. Step 3: apply the converging sequence of iteratively formed correlation matrices [1] to R_{g×g} and obtain the rank-two ellipse seriation, R2E, for R_{g×g}.]
In order to compare the performances of different sorting algorithms, some standard criteria have to be established. As illustrated in Figure 8a, the minimum travelling distance in a travelling salesman problem can be used to evaluate local behaviour, while the anti-Robinson event count (AR in equation 1 and Figure 8b) works well for global performance. Given a distance-type proximity matrix and relative positions (1, …, g) of the g terminal nodes, the travelling salesman algorithm optimizes the permutation by minimizing the total consecutive distances along the entire permutation; that is, one minimizes the summation along the off-diagonal containing the ith to (i+1)st components of the matrix (Figure 8a).
For a permuted proximity matrix, D_{n×n} = [d_ij], the generalized anti-Robinson loss function is defined as the number of deviations from the Robinson form within a band around the diagonal,

GAR(w) = Σ_i [ Σ_{i−w ≤ j < k < i} I(d_ij < d_ik) + Σ_{i < j < k ≤ i+w} I(d_ij > d_ik) ],

where w is the window-size defining the range of summation, and I is an indicator function that outputs 1 if the condition is satisfied. Window-size is the number of columns (rows) from the diagonal of D that we consider in calculating the anti-Robinson events. Small window-sizes refer to criteria for considering only local behaviours, and larger window-sizes refer to criteria for more global relationships between subjects.
The minimum travelling distance can be treated as one special Robinson form with a smallest window-size (w = 1) in counting the anti-Robinson events, while the original AR (equation 1) criterion has the largest window-size (w = n -1). A window-size between 1 and n-1 opens up a banding area from the main-diagonal for counting the number of anti-Robinson events. This is called the generalized anti-Robinson criterion (GAR) here. When we plot the GAR scores against w (window-size) we usually see a monotonic smooth increasing curve since the number of anti-Robinson events grows larger with window-size. In order to have better comparison among different sorting algorithms for small window-sizes we also define the relative generalized anti-Robinson loss function, which ranges between 0 (no anti-Robinson events) to 1 (all anti-Robinson events). The RGAR curves have better resolution for small window-size region than the GAR curves for comparing performance of algorithms.
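A sketch of these banded counts for a permuted distance-type matrix, following the indexing assumed in the reconstructed formula above (anti-Robinson events are counted within a band of width w on both sides of the diagonal, and RGAR divides by the number of triples examined):

```python
import numpy as np

def gar(D, w):
    """Generalized anti-Robinson event count for a permuted distance matrix D:
    within a band of width w, distances should not decrease when moving away
    from the diagonal; every violating pair counts as one event."""
    n = D.shape[0]
    events = total = 0
    for i in range(n):
        hi = min(i + w, n - 1)
        for j in range(i + 1, hi + 1):           # right of the diagonal
            for k in range(j + 1, hi + 1):       # expect d_ij <= d_ik
                total += 1
                events += D[i, j] > D[i, k]
        lo = max(i - w, 0)
        for j in range(lo, i):                   # left of the diagonal
            for k in range(j + 1, i):            # expect d_ij >= d_ik
                total += 1
                events += D[i, j] < D[i, k]
    return events, total

def rgar(D, w):
    """Relative GAR: fraction of banded triples violating the Robinson form (0..1)."""
    events, total = gar(D, w)
    return events / total if total else 0.0
```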
Availability and requirements
The rank-two ellipse (R2E) seriation and the R2E-guided hierarchical clustering tree methods are implemented in the GAP (generalized association plots) system. [21]. Genes are arranged by the proposed HCT_R2E algorithm; phase conditions for [16] are colour coded according to the phase legend provided in Figure 4. | 5,945.8 | 2008-03-20T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Cross section covariances: a user perspective
This paper makes a brief historical review, from the user point of view, of the impact of the different covariance data made available during this period, and then looks at the current situation in the different evaluated files, using practical examples. In particular, the performance, in terms of C/E, of two of the most recent and reliable evaluated files, CIELO and CIELO-2, is presented for a set of selected integral experiments. Moreover, this performance is compared for consistency against the uncertainty computed with two of the currently most used covariance data sets (COMMARA-2.0 and COMACV1). A limited number of comparisons are also shown relative to the JENDL-4 data. Of specific interest is the observed difference due separately to the diagonal and off-diagonal (correlation) contributions. Finally, some perspectives and future needs are discussed.
Introduction
In the last two decades there has been a significant effort by the neutron cross section evaluators in generating new high quality data and, in parallel, a large effort has been made in producing covariance matrices to be used mainly in uncertainty analyses and data assimilations. The need for good quality covariance data was expressed, motivated, and quantified by the users especially in view of reductions of safety margins and economical optimization of advanced reactor designs. This paper intends to offer an assessment of the current situation on cross section covariance matrices from the point of view of a user. First a brief historical review of the use of covariance data for uncertainty quantification purposes will be provided, and then some background information, both theoretical and practical, will be illustrated. Next, a look will be taken at the performance (i.e. impact on measurement/calculation discrepancy) of three of the most recent and reliable evaluated files: CIELO (aka ENDF/B-VIIIb4) [1], CIELO-2 (aka JEFF3.3T3) [2], and JENDL-4 [3]. CIELO (Collaborative International Evaluated Library Organization) has provided a new working frame to facilitate evaluated nuclear reaction data advances. It has brought together experts from across the international nuclear reaction data community to identify and document discrepancies among existing evaluated data libraries, measured data, and model calculation interpretations, and has aimed to make progress in reconciling these discrepancies to create more accurate ENDF-formatted files. The focus has initially been on a small number of the highest-priority isotopes, namely 1H, 16O, 56Fe, 235,238U, and 239Pu. The performance in terms of discrepancies between calculated and measured values will be compared for consistency against the uncertainty computed with two of the currently most used covariance data sets. In particular, effects for specific isotopes and reactions will be shown, limited for space constraints to the two CIELO data sets.
Finally, some conclusions, perspectives and future needs will be provided.
Brief historical review
The first library that provided "serious" covariance data was ENDF/B-IV, distributed in the second half of the 1970s. This library was not open for use outside the United States. In parallel, in the same period, codes were developed for calculating sensitivity coefficients for integral parameters and for processing covariance data (e.g. PUFF [4] at ORNL). This made it possible to perform uncertainty quantification evaluations.
After a first burst of interest, however, the use of covariance data fell into "disgrace" with few exceptions, for instance applications to fast reactor design. In fact, the authors of this paper published in the mid-1980s an article [5] where they used a very crude in-house covariance matrix, which yielded an uncertainty for the critical mass of a large sodium-cooled fast reactor of ∼1000 pcm.
A long time went by with no major activities until JENDL-3 came toward the end of the 1990s with a quite comprehensive set of isotopes and reactions with covariance data. In 2005 [6] the authors of this paper again issued, "provocatively", an in-house covariance data set for several isotopes, which included physically based correlations (e.g. by energy range and cross section characteristics: resolved and unresolved resonance ranges, heavy isotope inelastic cross section thresholds, fission cross section thresholds, etc.), and was derived through an educated guess based on nuclear data performance in the analysis of selected clean integral experiments. This in turn induced a feedback from the nuclear data evaluators of the western world that led to the production of more reliable and useful covariance data sets.
The in-house covariance data were used for several uncertainty quantification works, including ADS applications [7,8]. Interestingly enough, the uncertainty for the critical mass of a large sodium-cooled fast reactor was ∼1000 pcm. At this point the nuclear data evaluators had received the message and the first, still "low quality" (i.e. a limited effort was put into producing this set, giving priority to a first uncertainty analysis and its requirements), covariance data matrix, BOLNA (a collaboration among BNL, ORNL, LANL, NRG, and ANL) [9], was produced and extensively used for the seminal work of WPEC SG26 [10]. The uncertainty for the critical mass of a large sodium-cooled fast reactor was still ∼1000 pcm.
The rest is the history of our days, with "high quality" covariance data in all major data libraries (ENDF, JEFF, JENDL); however, the uncertainty for the critical mass of a large sodium-cooled fast reactor is still ∼1000 pcm.
This value, obtained with the different previously mentioned covariance matrices, was the result of different components but, in general, is dominated by the large contribution of the uncertainty of the 238U inelastic reaction. This already indicated the struggle, during all these years, with improving such an important cross section for the design of advanced fast reactors.
Background and premises of computed values
In order to calculate the C/E performances corresponding to the two sets of CIELO isotopes, the linearity hypothesis was used and the calculated values related to the CIELO isotopes were derived by using sensitivity coefficients. The following formula was used for deriving the new calculated value C′:

C′ = C (1 + Σ_i S_i · Δσ_i/σ_i),

where C is the reference calculated value, S the sensitivity coefficients for the measured quantity, and σ the corresponding cross sections. The reference value C was obtained using the ENDF/B-VII.0 cross section library [11] and the best available computational tool (Monte Carlo). In order to derive the Δσ/σ relative variations, the infinite dilute cross sections were used. In fact, all the experiments shown later are related to fast spectrum systems, and in this case the use of the infinite dilute cross sections is justified as illustrated in [8]. They were computed for the three cross section sets (ENDF/B-VII.0, CIELO, and CIELO-2) using the latest version (.84) of NJOY2012 [12].
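A sketch of this first-order update, with made-up sensitivity and cross-section variation values purely for illustration:

```python
import numpy as np

def updated_value(c_ref, sensitivities, rel_variations):
    """First-order (linearity hypothesis) estimate of the new calculated value:
        C' = C * (1 + sum_i S_i * (dsigma_i / sigma_i))
    with S_i the relative sensitivity coefficients and dsigma_i/sigma_i the
    relative cross-section changes between the two evaluations."""
    S = np.asarray(sensitivities, dtype=float)
    d = np.asarray(rel_variations, dtype=float)
    return c_ref * (1.0 + S @ d)

c_ref = 1.00120                 # reference calculated value (illustrative)
S = [0.35, -0.12, 0.05]         # relative sensitivities (illustrative)
d = [0.010, -0.020, 0.004]      # relative cross-section variations (illustrative)
print(f"C' = {updated_value(c_ref, S, d):.5f}")
```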
An initial large set of 158 experiments was used [13]. This set includes not only criticality and reaction rate (spectral indices) measurements but also sample irradiations, reactivity measurements and neutron propagation experiments. Results shown later in this paper are limited only to the most significant ones and will not include the energy breakdown, even though this information is available.
Regarding covariance matrices, the two covariance matrices more extensively used in the analysis are COMMARA-2.0 [14] and COMACV1 [15]. Although a covariance matrix is available for CIELO-2, it has not been used because it includes an adjustment that uses an integral experiment (JEZEBEL). Using only one experiment could completely skew the covariance data. In general, using only one integral experiment severely limits the amount of information that is necessary to correctly perform the data assimilation and could be at the origin of some compensating effects. More generally, the inclusion of adjusted data in the original evaluations is a controversial issue, still under discussion in international bodies such as the NEA WPEC Expert Groups. Finally, as indicated above, some JENDL-4 data were also included for a preliminary and still partial analysis.
Neither covariance matrix originally includes the P1 elastic data. The matrices were, therefore, completed using the corresponding JENDL-4 [4] P1 covariance data. This is a significant omission in view of the large impact in many types of experiments. The users expect that the secondary energy distribution covariance data for inelastic scattering would also have a significant impact on uncertainty analysis. Moreover, a significant impact can be expected when cross correlations among isotopes are included (today only one exists, related to 235U fission).
As an illustration of the differences among the covariance matrices, we show in Figures 1-6 those relative to 239Pu fission and 238U inelastic for three covariance matrices: COMMARA-2.0, COMACV1, and JENDL-4. As can be seen, the differences are quite remarkable for both diagonal uncertainties and correlations:
- In the case of Pu-239 fission, the diagonal values are rather small in the three files (slightly higher in COMACV1), but the energy correlations above ∼10 keV are much smaller in COMMARA with respect to the other two data files.
- In the case of U-238 inelastic, both the diagonal values and the energy correlations are rather different in the three files, with, e.g., "longer" correlations in energy below a few MeV in the case of COMACV1.
The analysis looks at two main consistencies: the consistency between the individual (E-C)/E of the two evaluated files and the associated uncertainties calculated with the two covariance matrices; and the consistency between the differences (impact on the C/E) between the two evaluated files and the corresponding uncertainties.
For this latter case we also look at the differences between the uncertainties obtained with the two covariance matrices for individual isotopes and, among other things, at the impact of correlations.
Analysis of consistency between covariance matrices and C/E
The first set of experiments considered is that of the LANL small criticals. In Table 1 we show, for some integral measured parameters of this set of experiments, the (C-E)/E for both CIELO and CIELO-2 and the corresponding uncertainties computed with COMMARA 2.0 and COMAC-V1 covariance matrices. We also report for illustration the (C-E)/E for JENDL-4, without uncertainties.
As for the (C-E)/E values, the performances of the three files, even if not always comparable, are not radically different. Some significant differences among the files are found, e.g., for the ZPR-3-53 keff and the ZPPR-15 keff. This last case is of particular interest, since that experiment is rather representative of fast cores presently under study in different laboratories.
When a more detailed investigation is performed, it can be observed that in general there is, e.g., a good agreement among the (C-E)/E of CIELO and CIELO-2 for this set of experiments; however, we will soon see that this can be misleading, because, in some cases, it is the result of huge compensations. There is a good agreement also on the uncertainties calculated with the two covariance matrices except for the case of the BIGTEN 238U fission spectral index. Also noticeable is the fact that the COMACV1 uncertainty is lower than the calculation/experiment discrepancy for the same integral parameter as well as for the BIGTEN 237Np fission spectral index.
In Table 2 we take a deeper look at the curious case of the JEZEBEL critical mass integral. The first striking point is that the small total difference (less than 100 pcm) is the result of large compensations. In effect, there is a ∼−800 pcm difference on the inelastic term that is compensated by large (a few hundred pcm each) positive differences on the elastic, P1, and fission components.
When summed independently of the signs, the total is 1760 pcm, which indicates the high degree of compensation. Concerning the total uncertainty values, there is a factor of ∼two between the two covariance matrices and, interestingly enough, the correlation effect goes in opposite directions: lowering the total for COMMARA 2.0 and increasing it for COMACV1. Looking at the components, we again have contradicting correlation effects between the two covariance sets for the elastic and inelastic components, while for the fission one the increasing effect of the correlation is much more pronounced for the COMAC-V1 values. We remark that the correlation contributions to the uncertainty come mostly from the correlations in energy since, in fact, correlations among reactions are scarce in the two matrices (as in other major covariance data files), while those among isotopes are practically non-existent.
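The role of the energy correlations can be made concrete with the usual "sandwich" propagation rule; the sketch below uses an invented three-group covariance for a single reaction, only to show how switching the off-diagonal terms on or off changes the total uncertainty.

```python
import numpy as np

def propagated_uncertainty(S, M):
    """Relative uncertainty of an integral parameter via the sandwich rule:
        (dR/R)^2 = S^T M S
    with S the relative sensitivity vector and M the relative covariance matrix."""
    S = np.asarray(S, dtype=float)
    return float(np.sqrt(S @ M @ S))

# Invented three-group example for one reaction (not real evaluated data)
S = np.array([0.30, 0.25, 0.10])               # relative sensitivities
std = np.array([0.03, 0.05, 0.08])             # relative standard deviations
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.6],
                 [0.2, 0.6, 1.0]])              # energy correlations
M_full = np.outer(std, std) * corr
M_diag = np.diag(std ** 2)                     # correlations switched off

print("diagonal only    :", propagated_uncertainty(S, M_diag))
print("with correlations:", propagated_uncertainty(S, M_full))
```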
In Table 3 there is a summary for selected ZPR assemblies and different integral parameters (critical masses and spectral indices). As for the following cases, sensitivity analysis results were extensively discussed in previous publications (see e.g. Ref. [13]). The discrepancies between calculations and experiments are particularly remarkable for the keff of ZPR9-34 and ZPR3-54 for both CIELO and CIELO-2. In the case of ZPR9-34 the main discrepancies come from 56Fe and 235U, while for ZPR3-54, which has an iron reflector, the major impact is related to 56Fe. In both cases the two covariance matrices also fail to cover the discrepancies. As for the JEZEBEL case, in Table 4 we present a detailed analysis relative to the ZPR9-34 keff case and 235U.
In Table 4 (No. 1), where the differences between CIELO-2 and CIELO are shown, the largest impact is associated with the capture term. Regarding uncertainties (Tab. 4 (Nos. 2 to 5)), COMACV1 produces a significantly greater value for the total uncertainty, but this is still not enough to cover the discrepancy between calculation and experiment. Again, the capture term dominates the uncertainty, while the effect of correlations looks quite similar in the two covariance matrices. In Table 5 we summarize the results for selected ZPPR assemblies. Besides critical masses and spectral indices, we also report results for the sodium void reactivity measurements and a central control rod worth. For the sodium void reactivity, very large discrepancies are observed for both files and the two covariance matrices fail to compute an adequate uncertainty consistent with the observed discrepancies. It is interesting to note that CIELO-2 has a notable discrepancy for the keff of ZPPR-15, which is the reference assembly used for validating metal fuel, while CIELO is right on the spot. Table 6 shows the results for several reaction rate slopes of the ASPIS-88 campaign, a neutron propagation experiment in a block of iron. Very large discrepancies are found for the slopes of the reactions that are most sensitive to the hard part of the spectrum, and both covariance matrices seem inadequate to explain this poor behaviour. Of course, the major impact is related to 56Fe, for which we carry out a more detailed analysis in the case of the S(n, p) reaction rate ratio measured at two different penetrations in the iron block (see Tab. 7).
Regarding differences between CIELO-2 and CIELO, the largest impact is related to the inelastic term, but the elastic and P1 anisotropy terms also give significant discrepancies. The total uncertainties look consistent; however, the correlation effect of the inelastic term has opposite behaviour between the two covariance matrices. This is also the case for the elastic term. For illustration we show in Figures 7 and 8 the uncertainty and correlation values for the two covariance matrices for the 56Fe inelastic cross section. As can be observed, the differences are quite significant for both diagonal uncertainties and correlations. This is a further indication to the evaluator communities of potential inconsistencies that will have an impact on applications and their credibility. We now summarize the major findings of this investigation.
In assessing the differences between the CIELO and CIELO-2 cross sections we can make the following remarks:
- As a general observation from the point of view of a user, one can say that we are far away from reaching a consensus both on nominal values and on covariance data.
- The case of the JEZEBEL critical mass is emblematic. The large compensations among the different reactions (elastic, inelastic, P1, and fission) yield the same critical mass. The user is disoriented: where is the truth?
- Regarding the 5 CIELO isotopes, the major impacts are related to:
Regarding the consistency between the two covariance matrices, we can make the following observations:
- In many instances the calculated uncertainties would not cover the C/E spread of the experiments. This is true at the one-sigma level, while at two sigma most, but not all, of the spreads would be covered.
- Of specific interest is the effect of the correlations. In some cases the difference in correlations leads to a different sign in the contribution to the total uncertainty: what can explain this completely different behaviour?
- Regarding specific differences between the two covariance matrices related to the 5 isotopes, the major impacts are associated with:
Conclusions
Despite significant efforts in generating new high quality neutron cross section data and in producing associated covariance matrices, the state of affairs is not yet fully satisfactory. The user is puzzled by many inconsistencies among evaluated cross sections and corresponding covariance data that in many cases fail to explain discrepancies between measurements and calculations for integral experiments. Moreover, the observed differences in correlation effects between the two covariance matrices specifically used in this paper are quite noticeable.
Among the recommendations to evaluators that can be made from a user point of view, we can list:
- Provide the missing data in the covariance matrices: P1 elastic, secondary energy distribution for inelastic cross sections (multigroup transfer matrix), cross correlations (reactions and isotopes), delayed data (nubar and fission spectra).
- A finer energy grid and an eigenvalue decomposition of the covariance matrix would be welcome for use in Monte Carlo assimilation [16].
- In integral testing, check also whether the observed C/E discrepancy is consistent with the covariance data.
When covariance data become available for the CIELO isotopes, more relevant feedback could be provided through data assimilation using the PIA strategy [17], which is intended to avoid compensations, together with a careful choice among the available experiments. | 4,100.6 | 2018-11-01T00:00:00.000 | [
"Physics"
] |
Sugars induced exfoliation of porous graphitic carbon nitride for efficient hydrogen evolution in photocatalytic water-splitting reaction
Photocatalytic hydrogen evolution holds great promise for addressing critical energy and environmental challenges, making it an important area in scientific research. One of the most popular photocatalysts is graphitic carbon nitride (gCN), which has emerged as a noteworthy candidate for hydrogen generation through water splitting. However, ongoing research aims to enhance its properties for practical applications. Herein, we introduce a green approach for the fabrication of porous few-layered gCN with surface modifications (such as oxygen doping, carbon deposition, nitrogen defects) with promoted performance in the hydrogen evolution reaction. The fabrication process involves a one-step solvothermal treatment of bulk graphitic carbon nitride (bulk-gCN) in the presence of different sugars (glucose, sucrose, and fructose). Interestingly, the conducted time-dependent process revealed that porous gCN exfoliated in the presence of fructose at 180 °C for 6 h (fructose_6h) exhibits a remarkable 13-fold promotion of photocatalytic hydrogen evolution compared to bulk-gCN. The studied materials were extensively characterized by microscopic and spectroscopic techniques, allowing us to propose a reaction mechanism for hydrogen evolution during water-splitting over fructose_6h. Furthermore, the study highlights the potential of employing a facile and environmentally friendly fructose-assisted solvothermal process to improve the efficiency and stability of catalysts based on graphitic carbon nitride.
Duration optimization of fructose-assisted solvothermal modification of bulk graphitic carbon nitride
In the subsequent step, 40 mg of fructose (Sigma-Aldrich) was dissolved in 60 mL of a 1:1 (v/v) mixture of distilled water and ethanol. Next, 400 mg of bulk-gCN was added to the prepared solution. The mixture was subjected to vigorous stirring for 0.5 h, followed by 0.5 h of sonication. Afterward, the resulting suspension was transferred into a 100 mL Teflon-lined autoclave, where it was maintained at a temperature of 180 ℃ for different durations (3, 6, 12, 18, or 24 h). After cooling down, the suspension was centrifuged and washed three times with distilled water and ethanol, followed by drying at 60 ℃ overnight. The naming convention of the samples uses "fructose_xh", where "xh" indicates the duration of the reaction in the autoclave. For instance, "fructose_12h" signifies that bulk graphitic carbon nitride (bulk-gCN) was modified with fructose and the reaction duration was 12 h. The XRD and TGA results of the sugar-assisted solvothermal modification of graphitic carbon nitride are shown in Fig. S2. In detail, after sugar-assisted modification of graphitic carbon nitride, the peak at around 27° shifts towards higher angles, which indicates a contraction of the interlayer distance that has been correlated to the increased interaction induced by the more electronegative O-atoms replacing the C-atoms in the layer (oxygen doping) [S1, S2]. The diffractograms of pure sugars (glucose, sucrose, and fructose) are presented in Fig. S3a.
Results and discussion
Glucose exhibits diffraction peaks at 10.49° (110), 11.97° (020), 14.77° (120), 17.21° (200), and 18.88° (011). Notably, the FTIR spectra of sucrose reveal overlapping absorption peaks, primarily in the 3000-3600 cm⁻¹ range, originating from both glucose and fructose. This is because sucrose, a disaccharide, is composed of these two monosaccharides. The XRD, FTIR-ATR, UV-vis with corresponding Tauc plot, PL, CA, and EIS results of the fructose-assisted solvothermal modification of graphitic carbon nitride for optimization of the duration are depicted in Fig. S4. The characterization of graphitic carbon nitride is fully described in the main manuscript; thus, in the Supplementary Material, the authors describe the dependence of the fructose-assisted modification of graphitic carbon nitride on the reaction duration.
In detail, the shift in the peak position at around 27° (Fig. S4a) is influenced by the duration of the solvothermal fructose-assisted modification. This peak gradually shifts towards higher angles up to 12 hours of reaction. Further extension of the solvothermal reaction duration (fructose_18h and fructose_24h) results in a shift in the opposite direction, indicating an improved interlayer stacking order [S2]. The absorption spectra of graphitic carbon nitride modified with fructose (Fig. S4b) reveal similar absorption peaks to those observed in pristine bulk-gCN, indicating that the primary chemical structure of graphitic carbon nitride remains intact, which aligns with the XRD results. The energy band gaps (Fig. S4c,d) were determined as 2.75, 2.80, 2.89, 2.84, 2.83, and 2.80 eV for bulk-gCN, fructose_3h, fructose_6h, fructose_12h, fructose_18h, and fructose_24h, respectively. Interestingly, the PL spectra (Fig. S4e) show that a solvothermal reaction duration between 6 and 24 h has no significant impact on the recombination process. Both CA and EIS (Fig. S4f,g) confirm that fructose_6h and fructose_12h have the highest mobility of charge carriers; thus, both samples have similarly high photoactivity toward hydrogen production.
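A sketch of how band gaps such as those above are typically read off a Tauc plot, by a linear fit over the absorption edge extrapolated to zero; the exponent assumes a direct-allowed transition and the spectrum is synthetic, so neither reflects the actual measurement or analysis settings used here.

```python
import numpy as np

def tauc_band_gap(energy_eV, absorbance, n=2, fit_window=(2.9, 3.2)):
    """Estimate an optical band gap from a Tauc plot: fit (A*E)^n vs photon
    energy E over the absorption edge and return the x-intercept (eV).
    Absorbance is used here as a proxy for the absorption coefficient."""
    E = np.asarray(energy_eV, dtype=float)
    y = (np.asarray(absorbance, dtype=float) * E) ** n
    lo, hi = fit_window
    mask = (E >= lo) & (E <= hi)
    slope, intercept = np.polyfit(E[mask], y[mask], 1)
    return -intercept / slope

# Synthetic absorption edge near ~2.8 eV, purely for illustration
E = np.linspace(2.0, 3.5, 200)
A = np.clip(E - 2.8, 0.0, None) ** 0.5 + 0.01
print(f"estimated Eg ~ {tauc_band_gap(E, A):.2f} eV")
```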
Fig. S3b displays the TGA/DTA results of pure sugars (glucose, sucrose, and fructose). The weight loss for fructose commenced at 130 ℃, while glucose exhibited weight loss at 161 ℃, and sucrose started losing weight at 194 ℃ [S3]. Furthermore, the melting points of fructose, glucose, and sucrose are 103, 148, and 179 ℃, respectively [S3, S4]. To effectively modify graphitic carbon nitride using various sugars, it is advisable to maintain the reaction temperature above the respective melting points of the utilized sugars. The FTIR-ATR spectra of pure sugars (glucose, sucrose, and fructose) are depicted in Fig. S3c. All samples display similar absorption patterns, indicating a resemblance in the chemical structures of the green reducing agents used for the exfoliation of graphitic carbon nitride. To elaborate, in the region of 1000 to 1200 cm⁻¹, the spectra exhibit vibration modes associated with C-C and C-O bonds, typical for carbohydrates [S5]. The region from 1350 to 1500 cm⁻¹ shows the combination bands of C-O-C and C-O-H deformations, while absorption bands around 2900 cm⁻¹ correspond to the aliphatic C-H stretching, and the absorption bands in the range of 3000-3600 cm⁻¹ are attributed to the O-H stretching [S5].
Fig. S5. (a) SEM image, (b, c) TEM images, (d) AFM image, (e) XRD diffractogram of bulk-gCN after solvothermal reaction in the absence of sugars, and (f) hydrogen evolution from water splitting of the studied materials.
Table S1. AFM data of graphitic carbon nitride modified with different sugars. | 1,365 | 2024-01-23T00:00:00.000 | [
"Materials Science",
"Chemistry",
"Environmental Science"
] |
Improvement of Dissolution Rate of Chlorzoxazone by Solid Dispersion Technique and Development of Buccal Patch
Chlorzoxazone (CLZ) is insoluble in water. Its half-life is 1.1 h, its dose is 250 mg, and it undergoes first-pass metabolism. Formation of solid dispersions of CLZ with pluronic F127 will enhance the bioavailability of the drug. Phase-solubility studies revealed AL-type curves, indicating the ability of pluronic F127 to disperse with CLZ and significantly increase its solubility. The solid dispersions of CLZ were prepared with pluronic F127 by different methods and characterized by in vitro drug release, drug content, FTIR, DSC and XRD. All the solid dispersions showed dissolution improvement compared to the pure drug. These techniques revealed a distinct loss of drug crystallinity in the formulation, accounting for the enhancement in dissolution rate. The stability study for solid dispersions indicated that all formulations were stable. The methods showing the best in vitro drug release profile were selected for further development of mucoadhesive buccal patches. A buccal patch has been developed using the mucoadhesive polymers HPMC K4M and Carbopol 974. The developed patches were evaluated for their physicochemical, mechanical and drug release characteristics. The optimized patches showed good mechanical and physicochemical properties to withstand the environment of the oral cavity. The in-vitro permeation study showed that the patches could deliver drug to the oral mucosa for a period of 8 h. The results indicate that suitable bioadhesive buccal patches with good permeability could be prepared. Batches FH4 and FC4 showed 79.65% and 79.93% of drug permeated through goat mucosa membrane in 8 h. The stability study for the buccal patches revealed that all batches were stable.
INTRODUCTION
The poor solubility of a drug (BCS Class II) in gastrointestinal fluid gives rise to variations in dissolution rate and incomplete bioavailability. Improvement of the dissolution rate of water-insoluble drugs is one of the most challenging and important tasks of drug development, as it can increase drug bioavailability [1][2][3]. Chemically, chlorzoxazone (CLZ) is 5-chloro-3H-benzooxazol-2-one, which belongs to the centrally acting skeletal muscle relaxant class. It has a half-life of 1.1 hours and a dose of 250 mg. It is soluble in methanol, ethanol and isopropanol; freely soluble in aqueous solutions of alkali hydroxides; and slightly soluble in water. It is necessary to improve the dissolution rate of CLZ to enhance its bioavailability [4][5].
There are different chemical and formulation approaches to improve drug dissolution and bioavailability. Among the various strategies, the solid dispersion technique has often proved the most successful in improving the dissolution and bioavailability of poorly soluble drugs, as it is simple, economical and effective, and several methods are available for the preparation of solid dispersions [6][7][8][9][10][11][12][13].
Buccal delivery of drugs provides an attractive alternative to the oral route of drug administration, particularly in overcoming deficiencies associated with the latter mode of dosing. Problems such as high first-pass metabolism and drug degradation in the gastrointestinal environment can be circumvented by administering the drug via the buccal route. It is also possible to administer drugs to patients who cannot be dosed orally. Therefore, adhesive mucosal dosage forms have been suggested for oral delivery, including adhesive tablets, adhesive gels and adhesive patches. However, buccal patches are preferable over adhesive tablets in terms of flexibility and comfort. Nowadays, bioadhesive polymers have received considerable attention as platforms for buccal controlled delivery due to their ability to localize the dosage form in specific regions and thus enhance drug bioavailability. In order to prepare films having the appropriate characteristics, film-forming polymers were initially used alone and subsequently in combination with mucoadhesive polymers. The patches with the best characteristics were selected for testing. The plasticizer interposes itself between the polymer chains, reducing the forces holding them together and thereby extending and softening the polymer matrix. The commonly used plasticizers include phthalate esters, phosphate esters, fatty acid esters and glycol derivatives [14][15][16][17][18][19].
The solid dispersion showing the best in vitro drug release profile can be selected for further development of buccal patches of CLZ. This would help facilitate its absorption from the buccal cavity, overcome its first-pass metabolism and thereby improve bioavailability.
The purpose of the present investigation was to improve the solubility and dissolution rate of CLZ by forming a solid dispersion with pluronic F127, to develop a buccal mucoadhesive patch ensuring a satisfactory CLZ level for prolonged periods, to evaluate the performance of the prepared patches and, in addition, to investigate the effect of ageing on their performance.
Materials
Chlorzoxazone was supplied as a gift sample by Twilight Litaka Pharmaceuticals Pvt. Ltd, Pune, India. HPMC K4M and pluronic F127 were gifted by Colorcon Pharmaceuticals (Bangalore, India). Carbapol 974 was gifted by Oxford Chemicals (Mumbai, India).
Drug Characterization
The melting point of CLZ was determined with a melting point apparatus by the capillary method to check the purity of CLZ.
From the calibration curve of CLZ on a UV spectrophotometer (Varian Cary 100, Australia), the wavelength at which maximum absorbance was found was selected as λmax.
Solubility
CLZ solubility studies were performed in triplicate by adding excess amounts of CLZ to water. The flasks containing the solutions were kept on a rotary shaker for 72 hrs; the solutions were then analyzed using a UV spectrophotometer at λmax and the concentration was calculated.
Stability in Solvents
The stability of CLZ was checked by using various solvents like water, phosphate buffer pH 6.8 and 0.1 N HCl. The CLZ was kept in the solvents for 72 hrs. The changes in absorbance were noted after a specific period of time.
Infra Red Spectroscopy
To characterize CLZ, an FTIR spectrophotometer (Varian 640 IR, Australia) was employed. The samples were prepared by the KBr pellet method. The spectra were scanned over a frequency range of 4000 to 400 cm⁻¹.
Phase Solubility Studies
An excess amount of CLZ was added to conical flasks containing aqueous solutions of pluronic F127 in increasing concentrations (1%, 2%, 3%, 4% and 5% w/v). The flasks were sealed and shaken at 37 ± 0.5ºC for 72 hrs in a mechanical shaker. At equilibrium after 72 hrs, aliquots were withdrawn, centrifuged at 4000 rpm for 10 minutes, filtered, diluted suitably and analyzed by UV spectrophotometer at 280 nm to determine the solubility of CLZ at the different concentrations of pluronic F127. The experiment was performed in triplicate.
Physical Mixtures (PM)
The CLZ and pluronic F127 were ground together thoroughly in a mortar, sieved and stored.
Solvent Evaporation Method (SE)
The CLZ and pluronic F127 were dissolved in methanol separately, sonicated for 20 minutes, and the solvent was then evaporated under reduced pressure at room temperature in a desiccator.
Co-Grinding Method (COG)
CLZ was triturated in a minimum quantity of methanol in a mortar until it dissolved. The pluronic F127 was then added and the suspension was triturated rapidly at room temperature until the solvent evaporated; the product was passed through a sieve and stored.
Co-Precipitation Method (COP)
An accurately weighed amount of pluronic F127 was dissolved in water and CLZ in methanol, separately. After complete dissolution, the aqueous solution of pluronic F127 was poured into the methanolic solution of the drug. The solvents were then heated and evaporated under reduced pressure at room temperature in a desiccator.
Kneading Method (KN)
The mixture of pluronic F127 and CLZ was wetted with water and kneaded thoroughly for 30 minutes in a mortar. The paste formed was dried under vacuum for 24 hrs, passed through a sieve and stored in a desiccator.
Closed Melting Method (CM)
The PM was transferred into glass ampoules, which were sealed and heated for 30 min in a water bath. After slow cooling, the ampoules were opened and the solid dispersions were collected. All solid dispersions were pulverized in a mortar, sieved and dried in an oven for at least 48 hrs.
Spray Drying Method (SpD)
The CLZ and pluronic F127 were dissolved in methanol and the solution was fed to a spray drier (Labultima LU 222). The inlet temperature was maintained at 65 °C, the aspiration rate at 50-55 and the feed pump speed at 10 ml/min. The cooling temperature was maintained at 35 °C. The powder was collected from the collector and stored.
Saturation Solubility Studies
Saturation solubility was determined by equilibrating an excess of CLZ, PM and the solid dispersions, placed separately in stoppered conical flasks containing 10 ml of distilled water, for 48 hours on a mechanical shaker at room temperature. At equilibrium after 48 hrs, aliquots were withdrawn, centrifuged at 4000 rpm for 10 minutes, filtered, diluted suitably and analyzed by UV spectrophotometer at 280 nm to determine CLZ.
Percent Drug Content and Yield Study
The solid dispersion equivalent to 250 mg of CLZ was added to 5 ml of methanol, kept in an ultrasonicator for 10 min, and the volume was adjusted to 100 ml with distilled water. The solution was filtered, suitably diluted and assayed using a UV spectrophotometer at 280 nm. The CLZ content was calculated using the calibration curve. The solid dispersions were weighed and the yield was calculated for each preparation using the following formula: Percent practical yield = (a/b) × 100, where 'a' is the practical weight of the solid dispersion obtained and 'b' is the theoretical weight of the solid dispersion.
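As an illustration only, the yield and drug-content calculations described above can be carried out as in the sketch below; the calibration-curve slope and intercept, the dilution factor and the weights are hypothetical numbers, not data from this work.

def percent_yield(practical_weight_mg, theoretical_weight_mg):
    # % practical yield = (practical weight a / theoretical weight b) x 100
    return practical_weight_mg / theoretical_weight_mg * 100.0

def percent_drug_content(absorbance, dilution_factor, volume_ml,
                         slope, intercept, label_claim_mg=250.0):
    # Concentration (ug/ml) from a Beer-Lambert calibration curve A = slope*C + intercept
    conc_ug_per_ml = (absorbance - intercept) / slope * dilution_factor
    drug_mg = conc_ug_per_ml * volume_ml / 1000.0      # total drug recovered, in mg
    return drug_mg / label_claim_mg * 100.0

print(percent_yield(485.0, 500.0))                               # -> 97.0 (%)
print(round(percent_drug_content(0.4186, 250, 100, 0.042, 0.002), 1))  # -> 99.2 (%)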
In Vitro Release Study
In vitro release studies of CLZ from the solid dispersions (equivalent to 250 mg) were carried out in 900 ml of distilled water as dissolution medium, at 37 ± 0.5ºC and 75 rpm, using a USP I dissolution test apparatus (basket type) (TDT-08L Electrolab, Mumbai, India). Aliquots were withdrawn at the specified time intervals and replenished immediately with the same volume of fresh medium. Aliquots were filtered, diluted suitably and assayed for CLZ using a UV spectrophotometer at 280 nm. Dissolution profiles of the formulations were analyzed by plotting time versus % drug release. The whole study was also performed with phosphate buffer (pH 6.8) as dissolution medium with all other parameters unchanged. The dissolution experiment was performed in triplicate for each sample.
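For clarity, a minimal sketch of the cumulative-release computation implied by this protocol (sampling with replacement by fresh medium) is given below; the calibration parameters, aliquot volume, dilution factor and absorbance values are assumed for illustration, and the correction scheme is the standard one rather than necessarily the exact one used by the authors.

def cumulative_release(absorbances, slope, intercept,
                       medium_ml=900.0, aliquot_ml=5.0,
                       dilution_factor=10.0, dose_mg=250.0):
    released_pct, removed_mg = [], 0.0
    for a in absorbances:
        conc_ug_ml = (a - intercept) / slope * dilution_factor
        in_vessel_mg = conc_ug_ml * medium_ml / 1000.0
        # correct for drug removed with earlier aliquots (replaced by fresh medium)
        total_mg = in_vessel_mg + removed_mg
        released_pct.append(total_mg / dose_mg * 100.0)
        removed_mg += conc_ug_ml * aliquot_ml / 1000.0
    return released_pct

print([round(p, 1) for p in cumulative_release([0.10, 0.25, 0.40], 0.042, 0.002)])
# -> [8.4, 21.3, 34.3]  (illustrative values only)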
Fourier Transform Infrared Spectroscopy Study (FTIR)
The spectra of CLZ, pluronic F127 and the solid dispersions were recorded with an FTIR spectrophotometer using KBr disks. The spectra were comparatively analyzed for drug–carrier interaction. The scanning range used was 4000 to 500 cm⁻¹.
Powder X-Ray Diffraction Study (PXRD)
An X-ray diffractometer (Philips PW 1729, Netherlands) was employed for recording the XRD patterns of CLZ and the solid dispersions, using a Ni filter, CuKα radiation, a voltage of kV, a current of 20 mA and a receiving slit of 0.2 in. The samples were analyzed over a 2θ range of 5° to 50°, with a scan step size of 0.020° (2θ) and a scan step time of 1 second.
Differential Scanning Calorimetry Study (DSC)
DSC (Lab Mettler STARe SW 9.20, Switzerland) was used to obtain the curves of CLZ, pluronic F127 and the solid dispersions, representing the rates of heat uptake. About 2-5 mg of sample was weighed into standard open aluminum pans and scanned from 40-250°C at a heating rate of 10°C/min while being purged with dry nitrogen. The instrument was calibrated prior to sample analysis using an indium standard.
Stability Study of Solid Dispersions
The selected solid dispersions were packed in amber-colored bottles, which were tightly plugged with cotton and capped with aluminum. They were stored at 25ºC/60% relative humidity (RH), 30ºC/65% RH and 40ºC/75% RH for 3 months and evaluated for physical changes such as color and texture, drug–polymer interaction (by FTIR), drug content and in vitro drug release.
Formulation of Buccal Patches
The solid dispersion showing the highest solubility was selected for the development of mucoadhesive buccal patches. The formulation of the buccal patches (Table 1) was carried out using two different water-swellable polymers, hydroxypropyl methylcellulose (HPMC K4M) and carbapol 974. The plasticizer used was propylene glycol. Each patch contains solid dispersion equivalent to 250 mg of CLZ. All patches were prepared by the solvent casting method [16].
The variables used while formulating the patches were the concentrations of HPMC K4M, carbapol 974 and plasticizer (propylene glycol). The concentration of HPMC K4M was varied from 1% to 4% and that of carbapol 974 from 2% to 5%. The concentration of plasticizer was finalized differently for the two polymers based on the plasticity of the film; it was varied from 10% to 15% for the patch.
Tissue Preparation
The goat esophageal tissue was obtained immediately post-sacrifice from a local slaughterhouse (Kothrud, Pune). It was transported to the laboratory in isotonic phosphate buffer (pH 7.4), opened longitudinally and rinsed with the same buffer. The mucosa was removed from the underlying muscular layer by cutting the loose connective fibers with a scalpel. Circular pieces were then punched out. The excised mucosa was immersed in isotonic saline at 60 °C for 1 min and the epithelium was then peeled away from the connective tissue.
Evaluation of Buccal Patch
Five patches of 10 mm size (1 × 1 cm²) were randomly selected from every batch for every test, and the mean of the five readings was recorded.
Patch Thickness and Mass
The thickness was determined using a standard micrometer screw gauge and weight uniformity was determined by weighing on electronic balance.
Surface pH
An agar plate was prepared by dissolving 2% (w/v) agar in warmed isotonic phosphate buffer (pH 7.4) under stirring, pouring the solution into a petri dish and cooling until it gelled at room temperature. The patches were left to swell for 2 hrs on the surface of the agar plate, and the pH was then measured with pH paper placed on the surface of the swollen patch.
Folding Endurance Test
This test was done by repeatedly folding the patch at the same place up to a maximum of 200 times or until it broke.
Swelling Index
The weight and diameter of the original patches were determined and the patches were placed on the surface of a 2% agar gel plate kept in an incubator maintained at 37 ± 0.5 °C. After the preset time interval (1 hr) the patches were removed from the petri dish and excess surface water was removed carefully using filter paper. The swollen patches were then reweighed and the swelling index was calculated. The diameters of the patches were measured using a microscope at one-hour intervals for 5 hrs. The percentage swelling (%S) was calculated using the following equation: %S = [(Wt − W0)/W0] × 100, where Wt is the weight of the patch after time t and W0 is the initial weight at zero time.
Vapour Transmission Test (VTR)
A glass bottle (length = 5 cm, narrow mouth with internal diameter = 0.8 cm) filled with 2 g of anhydrous calcium chloride, with an adhesive (Feviquick) spread across its rim, was used in the study. The patch was fixed over the adhesive and the assembly was placed in a constant humidity chamber, prepared using a saturated solution of ammonium chloride and maintained at 37 ± 2°C. The difference in weight after 24 h, on the 3rd day and after 1 week was calculated. The vapour transmission rate was obtained as follows: VTR = (Amount of moisture transmitted) / (Area × Time) (eqn. 3)
In-Vitro Mucoadhesive Strength
The strength of the bond formed between the patch and the mucosa membrane excised from goat mucosa was determined using the two-arm balance method. A fresh goat mucosa section was fixed on the plane surface of a glass slide (3 × 5 cm) attached (with adhesive tape) to the bottom of a smaller beaker, kept inverted inside a 500 ml beaker. Isotonic phosphate buffer (pH 6.8) was added to the larger beaker up to the upper surface of the inverted beaker carrying the goat mucosa. The patch was stuck to the lower side of the upper clamp with cyanoacrylate adhesive. The exposed patch surface was moistened with phosphate buffer (pH 6.8) and left for 30 s for initial hydration and swelling. The platform was then slowly raised until the patch surface came into contact with the mucosa. The two sides of the balance were made equal before the study. After a preload (50 gm) time of 2 minutes, water was added to the polypropylene bottle present on the other arm until the patch detached from the mucosa. The water collected in the bottle was measured and expressed as the weight (gm) required for detachment. The force measurement was repeated 3 times for each formulation. The following parameters were calculated from the bioadhesive strength: Force of adhesion (N) = (Bioadhesive strength (gm) × 9.81) / 1000 (eqn. 4); Bond strength (N m⁻²) = Force of adhesion / Disk surface area (eqn. 5).
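For illustration, applying eqns. 4 and 5 to a bioadhesive strength of 12.67 gm (the value reported later for batch FH4) and assuming that the disk surface area equals the exposed 1 × 1 cm² patch face (1 × 10⁻⁴ m²) — an assumption, since the exact exposed area is not stated — gives:

Force of adhesion = (12.67 × 9.81) / 1000 ≈ 0.124 N
Bond strength ≈ 0.124 / (1 × 10⁻⁴) ≈ 1.2 × 10³ N m⁻²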
In-Vitro Residence Time
The in-vitro residence time was evaluated after application of the patches onto freshly cut goat mucosa. The fresh goat mucosa was fixed on the inner side of a beaker, 2.5 cm above the bottom, with cyanoacrylate glue. One side of each patch was wetted with one drop of phosphate buffer (pH 6.8) and pasted onto the goat mucosa by applying a light force with a fingertip for 30 seconds. The beaker was filled with 500 mL of isotonic phosphate buffer (pH 6.8) and kept at 37 ± 1°C. After 2 minutes, a 50 rpm stirring rate was applied to simulate the buccal cavity environment, and patch adhesion was monitored up to 8 hrs. The time required for the patch to detach from the mucosa was recorded as the mucoadhesion time.
Content Uniformity
The buccal patch (1 × 1 cm²) was dissolved in 100 ml of phosphate buffer (pH 6.8) for 6 hrs under occasional shaking. A 5 ml aliquot was withdrawn, filtered, diluted with the same medium up to 20 ml, and the CLZ content was determined using a UV spectrophotometer at 280 nm.
In-Vitro Release Study
The release study was done in a Keshary–Chien diffusion cell using phosphate buffer (pH 6.8). The membrane was carefully mounted between the compartments of the diffusion cell. The receptor and donor compartments contained 12 ml and 3 ml of phosphate buffer (pH 6.8), respectively. The entire set-up was placed over a magnetic stirrer and the temperature was maintained at 37 ± 2°C. Aliquots (1 ml) were collected from the receptor compartment at predetermined time intervals up to 8 hrs and replaced with equal volumes of fresh medium kept at the same temperature. Aliquots were filtered, diluted suitably with the same medium and assayed for CLZ using a UV spectrophotometer at 280 nm. The diffusion profiles of the formulations were analyzed by plotting time versus % drug release. The membrane used for the diffusion study was cellophane for all batches; then, for the optimized batches, egg membrane and goat mucosa were used and the study was repeated in triplicate with each membrane.
FTIR Study of Buccal Patches
The samples of CLZ with HPMC K4M and with carbapol 974 were prepared by simple blending with KBr. The scanning range used was 4000 to 500 cm⁻¹. The spectra were then comparatively analyzed for drug–carrier interaction.
Stability Study of Buccal Patches
The stability study was carried out for the optimized batches of the formulated buccal patches. The selected patches were packed in aluminum and then in amber-colored bottles, which were tightly plugged with cotton and capped with aluminum. They were stored at 25ºC/60% RH, 30ºC/65% RH and 40ºC/75% RH for 3 months and evaluated for physical changes such as color and texture, drug–polymer interaction, drug content and in vitro drug release after 1, 2 and 3 months.

RESULTS AND DISCUSSION

The maximum absorbance of CLZ was found at 280 nm; therefore, the λmax of CLZ was selected as 280 nm.
Solubility
The solubility of CLZ in water was found to be 25.58 µg/ml, indicating that it is slightly soluble in water.
Stability in Solvents
There were no major changes in absorbance observed, indicating that CLZ was stable in water, phosphate buffer pH 6.8 and 0.1 N HCl.
Infra Red Spectroscopy
The FTIR spectrum of CLZ showed characteristic peaks for specific structural groups at wave numbers 3587.98, 3221.36, 3066.96, 2901.12, 1765.57, 1623.81, 1582.71, 1356.98, 850.02 and 766.25 cm⁻¹, confirming the purity of the drug as per established standards.
Phase Solubility Study
This study showed that the curve obtained (Figure 1) is of the AL type, owing to the linear increase in solubility with an R² value close to unity. For the solubility of CLZ at 25ºC with pluronic F127 as carrier, the slope was 0.04765, the stability constant (Ks) was 89.12, R² was 0.9984 and the Gibbs free energy of transfer (ΔG°tr) was −29.42 J/mol. The negative sign of the Gibbs free energy of transfer indicates the spontaneous nature of CLZ solubilization.
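The slope, stability constant and Gibbs free energy quoted above are typically obtained from phase-solubility data by the Higuchi–Connors treatment; the short sketch below shows the usual calculation with purely illustrative concentrations, so the numerical results do not reproduce the values of this study, and the units of the constants depend on how the carrier concentration is expressed.

import numpy as np

R = 8.314          # J mol^-1 K^-1
T = 298.15         # K (25 C, as in the study)

carrier_M = np.array([0.00, 0.01, 0.02, 0.03, 0.04])               # carrier, mol/L (illustrative)
drug_M    = np.array([1.5e-4, 2.1e-4, 2.6e-4, 3.2e-4, 3.7e-4])     # dissolved drug, mol/L (illustrative)

slope, intercept = np.polyfit(carrier_M, drug_M, 1)
S0 = intercept                                   # intrinsic solubility (mol/L)
Ks = slope / (S0 * (1.0 - slope))                # 1:1 stability constant, L/mol

# Gibbs free energy of transfer at each carrier level (Sc/S0 = solubility ratio)
dG_tr = -2.303 * R * T * np.log10(drug_M[1:] / S0)

print(round(slope, 4), round(Ks, 1), np.round(dG_tr, 1))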
The results of the saturation solubility study (Table 2) indicated that the maximum increase in solubility occurred at the 1:1 (drug:carrier) ratio. The co-precipitation method showed the maximum saturation solubility.
Percent Practical Yield & Drug Content
The practical yield was in the range of 94.05 ± 3.02% to 99.13 ± 1.76% and the drug content ranged between 95.68 ± 3.11% and 99.55 ± 1.84% (Table 3). The results indicate that the processes employed to prepare the solid dispersions in this study were capable of producing formulations with uniform drug content.
In Vitro Release Study
The in-vitro release of the solid dispersions showed a significant increase in drug release in comparison with pure crystalline CLZ in both dissolution media (Figures 2 and 3). Among the different methods, the co-precipitation and kneading methods gave the greatest dissolution enhancement, the overall order being COP > KN > COG > SpD > PM > SE > CM.
Fourier Transform Infrared Spectroscopy (FTIR)
The IR spectra of the solid dispersions (Figure 4) showed some additional peaks, which could be due to the presence of pluronic F127, while all other characteristic peaks of CLZ appeared at the same wave numbers. This indicated no interaction of CLZ with pluronic F127.
Differential Scanning Calorimetry (DSC)
In the DSC curves of CLZ and pluronic F127 (Figure 6), sharp endothermic peaks were observed at 191.16°C and 58.7°C, respectively, due to fusion and corresponding to their melting points. The DSC curves of the solid dispersions COP, KN and COG (Figure 6) showed that the endothermic peak is shifted to 44.97°C, 48.14°C and 49.03°C, respectively, and that its intensity is reduced. This indicates that melting of CLZ occurs at a lower temperature, which might be due to the amorphous form of CLZ in the solid dispersion or to dissolution of crystalline CLZ into the molten carrier.
Stability Study of Solid Dispersion
The stability study was carried out for the solid dispersions prepared by the COP, KN and COG methods, and the parameters studied were drug content, in vitro release and FTIR. These studies indicate that there was no degradation; hence, these solid dispersions can be kept for a period of one year or more.
Buccal Patch
The concentrations of the polymers were decided by varying the concentration. The patches prepared at other than the optimized concentrations showed problems with their physicochemical properties, such as lack of flexibility, incomplete drying, hardness, brittleness, entrapment of air bubbles and stickiness. The patches showing good results were selected. The concentrations of HPMC K4M and carbapol 974 were finalized as 2.5% and 3.5%, respectively, as these showed good results. The concentration of plasticizer was finalized differently for the two polymers based on the plasticity of the film; it was 10% and 13% for the HPMC K4M and carbapol 974 patches, respectively.
In comparison to the pure drug, the solid dispersion prepared by the co-precipitation method showed the maximum release profile among all solid dispersions, so it was selected for the further patch formulation study. The solid dispersion equivalent to 250 mg of CLZ was weighed accurately and incorporated into a buccal patch.
The physical characteristics of all patches are shown in Table 3.All the patches were 10 mm in diameter.The thickness and mass of all formulations were in the range of 0.63 ± 0.28 to 1.19 ± 0.52 and 27.13 ± 0.58 to 425.27 ± 0.38, respectively.
The surface pH was within the desirable range of 6–7 units, close to neutral pH, and hence no mucosal irritation would be expected (Table 4).
The folding endurance recorded for all patches was more than 200. This might be due to the adequate content of propylene glycol, which provides high mechanical strength and good elasticity.
Assessment of the swelling behavior was done by measuring radial swelling. The medicated patches showed higher radial swelling than the placebo patches. The swelling index after 5 hrs was in the range of 34.96 ± 2.81 to 65.43 ± 3.91. A higher swelling index would excessively increase the surface area, which could lead to unmanageably fast drug release, and may also cause patient discomfort because the patch would occupy a larger space in the oral cavity with a greater chance of dislodgement.
In the vapour transmission study, the formulation batches FH1, FH3, FC1, FC2 and FC5 showed less vapour transmission than the other batches on day seven. The highest vapour permeation recorded was 1.67 × 10 (Table 5).
The in vitro residence time was found in the range of 2.54± 0.82 to 3.73 ± 0.88 hrs.
The drug content in the patches was found in the range of 96.18 ± 0.27% and 101.98 ± 0.44%.
The optimized batches FH4 and FC4 showed 85.92% and 86.26% CLZ release, respectively, from the cellophane membrane. From the egg membrane, 83.28% and 82.36%, and from goat mucosa, 79.65% and 79.93% of CLZ was released, respectively, for FH4 and FC4. All the release profiles follow the Peppas model. The release profiles of CLZ from the patches are shown in Figures 7 and 8.
In the mucoadhesion study, goat mucosa was used as the biological membrane and the study was carried out with the optimized batches only. The bioadhesive strength of batch FH4 was found to be 12.67 ± 1. gm; the values for both optimized batches, FH4 and FC4, are summarized in Table 6.
The release profiles of the different patches are reported in Table 7. The patches showed matrix-type and Peppas release kinetics.
FTIR Study
The IR spectra of the patches showed the same absorption bands as the solid dispersion, illustrating the absence of interaction between CLZ and HPMC K4M or carbapol 974.
Stability Study of Buccal Patch
No significant changes were observed in drug content or in the diffusion study after 3 months. IR data indicated that there were no interactions of CLZ with the excipients over time. Hence, the patches can be kept for a period of one year or more.
CONCLUSION
The solid dispersion system provides better control of the drug release rate than the physical mixture at the same drug-to-polymer ratio. The water-soluble carrier pluronic F127 and different preparation techniques were investigated in the current study to formulate solid dispersions of CLZ, which enhanced the solubility and dissolution characteristics of the drug. The DSC, XRD and FTIR studies showed no interaction between CLZ and pluronic F127. The solid dispersion system is efficient for the preparation of CLZ sustained-release mucoadhesive buccal patches, which can reduce the first-pass metabolism of the drug together with improved dissolution.
The melting point of CLZ was found to be around 191 °C, confirming the purity of the drug.
Figure 2: Release profile of all solid dispersions in distilled water.
Figure 3: Release profile of all solid dispersions in phosphate buffer (pH 6.8).
Table 6: Physical, mucoadhesive and in-vitro release study of buccal patches (columns include formulation code, drug content ± SD (%) and in vitro residence time ± SD). | 5,980.8 | 2013-05-31T00:00:00.000 | [
"Materials Science"
] |
Boundary controllability of the Korteweg-de Vries equation on a tree-shaped network
Controllability of coupled systems is a complex issue depending on the coupling conditions and on the equations themselves. Roughly speaking, the main challenge is to control a system with fewer inputs than equations. In this paper this is successfully done for a system of Korteweg-de Vries equations posed on an oriented tree-shaped network. The couplings and the controls appear only in the boundary conditions.
Introduction and main result
Partial differential equations (PDE) appear in many contexts to model different phenomena.
Most of the time these equations are coupled, and their study becomes much more difficult than when a single equation appears. The control of such systems is no exception, and it is therefore very important to understand how we can obtain controllable systems by using the properties of the single equations and of the couplings.
The best-known PDEs are parabolic and hyperbolic equations. Thus, it is very natural to find many works concerning the controllability of coupled systems involving them. If we restrict our attention to boundary controllability, which is the main issue of this paper, we can mention, among a huge literature, [1,5,16] for parabolic equations and [3,13,19] for hyperbolic equations.
In this context, a network is a particular kind of coupled system in which different PDEs are posed on different domains (the edges of the network) with coupled boundary conditions (acting at the nodes of the network). Depending on the topology of the edge-node structure, we speak of star-shaped, tree-shaped or general networks. For this particular kind of coupled system some boundary controllability results are already available. In fact, we can mention [7] for parabolic systems and [4,14,15,18,21,25] for hyperbolic systems.
In this work, we are interested in the controllability of oriented networks for the Korteweg-de Vries (KdV) equation. In the literature there is already a good understanding of the control of the single KdV equation. When we deal with the KdV equation with homogeneous Dirichlet conditions and a right Neumann condition on a bounded domain, the length L of the interval where the equation is set plays a role in the ability to control the solution of the equation ([9,12,23]). Indeed, it is well known that if L = 2π, there exists a stationary solution (y(x, t) = 1 − cos x) of the system linearized around 0 which has constant energy. More generally, defining the set 𝒩 of critical lengths, one can recall that the linearized equation around 0 is exactly controllable with only one right Neumann control if and only if L ∉ 𝒩 (see [23]), and that the local exact controllability result holds for the nonlinear KdV equation (using a fixed point argument) if L ∉ 𝒩. Further results show that the nonlinear KdV equation is in fact locally exactly controllable for all critical lengths, contrary to the linear KdV equation (see [8,10,12]). See also [9] and [24] for a complete bibliographical review.
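For the reader's convenience we recall that the set of critical lengths referred to here is the one introduced in [23], namely
\[
\mathcal{N}=\left\{\,2\pi\sqrt{\frac{k^{2}+kl+l^{2}}{3}}\ :\ k,\,l\in\mathbb{N}^{*}\right\},
\]
so that, for instance, L = 2π (take k = l = 1) is critical, consistent with the stationary solution y(x, t) = 1 − cos x mentioned above.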
As we now have a rather complete understanding of the boundary controllability of this equation, we deal here with the KdV equation posed on a network. Recently, two papers dealing with the controllability of the KdV equation on a network have appeared. In both, the topology considered is a star-shaped network, having in this way one central node and several external nodes. These two papers, giving a positive answer to the controllability of the nonlinear KdV equation on a network, are [2], with N + 1 boundary controls for N edges (the main topic of that work being stabilization), and [11], with N boundary controls for N edges.
Generally speaking, the main differences between papers [2] and [11] and the present work are: the sense of propagation of the water wave on the first edge; the transmission conditions at the central node; and the fact that we improve the previous results by using one control fewer here.
More precisely, in this paper we consider a tree-shaped network R of (N + 1) edges e_i (where N ∈ ℕ*), of lengths l_i > 0, i ∈ {1, …, N + 1}, connected at one vertex that we assume to be 0 for all the edges. We assume that the first edge e_1 is parametrized on the interval I_1 := (−l_1, 0) and the N other edges e_i are parametrized on the interval I_i := (0, l_i) (see Figure 1).
On each edge we pose a nonlinear Korteweg-de Vries (KdV) equation. On the first edge (i = 1) we put no control, and on the other edges (i = 2, …, N + 1) we consider Neumann boundary controls. Thus, we can write the system (1.1), where y_i(x, t) is the amplitude of the water wave on the edge e_i at position x ∈ I_i at time t, and the coefficients α_i, β_i appearing in the boundary and transmission conditions are positive constants. The initial data y_i0 are supposed to be L² functions of the space variable.
Figure 1: A tree-shaped network with 3 edges (N = 2).
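For orientation, on each edge the system (1.1) consists of the standard KdV equation whose linearization around 0 and whose nonlinear term y_i ∂_x y_i are used later in the paper:
\[
\partial_t y_i+\partial_x y_i+\partial_x^{3} y_i+y_i\,\partial_x y_i=0,\qquad x\in I_i,\ t\in(0,T),\ i=1,\dots,N+1,
\]
with the Neumann controls h_i acting at the external vertices of the edges i = 2, …, N + 1, a homogeneous condition on the uncontrolled edge, and transmission conditions at the central node 0 involving the positive constants α_i, β_i, which are not restated here.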
It is worth mentioning that the transmission conditions at the central node 0 are inspired by the recent papers [20] and [6]. This is not the only possible choice, and the main motivation is that they guarantee uniqueness of the regular solutions of the KdV equation linearized around 0 (see [6,26]). A characterization of the boundary conditions that yield well-posed dynamics for the linear Airy-type evolution equation (u_t = αu_xxx + βu_x, where α ∈ ℝ*, β ∈ ℝ) on star graphs of half-lines is given in [20].
Let us introduce some notation. For any function f defined on the network R we write f_i for its restriction to the edge e_i, and in the sequel the inner products and norms of the Hilbert spaces L^2(R) and H^1_0(R) are defined edge-wise in the natural way. The main goal of this paper is to study the controllability of the nonlinear KdV equation on the tree-shaped network of N + 1 edges with N controls. The controllability problem can be stated as follows. For any T > 0, l_i > 0, y_0 ∈ L^2(R) and y_T ∈ L^2(R), is it possible to find N Neumann boundary controls h_i ∈ L^2(0, T) such that the solution y of (1.1) on the tree-shaped network of N + 1 edges satisfies y(·, 0) = y_0 and y(·, T) = y_T?
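Assuming the standard edge-wise convention for function spaces on a network (this reading of the compressed notation is an assumption, not a quotation), one would have
\[
(f,g)_{L^{2}(R)}=\sum_{i=1}^{N+1}\int_{I_i}f_i\,g_i\,dx,\qquad
\|f\|_{L^{2}(R)}^{2}=\sum_{i=1}^{N+1}\|f_i\|_{L^{2}(I_i)}^{2},\qquad
\|f\|_{H^{1}_{0}(R)}^{2}=\sum_{i=1}^{N+1}\|\partial_x f_i\|_{L^{2}(I_i)}^{2},
\]
where f_i denotes the restriction of f to the edge e_i, and where H^1_0(R) incorporates the vanishing conditions at the external vertices together with the vertex conditions dictated by the paper's transmission conditions.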
The main result of this paper gives a positive answer if the time of control is large enough and the lengths of the edges are small enough.
Theorem 1. Let l_i > 0 for any i ∈ {1, …, N + 1} satisfy (1.2) and assume that (1.3) holds. Then there exists a positive constant T_min such that the system (1.1) is locally exactly controllable in any time T > T_min. More precisely, there exists r > 0 sufficiently small such that for any states y_0, y_T ∈ L^2(R) of norm at most r there exist N Neumann boundary controls h_i ∈ L^2(0, T) such that the solution y of (1.1) on the tree-shaped network of N + 1 edges satisfies y(·, 0) = y_0 and y(·, T) = y_T for T > T_min.
Remark 1. The same type of result can be obtained for a general tree with N + 1 external vertices: we get the controllability result with only N Neumann controls. For the sake of clarity of notation, we choose to state our result for a simplified tree with only one internal vertex.
Remark 2. An open problem is whether it is possible to reduce the number of controls at the external vertices and still have a controllability result. What is clear is that if we set one of our controls h_i to zero, then on that branch i exact controllability no longer holds. It is known that the single KdV equation controlled from the left (here, through the couplings) is only null controllable [17]. Regarding other similar systems, we can mention the well-known result on the controllability of the wave equation on a network, where the number of controls can be reduced if the ratio of the lengths is not rational (see for instance [14]). In our case the conclusion is not easy to obtain.
In order to prove Theorem 1 we first prove the exact controllability result for the KdV equation linearized around 0. Our proof is based on an observability inequality for the linear backward adjoint system, obtained by a multiplier approach. We recall that the KdV equation linearized around 0 reads y_t + y_x + y_xxx = 0 on each edge, with the same boundary and transmission conditions. We then get the local exact controllability result for the nonlinear KdV equation by applying a fixed point argument. The drawback of this method is that we do not obtain sharp conditions on the lengths l_i and on the time of control T_min. However, we obtain an explicit observability constant.
The paper is organized as follows. Section 2 is devoted to the necessary preliminary step dealing with the well-posedness and regularity of the solutions of the linear and nonlinear KdV equation.
Section 3 will develop the proof of the local controllability result stated in Theorem 1 with a first step concerning the linearized KdV equation and a second step dealing with the original nonlinear system.
Well-posedness and regularity results
In this section, we follow [23] (see also [2,9,11]). We first study the homogeneous linear system (without control), then the linear KdV equation with regular initial data and controls and, by density and the multiplier method, with less regularity on the data. Secondly, we consider the case of the linear system with a source term in order to pass to the nonlinear KdV equation by a fixed point argument.
Study of the linear equation
We begin by proving the well-posedness of the linear KdV equation (1.4) with h_i = 0 for any i ∈ {2, …, N + 1}. We consider the operator A acting edge-wise as Ay = −y_x − y_xxx, with domain D(A) ⊂ L^2(R) consisting of sufficiently regular functions satisfying the boundary conditions at the external vertices and the transmission conditions at the central node. Then we can rewrite the homogeneous linear KdV equation (1.4) as the abstract Cauchy problem (2.5). It is not difficult to show that the adjoint of A, denoted by A*, acts edge-wise as A*z = z_x + z_xxx on a domain D(A*) defined by dual boundary conditions, and that, under condition (2.6) on the coefficients, the operators A and A* are dissipative.
Proof. We first prove that the operator A is dissipative: for y ∈ D(A), integrating by parts and using the Cauchy-Schwarz inequality, we find that if we take α_i and β_i such that (2.6) holds, then ⟨Ay, y⟩_{L^2(R)} ≤ 0, which means that the operator A is dissipative. We now prove the same for the adjoint operator: proceeding in the same way, if we take α_i and β_i such that (2.6) holds, then ⟨A*z, z⟩_{L^2(R)} ≤ 0, which means that the operator A* is dissipative as well. Consequently, A generates a strongly continuous semigroup of contractions S on L^2(R) (see [22]).
We denote by {S(t), t ≥ 0} the semigroup of contractions associated with A. For any y_0 ∈ L^2(R) there exists a unique mild solution y = S(·)y_0 ∈ C([0, T], L^2(R)) of (2.5). Moreover, if y_0 ∈ D(A), then the solution of (2.5) is classical. We now prove the well-posedness result for the linear equation (1.4) with regular initial data and controls. More precisely, we assume that the N boundary controls h_i are regular enough; this choice is possible by taking, for instance, for all i ∈ {2, …, N + 1}, suitable functions lifting the boundary data, from which we can define a function φ on the whole network.
We now define z = y − φ, which satisfies a system of the form (2.7) with a source term g(x, t) determined by φ. We deduce from classical results of semigroup theory (see [22]) and from the fact that A generates a strongly continuous semigroup of contractions on L^2(R) that there exists a unique classical solution z of (2.7). Consequently, there exists a unique solution y of (1.4).
We now study the same system but with less regularity on the data, using a density argument and the multiplier method.
KdV linear equation with a source term
In order to prove the well-posedness result for the nonlinear KdV equation (1.1), we use a well-posedness and regularity result for the linear KdV equation with a source term f: let y_0 ∈ L^2(R), f be a source term (say f ∈ L^1(0, T; L^2(R))) and h_i ∈ L^2(0, T) for any i ∈ {2, …, N + 1}. Then there exists a unique solution y ∈ C([0, T], L^2(R)) ∩ L^2(0, T; H^1_0(R)). Moreover, there exists C > 0 such that the estimate (2.17) holds. Proof. Using Proposition 3, it suffices to consider the case y_0 = 0 and h_i = 0 for any i ∈ {2, …, N + 1}. Since A generates a strongly continuous semigroup of contractions on L^2(R), there exists a unique mild solution y ∈ C([0, T], L^2(R)) (see [22]) given by Duhamel's formula, and there exists C > 0 such that the corresponding bound holds. It remains to prove that y ∈ L^2(0, T; H^1_0(R)) and that the full estimate (2.17) is satisfied. To prove this we follow exactly the steps of the proof of Proposition 3, paying attention to the fact that the right-hand side terms are no longer homogeneous but involve the source f.
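The Duhamel formula invoked in this proof is the standard one; with y_0 = 0 and h_i = 0 it reads
\[
y(t)=\int_0^{t}S(t-s)\,f(s)\,ds,\qquad t\in[0,T],
\]
and, since \(\|S(t)\|\le 1\), it immediately gives the bound \(\sup_{t\in[0,T]}\|y(t)\|_{L^{2}(R)}\le\|f\|_{L^{1}(0,T;L^{2}(R))}\) in C([0, T]; L^2(R)).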
Well-posedness result of the nonlinear equation
We endow the space B := C([0, T], L^2(R)) ∩ L^2(0, T; H^1_0(R)) with its natural norm ‖·‖_B. To prove the well-posedness result for the nonlinear system (1.1), we follow [12] (see also [9]).
The first step is to show that the nonlinear term y y_x can be considered as a source term of the linear equation (2.16).
Proof. The proof can be found in [23] or [9].
where r > 0 is chosen small enough later. Given y ∈ B, we consider the map Φ : B → B defined by Φ(y) = ỹ, where ỹ is the solution of the linear system (2.16) with source term −y y_x. Clearly, y ∈ B is a solution of (1.1) if and only if y is a fixed point of the map Φ. From (2.17) and Proposition 5 we get an estimate of ‖Φ(y)‖_B in terms of the data and of ‖y‖_B; moreover, for the same reasons, we get an estimate of ‖Φ(y_1) − Φ(y_2)‖_B. We consider Φ restricted to the closed ball B(0, R) = {y ∈ B, ‖y‖_B ≤ R} with R > 0 to be chosen later. Then Φ maps B(0, R) into itself and is a contraction provided we take R and r satisfying suitable smallness conditions. Consequently, we can apply the Banach fixed point theorem and the map Φ has a unique fixed point. We have then shown the following proposition. Proposition 6. Let T > 0, l_i > 0 and assume that (2.6) holds. Then there exist r > 0 and C > 0 such that for every y_0 ∈ L^2(R) and controls h_i ∈ L^2(0, T) with data of norm at most r, there exists a unique y ∈ B solution of system (1.1), which satisfies a bound of the form ‖y‖_B ≤ C(‖y_0‖_{L^2(R)} + Σ_i ‖h_i‖_{L^2(0,T)}).
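The estimates alluded to in this fixed-point argument have the standard shape; as a sketch (with generic constants C_1, C_2 standing in for the constants, such as C_5, used in the paper), they would read
\[
\|\Phi(y)\|_{B}\le C_1\Big(\|y_0\|_{L^{2}(R)}+\sum_{i=2}^{N+1}\|h_i\|_{L^{2}(0,T)}\Big)+C_2\,\|y\|_{B}^{2},
\]
\[
\|\Phi(y_1)-\Phi(y_2)\|_{B}\le C_2\,\big(\|y_1\|_{B}+\|y_2\|_{B}\big)\,\|y_1-y_2\|_{B},
\]
so that Φ maps B(0, R) into itself and is a contraction as soon as, for instance, \(C_1 r+C_2R^{2}\le R\) and \(2C_2R<1\), which is the same type of smallness condition on r and R invoked at the end of Section 3.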
Controllability results
We first prove the exact controllability result for the linear system (1.4) by using a duality argument and the multiplier method in order to prove the observability inequality. Then, we obtain the local exact controllability result for the nonlinear system (1.1) by a fixed point theorem.
Linear system
Due to the linearity of the system (1.4), we can consider the case of null initial data, i.e., take y_0 = 0 on R. It can easily be seen that the exact controllability of (1.4) is equivalent to the surjectivity of the operator Λ : (h_2, …, h_{N+1}) ∈ L^2(0, T)^N ↦ y(·, T) ∈ L^2(R), where y = (y_1, y_2, …, y_{N+1}) is the solution of (1.4) when the controls (h_2, …, h_{N+1}) are chosen.
It is known that the surjectivity of this operator is equivalent to an observability inequality for the adjoint operator of Λ, which is expressed in terms of boundary traces of ϕ = (ϕ_1, …, ϕ_{N+1}), the solution of the backward adjoint system (3.18). The first step is to prove an observability inequality for the backward adjoint system (3.18), stated below and obtained by a multiplier method.
Theorem 2. Let l_i > 0 for any i ∈ {1, …, N + 1} satisfying (1.2) and assume that (1.3) holds. There exists a positive constant T_min such that if T > T_min, then the observability inequality (3.19) holds.
Proof. By multiplying ϕ_t + ϕ_x + ϕ_xxx = 0 by qϕ and integrating by parts on R × [s, T], we get after some computations the multiplier identity (3.20).
• Let us first choose q(x, t) = t and s = 0 in (3.20). Then we obtain (3.21) and, using the boundary condition of (3.18) at the internal node 0, together with the Poincaré inequality and the estimate of the trace of the function, we bound the corresponding terms. As we cannot estimate the trace of ϕ_x(0, t), we need to use the strong hypothesis (1.3).
Then, from (3.21) we obtain a first intermediate estimate.
• Taking now q(x, t) = 1 and s = 0 in (3.20), we obtain a further identity and, using again the boundary condition of (3.18) at the internal node 0, a second estimate.
• Picking s = 0, q_1(x, t) = x and q_i(x, t) = (α_i/(N β_i)) x in (3.20), we obtain (3.24). Using again the boundary condition of (3.18) at the internal node 0, and then the Poincaré inequality and the fact that the operator A* is dissipative, we deduce from (3.24) the key intermediate bound, in which we use the notation Γ for the constant involved. Note that Γ > 0 under a condition which is weaker than the hypothesis (1.2).
In order to obtain the observability inequality (3.19) from (3.26), we have to impose a condition on T, which leads us to (3.27).
This condition on the time T also gives rise to another condition on L, which is equivalent to hypothesis (1.2). This finishes the proof of Theorem 2, where the existence of the time T_min is given by condition (3.27) and the observability constant is explicit. Remark 3. From the previous proof one can also deduce a variant of the estimate under a related condition on the lengths. Once the observability inequality is established as in Theorem 2, the exact controllability result of the linear system (1.4) is obtained by duality and the Hilbert Uniqueness Method (HUM); thus the corresponding linear controllability statement holds. Finally, in the nonlinear argument the contraction requirement imposes the second condition 2C_5 R < 1; these conditions are satisfied, for instance, if we choose r and R small enough, which ends the proof of Theorem 1.
"Mathematics"
] |
Contribution of the GABAergic System to Non-Motor Manifestations in Premotor and Early Stages of Parkinson’s Disease
Non-motor symptoms are common in Parkinson’s disease (PD) and they represent a major source of disease burden. Several non-motor manifestations, such as rapid eye movement sleep behavior disorder, olfactory loss, gastrointestinal abnormalities, visual alterations, cognitive and mood disorders, are known to precede the onset of motor signs. Nonetheless, the mechanisms mediating these alterations are poorly understood and probably involve several neurotransmitter systems. The dysregulation of GABAergic system has received little attention in PD, although the spectrum of non-motor symptoms might be linked to this pathway. This Mini Review aims to provide up-to-date information about the involvement of the GABAergic system for explaining non-motor manifestations in early stages of PD. Therefore, special attention is paid to the clinical data derived from patients with isolated REM sleep behavior disorder or drug-naïve patients with PD, as they represent prodromal and early stages of the disease, respectively. This, in combination with animal studies, might help us to understand how the disturbance of the GABAergic system is related to non-motor manifestations of PD.
INTRODUCTION
Parkinson's disease (PD) is the second most common neurodegenerative disorder with a prevalence of between 1% and 4% in over-60-year-olds (Tysnes and Storstein, 2017). The diagnosis of PD currently depends on the identification of motor clinical features, including rest tremor, rigidity and bradykinesia. In addition, patients with PD develop a wide range of non-motor manifestations including cognitive impairment and dementia, mood and sleep disturbances, sensory abnormalities and autonomic nervous system dysfunction (Poewe, 2008). It has been estimated that the first motor signs appear when 50% to 80% of dopaminergic neurons in the substantia nigra pars compacta have been lost (Cheng et al., 2010). Thus, by the time of diagnosis, brain injury has been ongoing for years and any attempt at neuroprotection at this stage might be unsuccessful. Great efforts are being made to detect markers of neuronal dysfunction early in the course of the disease (Postuma and Berg, 2019). In relation to this, it is increasingly recognized that non-motor symptoms not only accompany but also precede motor signs in PD (Poewe, 2008). This is consistent with the Braak PD staging system, which suggests that α-synuclein deposition starts in areas involved in sleep regulation, olfaction or autonomic function before affecting the basal ganglia or cerebral cortex (Braak et al., 2003). The array of premotor symptoms might help to identify patients at high risk of developing α-synucleinmediated neurodegenerative diseases, such as PD, dementia with Lewy bodies (DLB) or multiple system atrophy (MSA).
While the mechanisms for motor impairment are fairly well established, the neuroanatomical and molecular substrates for non-motor manifestations are far from clear. Current evidence suggests that neurotransmitters, such as acetylcholine, serotonin, noradrenaline, glutamate and gamma-aminobutyric acid (GABA), play an important role in the pathophysiology of PD (Sanjari Moghaddam et al., 2017). GABA is the main inhibitory neurotransmitter in the central nervous system (CNS), acts through GABA A and GABA B receptors, and is primarily released by local interneurons to regulate cortical and subcortical microcircuits (Figures 1A, B). GABAergic signaling modulates a wide range of physiological functions, including sensory perception, information processing and cognition. In patients with PD, GABAergic dysregulation has been observed in the basal ganglia postmortem and in vivo with magnetic resonance spectroscopy (Kish et al., 1986; Emir et al., 2012; O'Gorman Tuura et al., 2018). Recently, it has been shown that striatal dopaminergic axons co-release GABA (Tritsch et al., 2012; Tritsch et al., 2014), which suggests that dopaminergic neurodegeneration could lead to GABA decline in basal ganglia circuits (O'Gorman Tuura et al., 2018). Considering that GABAergic networks regulate calcium-mediated mechanisms, like mitochondrial function and oxidative stress, loss of GABA inhibitory tone would facilitate accumulation of abnormal levels of intracellular calcium, triggering neurodegenerative processes. Consistent with this idea, it has been shown that GABA agonists, such as baclofen or bumetanide, relieve motor symptoms and protect dopaminergic cell bodies in mouse models of PD (Hajj et al., 2015; Lozovaya et al., 2018). Nonetheless, GABAergic alterations might go beyond the basal ganglia. Unfortunately, few studies have investigated how GABAergic or other neurotransmitter systems may induce or modulate non-motor symptoms of PD. Identifying the separate role of each pathway may allow us to develop novel pharmacological compounds targeted to specific symptoms. Seeking to provide up-to-date information about the role of the GABAergic system, in this Mini Review we focus on its ability to explain some of the non-motor manifestations that appear early in PD (Table 1).
FIGURE 1 | The GABAergic system and non-motor symptoms in Parkinson's disease. (A) GABA receptors. The inhibitory neurotransmitter GABA acts through the ionotropic GABA A receptor or the metabotropic GABA B receptor to reduce the membrane potential. The activation of GABA A receptors allows chloride (Cl−) entry into the cytoplasm, while GABA B receptor activation leads to a cellular cascade resulting in calcium (Ca2+) channel deactivation and potassium (K+) channel opening. (B) Schematic representation of cortical and subcortical local microcircuit organization of GABAergic cells. Inhibitory GABAergic cells are primarily locally projecting neurons with a broad array of anatomical and physiological properties. The effect resulting from the inhibition exerted by GABAergic cells depends on their sensitivity to incoming stimuli, their firing properties and the subcellular domain of excitatory cells targeted by each interneuron. The diversity of GABAergic cells provides the brain with extensive computational power to regulate sensory and cognitive processes. (C) Brain areas associated with non-motor symptoms in Parkinson's disease. Each color corresponds to a specific non-motor symptom and the associated area of the presumed GABAergic dysfunction.
REM SLEEP BEHAVIOR DISORDER AND PONTINE GABAERGIC CELL DYSFUNCTION IN PD
Sleep disturbances are the most common non-motor manifestations of PD, with wide variability in the reported prevalence (66% to 98%) (Garcia-Borreguero et al., 2003). Sleep-related abnormalities in PD include insomnia, sleep fragmentation, restless legs syndrome, excessive daytime sleepiness and rapid eye movement (REM) sleep behavior disorder (RBD), among others. Some sleep abnormalities occur in early stages of the disease, even during the prodromal phase, including RBD (Iranzo et al., 2006), restless legs syndrome (Wong et al., 2014) and excessive daytime sleepiness (Abbott et al., 2005). Nonetheless, only idiopathic RBD (iRBD) has consistently been shown to be an early predictor of the development of PD (Iranzo et al., 2017; Postuma and Berg, 2019).
RBD is a parasomnia characterized by the loss of the normal muscle atonia of REM sleep. The diagnostic hallmark is excessive electromyographic activity during REM sleep as documented by polysomnography (Ferini-Strambi et al., 2016). Patients with RBD report the enactments of dreams, including kicking, punching or talking. In the absence of other neurological signs or CNS lesions, patients with iRBD are at high risk of developing α-synuclein-mediated neurodegenerative diseases in the years following the diagnosis (Hogl et al., 2018). It has also been suggested that the presence of RBD in PD patients is associated with an aggressive phenotype, these patients showing a higher density of α-synuclein aggregates (Knudsen et al., 2018).
Therefore, in recent years, special attention has been paid to PD-related non-motor manifestations and symptom progression in iRBD patients, seeking to find novel biomarkers of PD. Although the precise pathophysiological mechanisms for iRBD have not been fully determined, iRBD seems to be attributable to neurochemical imbalances in sleep regulatory systems (Boucetta et al., 2014). Previous studies have pointed toward a significant and specific neurodegeneration of GABA or glycine-containing neurons in the ventral medulla, such as in the nucleus raphe magnus and the ventral gigantocellular, alpha gigantocellular and lateral paragigantocellular reticular nuclei that directly project to spinal motor neurons to produce atonia during REM sleep (Iranzo, 2018) (Figure 1C). This hypothesis is supported by preclinical studies in transgenic mice that exhibit an RBD phenotype when glycine and GABA receptor function is impaired (Brooks and Peever, 2011). Moreover, allosteric agonists that bind at the α/γ subunit interface of GABA A receptors, i.e., benzodiazepines, including clonazepam, triazolam or alprazolam, are the first-line therapy in iRBD (Anderson and Shneerson, 2009), and the effectiveness of this treatment could be explained by GABAergic neurotransmission disruption in prodromal stages of PD.
OLFACTORY LOSS AND ITS RELATIONSHIP WITH GABAERGIC NEUROTRANSMISSION IN PD
Olfactory dysfunction is observed in more than 90% of PD patients (Doty, 2012), frequently precedes the onset of motor symptoms (Fantini et al., 2006; Ross et al., 2008; Postuma et al., 2009), and predicts the early conversion of iRBD to PD or DLB (Mahlknecht et al., 2015; Fereshtehnejad et al., 2017). The mechanisms responsible for olfactory dysfunction in PD are currently unknown. Magnetic resonance imaging studies have shown significantly smaller olfactory bulb volumes in patients with PD than controls (Brodoehl et al., 2012; Li et al., 2016; Tanik et al., 2016), although other authors failed to find such differences (Altinayar et al., 2014). Axonal and myelin damage of olfactory tracts has also been observed using diffusion tensor imaging (Scherfler et al., 2006; Scherfler et al., 2013). These results have been confirmed by postmortem analysis of olfactory bulbs, in which global glomerular voxel volume was found to be smaller in five PD cases than six healthy controls (Zapiec et al., 2017). Moreover, hyposmia has been related to pathological changes in other areas of the olfactory system, such as the anterior olfactory nucleus or basolateral nucleus of the amygdala (Pearce et al., 1995; Harding et al., 2002). On the other hand, sensory perception disturbances might represent subtle alterations of normal functioning that precede neuronal degeneration. Changes in network connectivity of brain structures related to olfaction have already been described (Westermann et al., 2008; Bohnen et al., 2010; Wen et al., 2017), and these functional abnormalities may arise from iron and sodium deposition (Gardner et al., 2017).
Table 1 (excerpt): retinal GABA depletion alters visual function (Hilgen et al., 2015); GABA antagonism in visual cortex decreases stimulus orientation and direction selectivity (Katzner et al., 2011); GABA levels in visual cortex are predictive of visuospatial abilities (Cook et al., 2016); visual hallucinations are associated with decreased occipital GABA in PD (Firbank et al., 2018).
The limited literature about the precise anatomy and physiology of the human olfactory bulb makes it difficult to assess the mechanisms related to olfactory dysfunction in humans. In this regard, animal studies provide a wealth of knowledge, as the olfactory bulb of rodents has been well characterized. It has been shown that interneurons-GABA-releasing cells-are essential for odor detection, and functionally distinct GABAergic circuits within the olfactory bulb of rodents play different roles in olfactory coding. The tonic inhibition exerted by these cells is thought to regulate the sensitivity of odor detection and odor perception in the mammalian brain (Pirez and Wachowiak, 2008;Shao et al., 2009;Acebes et al., 2011) (Figure 1C). Even though animal findings suggest that interneuron connectivity is the major determinant of odor perception, whether the loss of inhibitory synapses contributes to olfactory changes in PD in humans needs further research.
VISUAL DISTURBANCES
Among primary visual functions, low-contrast visual acuity, contrast sensitivity and color vision are typically affected in PD (Weil et al., 2016). Patients with drug-naïve PD or iRBD also show decreased contrast sensitivity (Righi et al., 2007;Marques et al., 2010), and abnormal color vision discrimination has been described in iRBD, these patients having a 3-fold higher risk of conversion (Postuma et al., 2015). Nevertheless, color discrimination is not consistently impaired in the early stages of PD, indicating that color vision abnormalities may represent a specific PD phenotype (Vesela et al., 2001). Indeed, Postuma and colleagues reported that abnormal color vision in iRBD was a stronger predictor of primary dementia than parkinsonism (Postuma et al., 2015), which is in line with findings in PD, RBD increasing the risk of cognitive decline (Pagano et al., 2018). Patients with iRBD or de novo PD also display visuoconstructional and visuoperceptual disturbances that may be related to nondopaminergic impairment (Ferini-Strambi et al., 2004;Gagnon et al., 2009;Aarsland et al., 2009a;Marques et al., 2010;Fantini et al., 2011;Kim et al., 2011;Ota et al., 2016).
In vivo neuroimaging studies in newly diagnosed and drugnaïve PD patients have detected structural alterations in the visual pathway, ranging from thinning of inner retinal layers to increased optic radiation mean diffusivity and reduced visual cortical volumes (Arrigo et al., 2017;Ahn et al., 2018;Murueta-Goyena et al., 2019), which might explain some visual disturbances. There is, however, a growing body of evidence highlighting the role of GABA in perceptual aspects of vision.
Retinal amacrine cells co-release dopamine and GABA and the degeneration of these specialized cells has been suggested to cause primary visual dysfunction, although this hypothesis has not been confirmed (Nguyen-Legros, 1988). In line with this, pharmacological depletion of endogenous retinal GABA with allylglycine induces changes in contrast sensitivity (Hilgen et al., 2015). On the other hand, animal studies have shown that GABA A receptor antagonist infusion in cat primary visual cortex decreases selectivity for stimulus orientation and direction, but not contrast sensitivity (Katzner et al., 2011). More recently, it has been observed that GABA levels measured by magnetic resonance spectroscopy are strong predictors of visuospatial abilities in healthy adults (Cook et al., 2016), and increasing GABA activity with systemic midazolam injections decreases visual sensitivity, preferentially affecting medium-to-high spatial frequencies and low temporal frequencies (Blin et al., 1993). Additionally, higher GABA concentrations in the visual cortex, as well as administration of the GABA agonist lorazepam, induce slower perceptual dynamics (van Loon et al., 2013).
These findings suggest that GABA signaling plays a central role in visual perception and that a disturbance of this circuit at any level of the visual pathway could influence proper sensory processing. Consistent with this, recent studies show that PD patients with visual hallucinations have low occipital GABA concentrations (Firbank et al., 2018), and complex visual hallucinations in DLB are associated with altered GABAergic synaptic activity (Khundakar et al., 2016), which further supports the view that dysregulation of the GABAergic system is involved in the visual pathway of PD (Figure 1C). Whether this system is affected in the retina and visual cortex of all PD patients, and from early stages, remains to be determined.
COGNITIVE DYSFUNCTION AND GABAERGIC SIGNALING IN FRONTOSTRIATAL CIRCUITS
Cognitive manifestations are frequently reported in PD, with a prevalence of 20-25% for mild cognitive impairment and 30% for dementia. It is estimated that PD patients have a 3-to 6-fold higher risk of developing dementia than age-matched controls (Svenningsson et al., 2012). Cognitive dysfunction is thought to be one of the key premotor manifestations of PD. At diagnosis, 15-20% of PD patients have mild cognitive impairment (Aarsland et al., 2009b) and several studies in patients with iRBD have identified cognitive disturbances, including delayed verbal memory, poorer decision-making, worse attention and slower processing speed, these domains being predictive of future risk of developing PD or DLB (Fantini et al., 2011;Terzaghi et al., 2013;Youn et al., 2016;Genier Marchand et al., 2017). Thus, early onset cognitive abnormalities are mainly dependent on the frontal lobe (Ferini-Strambi et al., 2004;Massicotte-Marquez et al., 2008;Sasai et al., 2012;Chahine et al., 2018).
Despite the evidence of neuropathological abnormalities in frontal brain areas in PD, their molecular and cellular alterations are poorly understood. Several studies have suggested that cognitive impairment in PD is attributable to neurotransmitter dysregulation rather than frank neurodegeneration (Kehagia et al., 2010;Ray and Strafella, 2012). Dopamine and acetylcholine deficiencies in frontostriatal pathways play a major role in cognitive impairment in PD, but the contribution of the other neurotransmitter systems remains less certain. Regarding the role of GABA, in frontal cortex global transcriptional changes of GABAergic neurotransmission have been observed in DLB patients (Santpere et al., 2018). It has also been shown that mRNA expression of the GABA-synthesizing enzyme glutamic acid decarboxylase-67 (GAD67) (Lanoue et al., 2010) and the calcium-binding protein parvalbumin (PV) (Lanoue et al., 2013)-two key markers of GABAergic cells-is low in the dorsolateral prefrontal cortex of PD patients without evidence of cell loss, further suggesting the downregulation of inhibitory neurotransmission in the frontal cortex ( Figure 1C). In basal ganglia, in vivo GABA concentration changes have been detected in PD patients performing cognitive tasks (Buchanan et al., 2015). Interestingly, boosting GABAergic neurotransmission by zolpidem administration in early stage PD patients modulates aberrant beta-frequency oscillations (Hall et al., 2014), and the desynchronization of low-frequency activity seems to restore cognitive functions (Hall et al., 2010). Although these findings point towards decreased GABAergic activity in frontostriatal circuits in PD and DLB, whether GABAergic neurotransmission is also perturbed in premotor stages of PD has not been established, and its contribution to cognitive dysfunction needs to be elucidated in future studies.
GABA IN ANXIETY AND DEPRESSION
Anxiety and depression are common non-motor symptoms of PD, with reported prevalence rates of 20-40% (Chen and Marsh, 2014) and 50% (Reijnders et al., 2008), respectively, and may precede motor signs (Jacob et al., 2010). Notably, RBD patients score worse on anxiety and depression scales than controls and even PD patients (Barber et al., 2017). Although the exact neurobiological mechanisms that underlie anxiety and depression have not been fully elucidated, they seem to be intrinsically interrelated.
Pharmacological studies in both humans and animals have revealed that positive modulators of GABA A receptors are anxiolytic and antidepressant, whereas negative modulators produce anxiogenic and depressive-like effects (Kalueff and Nutt, 2007; Mohler, 2012). Agents that enhance GABA A receptor conductance (e.g., benzodiazepines) and GABA metabolism (e.g., valproate, vigabatrin, and tiagabine) exert anxiolytic effects, and it seems that partial agonists of α2/α3 GABA A receptors, such as TPA-023, may also serve as antidepressants (Mohler, 2012). Furthermore, genetic studies implicate GABA-receptor dysfunction in the risk of developing anxiety and depression (Kalueff and Nutt, 2007). Recent evidence suggests that somatostatin contributes to the pathology of anxiety and depression (Fuchs et al., 2016; Fee et al., 2017), levels of this GABAergic marker being low in cerebrospinal fluid and induced pluripotent cells of PD patients (Dupont et al., 1982; Iwasawa et al., 2019). Nonetheless, there is still a lack of studies exploring the role of somatostatin in anxiety and depressive disorders in PD.
GASTROINTESTINAL SYMPTOMS AND GABA SIGNALING IN THE ENTERIC NERVOUS SYSTEM
Gastrointestinal disturbances fall within the spectrum of autonomic manifestations of PD patients. Hypersalivation, dysphagia, nausea, gastroparesis, small intestinal dysfunction, slow transit constipation and defecatory dysfunction have been attributed to α-synuclein-mediated small fiber neuropathy of the enteric nervous system (ENS) and to the neurodegeneration of the enteric branches of the vagus nerve in the brainstem (Pfeiffer, 2018). Among the gastrointestinal symptoms, constipation is the most frequent manifestation in PD, and recent evidence suggests that it might also be one of the most common disturbances in prodromal PD (Stirpe et al., 2016). A multicenter study of 318 patients with polysomnography-confirmed iRBD concluded that they had substantially more autonomic symptoms than controls (SCOPA-AUT questionnaire), gastrointestinal symptoms being the most prominent domain (Ferini-Strambi et al., 2014). Nonetheless, gastric emptying measured with the 13C-octanoate breath test showed that only drug-naïve and early-stage Parkinson's disease patients had delayed gastric emptying, the authors suggesting that changes in structures modulating gastric motility might not be sufficiently severe in iRBD (Unger et al., 2011).
The last three decades have seen an expansion in the literature on the role of GABA in the control of gastrointestinal function, including motility and inflammatory responses (Auteri et al., 2015a; Auteri et al., 2015b). GABA has been identified as an important modulator of gastrointestinal tract function. This neurotransmitter can stimulate or inhibit the enteric neurons acting through GABA A or GABA B receptors (Auteri et al., 2015a). Its role is particularly important in the colon, where it modulates the peristaltic reflex. On the other hand, enteric inflammation occurs in PD and has been related to the initiation and progression of the disease (Houser and Tansey, 2017). Nonetheless, it has yet to be determined why the production of pro-inflammatory cytokines takes place in the enteric tract. The purinergic system controls enteric inflammation, but GABA also has a major role in immune cell activity and inflammatory events in the gastrointestinal tract (Jin et al., 2013). Topiramate, an anti-epileptic drug that acts as a GABA A agonist, reduces gastrointestinal inflammation in rats (Dudley et al., 2011), identifying GABA as a putative neuroimmune modulator. A better understanding of the relationship of GABA signaling with intestinal motility and inflammation is necessary, however, to reveal a possible functional link between this neurotransmitter, the ENS, and the gastrointestinal symptoms of PD.
FINAL REMARKS AND CONCLUSIONS
Current evidence supports the view that PD is a degenerative disorder that affects multiple systems and presents with several non-motor symptoms. Over recent years, the importance of early, non-motor manifestations of PD has been increasingly recognized, as they may help to identify patients at high risk of developing α-synucleinopathies. Even though the neuronal circuits for motor symptoms are fairly well understood, the pathophysiological mechanisms for perceptual, cognitive, mood and autonomic disturbances of PD remain unclear.
Here, we report evidence consistent with the view that the GABAergic system is altered in PD and may contribute to non-motor symptoms that appear early in disease progression. Nonetheless, the literature in this field is dominated by non-placebo-controlled and postmortem studies, generally based on small series and providing low-level evidence. To summarize, based on current findings, PD patients in premotor stages have anxiety and depression and alterations in the olfactory system, visual perception and visuospatial abilities, frontostriatal-related cognition, and gastrointestinal function. The neurobiological correlates of these deficits are unclear, in part because of the complex dynamic interactions between several neurotransmitter systems. Still, the dysfunction of GABAergic neurons in the ventral medullary reticular formation seems to be linked to RBD. Moreover, preclinical studies show the relevance of interneurons in odor detection and the causal role of GABA in anxiety and depressive disorders, but we are far from establishing whether this also occurs in PD. On the other hand, disturbance of GABA signaling by pharmacological compounds affects visual processing and cognition, and GABA levels in the visual cortex are low in PD patients with visual hallucinations. It has also been shown that GABA controls gastrointestinal function, although it is not known whether this is associated with the gastrointestinal symptoms reported by PD patients. All these findings suggest that intervening in GABAergic signaling might modulate non-motor manifestations of PD and provide a novel avenue for non-dopaminergic therapy.
Nevertheless, there is a paucity of replication and large case-control studies. Future research should include in vivo longitudinal studies that examine the link between alterations in the GABAergic system and early non-motor symptoms by exploiting advances in PET ligands, magnetic resonance spectroscopy and CSF biomarkers. Preclinical studies might help to investigate the effects of GABA in the pathogenesis of non-motor symptoms, but we suggest that identifying the neurotransmitter deficits that correlate with clinical severity should be the mainstay for guiding future treatment studies.
AUTHOR CONTRIBUTIONS
AM-G conceptualized and wrote the manuscript. AA organized and prepared the manuscript. JG-E and IG contributed to writing and reviewing the manuscript.
FUNDING
This study was partially funded by the Michael J. Fox Foundation (2014 Rapid Response Innovation Awards; Grant 10189), the Carlos III Health Institute through Projects PI14/00679 and PI16/00005, and a Juan Rodes Grant (JR15/00008) (I.G.) (cofunded by the "Investing in Your Future" European Regional Development Fund/ European Social Fund programme), the Department of Health of the Basque Government through Project 2016111009, and EITB/ BIOEF telemarathon for Neurodegenerative Diseases (BIO17/ ND/010/BC). | 5,374.6 | 2019-10-30T00:00:00.000 | [
"Psychology",
"Biology"
] |
Transport Theorem for Spaces and Subspaces of Arbitrary Dimensions
Using the apparatus of traditional differential geometry, the transport theorem is derived for the general case of an M-dimensional domain moving in an N-dimensional space, £ M N. The concepts of curvatures and normals are illustrated with well-known examples of lines, surfaces and volumes. The special cases where either the space or the moving subdomain is material are discussed. Then, the transport at hypersurfaces of discontinuity is considered. Finally, the general local balance equations for a continuum of arbitrary dimensions with discontinuities are derived.
Introduction
The transport theorem is a fundamental theorem used in formulating the basic conservation and balance laws in continuum mechanics (mass, momentum, and energy), which are adopted from classical mechanics and thermodynamics, where the system approach is normally followed. Analogous to the classical Reynolds transport theorem in continuum mechanics, the surface transport theorem is essential in the study of thin films undergoing large deformations, in epitaxial growth, and in the study of phase boundary evolution. It is also important in the modeling of a singular surface which carries a certain structure of its own as it migrates. There is a vast literature on the transport theorem, and many references can be found in [1-3].
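For orientation, and in notation of our own choosing rather than that of the references just cited, the classical Reynolds transport theorem for a volume V(t) with boundary ∂V(t) whose points move with velocity v reads

\frac{d}{dt}\int_{V(t)} f \, dV \;=\; \int_{V(t)} \frac{\partial f}{\partial t} \, dV \;+\; \int_{\partial V(t)} f \,(\mathbf{v}\cdot\mathbf{n}) \, dA ,

where n is the outward unit normal of ∂V(t). The developments below generalize this familiar statement to an M-dimensional domain moving in an N-dimensional space.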
Betounes formulated and proved the general transport theorem associated with the motion of an arbitrary p-dimensional submanifold in an n-dimensional semi-Riemannian manifold [4]. He used the language and notation of modern differential geometry on manifolds (e.g., [5,6]), which is inconvenient for engineering and physics applications. Here, we formulate and prove the theorem using the language and concepts of traditional differential geometry and tensor calculus (e.g., [7,8]). Moreover, we apply the transport theorem to hypersurfaces of discontinuity and discuss the applications in continuum mechanics.
Petryk and Mroz derived the expressions for the first- and second-time derivatives of integrals and functionals defined on volume and surface domains which vary in time [9]. Their result is more general than the classical transport theorem, as it pertains to piecewise regular surfaces and contains the edge terms. Cermelli et al. proved a transport theorem for smooth surfaces which evolve with time in Euclidean space, expressed in terms of the parameter-independent derivatives [10]. Recently, Sequin et al. extended the 3D transport theorem to rough domains of integration [11].
The need for the transport theorem arises in different contexts and consequently requires different derivation methods. The space-time approach [12] was used in [13] to derive the transport theorem for a moving surface in a moving 3D region. A general transport theorem for moving surfaces based on the theory of generalized derivatives in n-dimensional space is presented in [14].
Two interesting attempts to present a unified approach to the topic of continuum mechanics on arbitrarily moving domains are given in [15,16]. They point out that it is desirable to formulate the transport theorem in a single unified way by using the classical approach, expressed in terms of standard quantities from differential geometry and explicitly displaying the features that are common to all submanifolds, regardless of their finite dimensions.
This paper is organized as follows: In Sections 2 and 3, we consider geometry and kinematics in higher dimensions with special emphasis on the definitions of curvature and normals. Section 4 contains the derivation of the generalized transport theorem. In Section 5, we illustrate the concepts with well-known examples of lines, surfaces and volumes. In Section 6, we consider the dependence on parametrization, i.e., on the choice of coordinates. In Sections 7-9, we consider the cases where the space and/or the moving subdomain are material in the sense of continuum mechanics. In Section 10, we consider a moving domain with hypersurfaces of discontinuity. Finally, in Section 11, we use the transport theorem to formulate the general local balance equations for a continuum of arbitrary dimensions with discontinuities.
Geometry of
where or a u u we denote a typical point of ( ) Equivalently, ( , ) 0, 1, 2,..., where To illustrate dual representations (2) and (3), consider the representations of 1 ( ) V t in 3 E . The family of curves in 3 E , given in the parametric form may be represented as the intersection of circular cylinders: and as hyperbolic paraboloids: Each of the two representations has advantages and further we shall make use of both. While the representation (2) provides a convenient description of kinematics of ( ) M V t , it is dependent on parametrization. On the other hand, the representation (3) is independent of parameterization, i.e., independent of the choice of intrinsic coordinate system a u . It means that any transformation of intrinsic coordinate systems: does not change the representation (3). Consequently, the vectors independent of parameterization a u . Moreover, they are linearly independent because of (4), and may be taken as the base in Consider a different parametrization U such that From this, we have: are the basis vectors of ( ) M V t with respect to coordinate systems a u and U G , respectively.
Moreover, since ( , ) a k x u t in (2) satisfies (3), it follows that: whence we conclude that are also independent of parameterization. Moreover, from (8), it follows that: Further, we make use of vectors ( ) Then, vectors k g at the points x of ( ) M V t may be decomposed as: This relation is of crucial importance for decomposition of any tensor quantity defined on We will often make use of the relation between metric tensors of ( ) M V t defined in two coordinate systems a u and U L , i.e., the relation Then, The metric tensor
Kinematics of
Making use of (3), we obtain: . By d dt we denote the time derivative along a = u const . Note that: represents the scalar-normal velocity in ( ) p n direction which is independent of parameterization. Then, we can write: We will consider the coordinate system A U as a convected system in ( ) M V t . In continuum mechanics, the terms material or Lagrangian coordinates are used for convected coordinates, since the material particles can be labeled by these coordinates and, as such, they do not change their values during the motion of the material body. Note that, for a moving non-material domain, the choice of convected coordinates is arbitrary. However, once chosen, they remain fixed. The final result-the transport theorem-will be expressed in terms independent of the choice of coordinates.
At any time t , we consider the relation as the coordinate transformation between the convected coordinate system U L and intrinsic coordinate system a u . Further, we consider where C L are some constants. Generally, we denote by d dt the time derivative of any quantity In particular: From (5), we obtain the relation between velocities v and V of the point From (12), (10), and (7), we conclude that: thus proving that the scalar-normal velocity in ( ) p n direction is independent of parameterization. Instead of (13), a more compact representation is given by: Alternatively, We may also write: In particular, for the hypersurface
Generalized Transport Theorem in
, then according to [8] (p. 262):
Next, making use of (5), we write: Thus, in view of this and (15) we have: where M R is defined with respect to convected coordinates U L , and therefore the integration with respect to U L is independent of t . Now, we show in Appendix A, that: where The geometric meaning of ( ) which is independent of parameterization. Further, The expression
Remark 1.
To justify its name, we write in a unique compact invariant form as: where n is a unit vector orthogonal to ( ) M R t . Then, we have: . Accordingly, we have simple expression for V : The has been discussed previously in [10,15].
, it easy to see that: Therefore, After substituting (18) and (19) into (16), we obtain: Further, using divergence theorem we obtain (when the boundary consists of "material" points defined by convected coordinates, see Appendix B): is intrinsic to the motion of ). Finally, the transport theorem with respect to convective coordinate reads: We emphasize that the first integral is intrinsic to ( ) M R t , while the second integral is intrinsic with respect to ( ) ¶ M R t . Since the first integral on the right-hand side of (22) is invariant to any parameterization we may equally apply it for parameters a u . In this case, the scalar normal velocity of ( ) ¶ M R t with respect to parameters a u , i.e., needs to be used (Appendix B). Then, we write the generalized transport theorem with respect to non-convective coordinates a u in the form:
Examples
In this section, we consider familiar special cases and motivate the subsequent analysis of material domains and propagating discontinuity fronts. The section ends with the analysis of the capillary flow problem, which encompasses many of the special cases.
In all the cases below, the transport theorem also holds for the coordinates u^a, i.e., (22), when we make the substitution defined in Appendix B.
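As a hedged point of reference for the examples that follow (the notation here is ours and is meant only as a structural sketch of the convected-coordinate result, not a verbatim reproduction of Eq. (22)), the transport identity for a moving M-dimensional domain V_M(t) can be written as

\frac{d}{dt}\int_{V_M(t)} f \, dV \;=\; \int_{V_M(t)} \Big( \dot{f} + f \, \mathrm{div}_{V_M}\mathbf{v} \Big)\, dV ,

where the overdot is the time derivative at fixed convected coordinates, v is the velocity of the points of V_M(t), and div_{V_M} denotes the intrinsic (surface) divergence on V_M(t); decomposing div_{V_M} v into tangential and normal parts produces the curvature and normal-velocity terms that appear in the theorem.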
3D Domain Moving in 3D Space
On the other hand, the familiar form of the transport theorem for the material body V(t) and the material field F(x, t) given per unit mass [ρ(x, t) is the mass density] reads: The volume V(t) in (24) is the material volume and the motion of each material point
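The familiar material form referred to above can be sketched as follows (our notation; presumably the content of Eq. (24), though the original symbols may differ). For a material volume V(t), mass density ρ(x, t), and a field F(x, t) given per unit mass, conservation of mass allows the density to be carried through the time derivative:

\frac{d}{dt}\int_{V(t)} \rho \, F \, dV \;=\; \int_{V(t)} \rho \, \frac{DF}{Dt} \, dV ,

where D/Dt denotes the material derivative following the fluid particles.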
Closed Line Moving in a 3D Space
Define the unit tangent, normal, and binormal vectors t, m, and b in the standard way, together with the curvature. As a special case, we compute the change of the total line energy arising from the line energy density. In anticipation of the next example, we establish the relationship between the normal line velocity V_m m and one of the surfaces. Let the surface with the normal n^(2) be a material surface, moving with the velocity w^(2). The line then glides on this surface with the velocity x ν^(2) relative to the surface (2). The unit normal-tangent vectors point outside the respective surfaces: The normal component of the line velocity and the normal-tangent component relative to the surface (1) are then:
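As a hedged illustration of the line case (again in our own notation, not necessarily the paper's): for a closed curve C(t) with curvature κ, principal normal m, and scalar normal velocity V_m along m, the transport of a line energy density γ can be written as

\frac{d}{dt}\int_{C(t)} \gamma \, ds \;=\; \int_{C(t)} \big( \mathring{\gamma} - \gamma \, \kappa \, V_m \big)\, ds ,

where the ring denotes the derivative following the normal trajectories of the curve. For a constant line energy density this reduces to the familiar statement that the total line energy changes at the rate -\gamma \int_{C(t)} \kappa \, V_m \, ds.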
Interface Energies for Liquid Drop on a Solid Surface
The free liquid-gas interface a(t) has the unit normal n^(1). Then, from (25) and (28): where K is twice the mean curvature of the surface a(t).
Capillary Flow
We consider an incompressible liquid flowing over a rigid solid and the surrounded by a gas with negligible viscosity and mass density, so that, without loss of generality, we assume the uniform vanishing pressure in the gas. The motion of the solid surface (2) w is prescribed and the triple line glides on the solid surface with the relative velocity The total energy of the system can be written as the sum of the bulk energy (kinetic + gravitational potential), the interface energies and the line energy: where y( ) x is the gravitational potential. Although the liquid-gas interface is not a material surface, the normal component of its velocity is identical to corresponding component of the material velocity of the fluid: Using (24), (26), (27) and (29), the rate of change of the total energy is where we have taken into account that for the rigid body motion (2) w : The last term in (30) is the correction arising from the rigid body motion of the whole assembly. To illustrate that point, consider a uniform translation . const (2) = w The last term in (30) is then: The liquid-gas surface contribution in (30) (for pure translation of the solid substrate) can then be written as: The energy balance requires that the total power input P be equal to the sum of the rate of total energy and dissipation rate D . We include the incompressibility condition, div , with the Lagrange multiplier field p which is recognized as the pressure in the fluid: where T represents the traction vector exerted by the solid on the liquid. We assume, without loss of generality, that the pressure in the (inviscid) gas vanishes, so that the traction on ( ) a t vanishes.
The flow at the re-entrant corner where the two surfaces intersect is singular (Taylor 1960). Within the sharp interface model the typical solution is to allow slip in some vicinity of the triple line [17]. Thus, at the solid surface we allow for slip, but not separation/penetration: The dissipation includes viscous dissipation in the bulk and the dissipation at the triple line: where τ is the viscous stress (deviatoric for incompressible fluid: tr 0 τ ( ) = ) and Q is the power conjugate of the triple line glide. Although we allow the slip at the solid surface, the dissipation (33) does not include this slip, owing to the choice of the external power input: ⋅ T v . Had we represented the external power as 2 ( ) ⋅ T w , the slip dissipation would have to be included. We will shortly see that the triple line dissipation is necessary if the experimentally observed difference between advancing and receding triple lines is to be described. The 2nd law of thermodynamics for isothermal processes reduces to the requirement that dissipation be positive. Assuming that the constitutive law for the fluid satisfies this requirement on its own (as it does for Newtonian fluids), it follows that the triple line force Q must have the same sign as the triple line glide x . The simplest linear constitutive law Upon substitution of (30) and (33) into (31) and some manipulation, we obtain the power balance equation: The simplest method for deriving the strong form of governing equations is to formulate the weak form directly via the Principle of virtual power (PVP) [18,19]. Application of the PVP yields the following governing equations and boundary conditions:
•
The natural boundary condition on ( ) A t : The governing (Cauchy) equations of motion in v t ( ) : The capillary jump in normal stress across ( ) a t : τ p K g -= I I. The deviatoric nature of the viscous stress τ then implies that on ( ) a t : The sign in the pressure jump condition is the consequence of the choice of 1 Moreover, from (35) for the advancing contact line Conversely, for the receding contact line the contact angle is smaller than the equilibrium contact angle. This is consistent with experimental observations [20]. Moreover, failure to include the triple line dissipation implies that the contact angle is always equal to the equilibrium one [21], which is in direct contradiction to experimental results.
Finally, we note that to complete the formulation, a definition of the slip constitutive law on A(t) is needed. This is beyond the scope of the current paper. We only note that such a definition will not change any of the derived equations.
Transport Theorem Depending on Parametrization
In some cases, it is useful to use transport theorem in terms depending on parameterization. This can be done simply if we substitute (19), now written as: (20). It is convenient to put it in more familiar form: However: ( )
M V t in N-Dimensional Fluid
The transport relation given by (23) permits us to consider several particular cases of importance in continuum mechanics in a unified way. We consider N-dimensional fluid in analogy to 3dimensional fluid. First, we denote by ( , ) w x t the velocity of the fluid. We may write it as: where w a a = w tan a . Then, represent is the relative velocities of motion of ( ) M V t with respect to the fluid. We may decompose it as: Now, looking at the transport theorem the only term which may be influenced by motion of the fluid is . For definiteness we write: for the relative velocities of motion and normal migrational velocity of and hence: Upon substituting this into (23), we obtain: The corresponding transport theorem reads: ( ) Then, we note that: Therefore, the transport theorem becomes:
General Balance Laws
The basic laws of mechanics in 3D can all be expressed, in general, in the following form, for any bounded regular subregion P of the body B, with n the outward unit normal to the boundary of the region P_t in the current configuration. The quantities ψ and s_ψ are tensor fields of order m, and F_ψ is a tensor field of order m + 1.
The relation (38), the general balance of ψ in integral form, asserts that the rate of increase of the quantity ψ in a part P of a body is affected by the inflow of ψ through the boundary of P and the production of ψ within P. F_ψ is the flux of ψ, and s_ψ is the source of ψ. In general, the source s_ψ may include external and internal sources.
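In the notation introduced above, a minimal sketch of this integral balance (the paper's Eq. (38) may differ in sign conventions and in the exact placement of the flux term) is

\frac{d}{dt}\int_{P_t} \psi \, dV \;=\; \int_{\partial P_t} \mathbf{F}_{\psi}\, \mathbf{n} \, dA \;+\; \int_{P_t} s_{\psi} \, dV ,

with F_ψ n the inflow of ψ per unit boundary area and s_ψ the volumetric production of ψ.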
We state general balance laws for where, according to (12), we have Thus, (A5) Formally, we may write (A4) as here, are the components of symmetric 2nd order tensor for each value of p [22]. Alternatively: Next, we calculate (A3) and obtain where we have made use of (A4) and (A5). Therefore (A2) is given by Next, we use (A4) and obtain Hence, Then, Grad F and its unit normal, | 4,149.6 | 2020-06-03T00:00:00.000 | [
"Mathematics"
] |
Use of quasi-SMILES to build models based on quantitative results from experiments with nanomaterials
Quasi-SMILES
Introduction
Engineered nanoparticles (NPs) are defined as materials of 1-100 nm in at least one of their dimensions. NPs are applied today in more than 5000 consumer products ranging from those pertinent to biomedical applications, energy conservation/generation, food additives/preservation, electronics (quantum components and sensors), chemical catalysis, construction materials (nanocomposites), and others (Abd Elkodous et al., 2019;Theerthagiri et al., 2019;Bhuyan et al., 2019;Kaur et al., 2018;Kumar et al., 2018). As a consequence, increased human exposure to NP is expected (Westmeier et al., 2016). The tiny size of NPs makes them capable of penetrating the cell membrane and affect biological functions, which potentially make nanoparticles a global health risk factor. Due to the diversity of NPs, it is not possible to perform a single health and safety assessment that covers all of them. Instead, each type of NPs must be classified by composition, size, and other parameters regarding the NPs. The importance of future sustainable nanotechnology and the potential health risk justifies developments of improved analysis methods to assess the interactions of NPs with cells and organs from plants, animals, and humans (EFSA, 2021). Computational methods are being investigated for reducing the cost, time, and resources of nanotoxicology testing (Buglak et al., 2019). One such approach is based on quantitative structure-property/activity relationships (QSPR/QSAR) which assume that the activity of a substance is related to its physicochemical properties. However, QSPR/QSAR was originally intended to make use of the properties of smaller organic compounds (Cronin et al., 2019). Applying QSPR/QSAR to nanomaterials meets several challenges, which not only concern the difficulty of defining the 'structure' as a source of descriptors in the case of nanomaterials . Configurations of atoms and bonds are not enough to account for all system behavior of nanomaterials because other observed (as well as latent) factors can influence the biological impact of nanomaterials. In fact, it is well known today that the biological response to NP exposure is highly affected by a diversity of eclectic experimental parameters that needs further attention in future computational biology (Toropova and Toropov, 2022;Trinh et al., 2018;Toropov and Toropova, 2015). The consequence of the wide experimental difference in published nano-bio studies makes comparison and generalization of the biological impact to specific NPs complicated using traditional QSPR/QSAR SMILES-based models. This essentially becomes insurmountable when one considers variable parameters such as organisms/organs/cell type, gender specificity, dose, NP physicochemical characteristics, media/temperature/ionic strength/pH, synergy with other NPs/contaminants, and exposure duration. Because of this, models that include all available experimental parameters are highly valuable and should increase both confidence, accuracy, and predictive power of nano-bio interactions. Traditional QSAR models are based on the representation of the molecular structure via molecular graphs, vector of physicochemical parameters, and simplified molecular input-line entry system (SMILES). To include other experimental parameters, we and others have shown that SMILES can be extended by special symbols to represent n numbers of diverse eclectic data (Toropov and Toropova, 2015Toropova and Toropov, 2019). 
We term this methodology 'Quasi-SMILES' (Toropov and Toropova, 2015Toropova and Toropov, 2019). Clearly, the quasi-SMILES approach is promoting stronger collaborative work and understanding between experimentalists and computational researchers. In the present study, we demonstrate that quasi-SMILES is an efficient methodology to compare and build predictive models from data obtained under widely different experimental conditions. As a case study, we have analyzed the toxicity testing data of silver NPs obtained from fish and daphnia.
Data
The data used in this study are identical to those in publication (Jung et al., 2021) and describe the ecological acute AgNP toxicity to daphnia and fish. Fig. 1 demonstrates the general scheme of converting the experimental data into quasi-SMILES using the experimental data acquired for the species daphnia (Daphnia magna) and zebrafish (Danio rerio), while considering both size (nm), zeta potentials (mV), and surface coating material of the NPs (bare, coated NPs, and coated NPs including coating material descriptors according to the reference Jung et al., 2021). In total, the 170 experimental quantities of reference (Jung et al., 2021) are converted into quasi-SMILES. In doing so, the quasi-SMILES operation produces duplicates, i.e., situations expressed by the same quasi-SMILES. After removing the duplicates, the total set contains 102 quasi-SMILES represented in Table 1. These quasi-SMILES were randomly split into the active training set (indicated by '+'); passive training set ('-'); calibration set ('#'); and validation set ('*'). The active training set is used to build the model: molecular features extracted from quasi-SMILES of the active training set are applied in the process of Monte Carlo optimization aimed to provide correlation weights for the above features, which give maximal correlation coefficient between the descriptor and the endpoint on the active training set.
Since the passive training set contains data that were not included in the active training set, it serves as a source to validate whether the model obtained for the active training set is satisfactory for external invisible quasi-SMILES. The calibration set is used to limit overtraining (overfitting) of the model. The overall workflow of the model generation is as follows: At the beginning of the optimization, the correlation coefficients between the experimental values of the endpoints and the descriptor contemporaneously increases for all sets, while the correlation coefficient for the calibration set reaches a maximum indicating the beginning of the overtraining. At this point, the Monte Carlo optimization procedure is kept on hold, and the validation set is then applied to assess the predictive potential of the obtained model. Tables 1 and 2 contain the ranges of NP size and zeta potential used to generate the quasi-SMILES examined here. Table 3 contains the list of quasi-SMILES used to build the models for pLC50.
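A minimal sketch of the random splitting described above, assuming the quasi-SMILES strings and their endpoint values are already available in memory (the ≈25/25/25/25 proportions and the '+', '-', '#', '*' labels follow the text; all function and variable names are ours):

import random

def split_records(records, seed=1):
    # records: list of (quasi_smiles, endpoint) pairs.
    # Returns a dict mapping each set label to its sublist of records.
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    n = len(shuffled)
    q1, q2, q3 = int(0.25 * n), int(0.50 * n), int(0.75 * n)
    return {
        '+': shuffled[:q1],    # active training set
        '-': shuffled[q1:q2],  # passive training set
        '#': shuffled[q2:q3],  # calibration set
        '*': shuffled[q3:],    # validation set
    }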
Optimal descriptors calculated with quasi-SMILES
The model of toxicity examined here is a mathematical function of four variables: (T) status of NPs (bare, coat, cons); (DF) organism (Daphnia or zebrafish); (S) size (nm); and (Z) zeta potential (mV). The function is based on the values of the optimal descriptors calculated with the so-called correlation weights.
The qS are elements of quasi-SMILES from the list represented in Table 2. The CW(qSk) are correlation weights of quasi-SMILES elements, i.e., special coefficients calculated by the Monte Carlo method. The model for pLC50 is a one-variable correlation. T is the threshold for detecting rare codes: if the frequency of a code in the active training set is less than T, the code is blocked, i.e., its correlation weights are fixed to zero, and the code is not included further in the modeling process. N is the number of epochs of the Monte Carlo optimization; one epoch is the modification of all non-blocked codes, and the sequence of the modifications is random. a '+' represents the active training set (≈25%); '-' represents the passive training set (≈25%); '#' represents the calibration set (≈25%); and '*' represents the validation set. Three possible forms of silver nanoparticles are considered: "bare" describes nanoparticles without any coating; "coat" (coating) describes nanoparticles with a shell; "cons" describes nanoparticles including coating material descriptors. Daphnia magna (Daph) and zebrafish (Fish) are the organisms examined; size is in nm; zeta is the zeta potential of nanoparticles (mV); pLC50 is the decimal logarithm of the concentration causing mortality in 50% of daphnia or fish.
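A minimal sketch of the optimal descriptor as a sum of correlation weights over non-blocked codes (the '|' separator used to split a quasi-SMILES into codes is a placeholder of ours, and the weights shown are hypothetical, not values from Table 4):

def dcw(quasi_smiles, correlation_weights, blocked=frozenset()):
    # DCW(T, N): sum of correlation weights CW(qS_k) over the non-blocked
    # codes qS_k extracted from the quasi-SMILES.
    total = 0.0
    for code in quasi_smiles.split('|'):
        if code in blocked:
            continue
        total += correlation_weights.get(code, 0.0)
    return total

# Hypothetical example; real weights come from the Monte Carlo optimization.
cw = {'coat': 0.35, 'Daph': 0.20, 's%03': -0.10, 'z%14': 0.55}
print(dcw('coat|Daph|s%03|z%14', cw))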
The Monte Carlo method
Eq. (2) needs the numerical data on the above correlation weights. The Monte Carlo optimization is a tool to calculate those correlation weights. The target functions for the Monte Carlo optimization are the following: The r_AT and r_PT are correlation coefficients between observed and predicted endpoint for the active training set and passive training set, respectively.
The IIC_C is the index of ideality of correlation. The IIC_C is calculated with data on the calibration set as follows: The observed and calculated values are the corresponding values of the endpoint. Table 4 contains the numerical data on the correlation weights of quasi-SMILES codes obtained by the Monte Carlo method. Table 5 contains an example of the calculation of the DCW(1,15).
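The equations for the target function and for IIC_C are not reproduced above, so the sketch below is an assumption-laden reconstruction: the IIC follows the published definition of the index of ideality of correlation (the calibration-set correlation coefficient scaled by the ratio of the mean absolute errors computed separately over negative and non-negative residuals), and the constants in the target function are typical choices rather than values taken from this paper.

import numpy as np

def pearson_r(observed, calculated):
    return float(np.corrcoef(observed, calculated)[0, 1])

def iic(observed, calculated):
    # Index of ideality of correlation, computed on the calibration set.
    observed = np.asarray(observed, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    residuals = observed - calculated
    neg = np.abs(residuals[residuals < 0.0])
    pos = np.abs(residuals[residuals >= 0.0])
    mae_neg = neg.mean() if neg.size else 0.0
    mae_pos = pos.mean() if pos.size else 0.0
    r = pearson_r(observed, calculated)
    if max(mae_neg, mae_pos) == 0.0:
        return r
    return r * min(mae_neg, mae_pos) / max(mae_neg, mae_pos)

def target_function(r_active, r_passive, iic_calibration, c1=0.1, c2=0.3):
    # One common form of the Monte Carlo target function; c1 and c2 are assumed.
    return r_active + r_passive - abs(r_active - r_passive) * c1 + iic_calibration * c2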
Models
The models for pLC50 obtained for the three random splits are as follows: Table 6 contains the statistical characteristics of the models for splits #1, #2, and #3. The statistical quality of the above models (pLC50) is quite good. Fig. 2 contains the graphical representation of the model for pLC50 obtained for split 1.
Applicability domain
The statistical significance of the different codes differs. The defect of a code is the measure of its statistical significance. Defects of quasi-SMILES codes are calculated as in Eq. (15), where P(qS_k) and P'(qS_k) are the probabilities of qS_k in the active training and passive training sets, respectively, and N(qS_k) and N'(qS_k) are the frequencies of qS_k in the active training and passive training sets, respectively. The statistical quasi-SMILES defect (D_j) is calculated as in Eq. (16), where N is the number of non-blocked quasi-SMILES codes in the quasi-SMILES. A quasi-SMILES falls in the domain of applicability if Dj < 2*D (17). Fig. 3 represents the defects of codes that are not blocked.
Table 6. Statistical quality of models for pLC50 for three random splits. Q2 is the leave-one-out cross-validated R2 (Shayanfar and Shayanfar, 2014). Q2F1, Q2F2, and Q2F3 are modifications of Q2 (Chirico and Gramatica, 2011). a IIC is the index of ideality correlation (Eq. (6)).
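A minimal sketch of the code defects and the applicability-domain check described above (Eqs. (15)-(17)); the defect formula |P(qS_k) - P'(qS_k)| / (N(qS_k) + N'(qS_k)) is the form published for this kind of model and is assumed here, as is the '|' code separator:

from collections import Counter

def code_defects(active_set, passive_set):
    # active_set, passive_set: lists of quasi-SMILES strings.
    n_act = Counter(code for qs in active_set for code in qs.split('|'))
    n_pas = Counter(code for qs in passive_set for code in qs.split('|'))
    total_act = sum(n_act.values())
    total_pas = sum(n_pas.values())
    defects = {}
    for code in set(n_act) | set(n_pas):
        p_act = n_act[code] / total_act if total_act else 0.0
        p_pas = n_pas[code] / total_pas if total_pas else 0.0
        defects[code] = abs(p_act - p_pas) / (n_act[code] + n_pas[code])
    return defects

def in_applicability_domain(quasi_smiles, defects, mean_training_defect):
    # D_j: summed defect of the codes of one quasi-SMILES;
    # the quasi-SMILES is in the domain if D_j < 2 * (mean defect), Eq. (17).
    d_j = sum(defects.get(code, 0.0) for code in quasi_smiles.split('|'))
    return d_j < 2.0 * mean_training_defect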
Mechanistic interpretation
Having numerical data on the correlation weights of quasi-SMILES codes observed in several runs of the Monte Carlo optimization, one can identify codes whose correlation weights are solely positive. In this case, the code can be interpreted as a promoter of increase for the endpoint. If a code has solely negative values in the several runs, the code can be interpreted as a promoter of decrease for the endpoint. If the code has both positive and negative values, the role of the code is unclear. Table 7 contains a collection of promoters of increase and decrease for pLC50 together with the statistical defects calculated by Eq. (15).
One can observe in Table 7 that coating (coat) and coating that considers the molecular features of the coating (cons) are quasi-SMILES codes which are promoters of a pLC50 increase. With regard to animal species, the observation can be made that the quasi-SMILES code for "Fish", which is negative, dictates promotion towards a decrease in pLC50, while the opposite is true for "Daph". In other words, the impact of NPs on fish is smaller than that on daphnia.
Estimation of influence of size and zeta potential to pLC50
To study the impact of different sizes and zeta potentials, one can compare the distributions displayed in Fig. 3. The ranges of the largest sizes (80-140 nm) were excluded from the consideration, because the number of NPs that fall in these ranges is very small. From this analysis, it becomes clear that between the variables of zeta potential and NP size, the greatest contribution to the increase in pLC50 originates from the zeta potential. The range of zeta potential denoted by z%14 (i.e., from −46.210 to −42.780) defines the global maximum for this component for both fish and daphnia. The NP size has a moderate impact and reaches its maximum at [s%11], but also shows many local maxima of comparable magnitude (Fig. 4).
The ability to predict quantitative results is an attractive capability in computational model development. The quasi-SMILES model presented here is an example of a model that enables this even when data are obtained under different experimental conditions. Thus, the work demonstrated here is an attempt to bridge experimental data and predictive models. The gradual expansion of the experimental results in the field continually improves the reliability of the system "experiment-model" (or of the system "experimentalist-model developer") and provides a new style of work and new research possibilities.
Supplementary materials section contains the technical details on three random models.
Conclusions
The nano-QSAR models were generated to predict the pLC50 in two biological systems. Three random distributions formed from the available experimental data, including the training and validation sub-sets, confirm the robust predictive potential of these models. Quasi-SMILES is an approach to build models of new quality: the descriptor becomes a mathematical function of structure and experimental conditions or even a mathematical function of experimental conditions together with arbitrary circumstances that can impact the results of the experiment. In other words, the quasi-SMILES technique can be the source of a new way forward to address both theoretical and practical chemistry, biochemistry, and toxicology.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2,965.2 | 2022-05-01T00:00:00.000 | [
"Biology"
] |
Does Abortion Harm the Fetus?
Abstract A central claim in abortion ethics is what might be called the Harm Claim – the claim that abortion harms the fetus. In this article, we put forward a simple and straightforward reason to reject the Harm Claim. Rather than invoking controversial assumptions about personal identity, or some nonstandard account of harm, as many other critics of the Harm Claim have done, we suggest that the aborted fetus cannot be harmed for the simple reason that it does not occupy any well-being level.
I. Introduction
A central claim in abortion ethics is what we might call the Harm Claim: the claim that abortion harms the fetus.
It is plausible to think that the moral status of abortion depends substantially on whether the Harm Claim is true. In particular, since an action's harmfulness apparently counts morally against it, the Harm Claim can be used to argue for the moral impermissibility of abortion (see, e.g., Lee 2010: 34-35; Marquis 2004, 2005, 2008, 2011, 2013). Moreover, those who criticize the Harm Claim can use their criticism not only to respond to such anti-abortion arguments, but also to argue more directly for the thesis that abortion is morally permissible. After all, a natural view is that if abortion is harmless, then it does not require any special justification. Our focus, however, will not be on the moral issue, but on the Harm Claim itself. We shall propose a simple and straightforward reason to reject the Harm Claim. Rather than invoking controversial assumptions about personal identity, or some nonstandard account of harm, as many other critics of the Harm Claim have done, we shall suggest that the aborted fetus cannot be harmed for the simple reason that it does not occupy any well-being level. This strategy for rejecting the Harm Claim has been previously neglected, and our main aim in this article is simply to draw attention to it and some of its advantages. Determining whether the strategy should, in the end, be wholeheartedly embraced requires further discussion (see section VII).
A clarification: what we will be concerned with, although this restriction will be largely left implicit, are abortions that are not unusually late, that is, not performed after, say, week 20. Abortions that are unusually late give rise to special questions, at least on the approach that we shall propose. Those cases are important, too, but will not be discussed here.
II. The Harm Claim and the counterfactual comparative account
It will be instructive to begin by considering the most popular and widely discussed view in the general debate on the nature of harm. This view is usually called the counterfactual comparative account: The counterfactual comparative account. An event (e.g., an action) harms an individual if and only if, and to the extent that, the individual would have been on balance better off, that is, her lifetime well-being level would have been higher, if it had not taken place. 1 One prominent way of arguing for the Harm Claim, and one to which critics of the Harm Claim have been especially eager to respond, is to appeal to something like the counterfactual comparative account. For example, Don Marquis writes: Killing someone harms her by making her life shorter, and therefore (typically) worse, than it otherwise would have been. Therefore, because killing someone makes her worse off, killing her harms her. (Marquis 2011: 4) Similarly, Marquis claims that on his view, premature death is a very serious harm and, like all harms, is understood as making an individual worse off than she otherwise would have been. (Marquis 2004: 56) These remarks, Marquis contends, apply to abortion no less than to other kinds of killing. According to him, the aborted fetus would have gone on to have a valuable "future like ours" if it had not been aborted and would therefore have been overall better off than it actually is. Hence, abortion harms the fetus. 2 It might thus seem that the counterfactual comparative account provides a straightforward reason to accept the Harm Claim. 3 Importantly, however, the reason to deny the Harm Claim that we shall present is also a direct reason to deny that abortion satisfies the condition in the counterfactual comparative account. If our proposal is right, then, the counterfactual comparative account does not support the Harm Claim, but its denial.
1 For defenses of the counterfactual comparative account and closely related views, see, e.g., Bradley (2009), Feit (2015, 2016), Hanna (2016), Klocksiem (2012), and Timmerman (2019). 2 E.g., Marquis (2004, 2005, 2008, 2011, 2013). Of course, Marquis's claim that abortion deprives the fetus of a valuable "future like ours" (a phrase made famous in Marquis 1989) can be used in various different ways in abortion ethics. Again, as we understand Marquis, his own view is that abortion makes the fetus overall worse off than it would have otherwise been (and thereby harms it) because it deprives the fetus of a valuable future like ours (see, e.g., Marquis 2011: 4, 11). Another possible view, which we shall consider briefly in section VII, is that an event can deprive an individual of a valuable future like ours without making her overall worse off, and that (contrary to the counterfactual comparative account) depriving her of such a future is sufficient for harming her. Yet another possible view is that an action's depriving an individual of a valuable future like ours may or may not be sufficient for harming her, but is sufficient for there to be a moral reason against it. As our focus is on the Harm Claim, we shall not discuss the latter view. 3 Because some individuals would not have been well off even if they had been allowed to live, advocates of this strategy cannot say that every abortion harms the fetus (cf. the "typically" qualification in the above quote from Marquis 2011). They will thus want to restrict the scope of the Harm Claim to more typical cases. This complication does not affect our own approach (sections IV-V), which applies both to cases where the fetus would have been well off if it had been allowed to live and to cases where it would not.
III. Two other strategies
It will also be instructive, especially in virtue of bringing out the advantages of our own strategy, to briefly consider two other strategies for rejecting the Harm Claim. Both of these strategies appeal, in different ways, to the fetus's lack of psychological connections to the individual who, in the absence of abortion, will receive various goods in the future.
The first strategy is to argue that abortion cannot harm the fetus since no future goods would have awaited the fetus even if it had not been aborted (see, e.g., Brill 2003; McInerney 1990; McMahan 2002; Reitan 2016). This strategy is motivated by the view that we have never been fetuses, but instead started to exist at a later point, in particular when a sufficiently rich psychology emerged. Clearly, in order for it to be true that a fetus would have received future goods if it had not been aborted, it needs to be true that the fetus would have been identical to some future recipient of goods. Thus, if we have never been fetuses, proponents of this strategy argue, then abortion would not have deprived us of future goods.
This strategy is compatible with the counterfactual comparative account. But of course, it involves rejection of animalism, the view that we are human animals. 4 Since the animal with which you are associated has been a fetus, it follows from animalism that you have been a fetus. Aborting the fetus would thus have prevented it from receiving the goods that you actually receive. While proponents of the present strategy might not regard rejection of animalism as a serious cost, many others do; after all, animalism is one of the major views of personal identity. 5 Surely it is at least interesting to see if there is a plausible strategy for rejecting the Harm Claim that is neutral with respect to animalism.
The second strategy for rejecting the Harm Claim appeals to an account of the harm of killing and death that takes psychological connections more seriously than the counterfactual comparative account (e.g., DeGrazia 2005;McMahan 2002). The general idea is that even if a killing or death prevents the individual from receiving various future goods and thereby makes her overall worse off than she would have otherwise been, it harms her only if she would have been, at the future times at which she would have received those goods, psychologically connected to herself as she is before death. Now, the aborted fetus never has any psychology at all (recall that we are not here concerned with late fetuses). Hence, abortion does not harm it.
Unlike the first strategy, this strategy does not leave the denial of the Harm Claim hostage to any disputed claims about personal identity; in particular, it is compatible with animalism. However, the view of harm to which this strategy appeals is a rather radical one, which also has few advocates in the general debate on the nature of harm. In particular, it is hard to swallow that an event can be entirely harmless even if it makes the individual much worse off than she would have otherwise been and fails to make her well off. While we do not rule out the possibility that this nonstandard view of harm is in the end defensible, surely it is interesting to see if we can deny the Harm Claim without invoking it.
4 Even if rejecting animalism is necessary for this strategy to work, it is not clear that it is sufficient. In particular, even if you have never been a fetus, it remains the case that the animal with which you are associated has been a fetus. And isn't it happy whenever you are happy? If it is, abortion would have deprived it of this happiness. Thus, contrary to what Marquis himself (2002, 2013), along with many others, apparently thinks, it is not clear that his argument presupposes animalism. See further Johansson (2019).
Unlike the two strategies just considered, our own strategy, to which we now turn, is compatible with the conjunction of animalism and the counterfactual comparative account. It is also compatible with both animalism and the counterfactual comparative account being false, and with exactly one of them being true.
IV. The Well-Being Requirement
Our strategy appeals to an attractive and widely endorsed principle that states a necessary condition for harming (this section), and to a specific reason to doubt that this condition is satisfied in the case of abortion (next section).
Here is the principle: The Well-Being Requirement. In order for an event to harm an individual, the individual has to occupy a lifetime well-being level.
Two initial remarks are in order. First, we take it that someone has a lifetime well-being level (that is, is on balance well off to some degree) just in case she has, at some time or another, a temporal well-being level (that is, is well off to some degree at some particular time). How the individual's lifetime well-being level is more specifically related to her various particular temporal well-being levels will not be relevant here. Second, the Well-Being Requirement should not be conflated with the claim that an event harms someone only if she has a temporal well-being level at the time of the occurrence of the event. The latter claim is highly questionable. Arguably, for instance, you did not occupy any temporal well-being level at all several years before your birth. Yet an event that happened then can clearly harm youfor instance, by causing you to be in pain ten years from now. The Well-Being Requirement is fully compatible with this judgment, since you obviously have a lifetime well-being level. The Well-Being Requirement is an implication of all main views in the general debate on the nature of harm. To begin with, if the counterfactual comparative account is true, then in order for an event to harm an individual, it is not sufficient that, for instance, the individual does not actually occupy a positive well-being level but would have done so if it were not for the event. What is required is for the individual to have a lower well-being level than she would have had if the event had not occurred. And in order for her to have a lower well-being level than she would have otherwise had, she must have a well-being level. Hence, the counterfactual comparative account implies the Well-Being Requirement.
Similarly, consider standard versions of the main competitor to the counterfactual comparative account, the causal account of harm (see further Carlson, Johansson, and Risberg 2021a). On one such version, known as the "temporal" view, an event harms someone if and only if it causes her to have a lower temporal well-being level after the event rather than before it (e.g., Foddy 2014). On another, an event harms someone if and only if it causes her negative well-being (e.g., Smuts 2012). On yet another, an event harms someone if and only if it causes the obtaining of a state of affairs that satisfies various conditions, including that she would have had a higher well-being level if that state of affairs had not obtained (e.g., Gardner 2015). The Well-Being Requirement evidently follows from each of these views. 6 Not only is the Well-Being Requirement a consequence of the main theories in the general debate on the nature of harm; various objections and moves in that debate also accord with it. Consider for example two important objections to the counterfactual comparative account (for further discussion, see, e.g., Bradley 2012; Carlson and Johansson 2018; Feit 2015, 2016; Johansson 2021; Johansson and Risberg 2019). The first one concerns preemption. Suppose someone punches you in the face, but would have otherwise killed you. Then his actual action seems to harm you, contrary to the counterfactual comparative account. The second objection concerns creation. Suppose we create someone who is bound to be very unhappy, and that if we had not created her, she would have never existed. Again, it seems that our action harms her. Plausibly, though, she would not have had any well-being level at all (and thus would not have been better off) if she had never existed. No matter how fatal these problems may be for the counterfactual comparative account, however, they do not threaten the Well-Being Requirement. For in both these cases, the preemption case and the creation case, the victim does have a lifetime well-being level.
V. Does the aborted fetus have a well-being level?
A crucial question, then, is whether the aborted fetus has a lifetime well-being level. It seems clear that the aborted fetus never occupies a positive or negative well-being level. Not only is this the intuitive thing to say, it is also entailed by all standard substantive views of well-being, such as hedonism or desire satisfactionism. (The aborted fetus never feels pleasure or pain and has no desires.) Of course, this does not by itself rule out the possibility that the fetus occupies a neutral well-being level, a well-being level of zero. However, there is a good (though by no means decisive) reason for denying that the fetus has even a neutral well-being level. The reason can be found in the well-being literature, but has to our knowledge not yet been applied to the issue of abortion (see, e.g., Carlson and Johansson 2018; Herstein 2013; Johansson 2021; Luper 2007, 2009). Consider, for a moment, an ordinary computer instead of a fetus, and some scales other than the well-being scale, such as those of wealth and moral virtue. Clearly, the computer has no positive or negative wealth. Does it have zero wealth, making it poorer than most people and richer than some (those whose debts exceed their assets)? While some would be prepared to accept that it does, surely that is not the most intuitive thing to say. A more natural view is that the computer does not occupy any financial level at all. 7 World poverty is not increased by the coming into being of a standard Dell laptop. Similarly, the computer is neither virtuous nor vicious; it occupies neither a positive nor a negative level on the virtue scale. Does it occupy a zero level, making it more virtuous than a bad person and less virtuous than a good person? Again, that is a possible position to take, but not the most intuitive one. On a more natural view, the computer occupies no level at all on the virtue scale.
6 It also follows from the view defended in Johansson and Risberg (forthcoming), and from all views considered in Carlson, Johansson, and Risberg (2021b).
7 Some might want to say that sentences like, "The computer occupies some financial level" are meaningless (and similarly with other relevant examples below). If so, sentences like, "The computer occupies no financial level" are also meaningless. While the meaninglessness view is inconsistent with the letter of our strategy, our various claims could easily be reformulated to accommodate it. Thanks to Naomi Korem and an anonymous referee for helpful discussion.
If the computer indeed occupies no level on the wealth scale, or the virtue scale, what is the explanation of this? It cannot merely be that the computer does not actually occupy any positive or negative level on the wealth scale, or the virtue scale. After all, that condition can be satisfied also by ordinary humans who occupy a zero level (rather than no level) on the wealth scale, or the virtue scale. Plausibly, the explanation is rather that the computer, unlike such humans, has no capacity for being wealthy or unwealthy, or virtuous or vicious. While it is difficult to analyze the notion of a capacity, surely it has at least largely to do with the object's internal structure. The computer's internal structure, unlike that of an ordinary human, is not even somewhat fit to produce the mental and social states relevant to positive and negative wealth, and to virtue and vice (whatever those states exactly are). Indeed, it seems to us most intuitive to say that even if it were to turn out, surprisingly, that an ordinary computer in the natural course of events will eventually acquire a capacity for positive and negative wealth, and virtue and viceperhaps by acquiring certain mental and social propertiesit still would not occupy any level on the wealth scale, or virtue scale, until it actually got that capacity. Even in such a scenario, no matter how wealthy or virtuous the computer will eventually become, it will not become richer or morally better than it currently is. Currently, it is not even in the competition.
No doubt all this could be questioned, and particularly by someone with other intuitions. What we suggest is merely that the above is sufficiently plausible to be able to constitute the basis of a sensible and interesting strategy for denying the Harm Claim.
Return now to well-being. An ordinary computer apparently has no capacity for positive or negative well-being, that is, no capacity for receiving positive and negative well-being components. Surely, moreover, if its lack of a capacity for positive or negative wealth, or virtue or vice, explains its occupying no level (rather than a zero level) on the wealth scale, or virtue scale, then its lack of a capacity to have a positive or negative well-being level implies that, and explains why, it occupies no level (rather than a zero level) on the well-being scale. The computer fails to be better or worse off than any of us since it is not capable of having whatever constitutes positive or negative well-being. And again, even if it were to eventually acquire this capacity, this would not mean that it currently has a well-being level of zero; and if it never acquires such a capacity, it has no lifetime well-being level at all. Now, a fetus, too, plausibly has no capacity for having positive or negative well-being. On the most popular substantive theories of well-being, at least, mental properties are needed for positive or negative well-being. On hedonism, for example, positive well-being consists in pleasure, and negative well-being in pain. On desire satisfactionism, positive and negative well-being consists in the satisfaction and frustration, respectively, of desires. And pluralists about well-being, too, usually include in their lists only items that require mental features: knowledge, rational activity, friendship, etc. 8 Indeed, these items require more in terms of mental sophistication than do pleasure, pain, and desire. Crucially, the fetus (at least, the relatively early fetus; see section I) is not capable of having mental features, and hence not capable of having a positive or negative well-being level. It probably will acquire such a capacity if it is not aborted; but if it is aborted, it never has it. So, while an aborted fetus would have had a well-being level if it had not been aborted, it never actually has it. Consequently, abortion and fetuses do not satisfy the necessary condition for harming stated in the Well-Being Requirement.
8 In section VII, we briefly consider a pluralist view that does not have this feature.
It is important to note that our point is not merely that the fetus has no temporal well-being level at the time of abortion. As we stressed in section IV, an event can harm someone even if she has no temporal well-being level at the time of the occurrence of the event. Indeed, lots of things that happen to a fetus that will not be aborted can harm it, say by causing it to be in pain after birth, or by causing it to die prematurely at the age of fifty. After all, such a fetus has many temporal well-being levels later in life; as a result, it also has a lifetime well-being level. In the case of a fetus that does get aborted, however, things are different. If our proposal is right, the aborted fetus never has any temporal well-being level, and therefore has no lifetime well-being level. Given the Well-Being Requirement, then, nothing, including abortion, harms it.
Some writers, especially in the Thomistic tradition, insist that the fetus, even from the very beginning of its existence, in one important sense actually does have a capacity for mental states, and indeed for knowledge, rational activity, friendship, etc. (e.g., Beckwith 2007: 142; Finnis 2013; Lee 2004, 2010; Lee and George 2005; cf. Kaczor 2011: 30). 9 According to these theorists, the fetus has already from the start a "radical" or "root" capacity for such features, roughly in the sense of having a potential for them: the fetus has the right sort of nature to eventually acquire these features, so long as nothing prevents it from doing so. This is much in the same way that an acorn has a root capacity to grow branches and leaves (cf. Beckwith 2013: 342). Although this view raises large issues that we cannot do justice to in this article, we are inclined to say that it does not, even if correct, undermine our strategy. For while we find somewhat misleading the broad use of the term 'capacity' involved in this view, it seems to us that we can accept it without jeopardizing the substance of our proposal. For the important thing is, of course, whether having a mere root capacity for the relevant features is sufficient for occupying some level on the pertinent scale. It seems to us that it is not. As indicated earlier, even if an ordinary computer were to turn out to have a root capacity for positive and negative wealth, or virtue and vice, intuitively it would still need a more substantially developed capacity in order to occupy any level on the wealth scale, or virtue scale. (Presumably, there will be borderline cases between having a mere root capacity and having a sufficiently developed capacity, but the computer seems to be a clear instance of the former.) Similarly, an early fetus apparently has a root capacity for positive and negative wealth, and for virtue and vice. Nevertheless, it is still plausibly not richer than some of us and poorer than most, or more virtuous than a bad person and less virtuous than a good person. If this is right, surely the same holds for well-being.
Ben Bradley is skeptical of the capacity approach to well-being levels on which our strategy relies. To support his doubts, he supposes for the sake of argument that hedonism is true, and considers the lives of two people: Marsha, who never acquires the capacity to experience pleasure or pain, and Greg, who has that capacity, but never actually experiences any pleasure or pain. If having a well-being level of zero requires having the capacity for receiving positive or negative well-being components, then there is a significant difference between Marsha and Greg: while Greg has a permanent well-being level of zero, Marsha has no well-being level at all. In Bradley's view, however, this "just seems wrong" (Bradley 2009: 103). A more plausible judgment, he suggests, is that both Marsha and Greg occupy a well-being level of zero.
We do not share Bradley's intuitions about the case of Marsha and Greg. That is, we do not find it intuitively wrong to say that Marsha has no well-being level, or that there is a significant difference between Marsha and Greg with respect to well-being. More importantly, however, our strategy does not rely on any intuitions about the well-being levels (or lack thereof) of people like Marsha, or, for that matter, any human being. Indeed, our strategy does not even rely on any intuitions about well-being levels. In motivating our strategy, we have suggested that there is a common and plausible explanation of why a computer, for instance, occupies no level on certain other scales, such as those of wealth and virtue, an explanation that, when applied to well-being and fetuses, yields the result that fetuses occupy no well-being level. Unless Bradley's intuition that Marsha has a well-being level is very strong, then, it threatens neither our support for the capacity approach nor the capacity approach itself.
According to current scientific consensus, a fetus acquires the (nonroot) capacity for mental features around week 20. It is worth noting that our approach thus does not rule out the possibility that the Harm Claim is true for fetuses that are aborted later than that. But as pointed out in section I, in this article we are not concerned with such unusually late abortions. The great majority of abortions are performed long before week 20.
In this context, it is also worth noting that our proposal has the attractive feature of ruling out the possibility that abortion harms the early fetus, without ruling out the possibility that infanticide harms the infant. Again, our strategy does not rule out the possibility that late fetuses may be harmed by abortion, since, unlike early fetuses, late fetuses may well have a well-being level. Unsurprisingly, the same goes for infants. Moreover, this way of ruling out the harmfulness of early abortion without ruling out the harmfulness of infanticide does not appeal to the fact that, unlike early fetuses, infants are typically conscious. Thus, our strategy is not vulnerable to the criticism, recently raised by David Hershenov and Rose Hershenov among others, that the presence or absence of consciousness is not in itself relevant to whether an individual can be harmed (Hershenov and Hershenov 2017: 393-95;cf. Kaczor 2011: ch. 3). What is relevant, on our suggestion, is whether the individual meets the condition stated in the Well-Being Requirement. Similarly, although some coma patients may be no more capable of receiving well-being components than are fetuses, our proposal does not imply that such patients cannot be harmed by death. 10 Since, typically, a comatose individual has had the relevant capacities, and hence a temporal well-being level, at many times prior to her becoming comatose, she has a lifetime well-being level, and so meets the relevant condition in the Well-Being Requirement. 11
VI. Two nearby suggestions
Our strategy has some similarities with suggestions from Bradley and Nathan Nobis and Kristina Grob. Let us briefly consider their proposals in turn and point out how they differ from ours. 10 This worry, too, can be found in Hershenov and Hershenov (2017: 402). See also Beckwith (2011) and Kaczor (2011: sect. 2.4). 11 Some writers in the debate on the ethics of abortion (e.g., Beckwith 2007: 135, 137) have wondered how the moral status of, say, killing a comatose individual could be affected by whether the individual has had certain capacities in the past. Our strategy might provide an answer to this question. Plausibly, an action's moral status can be affected by whether the action harms an individual. On our strategy, killing the comatose individual harms her only if she has a lifetime well-being levelsomething that, in turn, requires her to have the capacity for positive or negative well-being at some point (e.g., in the past).
Bradley has recently suggested an account of the badness of an event for an individual, which, when applied to abortion, might seem similar to our suggestion (Bradley 2019). According to Bradley's account, the badness of an event is not determined solely by whether, and to what extent, an individual would have been better off if the event had not occurred; it is also partly determined by the degree to which the individual is a welfare subject: the lower the individual's degree of being a welfare subject, the larger the discount on the badness of the loss of well-being. Because Bradley apparently takes the degree to which an individual is a welfare subject to correspond to the degree to which she is capable of receiving positive and negative well-being components, his account, similarly to our suggestion, rules out the possibility that abortion is bad for the fetus on the grounds that the fetus lacks such a capacity. Even so, and even disregarding (as we shall do here) potential discrepancies between badness and harm, our strategy for rejecting the Harm Claim is importantly different from the one provided by Bradley's account.
While our strategy appeals to the fact that fetuses lack well-being levels, the Bradley strategy does notindeed, as his judgment in the case of Marsha and Greg (section V) indicates, Bradley himself seems not to deny that fetuses and other beings who lack mental capacities have well-being levels like the rest of us. 12 Instead, it appeals to a very specific view of harm, which is incompatible with the counterfactual comparative account. Our strategy, by contrast, relies on only the Well-Being Requirement, which, again, is entailed by every standard view of harm (including the counterfactual comparative account). Moreover, just like proponents of the second strategy for denying the Harm Claim considered in section III, adherents of the Bradley strategy are committed to denying the natural idea that any event that makes an individual much worse off than she would have otherwise been, and does not make her well off, is harmful to the individual. On Bradley's view, such an event harms the individual only if she is also a welfare subject to a non-zero degree. By contrast, our strategy allows its adherents to accept that if an event makes someone much worse off than she would otherwise have been and does not make her well off, then the event harms her. 13 Nobis and Grob present their approach as follows: If a being is and has always been completely unconscious, that being cannot be harmed, which requires a 'turn for the worse' for that being. But there is no 'for that being' for early fetuses yet, so things can't get worse for them. So killing them doesn't harm them or make them worse off, compared to how they were, since they never 'were' in a conscious way. (Nobis and Grob 2019: 48) While this, too, is reminiscent of our suggestion, there are at least two important differences between our strategy and Nobis and Grob's. First, as the quote indicates, Nobis and Grob apparently suggest that, in order for an event to harm an individual, it has to make her worse off than she was before the event took place. In other words, they appeal to a temporal account of harm (see section IV). We, by contrast, do not: while the temporal account presupposes the Well-Being Requirement, the reverse obviously does not hold. Because such a temporal account of harm is vulnerable to a host of serious problems that do not threaten the Well-Being Requirement, this is an advantage of our strategy over Nobis and Grob's. 14 Second, Nobis and Grob say that no being that is always unconscious can be harmed. By contrast, we are only committed to saying that insofar as consciousness is required for receiving well-being components, no being that lacks the capacity to have consciousness has a well-being level.
VII. Concluding remarks
We shall close by mentioning five different lines of response available to our critics. First, a critic might want to reject the idea that some things occupy no level at all on certain scales. For instance, the critic might insist that an ordinary computer does occupy a level (most likely, a zero level) on the scales of wealth, virtue, and well-being. 15 Presumably, this would be because not occupying a positive or negative level of some scale is, after all, sufficient for occupying a zero level on that scale. Appearances to the contrary, the critic might say, can be explained away.
Second, the critic might grant that many things occupy no level at all on certain scales, but deny that this is because they lack the capacity for occupying a positive or negative level on those scales. Some alternative explanation is preferable, the critic might suggest, and maybe that alternative explanation does not yield the result that a fetus lacks a well-being level.
Third, as we have pointed out, our strategy for rejecting the Harm Claim is compatible with standard substantive views of well-being: hedonism, desire satisfactionism, etc. However, the critic might want to appeal to some alternative view of well-being, on which positive and negative well-being does not require mental states. Naturally, not just any such view would do, even if correct. For instance, some might want to say that a human being who spends her entire and long life in a vegetative state occupies a negative well-being level, perhaps because of something to do with lack of dignity or personal relations. 16 Surely any sensible version of such a view will have to deny that such factors are present in the case of the fetus. For instance, an early fetus is not lacking in dignity for lacking mental states. However, a suggestion that might be more adequate to our critics' needs is that, say, being healthy or being alive is a positive well-being component. 17 If that is right, the fetus does occupy a well-being level, and so does, after all, meet the Well-Being Requirement. It should be noted, though, that on such a view a typical early fetus already has a positive temporal well-being level, a surprising, if not indefensible, claim.
Fourth, the critic might try to argue that even given hedonism and other standard accounts of well-being, even an early fetus does have a capacity for positive or negative well-being. The fetus actually has a capacity for pleasure and pain, for example. This strategy might, but need not, involve the claim that we are also wrong to suggest that an ordinary computer has no capacity for occupying a positive or negative level on the scales of wealth, virtue, and well-being. And the strategy might, but perhaps need not, involve saying that a mere "root" capacity for positive and negative well-being is after all (despite our remarks in section V) sufficient for occupying a well-being level.
14 The creation problem for the counterfactual comparative account (see section IV), for instance, is also a problem for the temporal account of harm. Unless the individual had a well-being level before she even existed, creating her cannot make her worse off than she was before. For further problems for the temporal account, see Carlson, Johansson, and Risberg (2021a).
15 Feit (2016: 145) embraces this sort of claim with regard to the well-being scale. See further Carlson and Johansson (2018).
16 Thanks to an anonymous referee for helpful discussion here.
17 The appeal to being alive might seem to be a natural one to make for those who think, as many in the pro-life literature do, that human life is intrinsically valuable (e.g., Lee 2010: 157-58; cf. Marquis 2011: 16-19). It is not clear, however, what implications the latter view has for well-being in particular.
Fifth, the critic might reject the Well-Being Requirement. While, as we have pointed out (section IV), the Well-Being Requirement is common ground for all standard views in the debate on the nature of harm, the critic might argue that these views are all wrong. Of course, the falsity of these views does not entail the falsity of the Well-Being Requirement. But the critic might want to defend some view about harm (which need not amount to a complete view of the nature of harm) that does entail the falsity of the Well-Being Requirement. The critic might, for instance, suggest that any event that causes an individual's death thereby harms her, whether or not she ever occupies any well-being level (see, e.g., Harman 2009: 139). This claim, in this unqualified form, would also entail the Harm Claim. On the other hand, the claim seems unreasonable in this unqualified form. In particular, it does not seem attractive to say that even an individual who not only never actually occupies any well-being level, but also would not have done so even if she had continued to live, can be harmed by being caused to die. A better approach for our critic to adopt might be to say that an action can harm someone by preventing her from occupying a positive well-being level, even if she never actually occupies any well-being level. 18 This, too, seems to imply the Harm Claim. On the face of it, however, this approach also seems to yield the result that the following claim is true: if an ordinary computer never occupies any well-being level, but we could somehow cause it to occupy a positive one, then an action that prevents us from doing so harms the computer. This does not seem like an appealing implication. Those who defend the approach should try to find a way to block it, or to show that it is acceptable after all.
As pointed out in section I, our main aim in this article is to draw attention to a strategy for rejecting the Harm Claim that has been previously overlooked and has some noteworthy advantages. It is simple, it invokes no nonstandard view of harm or personal identity, and its individual components are natural and commonsensical. Whether our strategy should in the end be accepted can be settled only by further investigation. Showing the correctness of one or several of the five lines of response mentioned here (or some sixth line of response) might not be a hopeless task for our critics. But it does strike us as a difficult taskand in any case, it is a task that they need to undertake. 19 Competing Interests. The authors declare there are no competing interests. | 9,179.8 | 2021-12-06T00:00:00.000 | [
"Philosophy"
] |
Cloud computing is fundamentally changing expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. End users are increasingly sensitive to the response time of the services they consume. Service developers want service providers to ensure, or give them the ability to ensure, that resources are dynamically allocated and managed in response to changing demand patterns in real time. Ultimately, service providers are under pressure to architect their infrastructure to enable real-time, end-to-end visibility and dynamic resource management with fine-grained control, in order to reduce total cost of ownership and improve agility. This requires rethinking the underlying operating system and management infrastructure to accommodate the ongoing transformation of the data centre from the traditional server-centric architecture model to a cloud or network-centric model. This paper proposes and describes a reference model for a network-centric data centre infrastructure management stack that leverages and validates key ideas that have enabled dynamism, scalability, reliability and security in the telecommunication industry, and applies them to the computing industry. Finally, the paper describes a proof-of-concept system that was implemented to demonstrate how dynamic resource management can be enforced to enable real-time service assurance for a network-centric data centre architecture.
INTRODUCTION
The unpredictable demands of the Web 2.0 era, in combination with the desire to better utilize IT resources, are driving the need for a more dynamic IT infrastructure that can respond to rapidly changing requirements in real time. This need for real-time dynamism is about to fundamentally alter the data centre landscape and transform the IT infrastructure as we know it. In the cloud computing era, the computer can no longer be thought of in terms of the physical enclosure, i.e. the server or box, which houses the processor, memory, storage and associated components that constitute the computer. Instead, the "computer" in the cloud ideally comprises a pool of physical computing resources, i.e. processors, memory, network bandwidth and storage, potentially distributed physically across server and geographical boundaries, which can be organized on demand into a dynamic logical entity, i.e. a "cloud computer", that can grow or shrink in real time in order to assure the desired levels of latency sensitivity, performance, scalability, reliability and security to any application that runs in it. What is really enabling this transformation today is virtualization technology, more precisely hardware-assisted server virtualization. At a fundamental level, virtualization technology allows the abstraction or decoupling of the application payload from the underlying physical resource. What this typically means is that the physical resources can then be carved up into logical or virtual resources as needed. This is known as provisioning. By introducing a suitable management infrastructure on top of this virtualization functionality, the provisioning of these logical resources can be made dynamic, i.e. the logical resource can be made bigger or smaller in accordance with demand. This is known as dynamic provisioning. To enable a true "cloud" computer, every single computing element or resource should be capable of being dynamically provisioned and managed in real time. Currently, there are numerous gaps and areas for improvement in today's data centre infrastructure before we can achieve the above vision of a cloud computer.
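To make the provisioning/dynamic-provisioning distinction concrete, here is a minimal illustrative sketch (not from the paper; all class and method names are our own). It shows a logical resource being carved out of a shared physical pool and then "dialed up" or "dialed down" against that pool in response to demand.

```python
# A minimal, illustrative sketch: carving a shared physical pool into logical
# resources that can be resized on demand ("dynamic provisioning").
from dataclasses import dataclass

@dataclass
class PhysicalPool:
    cpus: int
    memory_gb: int

@dataclass
class LogicalResource:
    pool: PhysicalPool
    cpus: int = 0
    memory_gb: int = 0

    def resize(self, cpus: int, memory_gb: int) -> bool:
        """Grow or shrink this logical resource against the shared pool."""
        delta_cpu = cpus - self.cpus
        delta_mem = memory_gb - self.memory_gb
        # Refuse growth the pool cannot absorb; shrinking always succeeds.
        if delta_cpu > self.pool.cpus or delta_mem > self.pool.memory_gb:
            return False
        self.pool.cpus -= delta_cpu          # a negative delta returns capacity
        self.pool.memory_gb -= delta_mem
        self.cpus, self.memory_gb = cpus, memory_gb
        return True

pool = PhysicalPool(cpus=64, memory_gb=256)
vm = LogicalResource(pool)
vm.resize(8, 32)    # initial provisioning
vm.resize(16, 64)   # dial up in response to rising demand
vm.resize(4, 16)    # dial down when demand drops
```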
A. Server operating systems and virtualization
Whereas networks and storage resources appreciates to advances in network facility management and SANs have already been proficient of being virtualized for a while, only now with the broader acceptance of server virtualization, do we have the complete basic foundation for cloud computing i.e. all computing properties can now be virtualized.Subsequently, server virtualization is the catalyst that is now motivating the transformation of the IT infrastructure from the traditional server-centric computing architecture to a network centric cloud computing architecture.When server virtualization is done, we have the capability to generate whole logical (virtual) servers that are free of the fundamental physical infrastructure or their physical position.We can postulate the computing, network and storage resources for all logical server (virtual instrument) and even transfer workloads from one virtual machine to another in real time (live migration).
All of this has helped significantly to transform the cost structure and efficiency of the data centre. Despite the many benefits that virtualization has enabled, we have yet to realize the full potential of virtualization with respect to cloud computing. This is because: 1) Traditional server-centric operating systems were not designed to manage shared distributed resources: the cloud computing paradigm is all about optimally sharing a set of distributed computing resources, whereas the server-centric computing paradigm is about dedicating resources to a specific application. The server-centric paradigm of computing essentially ties the application to the server. The job of the server operating system is to commit and guarantee the availability of all accessible computing resources on the server to the application. If another application is installed on the same server, the operating system will once again manage the entire server's resources so that each application behaves as if it has access to all available resources on that server. This model was not designed to allow for the "dial-up" or "dial-down" of resources allocated to an application in response to changing workload demands or business priorities. This is the reason that load balancing and clustering were introduced.
2) Current hypervisors do not provide sufficient separation between application management and physical resource management:
Today's hypervisors have just interposed themselves one level down below the operating system to enable multiple "virtual" servers to be hosted on one physical server.While this is great for consolidation, once again there is no way for applications to manage how, what and when resources are assigned to themselves without having to concern about the management of physical resources.It is our observation that the current generation of hypervisors which were also born from the era of server-centric computing does not define hardware management from application management much similar the server operating systems themselves.
3) Server virtualization does not yet enable sharing of distributed resources:
Server virtualization currently permits a single physical server to be structured into multiple logical servers.However, there is no way for example to generate a analytical or computer-generated server from resources that may be physically placed in separate servers.It is true that by virtue of the live migration capabilities that server virtualization technology enables, we are intelligent to move application loads from one physical server to another possibly even geographically distant physical server.However, moving is not the similar as sharing.It is our contention that to enable a truly distributed cloud computer, we must be able efficiently to share resources, no problem where they exist in purely based on the potential constraints of applications or services that consume their sources.
B. Storage networking & virtualization
Before the production of server virtualization, storage networking and storage virtualization permitted many improvements have been done in the data centre.The key improvement was the introduction of the FibreChannel (FC) protocol and Fibre Channel-based Storage Area Networks (SAN) which delivered great speed of storage connectivity and dedicated storage solutions to allow such profits as server-less backup, point to point reproduction, HA/DR and presentation optimization outside of the servers that run applications.However, these pay backs have come with improved management complication and costs.
C. Network virtualization
Virtual networks are now implemented inside the physical server to switch between the virtual servers, providing an alternative to the multiplexed, multi-pathed network channels by trunking them directly to the WAN transport, thus simplifying the physical network infrastructure.
D. Application creation and composition
The existing approach of using virtual machine images that contain the application, the OS and the storage disk images is once again born of a server-centric computing model and does not lend itself to deployment across shared distributed resources. In a cloud computing model, applications should ideally be built as a collection of services which can be composed, decomposed and distributed on the fly. Each of the services could be considered an individual process of a larger workflow that constitutes the application. In this way, individual services can be orchestrated and provisioned to optimize the overall performance and latency requirements of the application, as the sketch below illustrates.
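The following minimal sketch (illustrative only; the names ServiceStep, Workflow and the example services are our own, not from the paper) shows an application expressed as a workflow of independently provisionable services rather than as a single monolithic VM image, so each step could in principle be placed and scaled separately.

```python
# Illustrative sketch: an application composed as a workflow of services.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ServiceStep:
    name: str
    run: Callable[[dict], dict]   # each service transforms a shared context

@dataclass
class Workflow:
    steps: List[ServiceStep] = field(default_factory=list)

    def add(self, step: ServiceStep) -> "Workflow":
        self.steps.append(step)
        return self

    def execute(self, context: dict) -> dict:
        # Steps could equally be dispatched to different virtual servers;
        # here they run in-process to keep the sketch self-contained.
        for step in self.steps:
            context = step.run(context)
        return context

order_app = (Workflow()
             .add(ServiceStep("validate", lambda c: {**c, "valid": c["qty"] > 0}))
             .add(ServiceStep("price", lambda c: {**c, "total": c["qty"] * 9.99}))
             .add(ServiceStep("invoice", lambda c: {**c, "invoice_id": "INV-001"})))

print(order_app.execute({"qty": 3}))
```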
II. PROPOSED REFERENCE ARCHITECTURE MODEL
If we distill the observations from the previous section, we can see a couple of key themes emerging. That is: A. The next-generation architecture for cloud computing must completely decouple physical resource management from virtual resource management.
B. Provide the ability to mediate between applications and resources in real time.
As we stressed in the earlier section, we have yet to achieve complete decoupling of physical resource management from virtual resource management, but the introduction and broader adoption of hardware-assisted virtualization (HAV) is a significant and essential step towards this objective. Thanks to HAV, a next-generation hypervisor will be able to provide and truly guarantee the same level of access to the underlying physical resources. Moreover, this hypervisor should be capable of managing both the resources located locally within a server and any resources in other servers that may be located elsewhere physically and connected by a network. Once the management of physical resources is decoupled from virtual resource management, the need for a mediation layer that mediates the allocation of resources between the various applications and the shared distributed physical resources becomes obvious.
III. INFRASTRUCTURE PROVISION FABRICS
This layer comprises two components. Together, the two components provide a computing resource "dial-tone" that forms the basis for provisioning resources fairly to all applications in the cloud.
A. Distributed services mediation
This is an FCAPS-based (Fault, Configuration, Accounting, Performance and Security) abstraction layer that enables autonomous self-management of every individual resource in a network of resources that may be distributed geographically.
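To make the FCAPS abstraction concrete, the sketch below (illustrative only; the interface and class names are hypothetical, not prescribed by the paper) shows one way a per-resource management interface could expose the five FCAPS concerns so that each networked resource can self-manage.

```python
# Illustrative sketch of a per-resource FCAPS management interface.
from abc import ABC, abstractmethod

class FCAPSManagedResource(ABC):
    @abstractmethod
    def fault_status(self) -> dict: ...          # Fault: health checks, alarms

    @abstractmethod
    def configure(self, settings: dict) -> None: ...   # Configuration

    @abstractmethod
    def usage_record(self) -> dict: ...           # Accounting: metered consumption

    @abstractmethod
    def performance_metrics(self) -> dict: ...    # Performance: latency, throughput

    @abstractmethod
    def apply_policy(self, policy: dict) -> None: ...   # Security: access policy

class ManagedVolume(FCAPSManagedResource):
    """A storage volume that manages itself through the FCAPS interface."""
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.settings: dict = {}
        self.policy: dict = {}
        self.bytes_written = 0

    def fault_status(self):
        return {"healthy": True, "alarms": []}

    def configure(self, settings):
        self.settings.update(settings)

    def usage_record(self):
        return {"bytes_written": self.bytes_written}

    def performance_metrics(self):
        # Placeholder values; a real resource would report measured figures.
        return {"iops": 1200, "latency_ms": 2.5}

    def apply_policy(self, policy):
        self.policy.update(policy)

vol = ManagedVolume(capacity_gb=500)
vol.configure({"raid": "raid10"})
print(vol.fault_status(), vol.performance_metrics())
```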
B. Virtual resource mediation layer
This provides the ability to create logical virtual servers with a service-level guarantee that assures resources such as the number of CPUs, memory, bandwidth, latency, IOPS (I/O operations per second), storage throughput and capacity.
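A minimal sketch of such a guarantee is shown below (illustrative only; the field names and the admission check are our own, not an API defined in the paper). The point is that a logical virtual server is described by a resource specification, and the mediation layer only grants the guarantee if the shared pool can honour it.

```python
# Illustrative sketch of a virtual-server resource guarantee and admission check.
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtualServerSpec:
    cpus: int
    memory_gb: int
    bandwidth_mbps: int
    max_latency_ms: float
    iops: int
    storage_gb: int

@dataclass
class ResourcePool:
    cpus: int
    memory_gb: int
    bandwidth_mbps: int
    iops: int
    storage_gb: int

def admit(spec: VirtualServerSpec, pool: ResourcePool) -> bool:
    """Only grant the guarantee if the pool can honour every dimension of it."""
    fits = (spec.cpus <= pool.cpus and spec.memory_gb <= pool.memory_gb
            and spec.bandwidth_mbps <= pool.bandwidth_mbps
            and spec.iops <= pool.iops and spec.storage_gb <= pool.storage_gb)
    if fits:
        pool.cpus -= spec.cpus
        pool.memory_gb -= spec.memory_gb
        pool.bandwidth_mbps -= spec.bandwidth_mbps
        pool.iops -= spec.iops
        pool.storage_gb -= spec.storage_gb
    return fits

pool = ResourcePool(cpus=128, memory_gb=512, bandwidth_mbps=10_000,
                    iops=200_000, storage_gb=20_000)
spec = VirtualServerSpec(cpus=8, memory_gb=32, bandwidth_mbps=500,
                         max_latency_ms=5.0, iops=10_000, storage_gb=500)
print(admit(spec, pool))  # True if the guarantee can be honoured
```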
C. Distributed services assurance platform
This layer allows for the creation of FCAPS-managed virtual servers that package and host the desired choice of OS to allow the loading and execution of applications. Since the virtual servers implement FCAPS management, they can natively provide automated mediation services to guarantee fault management and reliability (HA/DR), performance optimization, accounting and security. This constitutes the management dial-tone in our reference architecture model.
D. Distributed services delivery platform
This is essentially a workflow engine that executes the application, which, as described in the previous section, is ideally composed as a business workflow that orchestrates a number of distributable workflow elements. This constitutes the services dial-tone in our reference architecture model.
E. Distributed services creation platform
This layer provides the tools that developers use to create applications defined as collections of services, which can be composed, decomposed and distributed on the fly to virtual servers that are automatically created and managed by the distributed services assurance platform.
F. Legacy integration services mediation
This layer provides integration and support for existing or legacy applications in our reference architecture model.
IV. DEPLOYMENT OF THE REFERENCE MODEL
Any generic cloud service platform must address the needs of four categories of stakeholders: 1) infrastructure providers, 2) service providers, 3) service developers, and 4) end users.
Below we explain how the reference model described here affects, benefits and is deployed by each of the above stakeholders.
A. Infrastructure providers
These are vendors who provide the underlying computing, network and storage resources that can be carved up into logical cloud computers, which can be dynamically controlled to deliver massively scalable and globally interoperable service network infrastructure. The infrastructure will be used both by the service developers who develop the services and by the end users who consume these services.
B. Service providers
Using our reference architecture, service providers will be able to assure both service developers and service users that resources will be available on demand. They will be able effectively to measure and meter end-to-end resource utilization, providing a dial-tone for the computing service while managing service levels to meet the availability, performance and security needs of each service. The service provider will now manage the application's connection to computing, network and storage resources with appropriate SLAs.
C. Service developers
They will be able to develop cloud-based services using the management services API to configure, monitor and manage service resource allocation, availability, utilization, performance and security of their applications in real time. Service management and service delivery will now be integrated into application development, allowing application developers to specify run-time SLAs.
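As a purely illustrative sketch of what such a management services API could look like (the paper does not define one, so every name below, including CloudManagementAPI, deploy_service and the SLA fields, is hypothetical), a developer might declare a run-time SLA at deployment time and later query it against observed metrics:

```python
# Hypothetical management-services API sketch: declaring and checking run-time SLAs.
class CloudManagementAPI:
    def __init__(self):
        self.services = {}

    def deploy_service(self, name: str, sla: dict) -> None:
        # Record the run-time SLA alongside the service definition.
        self.services[name] = {"sla": sla, "metrics": {}}

    def report_metrics(self, name: str, metrics: dict) -> None:
        self.services[name]["metrics"] = metrics

    def sla_violations(self, name: str) -> list:
        """Compare observed metrics against the declared upper-bound SLA values."""
        svc = self.services[name]
        violations = []
        for key, limit in svc["sla"].items():
            observed = svc["metrics"].get(key)
            if observed is not None and observed > limit:
                violations.append((key, observed, limit))
        return violations

api = CloudManagementAPI()
api.deploy_service("checkout", sla={"latency_ms": 50, "error_rate": 0.01})
api.report_metrics("checkout", {"latency_ms": 72, "error_rate": 0.002})
print(api.sla_violations("checkout"))  # [('latency_ms', 72, 50)]
```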
D. End users
Their demand for choice, mobility and interactivity with responsive user interfaces will continue to rise. The managed resources in our reference architecture will not only allow service developers to create and deliver services using logical servers that end users can dynamically provision in real time to respond to changing needs, but also give service providers the ability to charge the end user by metering exact resource usage against the required SLA.
V. CONCLUSIONS
In this paper, we have explained the need for implementing a truly dynamic cloud computing infrastructure consisting of a pool of physical computing resources, i.e. processors, memory, network bandwidth and storage, potentially distributed physically across server and geographical boundaries, which can be organized on demand into a dynamic logical entity, i.e. a "cloud computer", that can grow or shrink in real time in order to assure the desired levels of latency sensitivity, performance, scalability, reliability and security to any application that runs in it. We identified some key areas of deficiency in current virtualization and management technologies. In particular, we explained in detail the importance of separating physical resource management from virtual resource management, and why current operating systems were not designed to, and hence are not suited to, provide this capability for the distributed shared resources typical of a cloud deployment. We also highlighted the need for FCAPS-based (Fault, Configuration, Accounting, Performance and Security) service "mediation" to provide global management functionality for all the networked physical resources that comprise a cloud, irrespective of their distribution across many physical servers in different geographical locations. We then proposed a reference architecture model for a distributed cloud computing mediation (management) platform which will form the foundation for enabling next-generation cloud computing infrastructure. We showed how this infrastructure will engage and benefit key stakeholders such as infrastructure providers, service providers, service developers and end users.
The approach described in this paper is considerably different from most current cloud computing solutions, which are little more than hosted infrastructure or applications accessed over the Internet. The proposed architecture will significantly change the current landscape by enabling cloud computing service providers to offer a next-generation infrastructure platform that gives service developers and end users unprecedented control and dynamism in real time to assure SLAs for service latency, availability, performance and security.
Figure 2: Reference architecture model for next generation cloud computing infrastructure. | 3,455.2 | 2012-01-01T00:00:00.000 | [
"Computer Science"
] |
Corrosion inhibition of carbon steel in 1 M H2SO4 solution by Thapsia villosa extracts
Ethyl acetate extract (EAE) and butanolic extract (BE) of Thapsia villosa were investigated as corrosion inhibitors of carbon steel (CS) in 1 M H2SO4 using electrochemical impedance spectroscopy (EIS), potentiodynamic polarization and weight loss measurements. The effect of temperature on the corrosion behavior of CS was studied in the range of 20–40 °C. The experimental results show that EAE and BE are good corrosion inhibitors; the protection efficiency increased with increasing concentration of the extracts, but decreased with rising temperature. EAE and BE act as mixed-type inhibitors. The adsorption of the extracts on the CS surface follows the Langmuir isotherm. The apparent activation energies, enthalpies and entropies of the dissolution process are discussed.
Introduction
Metals and alloys react electrochemically with the environment to form stable compounds, in which the loss of metals occurs. Metallic structures are exposed to conditions that facilitate corrosion processes. Furthermore, hydrochloric and sulfuric acids are widely used for pickling and de-scaling of carbon steel which promote the acceleration of metallic corrosion, causing ecological risks and economic consequences in term of repair, replacement and product losses [1]. Therefore, the prevention of the corrosion is vital not only for the protection of metals but also in decreasing the dispersion of the toxic compounds into the environment [2]. One of the best-known methods for corrosion protection is the use of inhibitors [3]. Organic compounds having functional groups such as -OR, -COOH, -SR and/or NR 2 have been reported to inhibit corrosion of metals in acid solutions [4]. The presence of oxygen, sulfur, nitrogen atoms and multiple bonds in organic compounds enhances their adsorption ability and corrosion inhibition efficiency [1]. However, most of these compounds are expensive, toxic and not biodegradable [5]. Therefore, alternative sources of products are preferred. Investigation of plant extracts as corrosion inhibitors is interesting because they are ecologically acceptable and not expensive. Extracts of some plants such as bupleurum lancifolium [6], Limonium thouinii [7], and Punica granatum [8] have been reported to inhibit the corrosion of metals in acid solutions.
Thapsia villosa, which belongs to the family Apiaceae, grows over a wide area in the West Mediterranean region, including Portugal, Spain, the south of France and the north-west of Africa. Thapsia villosa has been used in folk medicine as a purgative [9]. Thapsia villosa is found to contain phenylpropanoids such as 2,3-dihydroxy-2-methylbutyric acids [9], sesquiterpenes [10], terpenes [11] and essential oils [12]. However, it has never been studied for the purpose of corrosion inhibition. The aim of this work is to investigate the inhibitory effects of ethyl acetate and n-butanol extracts as corrosion inhibitors for carbon steel in acidic solution using weight loss, potentiodynamic polarization curves and electrochemical impedance measurements.
Experimental
Material
ASTM A179 low carbon steel composed of (wt%): C 0.11 %, Mn 0.52 %, P 0.024 %, S 0.030 % and Fe balance was used in the present study. The steel specimens were taken from the seamless cold-drawn tube of a heat exchanger for petroleum refining. Specimens of size 3 × 3 × 0.2 cm, mechanically press cut from each sheet, were used for gravimetric measurements, whereas specimens used for polarization and EIS measurements were embedded in epoxy resin leaving a working area of 1.0 cm².
Preparation of plant extracts
Air-dried aerial parts of Thapsia villosa were macerated in methyl alcohol (70 %) at room temperature. The hydroalcoholic solutions were concentrated under reduced pressure to dryness and the residue was dissolved in hot water and kept cold overnight. After filtration, the residue was successively treated with ethyl acetate and n-butanol. Then, the solvents were removed to afford the ethyl acetate and n-butanol extracts [13,14]. The ethyl acetate extract (EAE) and n-butanol extract (BE) were then used directly in the experiments.
Solution
The aggressive solution of 1 M H 2 SO 4 was prepared by H 2 SO 4 98 % (Merck) with distilled water. The concentration range of the EAE and BE employed varied from 100 to 800 ppm.
Electrochemical measurements
The Electrochemical experiments were carried out in the conventional three-electrode cell consisting of a CS as working electrode, a platinum rod as counter electrode and a saturated calomel electrode (SCE) as a reference electrode. Before measurement the working electrode was immersed in test solution at open circuit potential (OCP) for 30 min to ensure OCP to reach steady state.
Electrochemical impedance spectroscopy (EIS) was carried out at the OCP, after each sample had been immersed for 30 min, over a frequency range of 100 kHz-10 mHz with a signal amplitude perturbation of 10 mV. The inhibition efficiency (ηR %) was estimated from the polarization resistance, where Rp and R0p denote the polarization resistance in the absence and presence of the inhibitor, respectively.
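The display equation appears to have been dropped from this copy; a standard relation consistent with the definitions above (our reconstruction, not copied from the source) is:

```latex
\eta_R\,\% = \frac{R_p^{0} - R_p}{R_p^{0}} \times 100
```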
The potentiodynamic polarization curves were recorded by sweeping the potential from a cathodic value of -250 mV to an anodic value of +250 mV vs. OCP at a sweep rate of 1.0 mV s-1. The inhibition efficiency (ηp %) was defined in terms of the corrosion current density [6], where icorr and i0corr represent the corrosion current density values in the absence and presence of inhibitor, respectively.
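Here too the display equation is missing; the standard expression implied by these definitions (our reconstruction) is:

```latex
\eta_p\,\% = \frac{i_{corr} - i_{corr}^{0}}{i_{corr}} \times 100
```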
All electrochemical measurements were performed using a computer-controlled instrument, Voltalab-PGZ 301 with Voltamaster (ver 7.0.8) software. The above electrochemical tests were conducted for each concentration of Thapsia villosa extracts at different temperatures. Each experiment was repeated at least three times to check the reproducibility.
Weight loss measurements
CS specimens were abraded with a series of SiC papers, washed with distilled water, degreased with acetone and dried with a cold air stream. Experiments were carried out under total immersion in stagnant aerated conditions at 20-40 °C. The specimens were weighed and suspended in beakers. After 7 h, the coupons were taken out, washed, dried and weighed accurately. From the weight loss data, the corrosion rate (CR) was calculated [15], where W is the average weight loss, A is the total area of the specimen and t is the immersion time (7 h). The inhibition efficiency (ηw) was then obtained, where CR and CR0 represent the corrosion rate in the absence and presence of inhibitor, respectively.
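The two display equations referred to in this paragraph are also missing; standard forms consistent with the definitions in the text (our reconstruction) are:

```latex
CR = \frac{W}{A\,t}, \qquad
\eta_w\,\% = \frac{CR - CR^{0}}{CR} \times 100
```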
Results and discussion EIS measurements Figure 1 shows the EIS response of CS in 1 M H 2 SO 4 solution without and with various concentrations of EAE and BE at 20°C, represented via Nyquist plots. Only one capacitive loop at the higher frequency range is observed which means that the corrosion of CS is controlled by the charge transfer process [16,17]. The increasing diameter of loop obtained in 1 M H 2 SO 4 in the presence of EAE and BE indicated the corrosion inhibition and the strengthening of inhibitor film [18]. These loops are not perfect semi circles which can be attributed to the frequency dispersion effect as a result of the roughness and inhomogeneous of metal surface [19,20]. Due to non-ideal frequency response the capacitance is usually replaced by a constant phase element (CPE) [19], whose impedance is given by [21]: where Q is the magnitude of the CPE, x is the angular frequency (x ¼ 2pf , where f is the AC frequency), j is the imaginary unit, and n is the deviation parameter of the CPE: 0 B n B 1, for n = 1, Eq. (5) agrees to the impedance of an ideal capacitor, where Q is identified with the capacity. A simple electrical equivalent circuit (EEC) has been proposed to model the experimental data. The EEC depicted in Fig. 2 is employed to analyze the impedance spectra, where R 1 represents the solution resistance, R 2 denotes the charge-transfer resistance, and a CPE instead of a pure capacitor represents the interfacial capacitance. The values of the interfacial capacitance C dl can be calculated from CPE parameter and polarization resistor according to the following equation [22,23]: where R p is the polarization resistor. The values of parameters such as R p , Q, n and v 2 , obtained from fitting the recorded EIS as well as the derived parameters C dl are listed in Table 1. The Chi-squared (v 2 ) is used to evaluate the precision of the fitted data. Inspection of Table 1 reveals that the v 2 values are low, which indicates that the fitted data have good agreement with the experimental data. It is observed that R p values increased and the C dl values decreased with increasing inhibitors concentration. The increase in R p values can be attributed to the adsorption of the inhibitors on the metal surface leading to the formation of protective film on the metal surface and thus decreases the extent of the dissolution reaction [24]. The decrease in the C dl values may be due to the increase in the thickness of the electric double layer [1]. The inhibition efficiency (g p %) was achieved at (60 %) and (80 %) for EAE and BE, respectively. 20°C were shown in Fig. 3. The electrochemical parameters including corrosion potential (E corr ), corrosion current density (i corr ), anodic and cathodic Tafel slopes (b a and b c ), surface coverage values (h) and inhibition efficiency (g p ¼ h  100) are presented in Table 2. It is clear from Fig. 3 and Table 2 that, the addition of both EAE and BE to the acid solution causes a remarkable decrease in the corrosion rate predominantly shifts the cathodic curves to lower values of current densities; it may be due to the adsorption of organic compounds present in the extracts at the active sites of CS surface, retarding both metallic dissolution and hydrogen evolution reactions and consequently slowed down the corrosion process [25]. The structure and functional groups of the inhibitors play prominent roles during the adsorption process [1]. 
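The CPE impedance and double-layer capacitance relations referenced earlier in this section are garbled in this copy (for instance, "x ¼ 2pf" should read ω = 2πf, and "v 2" is χ²). The standard CPE impedance they describe, together with one commonly used conversion of the CPE parameters to C_dl via the polarization resistance (our reconstruction of the relation the text attributes to refs [22,23]), is:

```latex
Z_{CPE} = \frac{1}{Q\,(j\omega)^{n}}, \qquad \omega = 2\pi f, \qquad
C_{dl} = \left(Q\,R_p^{\,1-n}\right)^{1/n}
```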
Inspection of Table 2 showed that both anodic and cathodic Tafel slopes do not change remarkably upon addition of EAE and BE, which indicates that the extracts act as a mixed type inhibitor for the corrosion of C steel. The values of inhibition efficiency ðg p %Þ determined using potentiodynamic
Weight loss measurements
Effect of concentration and temperature on corrosion rate and inhibition efficiency The weight loss expressed as the corrosion rate (CR) for the CS specimens in 1 M H 2 SO 4 solution containing different concentrations of Thapsia villosa extracts (EAE and BE) as a function of inhibitor concentration in the temperature range of 20-40°C is showed in Fig. 4. Inspection of the plots revealed that CR decreases noticeably with increase in both of EAE and BE concentrations, indicating that the addition of plant extracts retard the dissolution process of CS.
In similar experimental conditions, the influence of temperature on CR was studied. The results presented in the Table 3 and Fig. 4 show that the CR increases with temperature both in uninhibited and inhibited solutions, and goes up more rapidly at the higher temperature; the rise in temperature usually accelerates the corrosion reactions which results in higher dissolution rates of the metal. The variation of inhibition efficiency (g w %) with temperature and plant extracts concentrations is shown in Table 3 and Fig. 5. It is clear from Fig. 5 that g w % increases with the increase in EAE and BE concentration, while it decreased with increase in temperature. This can be attributed to increased rate of desorption of phytochemical compounds from the surface of CS with increasing temperature because these two opposite processes are in equilibrium [26,27]. Several authors have reported similar observation and the plant extracts were believed to be physically adsorbed on the CS surface [26,28,29].
At the EAE concentration of 800 ppm, the maximum EI % in 1 M H 2 SO 4 is 60 % at 20°C; 57 % at 30°C; and 37 % at 40°C. While at the same concentration of BE, the maximum EI % in 1 M H 2 SO 4 is 74 % at 20°C; 67 % at 30°C; and 56 % at 40°C. The results indicate that both extracts are good inhibitors for CS in 1 M H 2 SO 4 solution and the maximum inhibition efficiency was achieved using BE.
Adsorption isotherm
The decrease in CR by addition of EAE and BE is attributed to either adsorption of the plant component on the CS surface [30]. To evaluate the adsorption process of phytochemical components on the CS surface, Langmuir, Temkin and Freundlich isotherms were obtained according to following equations: where: C is the concentration of inhibitor, K ads is the adsorption equilibrium constant, h is the surface coverage, a is the adsorbate parameter The correlation coefficient (r 2 ), presented in the Table 4, was used to choose the isotherm that best fit experimental data. Best results from the plots were obtained for Langmuir adsorption isotherm, that suggests monolayer adsorption of both EAE and BE on the CS surface at all temperatures. Figure 6a, b show the straight lines of C/h versus C, deviate from unity for EAE at 20-40°C, indicates that the interaction force between phytochemical compounds on the CS surface cannot be neglected [28,31], and each molecule occupies more than one adsorption site on the metal surface [32]. A modified Langmuir adsorption isotherm could be applied to this phenomenon, which is given by following equation [33]: On the contrary, for the BE, the slope almost equals to unity, which suggests that the interaction of adsorbed species is negligible [34].
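The isotherm equations tested above (Langmuir, Temkin, Freundlich) are not legible in this copy. For the Langmuir isotherm, which gave the best fit, and the modified (corrected) Langmuir form invoked for EAE, the standard expressions (our reconstruction) are:

```latex
\text{Langmuir:}\quad \frac{C}{\theta} = \frac{1}{K_{ads}} + C,
\qquad
\text{modified Langmuir:}\quad \frac{C}{\theta} = \frac{n}{K_{ads}} + n\,C
```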
The adsorptive equilibrium constant (K ads ) listed in Table 5 was estimated from intercept of the Langmuir isotherm plot. The values of equilibrium constant decrease with rise in temperature, which may be attributed to desorption of inhibitor components at higher temperature [35,36]. Effect of the temperature The effect of temperature on the rate of the CS corrosion process using electrochemical measurements was studied in 1 M H 2 SO 4 alone and in the presence of EAE and BE. Corresponding data are given in Tables 6 and 7. It was found that the corrosion current density (i corr ) increased but the polarization resistance (R p ) and the inhibition efficiency decreased with increasing temperature. The decrease in inhibition efficiency reveals that the film formed on the metal surface is less protective at higher temperatures, since desorption rate of the inhibitor is greater at higher temperatures [37]. The activation parameters were calculated from Arrhenius equation: where CR is corrosion rate, E a is the apparent activation energy of the CS dissolution and A is the Arrhenius preexponential factor. The apparent activation energy was calculated from the plots of logarithm of CR versus 1/T (Fig. 7) and shown in Table 8. It can be seen in the Table 8 that E a is higher in the presence of the inhibitors than in their absence and increased with the increase in concentration of EAE and BE, which indicate a strong adsorption of the inhibitor molecules at the CS surface [1].
An alternative form of the Arrhenius equation is the transition-state equation [38,39]: CR = (RT/(N_A h)) exp(ΔS_a/R) exp(−ΔH_a/RT), where h is Planck's constant, N_A is Avogadro's number, ΔS_a is the entropy of activation and ΔH_a is the enthalpy of activation. Figure 8 shows the corresponding plots of log(CR/T) versus 1/T; the values of ΔH_a and ΔS_a obtained from their slopes and intercepts are listed in Table 8. For both the ethyl acetate and n-butanol extracts, the positive signs of the enthalpies reflect the endothermic nature of the dissolution process [39,40]. It is evident from the data listed in Table 8 that the values of E_a^0 are larger than the corresponding values of ΔH_a^0, indicating that the corrosion process involves a gaseous reaction, namely the hydrogen evolution reaction, associated with a decrease in the total reaction volume [39,40]. Moreover, the difference E_a^0 − ΔH_a^0 is 2.74 kJ/mol, which is approximately equal to the average value of RT. This shows that the corrosion process is a unimolecular reaction, as characterized by the equation E_a − ΔH_a = RT. Inspection of Table 8 reveals that the sign of ΔS_a is negative in the free acid solution, whereas it becomes positive upon addition of both extracts; this suggests that the adsorption of organic inhibitor molecules is accompanied by desorption of water molecules from the steel surface [41]. Hence, the gain in entropy is attributed to the increase in solvent entropy and to a more positive water desorption enthalpy [26]. The positive values of the entropy, related to substitutional adsorption, can be attributed more to the increase of adsorbed inhibitor molecules than to the decrease of water molecule desorption [41].
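The activation parameters described above come from two straight-line fits. The sketch below (ours, not part of the original study) illustrates the arithmetic with NumPy, using invented corrosion-rate values in place of the measured data; E_a follows from the slope of ln CR versus 1/T, and (ΔH_a, ΔS_a) from the fit of ln(CR/T) versus 1/T.

    # Sketch of the Arrhenius / transition-state analysis described above:
    # E_a from the slope of ln(CR) vs 1/T, and (ΔH_a, ΔS_a) from ln(CR/T) vs 1/T.
    # CR values are illustrative placeholders, not the measured corrosion rates.
    import numpy as np

    R = 8.314          # J mol^-1 K^-1
    N_A = 6.022e23     # mol^-1
    h = 6.626e-34      # J s

    T = np.array([293.15, 303.15, 313.15])       # K (20, 30, 40 °C)
    CR = np.array([0.42, 0.88, 2.10])            # corrosion rate, arbitrary units

    # Arrhenius: ln CR = ln A - E_a / (R T)
    slope_a, _ = np.polyfit(1.0 / T, np.log(CR), 1)
    E_a = -slope_a * R / 1000.0                  # kJ/mol

    # Transition state: ln(CR/T) = ln(R/(N_A h)) + ΔS_a/R - ΔH_a/(R T)
    slope_t, intercept_t = np.polyfit(1.0 / T, np.log(CR / T), 1)
    dH = -slope_t * R / 1000.0                   # kJ/mol
    dS = (intercept_t - np.log(R / (N_A * h))) * R   # J mol^-1 K^-1

    print(f"E_a  = {E_a:.1f} kJ/mol")
    print(f"dH_a = {dH:.1f} kJ/mol, dS_a = {dS:.1f} J/(mol K)")
    print(f"E_a - dH_a = {E_a - dH:.2f} kJ/mol (compare with RT = {R*T.mean()/1000:.2f} kJ/mol)")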
Conclusion
It can be concluded as follows:
• EAE and BE of Thapsia villosa act as good inhibitors for the corrosion of CS in 1 M H₂SO₄ solution.
• The inhibition efficiencies obtained from the electrochemical tests and the weight loss measurements were in good agreement.
"Environmental Science",
"Chemistry"
] |
A System of Interaction and Structure III: The Complexity of BV and Pomset Logic
Pomset logic and BV are both logics that extend multiplicative linear logic (with Mix) with a third connective that is self-dual and non-commutative. Whereas pomset logic originates from the study of coherence spaces and proof nets, BV originates from the study of series-parallel orders, cographs, and proof systems. Both logics enjoy a cut-admissibility result, but for neither logic can this be done in the sequent calculus. Provability in pomset logic can be checked via a proof net correctness criterion and in BV via a deep inference proof system. It has long been conjectured that these two logics are the same. In this paper we show that this conjecture is false. We also investigate the complexity of the two logics, exhibiting a huge gap between the two. Whereas provability in BV is NP-complete, provability in pomset logic is $\Sigma_2^p$-complete. We also make some observations with respect to possible sequent systems for the two logics.
There are two ways to put this paper into perspective. First, it is the third paper in a series of five papers: the first two [Gug07,Tiu06] introduce a logic called system BV, prove cut elimination for it, and show that deep inference is necessary for having a cut-free deductive system for BV; and the last two [SG11,GS11] study system NEL, a conservative extension of BV with the exponentials of linear logic. The second way to look at this paper is that it finally answers the longstanding open question whether BV and pomset logic [Ret21] are the same. Pomset logic was discovered by Christian Retoré [Ret93,Ret97a] through the study of coherence spaces, which form a semantics of proofs for linear logic [Gir87], by observing that next to the two operations ⊗ (tensor or multiplicative conjunction) and ⅋ (par or multiplicative disjunction) on coherence spaces there are two other operations ⊳ and ⊲, which are non-commutative, obey A ⊳ B = B ⊲ A, and are self-dual, i.e., (A ⊳ B)⊥ = A⊥ ⊳ B⊥. Furthermore, there are linear maps A ⊗ B ⊸ A ⊳ B and A ⊳ B ⊸ A ⅋ B. From this semantic observation, Retoré derived a proof net syntax, i.e., a graph-like representation of proofs together with a combinatorial correctness criterion that distinguishes actual proofs among the considered space of graphs, together with a cut elimination theorem. However, he could not provide a sound and complete cut-free sequent calculus for this logic. System BV was found by Alessio Guglielmi [Gug07,Gug99] (the initial ideas of the inference rules were also present in [Ret99b], even though not formulated as a proof system admitting cut elimination) through a syntactic investigation of the connectives of pomset logic and a graph-theoretic study of series-parallel orders and cographs. The difficulty of presenting this combination of commutative and non-commutative connectives in the sequent calculus triggered the development of the calculus of structures [GS01], the first proper deep inference proof formalism (an introductory survey on deep inference can be found in [TS19]). This leads to the strange situation that we have two logics, pomset logic and BV, which are both conservative extensions of multiplicative linear logic with mix (MLL+mix) [FR94] with a non-commutative connective ⊳ such that A ⊗ B ⊸ A ⊳ B ⊸ A ⅋ B, and which both enjoy a cut elimination result; the only difference is that pomset logic only had a proof net syntax but no deductive proof system, and BV only had a deductive proof system but no proof nets. This naturally led to the conjecture that both logics are the same [Str03b].
In this paper we show that this conjecture is false. It can easily be shown [Str03b,Str03a,Ret99b] that every theorem of BV is also a theorem of pomset logic, and for the sake of clarity, we also give a proof in this paper (in Section 3.1). However, the converse is not true, and we give an example of a formula that is a theorem of pomset logic but not provable in BV (in Section 3.2).
This naturally leads to the question of complexity. Do both logics have the same or different complexity? It has been observed in [Kah08] that provability in BV is NP-complete. The reason is that NP-hardness is inherited from MLL+mix, and containment in NP follows from the fact that the size of every proof in BV is polynomial in the size of the conclusion, and the correctness of such a proof can be checked in time which is linear in the size of the proof (see Section 4.2 for more details). However, provability in pomset logic is Σ^p_2-complete. Even though the size of a pomset logic proof net is polynomial in the size of its conclusion, checking correctness of such a proof net is coNP-complete. The details for these results are discussed in Sections 4.3–4.5.
This complexity result explains why it is impossible to give a deductive proof system (in the sense of Cook and Reckhow [CR79]) for pomset logic. Nonetheless, there is a recent proposal by Slavnov [Sla19] for a decorated sequent calculus for pomset logic. We look at this calculus in Section 5.4 and relate it to our complexity result. Before that, we show in Section 5.1 that the sequent calculus with cut that Retoré proposed for pomset logic is in fact a sound and complete sequent calculus for BV (in various publications on pomset logic, Retoré proposes different sequent calculi: one in his PhD thesis [Ret93, Chapitre 8], one in [Ret97a, Section 7], and more recently another one in [Ret21]; we work here with the one in [Ret21]). In summary, the paper makes the following contributions:
• In Section 2, we give a gentle and easily accessible introduction to pomset logic and BV. We unify the notation and terminology, and present some important properties that will be needed in later sections.
• In Section 3, we show that BV is properly contained in pomset logic, by showing that every theorem of BV is a theorem of pomset logic, but not vice versa. The results of this section have already been presented in [NS22], of which this article is an extended version.
• In Section 4, we discuss the complexity of pomset logic and BV. More precisely, we recall Kahramanoğulları's result on the NP-completeness of BV, and we show that checking correctness of a pomset logic proof is coNP-complete and that provability in pomset logic is Σ^p_2-complete.
• Finally, in Section 5, we come back to the sequent calculus, discussing the difficulties that the two logics pose, and how they can or cannot be overcome.
2. Preliminaries on Pomset Logic and BV
In this section we will introduce the two logics, BV and pomset logic, together with some basic underlying graph-theoretical and proof-theoretical concepts. Even though pomset logic was discovered through the study of coherence semantics, we will here only discuss its syntax, as coherence spaces are not needed for the results of this paper.
2.1. Formulas, Duality and Sequents. The formulas of pomset logic and BV are in this paper denoted by capital Latin letters A, B, C, . . . and are generated from propositional variables a, b, c, . . ., their duals a⊥, b⊥, c⊥, . . . and the unit I via the three binary connectives tensor ⊗, par ⅋, and seq ⊳. We shall also be led to consider terms built with those connectives whose leaves are taken in arbitrary sets, which justifies the following more general definition.
Definition 2.1. The generalized formulas over the set X are generated by the grammar A, B ::= I | x | (A ⊗ B) | [A ⅋ B] | ⟨A ⊳ B⟩, where x ranges over X. We fix a countable set V = {a, b, c, . . .} of propositional variables. To each variable a we injectively associate a dual a⊥; we write V⊥ = {a⊥ | a ∈ V}, and require that V ∩ V⊥ = ∅.
An atom is either a variable (positive atom) or the dual of a variable (negative atom). A formula is a generalized formula over the set of atoms V ∪ V ⊥ . A generalized formula over X is linear when it contains at most one occurrence of each x ∈ X. In particular, when X is the set of atoms, a linear formula contains at most one positive occurrence a and at most one negative occurrence a ⊥ of each propositional variable a ∈ V.
Definition 2.2. The size of a generalized formula A over the set X, denoted by |A|, is the number of occurrences of elements of X in A.
For better readability of large formulas, we use here different kinds of parentheses for the different connectives. In the following, we also omit outermost parentheses.
Definition 2.3. We define the relation ≡ on generalized formulas to be the smallest congruence generated by the rules of Figure 1. Those correspond to associativity of , , ⊳, commutativity of , , and unit equations (I behaves as unit for all three connectives).
As usual in the linear logic tradition, negation is defined not as a connective but as a mapping from formulas to formulas. Note that this does not make sense for generalized formulas.
Definition 2.4. The involutive (linear) negation or duality (−)⊥ is extended from propositional variables to formulas by taking De Morgan's laws as its inductive definition: (a⊥)⊥ = a, I⊥ = I, (A ⊗ B)⊥ = A⊥ ⅋ B⊥, (A ⅋ B)⊥ = A⊥ ⊗ B⊥, and (A ⊳ B)⊥ = A⊥ ⊳ B⊥. The first clause means that linear negation defines a fixed-point-free involution on the set of atoms. The last clause is what we mean when we say that seq is self-dual; note that the right-hand side is indeed A⊥ ⊳ B⊥ and not B⊥ ⊳ A⊥. We will also need the notion of sequent in pomset logic. While traditional sequent calculi use multisets of formulas as sequents, pomset logic is thus named because its sequents are partially ordered multisets of formulas. While Retoré's early work [Ret93,Ret97a] involved arbitrary partial orders, we consider here the simplified version from [Ret21] where sequents are equipped with series-parallel orders. We shall define those orders in the next subsection (see Proposition 2.31 and Remark 2.32); for now, let us just say that those orders admit a syntactic description that we give here.
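For readers who want to experiment with these definitions, here is a small Python sketch (our own illustration, not part of the paper) that encodes formulas as nested tuples and implements the duality of Definition 2.4; the representation and helper names are arbitrary choices.

    # Formulas as nested tuples (a sketch, not notation from the paper):
    #   ('atom', 'a', True)  -> the positive atom a;  ('atom', 'a', False) -> a⊥
    #   ('unit',)            -> the unit I
    #   ('tensor', A, B), ('par', A, B), ('seq', A, B)  -> A ⊗ B, A ⅋ B, A ⊳ B

    def dual(f):
        """Involutive negation of Definition 2.4 (De Morgan laws); note that
        seq is self-dual *without* swapping its arguments."""
        kind = f[0]
        if kind == 'atom':
            return ('atom', f[1], not f[2])
        if kind == 'unit':
            return f
        a, b = f[1], f[2]
        if kind == 'tensor':
            return ('par', dual(a), dual(b))
        if kind == 'par':
            return ('tensor', dual(a), dual(b))
        if kind == 'seq':
            return ('seq', dual(a), dual(b))      # (A ⊳ B)⊥ = A⊥ ⊳ B⊥
        raise ValueError(f)

    def size(f):
        """Number of atom occurrences (Definition 2.2)."""
        if f[0] == 'atom':
            return 1
        if f[0] == 'unit':
            return 0
        return size(f[1]) + size(f[2])

    # (a ⊳ b)⊥ = a⊥ ⊳ b⊥
    example = ('seq', ('atom', 'a', True), ('atom', 'b', True))
    assert dual(example) == ('seq', ('atom', 'a', False), ('atom', 'b', False))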
It is then natural to define a collapse operation from sequents into formulas. But since sequents are equivalence classes modulo ≡, the relation that we get is not functional. Definition 2.9. We say that a formula A corresponds to a sequent Γ when: • either Γ = A; • or, inductively, there exist B and C that correspond respectively to ∆ and Λ such that either A = B ⅋ C and Γ = [∆, Λ], or A = B ⊳ C and Γ = ⟨∆; Λ⟩. We shall also say that Γ corresponds to A in this case.
For instance, a ⅋ a⊥ and a⊥ ⅋ a both correspond to [a, a⊥], but any formula also corresponds to itself as a singleton sequent. What is important is that if a formula corresponds to a sequent, then one is provable if and only if the other is; this will clearly hold for all the systems in this paper for which this property can be stated.
2.2. Dicographs, Relation Webs and Series-Parallel Orders.
In [Ret97a], Retoré presents proof nets for pomset logic as RB-digraphs, that is, directed graphs equipped with perfect matchings, extending his reformulation of MLL+mix proof nets as undirected RB-graphs [Ret03]. We recall these notions below.
Definition 2.10. A digraph G = (V_G, R_G) consists of a finite set of vertices V_G and a set of edges R_G ⊆ V_G × V_G. A labeled digraph is a digraph G equipped with a map ℓ : V_G → L assigning each vertex v of V_G a label ℓ(v) ∈ L in the label set L. If L is the set V ∪ V⊥ of atoms, we speak of an atom-labeled digraph.
Definition 2.11. An isomorphism between two digraphs G and H is a bijection on vertices f : V_G → V_H such that (u, v) ∈ R_G if and only if (f(u), f(v)) ∈ R_H. As usual, if such an f exists, G and H are said to be isomorphic. Furthermore, if G and H are endowed with the vertex labelings ℓ_G and ℓ_H respectively, and ℓ_G = ℓ_H ∘ f, then we say that f is an isomorphism of labeled digraphs.
Definition 2.12. For a given digraph G = (V_G, R_G) we define the following four sets: R⊳_G = {(u, v) | (u, v) ∈ R_G and (v, u) ∉ R_G}, R⊲_G = {(u, v) | (v, u) ∈ R_G and (u, v) ∉ R_G}, R⊗_G = {(u, v) | (u, v) ∈ R_G and (v, u) ∈ R_G}, and R⅋_G = {(u, v) | u ≠ v, (u, v) ∉ R_G and (v, u) ∉ R_G}. Note that these sets can be seen as binary relations and as sets of edges in the graph. We will use them interchangeably in both settings.
Remark 2.14. Conversely, a set V together with four binary relations R⊳, R⊲, R⊗, R⅋ determines a digraph in this way provided that (1) the four relations are pairwise disjoint, (2) R⊗ and R⅋ are symmetric, (3) R⊲ is the converse of R⊳, and (4) together they cover all pairs of distinct vertices. Notation 2.15. Following [Ret97a], when drawing digraphs, we use (red/regular) arrows to denote edges in R⊳_G (and R⊲_G), and arrow-free edges for R⊗_G. For R⅋_G we use no lines at all, as in the following five examples. This allows us to see (V_G, R⊗_G) and (V_G, R⅋_G) as undirected graphs.
Let us now relate (generalized) formulas and digraphs.
Definition 2.16. Let G and H be digraphs with disjoint vertex sets. We can define the following operations that correspond to the connectives of pomset logic and BV: G ⅋ H = (V_G ∪ V_H, R_G ∪ R_H), G ⊗ H = (V_G ∪ V_H, R_G ∪ R_H ∪ (V_G × V_H) ∪ (V_H × V_G)), and G ⊳ H = (V_G ∪ V_H, R_G ∪ R_H ∪ (V_G × V_H)). Definition 2.17. The mapping ⟦·⟧ from linear generalized formulas over a set X to digraphs with vertices in X is defined inductively as follows: ⟦I⟧ = ∅, ⟦x⟧ = •x, ⟦A ⊗ B⟧ = ⟦A⟧ ⊗ ⟦B⟧, ⟦A ⅋ B⟧ = ⟦A⟧ ⅋ ⟦B⟧, and ⟦A ⊳ B⟧ = ⟦A⟧ ⊳ ⟦B⟧, where ∅ is an abbreviation for the empty graph (∅, ∅) and •x = ({x}, ∅) is the unique digraph that has x as its single vertex. Note that linearity is required to fulfill the disjointness assumption of the previous definition.
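Continuing the illustration, the following sketch (again ours, with the same tuple encoding) computes the digraph of a formula under the convention just described: ⅋ adds no edges, ⊗ adds edges in both directions, and ⊳ adds edges from the left operand to the right one.

    # Sketch of the translation from a linear formula to its dicograph:
    # vertices are atom occurrences, ⅋ adds no edges, ⊗ adds edges in both
    # directions, and ⊳ adds edges from the left operand to the right one.
    # Occurrences are numbered in left-to-right order so that the formula
    # need not be literally linear.

    def to_dicograph(f, counter=None):
        if counter is None:
            counter = [0]
        kind = f[0]
        if kind == 'unit':
            return [], set()                      # the empty graph
        if kind == 'atom':
            v = (counter[0], f[1], f[2])          # (position, name, polarity)
            counter[0] += 1
            return [v], set()
        lv, le = to_dicograph(f[1], counter)
        rv, re = to_dicograph(f[2], counter)
        edges = le | re
        if kind == 'tensor':
            edges |= {(u, w) for u in lv for w in rv}
            edges |= {(w, u) for u in lv for w in rv}
        elif kind == 'seq':
            edges |= {(u, w) for u in lv for w in rv}
        # 'par' adds no edges between the two parts
        return lv + rv, edges

    # The dicograph of (a ⊳ b) ⅋ c has three vertices and a single directed edge a -> b.
    f = ('par', ('seq', ('atom', 'a', True), ('atom', 'b', True)), ('atom', 'c', True))
    vertices, edges = to_dicograph(f)
    assert len(vertices) == 3 and len(edges) == 1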
Definition 2.18. If A is a generalized formula over a set X that is not assumed to be linear, we may translate it to a digraph as follows. Choose a linear generalized formula A ′ , a set Y and a map ℓ : Y → X such that A is obtained from A ′ by the substitution induced by ℓ.
Then the labeled digraph ( A ′ , ℓ) corresponds to A. We may write it A , keeping in mind that it is only defined up to isomorphism of labeled digraphs.
Proposition 2.19. For every linear generalized formula
Example 2.20. The last digraph in Equation The point of the map · is to give an intrinsic representations of (generalized) formulas modulo ≡. This is shown in [Gug07], where the result is stated in terms of "structures" and "relation webs". The former are just formulas modulo ≡ while the latter are digraphs defined through a quadruple of relations as in Remark 2.14. Further discussion of relation webs will take place later in this subsection (Definition 2.27). For now, we give a formulation using our previous definitions. The next interesting question is how we can characterize the graphs that are translations of formulas or sequents. In fact, they form the class of directed cographs (which we shall abbreviate as "dicograph" as in [Ret21], cf. Definition 2.26); we refer to [BG18, Section 11.6] for a survey. It admits three characterizations that have been found independently in [BdGR97,Gug07,CP06]. They all rely on the notion of induced subgraph.
In this case we also say that H is an induced subgraph of G and denote that by H ⊑ G. If additionally V_H ⊊ V_G then we write H ⊏ G.
We give detailed statements for the characterizations from [BdGR97,Gug07] below, for the sake of completeness, and to show that they are very similar.
Definition 2.25. An undirected graph is P₄-free if it does not contain a P₄ (an induced path on four vertices) as an induced subgraph, and a directed graph is N-free if it does not contain an N (a particular orientation of the P₄, drawn like the letter N) as an induced subgraph. Definition 2.27. A relation web is a set V equipped with four binary relations R⊳, R⊲, R⊗, R⅋ obeying the four conditions in Remark 2.14 together with the following conditions: (5) the relations R⊳ and R⊲ are transitive, (6) Triangular property: for any R₁, R₂, R₃ ∈ {R⊳ ∪ R⊲, R⊗, R⅋}, if (u, v) ∈ R₁ and (v, w) ∈ R₂ and (w, u) ∈ R₃ then R₁ = R₂ or R₂ = R₃ or R₃ = R₁, (7) Square property: (V, R⊗) and (V, R⅋) are P₄-free, and (V, R⊳) is N-free.
Theorem 2.28. Let G be a digraph. Then the following are equivalent: The equivalence 1 ⇐⇒ 2 has been shown in [BdGR97, Section 5], while the equivalence 1 ⇐⇒ 3 can be found in [Gug07, Theorems 2.2.4 and 2.2.7]. A direct proof of 2 ⇐⇒ 3 is a straightforward exercise, that we encourage our readers to do by themselves.
Let us also briefly mention the characterization of dicographs from [CP06], that uses forbidden induced subgraphs as its only condition.
Theorem 2.29 ([CP06, Theorem 2]). There exists a set F of 8 isomorphism classes of digraphs such that the class of digraphs whose induced subgraphs are not in F coincides with the class inductively generated by the operations of Definition 2.16 starting from singlevertex graphs.
It follows from the definitions that this inductively generated class corresponds to the first item of Theorem 2.28, so this is indeed a characterization of dicographs.
2.3. Perfect Matchings, RB-digraphs and Alternating Elementary Cycles.
We have just seen a representation of formulas as graphs. In this subsection and the next one, we recall how to graphically represent pomset logic proofs as well. This requires a few graph-theoretic definitions first.
Recall that an undirected graph is 1-regular if every vertex is incident to exactly one edge, and a perfect matching of some undirected graph is a 1-regular spanning subgraph (so the notion of perfect matching is usually defined relatively to some ambient graph). As a slight abuse of language, we use "perfect matching" in this paper to designate a directed counterpart to 1-regular graphs.
Definition 2.34. We say that a digraph G = (V_G, E_G) is a perfect matching when (1) every vertex has exactly one outgoing edge and exactly one incoming edge in E_G, i.e., for every u ∈ V_G there are unique v, w ∈ V_G with (u, v) ∈ E_G and (w, u) ∈ E_G, and (2) E_G is symmetric, i.e., (u, v) ∈ E_G whenever (v, u) ∈ E_G. This means, in particular, that in the previous item, v = w.
Definition 2.35. An RB-digraph is a triple G = (V_G, R_G, B_G) such that (V_G, B_G) is a perfect matching. We call the edges in B_G the matching edges or B-edges, and those in R_G the non-matching edges or R-edges. We say that G is an RB-dicograph when (V_G, R_G) is a dicograph. Following Definition 2.11, we define an isomorphism between G and another RB-digraph as a bijection on vertices that preserves both the R-edges and the B-edges. This extends in the only sensible way to a notion of isomorphism between labeled RB-digraphs.
Notation 2.36. In all figures representing RB-digraphs, we will (following [Ret99b]) draw the matching edges bold and blue, while the non-matching edges will be drawn regular and red. Definition 2.38. Let G = (V G , E G ) be a digraph and n ∈ N with n ≥ 2. An elementary path of length n in G is a sequence of vertices u 0 , . . . , u n ∈ V G without repetitions such that (u i , u i+1 ) ∈ E G for all i ∈ {0, . . . , n − 1} (so the length counts the number of edges, not of vertices). An elementary cycle is defined in the same way except there is the single repetition u n = u 0 (so the length of an elementary cycle is both its number of vertices and its number of edges).
An alternating elementary path (or ae-path) in an RB-digraph (V_G, R_G, B_G) is an elementary path u_0, . . . , u_n in the digraph (V_G, R_G ∪ B_G) such that
• either (u_i, u_{i+1}) ∈ R_G when i is odd and (u_i, u_{i+1}) ∈ B_G when i is even,
• or (u_i, u_{i+1}) ∈ R_G when i is even and (u_i, u_{i+1}) ∈ B_G when i is odd.
(One could be tempted to simplify this definition by saying that for any two consecutive edges, one is in R_G and the other in B_G; the issue, however, is that R_G and B_G are not required to be disjoint.)
An alternating elementary cycle (or ae-cycle) in (V_G, R_G, B_G) is an elementary cycle of even length in (V_G, R_G ∪ B_G) that satisfies the above alternation condition. Morally, the parity condition ensures that the cycle also alternates between (u_{n−1}, u_n) and (u_1, u_2), or equivalently that the "change of base points" u′_i = u_{i+k mod n} sends ae-cycles to ae-cycles.
Example 2.39. The first and the fifth graph in Example 2.37 do not contain an ae-cycle. In all other graphs in that example, the four vertices form an ae-cycle.
The existence or non-existence of ae-cycles in RB-dicographs will play a central role in this paper. We note in passing that in the classical theory of matchings in undirected graphs, the absence of ae-cycles admits alternative characterizations and entails deep structural properties, which have been applied to MLL+mix proof nets [Ret03,Ngu20].
Definition 2.40. Let u_0, . . . , u_{n−1}, u_n = u_0 be an elementary cycle in a digraph G = (V_G, E_G). An edge (v, w) ∈ E_G is a chord for this cycle when v = u_i and w = u_j for some i, j ∈ {0, . . . , n − 1} such that i ≢ j + 1 (mod n) and j ≢ i + 1 (mod n). That is, both vertices v and w occur in the cycle but the edges (v, w) and (w, v) are not part of the cycle. A cycle is chordless if it does not admit any chord in G. A chordless ae-cycle in an RB-digraph (V_G, R_G, B_G) is an ae-cycle which has no chord in the total digraph (V_G, R_G ∪ B_G). Note that since B_G is a perfect matching, if an ae-cycle admits a chord, then this chord is necessarily an R-edge. Example 2.41. In Example 2.37, the ae-cycles in the second and sixth graph admit chords. The ae-cycles in the third and fourth graph are chordless.
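Since chordless ae-cycles are the combinatorial heart of the correctness criteria discussed below, here is a brute-force Python sketch (ours; exponential time, intended only for small hand-made examples, and possibly listing the same cycle several times) that enumerates them in an RB-digraph given as two sets of ordered pairs.

    # Brute-force search for chordless ae-cycles, following the definitions
    # above.  r_edges and b_edges are sets of ordered pairs; b_edges is assumed
    # to be a perfect matching (symmetric, one B-edge per vertex).

    def chordless_ae_cycles(r_edges, b_edges):
        r, b = set(r_edges), set(b_edges)
        total = r | b
        cycles = []

        def chordless(cycle):
            n = len(cycle)
            pos = {v: i for i, v in enumerate(cycle)}
            for (v, w) in total:
                if v in pos and w in pos:
                    i, j = pos[v], pos[w]
                    if (i - j) % n != 1 and (j - i) % n != 1:
                        return False              # (v, w) is a chord
            return True

        def search(path, expect_b):
            u = path[-1]
            for (x, y) in (b if expect_b else r):
                if x != u:
                    continue
                # Close the cycle along an R-edge, so that it alternates and
                # has even length (it started with a B-edge).
                if y == path[0] and not expect_b and len(path) >= 2:
                    if chordless(path):
                        cycles.append(list(path))
                elif y not in path:
                    search(path + [y], not expect_b)

        for (u, v) in b:                          # every ae-cycle uses a B-edge
            search([u, v], expect_b=False)
        return cycles

    # A two-vertex example: u -B-> v and v -R-> u form a (chordless) ae-cycle.
    assert chordless_ae_cycles({('v', 'u')}, {('u', 'v'), ('v', 'u')})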
2.4. Pomset Logic, Proof Nets and Balanced Formulas.
A proof in multiplicative linear logic (MLL) is given by its conclusion (a formula or a sequent) and an axiom linking. This can be drawn as a graph, which consists of the formula tree (or sequent forest) with additional edges representing the axiom links, i.e., connecting those leaves of the tree which are matched by an axiom in the proof. In order to distinguish the actual proofs in the set of all such graphs, a so-called correctness criterion is employed, and the graphical structures that obey this criterion are called proof nets.
In [Ret97a], Retoré generalized this idea from the formulas/sequents of MLL to those that we introduced in Section 2.1. There exist many different equivalent correctness criteria for MLL, and one of Retoré's main achievements was to figure out which criterion allows for a generalization that includes the seq connective ⊳. There are in fact two such criteria, and their versions for MLL and MLL+mix have been presented in [Ret03]. They are both based on RB-graphs, and we are now going to give an exposition of their extension to RB-digraphs, as presented in [Ret97a,Ret99b,Ret21].
First, the notion of axiom linking is the same as for MLL.
Definition 2.42. A pomset logic pre-proof of a formula or a sequent is an involution ℓ on its set of atom occurrences such that an atom is always mapped to its dual. This ℓ is also called an axiom linking.
Example 2.43. The sequent [a ⊥ a ⊥ , a a] has two possible axiom linkings.
Throughout this paper, the class of formulas for which there is a single possible choice of axiom linking will play an important role.
Definition 2.44.
A formula A is balanced if every propositional variable that occurs in A occurs exactly once positively and exactly once negatively. (Equivalently, a formula is balanced when it is linear and its set of occurring atoms is closed under duality.) Using the correspondence of Definition 2.9, this extends to a notion of balanced sequent. A balanced formula A uniquely determines an axiom linking on A, that we denote by ℓ(A). Similarly, we write ℓ(Γ) for the unique axiom linking on a balanced sequent Γ. As we have seen in the previous subsections, one can represent a formula as a dicograph, where the atom occurrences are the vertices and the edges are determined by the connectives. More traditionally, one can also associate a syntax tree to a formula. Each of these two graphical representations has an associated correctness criterion, based on ae-cycles in RB-digraphs. We now present those two criteria, starting with dicographs.
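As a small illustration (ours, using the tuple encoding from before), the following sketch checks whether a formula is balanced and, if so, computes its unique axiom linking as a pairing of occurrence positions.

    # Sketch: extract the atom occurrences of a formula, check that it is
    # balanced, and compute its unique axiom linking ℓ(A).

    def occurrences(f, acc=None):
        if acc is None:
            acc = []
        if f[0] == 'atom':
            acc.append((f[1], f[2]))              # (variable name, is_positive)
        elif f[0] != 'unit':
            occurrences(f[1], acc)
            occurrences(f[2], acc)
        return acc

    def axiom_linking(f):
        """Return {position: position} pairing each positive occurrence with the
        negative occurrence of the same variable, or None if f is not balanced."""
        occs = occurrences(f)
        by_var = {}
        for i, (name, positive) in enumerate(occs):
            by_var.setdefault(name, []).append((i, positive))
        linking = {}
        for name, items in by_var.items():
            if len(items) != 2 or items[0][1] == items[1][1]:
                return None                       # not exactly one a and one a⊥
            i, j = items[0][0], items[1][0]
            linking[i], linking[j] = j, i
        return linking

    # (a ⊳ b) ⅋ (b⊥ ⊳ a⊥) is balanced; its unique linking pairs 0-3 and 1-2.
    f = ('par',
         ('seq', ('atom', 'a', True), ('atom', 'b', True)),
         ('seq', ('atom', 'b', False), ('atom', 'a', False)))
    assert axiom_linking(f) == {0: 3, 3: 0, 1: 2, 2: 1}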
Definition 2.45. Let Γ be a sequent and ℓ be an axiom linking for Γ. The cographic RB-prenet of Γ and ℓ, denoted by ρ(Γ, ℓ), is the RB-dicograph G = (V G , E G , B G ) where (V G , E G ) = Γ , and we have (x, y) ∈ B G iff the atom occurrences in Γ that correspond to x and y are mapped to each other by the axiom linking ℓ.
Example 2.46. The sequent [⟨a; a⟩, ⟨a⊥; a⊥⟩] admits two possible axiom linkings, and the two corresponding cographic RB-prenets are the fourth and the fifth graph in Example 2.37.
Proof. Let G = (V G , R G , B G ) be given. We can label the vertices in V G with distinct atoms, such that two atoms are dual if and only if they are matched by B G . Then A exists by Theorem 2.28, since (V G , R G ) is a dicograph.
Definition 2.49. A cographic RB-prenet is correct if it does not contain any chordless ae-cycle. A correct cographic RB-prenet is also called a cographic RB-net.
Example 2.51. The last two graphs in Example 2.37 are pomset logic proofs for the corresponding balanced formulas. The following theorem says that pomset logic is a conservative extension of MLL+mix. The second correctness criterion is more in the tradition of other known correctness criteria for MLL+mix, as it works on a structure that is directly derived from the formula trees. More precisely, we define inductively for each formula C its RB-tree, denoted T_RB(C), as shown in Figure 2. Technically speaking this is not a tree in the graph-theoretical sense, but we use the name as it carries the structure of the formula tree.
Figure 2. Inductive definition of RB-trees (which are not quite trees in the sense of graph theory, though they resemble the syntax trees of formulas). The root vertex is at the bottom.
Figure 3. Two tree-like RB-prenets for the formula ⟨a ⊳ a⟩ ⅋ ⟨a⊥ ⊳ a⊥⟩. The left one is a correct proof net, while the right one contains an ae-cycle involving the 4 topmost matching edges.
If we have a sequent Γ, then T RB (Γ) is obtained from the RB-trees of the formulas in Γ which are connected at the roots via the edges corresponding to the series-parallel order of the sequent structure (see the discussion following Proposition 2.31). In order to obtain an RB-digraph, we need to add the B-edges corresponding to the linking ℓ. We denote this RB-digraph by τ (Γ, ℓ) and call it the tree-like RB-prenet of Γ and ℓ.
Example 2.53. The two graphs in Figure 3 show the two tree-like RB-prenets, corresponding to the two axiom linkings for ⟨a ⊳ a⟩ ⅋ ⟨a⊥ ⊳ a⊥⟩.
Below we give an alternative, more formal definition of τ (Γ, ℓ).
Definition 2.54. Let A be a formula. We define two unfoldings of A, denoted by A ♭ and A ♯ , to be flat sequents which are obtained as follows. For each non-atomic subformula occurrence B of A, we introduce a fresh propositional variable z B . If A is atomic, then and A ♯ is defined inductively as follows: Let Γ be a sequent, and let A 1 , . . . , A n be the formula occurrences of Γ.
It is important to note that in Definition 2.54 every subformula occurrence gets a fresh variable, in particular, in Equation (2.3) every occurrence of I in A is assigned a fresh z I , and these variables have to be indexed accordingly. Then the graph T RB (Γ) is in fact Γ ♭ equipped with a B-edge for every fresh z-z ⊥ pair.
Definition 2.56. Let Γ be a sequent and ℓ an axiom linking for Γ. We define the linking ℓ ♭ for Γ ♭ to be the linking obtained from ℓ by mapping each fresh z A to z ⊥ A , and vice versa. Then the tree-like RB-prenet of Γ and ℓ is defined as τ (Γ, ℓ) = ρ(Γ ♭ , ℓ ♭ ).
The correctness criterion for tree-like RB-prenets is exactly the same as for cographic RB-prenets, except that in a tree-like RB-prenet every ae-cycle is automatically chordless. Therefore we have the following definition.
Definition 2.57. A tree-like RB-prenet is correct iff it does not contain any ae-cycle. A correct tree-like RB-prenet is also called a tree-like RB-net.
Example 2.58. In Figure 3, the left RB-prenet is correct, while the one on the right is not.
In [Ret99b], Retoré has shown the equivalence of the two correctness criteria. In the remainder of this paper we will use the term pomset logic proof net, or short proof net for correct prenets of both kind, i.e., for cographic RB-nets and tree-like RB-nets. This is justified, as the two can be trivially transformed into each other in linear time.
Pomset logic proof nets can easily be extended with cut. A cut in a sequent is a formula of the shape C ⊗ C⊥. Then the cut elimination theorem can be stated as follows: Theorem 2.60 ([Ret97a, Theorem 7]). Let Γ be a sequent formed by the formulas A_1, . . . , A_n, C_1 ⊗ C_1⊥, . . . , C_k ⊗ C_k⊥, and let Γ′ be obtained from Γ by removing the formulas C_1 ⊗ C_1⊥, . . . , C_k ⊗ C_k⊥. If there is a pomset logic proof net for Γ, then there is also one for Γ′.
2.5. System BV and the Calculus of Structures. In [Gug99, Gug07] Guglielmi introduces system BV, which is a deductive system for the formulas defined in Section 2.1. It is defined in the formalism called the calculus of structures, and it works similarly to a rewriting system, modulo the equational theory defined in Figure 1. The inference rules of system BV and its symmetric version, system SBV, are shown in Figure 4. These rules have to be read as rewriting rule schemes, meaning that (1) the variable a can be substituted by any atom, and the variables A, B, C, D can be substituted by any formula, and that (2) the rules can be applied inside any (positive) context. More formally, a context S{·} is a formula which contains exactly one occurrence of the hole {·} in place of an atom. Given a context S{·} and a formula A, we write S{A} to denote the formula that is obtained from S{·} by replacing the hole {·} with A.
If A r B is an inference rule (i.e., the rule r with premise A and conclusion B) and S{·} is a context, then S{A} r S{B} is an instance of the rule.
A (proof) system is a set of inference rules. We write A ⊢δ_S B when there is a derivation from A to B using only rules from the system S; that derivation is named δ. If in that situation A = I, then we write it as ⊢δ_S B (or simply ⊢_S B) and call δ a proof of B. In this case we say that B is provable in S. Figure 5 shows an example of a proof in BV. We now recall some basic properties of BV and SBV. First, observe that the rules ai↓ (called atomic interaction down or axiom) and ai↑ (called atomic interaction up or cut) are in atomic form. Their general forms are the rules i↓ and i↑. Definition 2.62. An inference rule r is derivable in a system S iff for every instance A r B there is a derivation A ⊢_S B. An inference rule r is admissible for a system S iff for every proof ⊢_{S∪{r}} A there is a proof ⊢_S A.
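To illustrate what "applying a rule inside a context" means computationally, here is a sketch (ours; the rule shape paraphrases the switch rule of Figure 4, which is not reproduced in this text) that rewrites the subformula at a given position, performing one bottom-up switch step that turns (A ⊗ B) ⅋ C into (A ⅋ C) ⊗ B.

    # Deep inference as rewriting inside a context: a position is a list of
    # child indices (1 or 2) into the nested-tuple formulas used earlier.

    def subformula(f, path):
        for i in path:
            f = f[i]
        return f

    def replace_at(f, path, new):
        """S{A} -> S{new}: rebuild the formula with `new` plugged into the hole."""
        if not path:
            return new
        i = path[0]
        child = replace_at(f[i], path[1:], new)
        return f[:i] + (child,) + f[i + 1:]

    def switch_up(f, path):
        """One bottom-up switch step at `path`, or None if the shape does not match:
        the subformula (A ⊗ B) ⅋ C becomes (A ⅋ C) ⊗ B."""
        g = subformula(f, path)
        if g[0] == 'par' and g[1][0] == 'tensor':
            a, b, c = g[1][1], g[1][2], g[2]
            return replace_at(f, path, ('tensor', ('par', a, c), b))
        return None

    a, b, c = (('atom', x, True) for x in 'abc')
    conclusion = ('par', ('tensor', a, b), c)          # (a ⊗ b) ⅋ c
    premise = switch_up(conclusion, [])                # (a ⅋ c) ⊗ b
    assert premise == ('tensor', ('par', a, c), b)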
Proposition 2.63. The rule i↓ is derivable in BV, and the rule i↑ is derivable in SBV.
The proof is standard and can be found in many papers (e.g., [Gug07,GS01,TS19]). We are now going to state the cut elimination property for BV.
Definition 2.64. Two systems S_1 and S_2 are equivalent if they prove the same formulas. Theorem 2.65. The systems BV and SBV are equivalent. A proof of this result can be found in [Gug07] and in [Str03b]. Corollary 2.66. The cut rule i↑ is admissible for BV. This is an immediate corollary of Proposition 2.63 and Theorem 2.65. Finally, the relation between BV and SBV can be strengthened to the following statement: Corollary 2.67. For any two formulas A and B, we have
2.6. Unit-Free Versions of BV and SBV. One of the main reasons to study cut-free systems is to have a deductive system that is suitable for doing proof search. However, due to the versatility of the unit I in formulas, proof search in plain BV as it is shown in Figure 4 is not feasible. In order to reduce the non-determinism in BV, Kahramanoğulları proposed in [Kah04] a unit-free version of BV which is better suited for proof search. As this system is also easier to handle for some results we show in this paper, we introduce BVu below. For didactic reasons, we also introduce its symmetric version SBVu.
The formulas for BVu are the same as defined in Section 2.1, except that we do not allow any occurrence of the unit I. This means that we have to restrict the equivalence defined in Figure 1 to the unit-free formulas. We define the relation ≡′ to be the smallest congruence generated by associativity of ⊗, ⅋, ⊳ and commutativity of ⊗ and ⅋ (i.e., the rules of Figure 1 without the unit equations). The inference rules for BVu and SBVu are then shown in Figure 6. Note that the rule ai•↓ has no premise. It is an axiom that is used exactly once in a proof, which in BVu or SBVu is a derivation without premise (as the unit I is not present and cannot take this role). Likewise, the rule ai•↑ has no conclusion. It is used exactly once in a refutation, which is a derivation with empty conclusion. We have the following immediate results.
Figure 6. System BVu (first three columns) and system SBVu (all five columns).
Proposition 2.68. Let A and B be unit-free formulas. We have A ⊢_SBVu B if and only if A ⊢_SBV B.
Proof. If we have a derivation A ⊢ SBVu B, then we immediately have a derivation A ⊢ SBV B, as every rule in SBVu (except for ai • ↓ and ai • ↑) is derivable in SBV. Conversely, assume we have a derivation A ⊢ δ SBV B. Then, in δ, the unit I can occur. Let δ ′ be obtained from δ by deleting the unit I everywhere. Then every instance of the rule ≡ becomes an instance of ≡ ′ ; every instance of q↓ becomes an instance of q 2 ↓ or q L 3 ↓ or q R 3 ↓ or q 4 ↓ or trivial (i.e., premise and conclusion of the rule instance become equal); and similarly for s and q↑. However, an instance of ai↓ can become an instance of ai ↓ or ai ⊳ L ↓ or ai ⊳ R ↓ (which are in SBVu), or ai ↓ which is shown on the left below. Similarly, an instance of ai↑ can become an instance of ai ↑ or ai ⊳ L ↑, or ai ⊳ R ↑, or ai ↑ which is shown on the right below: These rules are not in SBVu, but they can be derived with {ai ↓, s 2 } and {ai ↑, s 2 }, respectively.
Proposition 2.69. For every unit-free formula A, we have ⊢_BVu A if and only if ⊢_BV A. Proof. First, if we have a proof ⊢_BVu A then we can simply replace the top instance of ai•↓ by ai↓ and obtain a proof in BV. Conversely, a proof in BV can be transformed into a proof in BVu by replacing the topmost instance of ai↓ by the axiom ai•↓ and then following the same procedure as in the previous proof.
From these propositions we can immediately obtain the cut elimination results for BVu via the corresponding results for BV: Corollary 2.70. The systems BVu and SBVu are equivalent.
Corollary 2.71. For any two unit-free formulas we have that ⊢ BVu
Corollary 2.72. The general cut rules are admissible for BVu.
Remark 2.73. Our version of BVu is slightly different from the one by Kahramanoğulları [Kah04].
In [Kah04] the rule s 2 is absent, and instead the rule ai ↑ shown in Equation (2.6) is part of the system. We chose this variation because it is better suited for the results of this paper (e.g. Theorem 2.77 and the proofs in Section 3). But it is easy to see that the two variants of BVu are equivalent: first, as we have mentioned above, the rule ai ↑ is derivable in {ai ↓, s 2 }, and second, the rule s 2 is admissible if ai ↑ is present. This can be seen by an easy induction on the size of the derivation. However, note that the same trick does not work for the rule q 2 ↓. This rule cannot be shown admissible, as the formula Remark 2.74. The logical rules of SBVu, i.e., the bottom two lines in the Figure 6, have already been studied by Retoré in [Ret99b], as a rewrite system on digraphs to generate theorems of pomset logic. We will study the relation between pomset logic and BV/BVu in the next section.
In some sections of this paper we also need a variant of BVu that we call BVû and that is obtained from BVu by restricting the rules q₂↓ and s₂ to the cases where neither A nor B has ⅋ as its main connective, i.e., we replace q₂↓ and s₂ by q̂₂↓ and ŝ₂, respectively, where A ≢′ [C ⅋ D] and B ≢′ [C ⅋ D] for any formulas C and D.
2.7. SBVu and Dicograph Inclusions.
To wrap up these long preliminaries, we recall here some useful results that also provide some motivation for the rules of SBV and SBVu. The starting observation is that the non-interaction rules preserve the atoms of a formula, so they can be seen as digraph rewriting rules preserving the set of vertices. The following results, due to Béchet, de Groote and Retoré [BdGR97], elucidate the combinatorial meaning of this rewriting system. First, let us consider the case of unit-free formulas without the tensor connective ⊗. Recall that according to Proposition 2.31, such formulas modulo ≡′ correspond to series-parallel orders on their atom occurrences.
Equivalently, this is also the non-interaction non-tensor fragment of SBVu, since all the rules of SBVu that are not in BVu contain tensors.
Proof. Let us explain how to connect this reformulation to the original statement in [BdGR97]. In the latter, the inclusion R A ⊇ R B is characterized by a rewriting system on seriesparallel orders whose rules are listed in [BdGR97, Definition 3.1]. Observe that, among those rules: • (e) and (h) express the reflexivity and transitivity of the rewriting relation, which corresponds to the fact that a derivation is a sequence of zero, one or more inference rules; • the remaining rules, namely (f) and (g), express the contextual closure of the rules, whose counterpart in our setting is deep inference, i.e. the possibility of instantiating a rule in a context S{·}. There remains a subtlety: we must use the ≡ ′ rule to turn formulas into equivalent ones on which other rules may be applied, whereas this is left implicit in the setting of [BdGR97] (since ≡ ′ on formulas corresponds to equality of dicographs). The fact that ≡ ′ suffices is due to the fact that "the algebraic representation of any [series-parallel] order is unique modulo the associativity of ⊳, and the commutativity of " (a quote from [BdGR97,§2] where we adapted the notations for the connectives).
Next, we turn to unit-free formulas that may contain tensors; their associated graphs are all dicographs. One would then expect the non-interaction fragment of SBVu to characterize dicograph inclusion. However, this is not quite so, since one rule must be added.
Theorem 2.77 (reformulation of [BdGR97, §5]). Let A and B be two unit-free linear generalized formulas over a set X. Then the inclusion of edges R A ⊇ R B holds if and only if
A ⊢ B in the fragment of SBVu without the interaction rules, plus the following additional weak switch rule: Remark 2.78. In [BdGR97,§5], it is suggested that the proof of Theorem 2.76 can be carried over to give a proof of Theorem 2.77. This is not immediately true. Consider for Clearly R A ⊇ R B , so there must be a derivation from A to B. To construct this derivation, the proof in [BdGR97] proceeds by induction on V A = V B and makes a case analysis on the main connectives of A and B. In our case it is ⊳ for both. But the argument that works for series-parallel orders does not work for dicographs in general. In our example, we have to go through a ⊳ b ⊳ c. However with some adjustments, the proof does go through. Since this paper is already quite long, we refrain from giving the details, as they are quite straightforward, once the aforementioned problem is observed.
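The inclusion R_A ⊇ R_B of Theorems 2.76 and 2.77 is easy to test directly on the graph side. The sketch below (ours, reusing the edge convention from Section 2.2) compares the edge sets of the dicographs of two linear formulas over the same atoms.

    # Sketch of the combinatorial criterion behind Theorems 2.76/2.77: compare
    # the edge sets of the dicographs of two formulas.  The translation repeats
    # the convention used earlier (⅋: no edges, ⊗: edges both ways, ⊳: edges
    # left-to-right); atoms are assumed to occur linearly, so occurrences can
    # be identified by their names.

    def edges(f):
        if f[0] == 'atom':
            return {f[1]}, set()
        if f[0] == 'unit':
            return set(), set()
        lv, le = edges(f[1])
        rv, re = edges(f[2])
        e = le | re
        if f[0] in ('tensor', 'seq'):
            e |= {(u, w) for u in lv for w in rv}
        if f[0] == 'tensor':
            e |= {(w, u) for u in lv for w in rv}
        return lv | rv, e

    def edge_inclusion(a, b):
        """R_A ⊇ R_B, assuming A and B are linear formulas over the same atoms."""
        va, ea = edges(a)
        vb, eb = edges(b)
        return va == vb and ea >= eb

    # a ⊳ (b ⊳ c) has strictly more order than (a ⊳ b) ⅋ c, so R_A ⊇ R_B holds.
    A = ('seq', ('atom', 'a', True), ('seq', ('atom', 'b', True), ('atom', 'c', True)))
    B = ('par', ('seq', ('atom', 'a', True), ('atom', 'b', True)), ('atom', 'c', True))
    assert edge_inclusion(A, B) and not edge_inclusion(B, A)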
Remark 2.79. We shall see in Section 3.1 that the rules of BV preserve pomset logic correctness. As Retoré noticed [Ret99b, §5], this is not the case for the weak switch rule, and this provides one justification for excluding it from SBV. Another argument, which is more intrinsic to BV, is that BV + ws does not admit cut elimination, or equivalently, that SBV + ws is not conservative over BV + ws. The exclusion of the weak switch for this reason is explained as a deliberate design choice by Guglielmi in the discussion concerning "conservation laws" at the beginning of [Gug07, §3].
Remark 2.80. By analogy with the previous subsection, we could show that the non-interaction fragment of SBVu + ws is equivalent over unit-free formulas to {≡, q↓, q↑, ws} with units (indeed, the usual switch is an instance of the weak switch when one of the formulas is set to a unit). Then, Theorem 2.77 above becomes equivalent to [Gug07, Conjecture 3.3.3], which states that {≡, q↓, q↑, ws} characterizes dicograph inclusion. In other words, that conjecture had been proved before it was stated.
3. Comparing BV and Pomset Logic
In this section we investigate the relation between the two logics. We have already seen in Section 2.1 that every formula uniquely determines a dicograph. Furthermore, by inspecting the rules of BV in Figure 4, one can see that the rule ≡ does not change that dicograph, and that the rules s and q↓ only change the set of edges but not the set of vertices of the corresponding dicograph. Additionally, every instance of ai↓ removes one pair of dual atoms, and in a proof of BV, every atom occurring in the conclusion has to be removed by exactly one instance of ai↓ in the proof.
This means that every BV proof δ uniquely determines an axiom linking ℓ(δ) for its conclusion, and hence by definition also a pomset logic pre-proof, which in turn, by Definition 2.45, determines a cographic RB-prenet.
In Section 3.1 we are going to show that every cographic RB-prenet that is obtained from a BV proof in such a way is indeed correct, and therefore every theorem of BV is also a theorem of pomset logic.
Then, in Section 3.2 we show that the converse does not hold, i.e., there are theorems of pomset logic that are not theorems of BV. We do this by presenting a formula that is provable in pomset logic but not in BV.
3.1. BV is Contained in Pomset Logic.
In this section we do not only show that every theorem of BV is also a theorem of pomset logic, but also that every proof in BV uniquely determines a pomset logic proof net with the same conclusion.
The proof that we present here uses the basic idea from [Str03b] that has also been used in [Str03a]. In [Ret99b], Retoré presents an alternative method.
To begin, let δ be a BV proof of a formula A. We denote by ( δ ) = ρ(A, ℓ(δ)) the cographic RB-prenet generated from δ as described above (see Definition 2.45). Then the main result of this section is the following. Theorem 3.1. For every proof δ in BV, the cographic RB-prenet ( δ ) is correct, i.e., it is a pomset logic proof net.
In order to prove it, observe first that every RB-dicograph uniquely determines a balanced formula, up to renaming of variables and equivalence under ≡. This gives us immediately the following proposition. Proposition 3.2. For every proof δ in BV there is a balanced formula A that is provable in BV and satisfies ( A ) = ( δ ). Proof. Let B be the conclusion of δ. Then A is obtained from B by renaming all variable occurrences such that the result is balanced and the linking is preserved. Definition 3.3. Let A and B be generalized formulas over a set X. We call B a pseudo-subformula of A, written as B ⊑ A, if it is equivalent under ≡ to some A′ that can be obtained from A by replacing some occurrences of elements of X (in the case of usual formulas, some atom occurrences) in A by I. If B ⊑ A and B ≢ A, then we say that B is a proper pseudo-subformula of A, and write it as B ⊏ A.
Example 3.4. We have the pseudo-subformula relation coming from the equivalence The following proposition explains our choice to denote both pseudo-subformulas and induced subgraphs (Definition 2.24) by ⊑.
To conclude, apply Theorem 2.21 to handle the equivalence ≡ that appears in the definition of pseudo-subformula.
Lemma 3.6. Let A be a balanced formula (Definition 2.44) and B be a balanced pseudo-subformula of A. If A is provable in BV, then so is B. Proof. Let δ be a proof of A in BV, and let δ′ be obtained by replacing, in every line of δ, all atoms that do not occur in B by I. Then δ′ is a valid derivation of B in BV.
Definition 3.7. Let H be a balanced formula and H = ( H ). We say that H is a (balanced) cycle when the RB-digraph H admits a chordless ae-cycle that visits all vertices and either
Proposition 3.8. A formula H is a balanced cycle if and only if there are pairwise distinct atoms a_1, . . . , a_n for some n ≥ 1, such that
Proof. This follows almost immediately from the definitions.
Definition 3.9. We say that a balanced formula A contains a cycle if it has a pseudo-subformula B ⊑ A that is a cycle (or equivalently, if ( A ) contains a chordless ae-cycle).
We are now ready to state and prove the central lemma to this section. Proof. By Proposition 3.8 we have that with all a i being pairwise distinct. We proceed by case analysis on the rule r. First observe that by Proposition 3.8 the rules ai ↓, ai ⊳ L ↓, ai ⊳ R ↓ cannot be applied to P (seen bottom up), and if r = ≡ ′ , then Q trivially contains a cycle, whose size is equal to |P |. Now assume r is Without loss of generality, assume that A = a ⊥ n and B = a 1 and : Without loss of generality, we assume that A = a ⊥ n and B = a 1 and C = As before, without loss of generality, we assume that A = a ⊥ n and B = a 1 There are two subcases: : This case is analogous to the caseq L 3 ↓ above. •
• s₂, with premise (A ⊗ B) and conclusion [A ⅋ B]: this case is analogous to the case q̂₂↓ above.
In all cases the size of the cycle in Q is strictly smaller than |Q| = |P |.
Lemma 3.11. Let P be a balanced formula that contains a cycle. Then P is not provable in BV.
Proof. Let H be the cycle in P , and let n = |H| be its size. We proceed by induction on n.
Note that n has to be even. For n = 2, we have that H ≡ a⊥ ⊳ a or H ≡ a⊥ ⊗ a for some atom a. By way of contradiction, assume P is provable in BV. By Lemma 3.6, H is also provable in BV, which is impossible. For the inductive case let now n > 2. As before, we have by Lemma 3.6 that H is provable in BV. By Proposition 2.69 and Proposition 2.75, H is provable in BVû. Let δ be that proof in BVû. Let now Q be the premise of the bottommost rule instance r of δ that is not a ≡′ (i.e., the conclusion of r is H′ ≡′ H and Q ≢′ H). By Lemma 3.10, Q contains a cycle whose size is smaller than n. By induction hypothesis Q is not provable in BV, and therefore also not provable in BVû, which is a contradiction to the existence of δ.
We can now complete the proof of Theorem 3.1.
Proof of Theorem 3.1. Let δ be a proof in BV. By Proposition 3.2, there is a balanced formula P, such that ( P ) = ( δ ), and such that P is provable in BV. Now assume, by way of contradiction, that ( δ ) is incorrect. That means that ( δ ) contains a chordless ae-cycle, or equivalently, that P contains a cycle. By Lemma 3.11, P is not provable in BV. Contradiction.
3.2. Pomset Logic is not Contained in BV.
In this section we present a formula that is provable in pomset logic, i.e., has a correct pomset logic proof net, but that is not provable in BV. From what has been said in the previous section, it follows that if such a formula exists then there is also a balanced such formula. The formula we discuss in this section is the formula Q shown below: 13 Since the formula Q (resp. the sequent Γ Q ) is balanced, there is a unique axiom linking and therefore a unique cographic RB-prenet and a unique tree-like RB-prenet. In Figure 7, we show the tree-like RB prenet for Γ Q , and on the left of Figure 8 we show the cographic RB-prenet, which is the same for Q and Γ Q . To see that these are provable in pomset logic, we have to show that the RB-prenets do not contain chordless ae-cycles. For this we focus on the tree-like RB-prenet, because in tree-like RB-prenets all ae-paths (and therefore also all ae-cycles) are chordless. Hence, it suffices to show that there are no ae-cycles.
Observe that the B-edges corresponding to the roots of the formulas in Γ Q cannot participate in an ae-cycle because they have no adjacent R-edge at the bottom. We can therefore remove each of these B-edges, together with the two adjacent R-edges at the top. The resulting graph is shown on the right of Figure 8. Another simplification we can do without affecting the ae-cycles in the graph is replacing the two B-edges labeled a ⊳ b and c ⊳ d, together with the connecting R-edge by a single Bedge, and similarly for the two B-edges g ⊳ h and e ⊳ f . The result is shown on the left of Figure 9.
Finally, observe that there is no ae-cycle that passes through the two B-edges labeled b and a. The reason is that the directed R-edge between them has the opposite direction of the two adjacent R-edges on the other endpoints of these B-edges. Thus, we can collapse these two edges (and the adjacent "triangle") to a single vertex. The same can be done for the pairs c/d and g/h and e/f. The result of this operation is shown on the right of Figure 9. Proposition 3.12. The formula Q, and equivalently the sequent Γ_Q, is provable in pomset logic. Proof. In the paragraphs above, we have argued that the tree-like RB-prenet in Figure 7 has an ae-cycle if and only if the RB-digraph on the right of Figure 9 has an ae-cycle. Now it is easy to see that this graph has no ae-cycle. Hence, the tree-like RB-prenet for Γ_Q is correct.
Let us now show that the formula Q is not provable in BV. To do so we will show that whenever a BV inference has as conclusion Q then its premise defines an incorrect RB-prenet in pomset logic, and is therefore not provable in pomset logic. Since by Theorem 3.1 all BV proofs induce correct pomset proof nets, we can conclude that those premises are not BV-provable, therefore there is no way to build a BV-proof of Q.
The main difficulty here is to make sure that we do not overlook any case when checking all possible inferences that have Q as conclusion. Since the unit I can make these kind of arguments difficult to check, we use here BVû. Now observe that Q has no subformula of the form x x ⊥ . This means we only have to consider the non-axiom rules of BVû.
To cut down the number of cases to consider, we take advantage of the symmetries of Q. Let us first look at the automorphisms, i.e., permutations of the variables that results in a formula Q ′ with Q ′ ≡ Q, which means ( Q ′ ) = ( Q ). The following are automorphisms: The action of these automorphisms on the subformulas of Q of the form x ⊥ ⊳ y ⊥ is transitive: Another useful symmetry is not quite an automorphism: it is the following antiautomorphism: that sends Q to its "conjugate" Q † defined inductively as follows: Note that the reversal of the arguments only matters for the non-commutative connective ⊳, and ( Q † ) is the same as ( Q ), except that all directed R-edges have the opposite direction. Thus, conjugacy preserves provability both in pomset logic (reversing the direction of all cycles in the correctness criterion) and in system BVû (the inference rules are closed under conjugacy, withq L 3 ↓ andq R 3 ↓ being swapped). We will now go through all the rules of BVû and check all possible applications. Using a similar argument as in the proof of Lemma 3.10, we will see that in each case there is a cycle in the resulting premise.
Because of the action of the automorphisms α/β, we can without loss of generality assume that A = a ⊥ and B = h ⊥ . There are three subcases: As before, because of the symmetries of Q, we only need to consider the case where A = a ⊥ and B = h ⊥ . There are now five subcases of how to match C: A ⊳ B C : Similar toq L 3 ↓, by conjugacy. Here we get the cycle d ⊳ g g ⊥ ⊳ d ⊥ in the premise. The case A = e ⊳ f g ⊳ h and B = a ⊳ b c ⊳ d is symmetric to the this one via the automorphism β. Otherwise, either A or B (or both) have the form x ⊥ ⊳ y ⊥ . It suffices to treat all the cases R = x ⊥ ⊳ y ⊥ . This is because conjugation exchanges the roles of A and B in the q 2 ↓-rule, and Q is equal to its own conjugate up to the variable renaming performed by γ. We may also without loss of generality assume that A = a ⊥ ⊳ h ⊥ ; as before, this relies on the transitive action of the automorphisms of Q on the x ⊥ ⊳ y ⊥ that it contains. There are now five cases for B: We get the cycle a ⊥ ⊳ a in the premise.
: There are two possibilities to match A B: Due to the commutativity of , we have four possibilities to match A and B. Due to the symmetries discussed above, we only need to consider the case where A = a ⊳ b and B = c ⊳ d. There are now five cases how to match C: We get the cycle (e ⊥ d) g ⊥ ⊳ d ⊥ (e g) in the premise.
We get the cycle c ⊥ c in the premise.
: This case is already subsumed by the case forq 2 ↓.
In this way, we have completed the proof of the following proposition.
Proposition 3.13. The formula Q shown in Equation (3.1) is not provable in BV.
Theorem 3.14. The theorems of BV form a proper subset of the theorems of pomset logic.
Proof. This follows immediately from Propositions 3.12 and 3.13.
Remark 3.15. Let us end this section with some explanation of how the formula Q has been found. Our starting point was the so-called medial rule from system SKS [BT01], a formulation of classical logic in the calculus of structures, which (read bottom-up) rewrites a subformula (A ∨ C) ∧ (B ∨ D) into (A ∧ B) ∨ (C ∧ D). The corresponding linear implication (a ⊗ b) ⅋ (c ⊗ d) ⊸ (a ⅋ c) ⊗ (b ⅋ d) is, of course, not a theorem of linear logic. This can be immediately seen by inspecting the RB-prenet for this formula, which is shown in Figure 10a, and which contains several (chordless) ae-cycles. Then, on the right of that "medial RB-prenet", in Figure 10b, we replace the B-edges corresponding to the atoms by a pair of B-edges connected by an (undirected) R-edge. This does not affect provability, as no ae-cycles are added or removed. Then, in Figure 10c, we give these new R-edges a direction. By choosing the right direction, we can break all ae-cycles, which means the result becomes correct with respect to the pomset logic correctness criterion. But the resulting formula (or sequent) remains unprovable in BV. To simplify the proof of non-provability in BV, we added further R-edges, as shown in Figure 10d, that do not break provability in pomset logic. It is easy to see that the RB-prenet in Figure 10d is an intermediate step between the one in Figure 7 and the one on the right in Figure 8. In Example 4.18 we shall see that Figure 10c is also related to a construction that we use for complexity-theoretic purposes; more than that, we shall explain how complexity considerations allowed us to restrict the search space for a formula separating pomset logic from BV.
4. Complexity of Provability
After having established that BV and pomset logic are not the same, the next natural question is whether they have the same or different provability complexity. It had already been established before that BV is NP-complete. We recall the proof here and establish a slightly more general result. Then we discuss the complexity of pomset logic and show that already checking the correctness of a pomset logic proof net is coNP-complete. Based on this observation, we can then show that provability in pomset logic is Σ^p_2-complete.
Figure 10. (b) Note that the prenet is still not correct. (c) This modification validates the pomset logic correctness criterion, but the resulting sequent is not provable in BV. (d) Adding more R-edges does preserve provability in pomset logic, but showing that the resulting sequent is not provable in BV is easier now, as every possible rule application breaks pomset correctness.
We assume familiarity with the complexity classes NP and coNP, which form the first level of the polynomial hierarchy, and we shall also be concerned with its second level, which contains the classes Σ^p_2 and Π^p_2. These are dual in the same way that NP and coNP are: a decision problem is in Π^p_2 if and only if its negation is in Σ^p_2. To show that a problem is in Σ^p_2, the most convenient way is perhaps to use the definition in terms of oracle machines.
Definition 4.1 ([AB09, Section 5.5]). Σ^p_2 is NP extended with an NP oracle; this is usually written as Σ^p_2 = NP^NP. In other words, a decision problem is in Σ^p_2 if and only if it can be solved by an NP algorithm that can call constant-time subroutines for problems in NP. Similarly, Π^p_2 is coNP extended with an NP oracle: Π^p_2 = coNP^NP. Conversely, to show hardness results, we use complete problems involving Boolean formulas. We consider a fixed set of (Boolean) variables. A literal is either x or ¬x for some variable x; a clause is a finite set of literals; a conjunctive normal form (CNF) is a finite set of clauses. The idea is that a CNF represents a Boolean formula, as in the following example: the CNF {{x, y, z}, {¬x, y}, {¬y, ¬z}} represents the formula (x ∨ y ∨ z) ∧ (¬x ∨ y) ∧ (¬y ∨ ¬z). Consistent with this interpretation, a clause is said to be satisfied by some assignment from variables to Booleans in {true, false} if it contains some literal l such that for some variable x, either l = x and x is set to true, or l = ¬x and x is set to false; and an assignment is said to satisfy a CNF when it satisfies all its clauses. The celebrated Cook–Levin theorem states that deciding whether a CNF admits a satisfying assignment is NP-complete. We recall its generalization to the first two levels of the polynomial hierarchy.
Definition 4.2. The problem cnf-sat consists in deciding, given a CNF as input, whether it admits a satisfying assignment. It is generalized by ∀∃-cnf-sat, which takes as input
• a finite set of universal variables X = {x_1, . . . , x_n},
• a finite set of existential variables Y = {y_1, . . . , y_m}, disjoint from X,
• and a CNF whose variables are included in X ∪ Y,
the question being whether every partial assignment X → {true, false} can be extended to some assignment X ∪ Y → {true, false} that satisfies the input CNF.
Theorem 4.3. cnf-sat is NP-complete and ∀∃-cnf-sat is Π^p_2-complete.
As an example, the above-mentioned CNF {{x, y, z}, {¬x, y}, {¬y, ¬z}}, with the universal variables {x, y} and the existential variable {z}, is a negative instance of ∀∃-cnf-sat, since the corresponding quantified Boolean formula (where the quantifiers range over {true, false}) is false: for x = true and y = false, there is no choice of z that satisfies the clause ¬x ∨ y. Let us conclude these preliminaries by mentioning that coNP and Σ^p_2 admit complete problems involving formulas in disjunctive normal form.
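To make the quantifier alternation concrete, here is a brute-force (exponential) evaluator for ∀∃-cnf-sat on the running example; the encoding of literals as (variable, polarity) pairs and all function names are ours, given only for illustration.

from itertools import product

def satisfies(assignment, cnf):
    # assignment: dict var -> bool; cnf: iterable of clauses, each a set of (var, polarity) pairs
    return all(any(assignment[v] == pos for (v, pos) in clause) for clause in cnf)

def forall_exists_sat(universal, existential, cnf):
    """True iff every assignment of the universal variables extends to an
    assignment of the existential variables satisfying the CNF."""
    for u_vals in product([True, False], repeat=len(universal)):
        partial = dict(zip(universal, u_vals))
        if not any(satisfies({**partial, **dict(zip(existential, e_vals))}, cnf)
                   for e_vals in product([True, False], repeat=len(existential))):
            return False
    return True

# Running example: (x or y or z) and (not x or y) and (not y or not z)
cnf = [{("x", True), ("y", True), ("z", True)},
       {("x", False), ("y", True)},
       {("y", False), ("z", False)}]
print(forall_exists_sat(["x", "y"], ["z"], cnf))  # False: x = true, y = false admits no good z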
Remark 4.4. There is a standard reduction from cnf-sat to the case where all variables occur at least once positively and at least once negatively, which goes as follows. One can detect in polynomial time for which variables this is not the case, e.g. x has only positive occurrences and y has only negative occurrences. Deleting all clauses in which either x or ¬y appears (or both) does not change whether the set of clauses is satisfiable: setting x = true and y = false satisfies all deleted clauses without losing any degree of freedom for the remaining ones. After this deletion, some other variable may occur with only one polarity, so the procedure has to be iterated. But as the number of iterations is bounded by the number of variables, this reduction runs in polynomial time.
Thus, this restriction of cnf-sat is NP-complete. For similar reasons, the instances of ∀∃-cnf-sat in which no atom appears with a single polarity are also Π^p_2-complete.
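A minimal sketch of the iterative deletion of Remark 4.4, with the same hypothetical CNF encoding as in the sketch above:

def eliminate_pure_literals(cnf):
    """Repeatedly delete every clause containing a literal of a variable that occurs
    with only one polarity; this preserves satisfiability (Remark 4.4)."""
    cnf = [frozenset(c) for c in cnf]
    while True:
        polarities = {}
        for clause in cnf:
            for (v, pos) in clause:
                polarities.setdefault(v, set()).add(pos)
        pure = {v for v, ps in polarities.items() if len(ps) == 1}
        if not pure:
            return cnf
        cnf = [c for c in cnf if all(v not in pure for (v, _) in c)]

# In {{x}, {not x, y}}, y is pure, so {not x, y} is deleted; then x becomes pure and
# {x} is deleted too, leaving the (trivially satisfiable) empty CNF.
print(eliminate_pure_literals([{("x", True)}, {("x", False), ("y", True)}]))  # []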
BV is NP-Complete (and how to Generalize Membership in NP).
Let us first recall that the complexity of proof search in BV is already known.

Theorem 4.5 ([Kah08]). Provability in BV is NP-complete.
The NP-hardness part already applies to provability for MLL+mix [Kah08, Corollary 4.5], which BV conservatively extends; we shall not dwell on this here. We will merely remark that while Kahramanoğulları proves that MLL+mix is NP-hard using a syntactic analysis of the calculus of structures, one could presumably adapt the more traditional phase semantics methods used to study the hardness of other variants of linear logic in order to get an alternative proof.
Membership in NP is more interesting for us, since it demonstrates a complexity gap with the Σ^p_2-complete pomset logic (unless NP = coNP), as we already said. We recall here the proof that provability in BVu is in NP (the unit-free version is slightly more convenient), giving more details than [Kah08].
The first main argument is a bound on the length of proofs, relying on the following property.
Definition 4.6. We say that a rewriting system ⇝ on unit-free formulas is dicograph-monotone when ≡′ ⊆ ⇝ and, for any A ⇝ B such that A ≢′ B:
• either the inference with premise B and conclusion A is an instance of an interaction rule of BVu (i.e., one of ai•↓, ai↓, ai⊳L↓, ai⊳R↓);
• or E_A ⊂ E_B for a suitable identification of the atom occurrences of A and B, ensuring that we can consider the vertex sets V_A and V_B to be equal.

Proposition 4.7. BVu, seen as a rewriting system on unit-free formulas (each rule rewriting its conclusion to its premise), is dicograph-monotone.

Proof. This is a direct consequence of the easy direction ("if") of Theorem 2.77.
As discussed in Section 2.7, the rules of SBV are originally derived from a characterization of dicograph inclusion. Dicograph-monotonicity is therefore a fundamental aspect of the design of SBV, BV and their variants. We will discuss another example of this notion in Remark 4.12.

Proposition 4.8. Let ⇝ be a dicograph-monotone rewriting system. If a formula B can be reached from a formula A by a sequence of ⇝-steps, then it can be reached by such a sequence of length O(|A|^2).

Proof. Since ≡′ is transitive by definition, if C ⇝ D ⇝ E with C ≡′ D ≡′ E, then C ⇝ E directly (by the assumption ≡′ ⊆ ⇝). Thus, in a path of minimum length between A and B, at least half of the rewrites C_i ⇝ C_{i+1} satisfy C_i ≢′ C_{i+1}. Therefore, up to a factor of two (that gets absorbed in the O(·) notation), it suffices to bound the number of rewrites C_i ⇝ C_{i+1} with C_i ≢′ C_{i+1}. The definition of dicograph-monotonicity then gives us two cases. We claim that in both cases the quantity |V^2_{C_i} \ E_{C_i}| strictly decreases. If E_{C_i} ⊂ E_{C_{i+1}} (with the vertex sets identified), then this is immediate. Otherwise, C_i is inferred from C_{i+1} by an interaction rule, and then C_{i+1} can be identified with an induced subgraph of C_i with two fewer vertices, resulting in a strict inclusion between the complement graphs, as we wanted.

Therefore, the natural number |V^2_{C_i} \ E_{C_i}| strictly decreases at each rewriting step such that C_i ≢′ C_{i+1}. So, starting from A, the number of such steps is bounded by |V^2_A \ E_A| = O(|A|^2). (We see that our definition of dicograph-monotonicity is slightly stronger than what we truly need from the point of view of complexity.)

From this proposition, we immediately get:

Proposition 4.9. Every formula A provable in BVu has a proof of size O(|A|^3).

Here "size" refers to the complexity-theoretic notion of the number of bits it takes to write out the proof, hence the additional factor of |A| accounting for the size of each intermediate formula in the derivation. Now that we know that we have short proofs, it remains to show that they can be checked efficiently.

Proposition 4.10. Whether a given derivation is a valid BVu proof can be decided in polynomial time.

Proof. It suffices to show that the validity of each inference rule can be verified in polynomial time.
For instances of the ≡ ′ rule, to check that A ≡ ′ B, one can apply a generic recipe for terms over associative and possibly commutative binary operators: compute hereditarily flattened and possibly sorted representations of A and B in polynomial time, and compare them, as sketched in the introduction of [Bas94] for example.
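For illustration, here is a minimal sketch of the flatten-and-sort recipe on a toy term language with one associative-commutative operator ('par') and one associative but non-commutative operator ('seq'); the encoding of terms as nested ('op', argument-list) tuples is ours and is not the formula representation used elsewhere in the paper.

def canonical(term):
    """Canonical form modulo associativity of 'seq' and associativity+commutativity
    of 'par': hereditarily flatten nested occurrences of the same operator and sort
    the arguments of 'par'."""
    if isinstance(term, str):            # an atom
        return term
    op, args = term                      # ('par', [...]) or ('seq', [...])
    flat = []
    for a in map(canonical, args):
        if isinstance(a, tuple) and a[0] == op:
            flat.extend(a[1])            # flatten nested same-operator nodes
        else:
            flat.append(a)
    if op == 'par':
        flat = sorted(flat, key=repr)    # commutativity: argument order is irrelevant
    return (op, tuple(flat))

def equivalent(s, t):
    return canonical(s) == canonical(t)

# (a par b) par c  is equivalent to  c par (b par a),  but  a seq b  is not  b seq a
print(equivalent(('par', [('par', ['a', 'b']), 'c']), ('par', ['c', ('par', ['b', 'a'])])))  # True
print(equivalent(('seq', ['a', 'b']), ('seq', ['b', 'a'])))                                  # False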
Let us now consider any other rule r of BVu. There exist two formulas A_r and B_r such that the instances of this rule are precisely the steps inferring S{B_r[a_1 := C_1, . . . , a_k := C_k]} from S{A_r[a_1 := C_1, . . . , a_k := C_k]}, where S{·} is a context, {a_1, . . . , a_k} is the set of propositional variables that appear in A_r or B_r (usually in both), and [a_1 := C_1, . . . , a_k := C_k] denotes a parallel substitution of those variables by formulas. For instance, for r = q_2↓, we may take k = 2 and A_{q_2↓} = a_1 ⊳ a_2 (with B_{q_2↓} the corresponding two-variable premise pattern). Given two formulas A and B, our task is to decide in polynomial time whether one may infer B from A using the rule r. To do so, we first enumerate in polynomial time all the pairs (S{·}, A′) such that S{A′} = A (there are O(|A|) many). For each of these pairs, we match A′ against the pattern A_r in linear time; if this succeeds, we get a substitution such that A_r[a_1 := C_1, . . . , a_k := C_k] = A′. To conclude, we just have to test the equality S{B_r[a_1 := C_1, . . . , a_k := C_k]} = B.
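A small sketch of this matching procedure, on a simplified binary term encoding (different from the n-ary one above); the rule used at the end is a deliberately meaningless placeholder, not one of the actual rules of BVu.

def contexts(term):
    """Yield all pairs (plug, subterm) such that plug(X) rebuilds the term with the
    subterm replaced by X, i.e. all decompositions A = S{A'}."""
    yield (lambda x: x), term
    if isinstance(term, tuple):
        op, left, right = term
        for plug, sub in contexts(left):
            yield (lambda x, p=plug: (op, p(x), right)), sub
        for plug, sub in contexts(right):
            yield (lambda x, p=plug: (op, left, p(x))), sub

def match(pattern, term, subst):
    """Match a pattern (variables are uppercase strings) against a term."""
    if isinstance(pattern, str) and pattern.isupper():
        if pattern in subst:
            return subst[pattern] == term
        subst[pattern] = term
        return True
    if isinstance(pattern, str) or isinstance(term, str):
        return pattern == term
    return pattern[0] == term[0] and match(pattern[1], term[1], subst) \
                                 and match(pattern[2], term[2], subst)

def substitute(pattern, subst):
    if isinstance(pattern, str):
        return subst.get(pattern, pattern)
    return (pattern[0], substitute(pattern[1], subst), substitute(pattern[2], subst))

def is_instance(rule, a, b):
    """Enumerate all decompositions a = S{a'}, match a' against the conclusion pattern,
    then compare S{premise_pattern[subst]} with b.  Polynomial-time overall."""
    concl_pat, prem_pat = rule
    for plug, sub in contexts(a):
        subst = {}
        if match(concl_pat, sub, subst) and plug(substitute(prem_pat, subst)) == b:
            return True
    return False

# Purely illustrative rule patterns (NOT an actual BVu rule):
rule = (('seq', 'A1', 'A2'), ('seq', 'A2', 'A1'))
print(is_instance(rule, ('par', 'c', ('seq', 'a', 'b')), ('par', 'c', ('seq', 'b', 'a'))))  # True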
Remark 4.11. In this proof, we have exploited the fact that the ≡ ′ rules appear explicitly in our formal proofs. This differs from some traditional presentations of logics in the calculus of structures, which consider that the inference rules work over equivalence classes of formulas modulo ≡ or ≡ ′ (those classes are called structures in [Gug07]). In such a presentation with ≡ ′ kept implicit, it is less obvious that proofs are still polynomial-time checkable.
Together, Proposition 4.9 and Proposition 4.10 tell us that BVu has polynomially bounded proofs that can be checked in polynomial time. This entails that provability is in NP. To extend the result from BVu to BV, note that for any formula A with units, there is a unit-free formula A ′ such that A ≡ A ′ , which is computable from A in polynomial time, and then ⊢ BV A ⇐⇒ ⊢ BVu A ′ .
Remark 4.12. One interpretation put forth for the results of [Tiu06] is that proof systems that can be translated into shallow systems, such as traditional sequent calculi, fail to capture BV and pomset logic because of the lack of deep inference. We claim that in the case of pomset logic, there is an obstruction unrelated to the shallow vs deep distinction. Indeed, the definition of shallow systems in [Tiu06, §6] also enforces the condition that we called dicograph-monotonicity: observe that the relation A ≺ B given in [Tiu06, Definition 6.1] is equivalent to E_A ⊂ E_B. (Being able to state this is one reason for talking abstractly about rewriting systems instead of working directly in the calculus of structures.) Therefore, by Proposition 4.8, any shallow system in this sense has polynomially bounded proofs; and if those proofs are also polynomial-time checkable, then provability is in NP. Together with the result of the next section, this shows that these systems cannot capture pomset logic unless NP = coNP.
Correctness of Pomset Logic Proof Nets is coNP-Complete.
As an intermediate step towards our eventual hardness result for provability in pomset logic, we first study the correctness problem: given a prenet (either tree-like or cographic), does it satisfy the correctness criterion, that is, is it an actual proof net? This can be seen as the special case of provability for balanced formulas, as we remarked before. Let us state our result right away.

Theorem 4.13. Deciding whether a given prenet is correct, i.e., whether it is a pomset logic proof net, is coNP-complete.
More precisely, given a sequent Γ and a pre-proof (or linking) ℓ, it is coNP-complete to check the correctness of its cographic RB-prenet ρ(Γ, ℓ) or of its tree-like RB-prenet τ (Γ, ℓ) (those two conditions being equivalent by Theorem 2.59).
Proof. Let us show an equivalent reformulation: it is NP-complete to decide whether a preproof is incorrect. Membership in NP is immediate: one can build τ (Γ, ℓ), whose ae-cycles provide witnesses for incorrectness by definition, in polynomial time; the size of those cycles is bounded by the number of vertices, and they can be checked in polynomial time. As for the proof of NP-hardness, it is done in two steps.
• We first show, via a polynomial-time reduction, that incorrectness is as hard as finding ae-cycles in arbitrary RB-digraphs (Proposition 4.16 and Theorem 4.17). • We then prove in the next subsection that the existence of ae-cycles in general RB-digraphs is NP-hard (Theorem 4.19).
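The witness check used in the NP-membership argument above can be made explicit. Here is a minimal sketch; the encoding of an RB-digraph as a set of undirected matching edges plus a set of directed non-matching edges, and the representation of the witness as a list of (vertex, edge kind) pairs, are ours.

def is_ae_cycle(cycle, matching, non_matching):
    """`cycle` is a list of (vertex, kind) pairs, where kind in {'B', 'R'} labels the
    edge from this vertex to the next one (cyclically).  Check that the vertices are
    pairwise distinct (elementary), that every labelled edge exists ('B' = matching
    edge, an undirected frozenset; 'R' = directed non-matching edge), and that the
    labels alternate.  All of this takes polynomial time."""
    n = len(cycle)
    verts = [v for v, _ in cycle]
    if n < 2 or len(set(verts)) != n:
        return False
    for i in range(n):
        (u, kind), (v, nxt) = cycle[i], cycle[(i + 1) % n]
        if kind == nxt:
            return False                              # two consecutive edges of the same kind
        if kind == 'B' and frozenset((u, v)) not in matching:
            return False
        if kind == 'R' and (u, v) not in non_matching:
            return False
    return True

# A toy RB-digraph with one matching edge {1, 2} and one directed non-matching edge (1, 2):
print(is_ae_cycle([(1, 'R'), (2, 'B')], {frozenset((1, 2))}, {(1, 2)}))  # True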
Remark 4.14. Assuming that P ≠ NP, this refutes [Ret97a, Proposition 5], which claims that a "standard breadth search algorithm" can decide pomset proof net correctness in polynomial time. (The issue with this kind of argument is discussed in [Ngu20, §8.1].) This complexity claim was meant to justify that "the proof net syntax is a sensible syntax by itself" [Ret97a, §3]: polynomial-time verifiability is indeed part of the Cook-Reckhow definition of a proof system [CR79], and for BV derivations it was established in Proposition 4.10.
We are now going to fill in the steps outlined above to prove the NP-hardness part of Theorem 4.13. The reduction step extends the "proofification" construction from [Ngu20, Section 3.2], which sends undirected perfect matchings to MLL+mix pre-proof nets. For the sake of clarity, we present our new reduction, with the same name, as a map from arbitrary RB-digraphs to balanced pomset logic sequents. Since the procedure makes arbitrary choices, the result is defined only up to atom renaming and modulo ≡, but correctness and provability are invariant under these equivalences.
Definition 4.15 (illustrated by Figure 11). Let G be an RB-digraph. Its proofification Π(G) is a balanced flat sequent built as follows, using the notation of Definition 2.12:
- for each {u, v} ⊆ V_G such that (u, v) ∈ R_G (equivalently, (v, u) ∈ R_G), we give the names a_{u,v} and a_{v,u} to a fresh pair of dual atoms;
- for each (u, v) ∈ R⊳_G (equivalently, (v, u) ∈ R⊲_G), we generate two fresh atoms a_{u,v} and a_{v,u} that are not dual;
- for u ∈ V_G, C_u = a_{u,w_1} ⅋ ⋯ ⅋ a_{u,w_n}, where w_1, . . . , w_n is a non-repeating enumeration (in a suitable order) of the neighborhood of u;
- the sequent Π(G) then consists of formulas built from the C_u and from formulas D_{(u,v)} for the directed non-matching edges, using the above-defined atoms.
The ideas that were already present in [Ngu20] are the use of par (⅋) to represent vertices, tensor (⊗) for matching edges, and dual atoms for non-matching edges. The novelty here is how the seq (⊳) connective of pomset logic serves to encode edge directions through the formulas D_{(u,v)}.
Proposition 4.16. Proofifications can be computed in polynomial time.
Proof. Immediate from the definition.
Since a balanced sequent (or formula) uniquely defines a (tree-like or cographic) prenet, we can define the correctness of such a sequent (or formula) to be the correctness of the corresponding prenet.

Figure 11. An RB-digraph (left) and its proofification (below), together with the corresponding cographic RB-prenet (right). Note that a_{w,x} and a_{x,w} are defined to be dual atoms.
Theorem 4.17. Let G be an RB-digraph and Π(G) be its proofification. Then G admits an ae-cycle if and only if Π(G) is incorrect.
Proof. By slightly adapting and then extending the reasoning found in [Ngu20, Proposition 3.9], one could show that the ae-cycles in G are in bijection with those in the tree-like RB-prenet of Π(G), and the theorem statement would then follow immediately. However, we find it more convenient here to work with the cographic RB-prenet ρ(Π(G), ℓ(Π(G))) (see Figure 11 for an example). First, by unfolding the definitions, one can write the set of non-matching edges of ρ(Π(G), ℓ(Π(G))) as a disjoint union R_1 ∪ R_2, where R_1 consists of the edges (a_{u,v}, a_{x,y}) with (u, x) ∈ B_G, and R_2 of the remaining non-matching edges. Next, suppose that there exists an ae-cycle u_0, . . . , u_{n−1}, u_n = u_0 in G, assuming without loss of generality that (u_i, u_{i+1}) ∈ B_G when i is even and (u_i, u_{i+1}) ∈ R_G when i is odd. Let us write a[u, v] = a_{u,v} for readability. The aforementioned ae-cycle can be turned into the cycle a[u_0, u_{n−1}] → a[u_1, u_2] ⇒ a[u_2, u_1] → ⋯ → a[u_{n−1}, u_0] ⇒ a[u_0, u_{n−1}] in ρ(Π(G), ℓ(Π(G))), where → denotes a non-matching edge and ⇒ denotes either a matching edge or an ae-path of length 3 starting and ending with matching edges. The fact that it is an ae-cycle is immediate. To show that (Π(G), ℓ(Π(G))) is incorrect, we must also check that this cycle is chordless, that is, we must rule out the possibility that any edge in R_1 ∪ R_2 is a chord.
• Let e = (a_{u,v}, a_{x,y}) ∈ R_1, which means that (u, x) ∈ B_G. Suppose that a_{u,v} is part of the cycle in ρ(Π(G), ℓ(Π(G))) (otherwise, e is not a chord). Then u is part of the original cycle in G, i.e. u = u_i for some i ∈ {0, . . . , n−1}. Since the cycle in G is alternating, x = u_{i−1} or x = u_{i+1} (with indexing modulo n) depending on the parity of i, and since it is elementary, this is the only time x is visited. So there exists a unique z ∈ V_G such that a_{x,z} belongs to the cycle in ρ(Π(G), ℓ(Π(G))). We then have either (a_{x,z}, a_{u,v}) ∈ R_1 or (a_{u,v}, a_{x,z}) ∈ R_1. A case analysis depending on whether z = y then shows that e cannot be a chord. • The endpoints of edges in R_2 are not incident to any other non-matching edges, so R_2 cannot provide any chord.
Conversely, one can show that any ae-cycle in ( Π(G) ) induces in a canonical way an alternating cycle in G, and if one starts from a chordless ae-cycle, then the resulting cycle is elementary.
Before moving to the last missing ingredient for the proof of Theorem 4.13, let us draw a connection with some previous material in this paper.
Example 4.18. Consider the digraph with perfect matching on the right of Figure 9. Since it has no ae-cycle, its proofification is a formula provable in pomset logic.
This is very close to the counterexample presented in Section 3.2. In fact, we have already seen an abridged drawing of the corresponding proof net in that section, in Figure 10c. In general, our complexity results imply that unless NP = coNP, there must exist some proofification that is provable in pomset logic but not in BV, and this provides an explicit example. Furthermore, if we forget the edge directions in this graph (right of Figure 9) and then take its proofification, we get the "linear medial" of Remark 3.15 / Figure 10a.

Theorem 4.19. Deciding whether a given RB-digraph admits an ae-cycle is NP-hard.

We shall prove this by a many-one polynomial-time reduction from the cnf-sat problem described in Section 4.1. A more concise proof would have been possible by relying on a result on edge-colored digraphs [GLMM13, Theorem 5] (this is actually how we discovered the theorem, and this proof can be found in [Ngu19]). Nevertheless, our direct reduction, which is heavily inspired by [GLMM13], has two advantages: it shortens the chain of dependencies, and it lends itself to being adapted into the Σ^p_2-hardness proof of our next subsection.

In a nutshell. Let us first give a rough idea of the proof, illustrated by figures on the cnf-sat instance (x ∨ y ∨ z) ∧ (¬x ∨ y) ∧ (¬y ∨ ¬z).
We first build a digraph G cl (Figure 12a) with two distinguished vertices s and t such that paths from s to t are in bijection with choices of one literal per clause. To make this work, the graph contains one vertex for each literal occurrence.
Next, we build on the same set of vertices a digraph G var (Figure 12b) such that paths from t to s correspond bijectively to variable assignments in the following way: the path traverses all the literal occurrences set to false. The point is that if we manage to go from s to t with a path in G cl , and then go back from t to s with a path in G var avoiding all vertices that were already visited, then the first path will only select literals set to true by the second path. This means that cycles visiting both s and t yield satisfying assignments and vice versa.
Finally, the tricky part is to reduce finding such a cycle with two prescribed vertices to finding an ae-cycle in an RB-digraph. This is done by a generic construction (Figure 13) that morally "superimposes" in some way these two graphs G cl and G var . It requires a few additional conditions, among which the acyclicity of both G cl and G var .
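For concreteness, here is one way the two digraphs of this outline could be built from a CNF in Python. The precise edge sets below are our own plausible reading of Figures 12a/12b and of Lemmas 4.20 and 4.21 (in particular the layered shape of G_cl), not a verbatim transcription of them; the encoding of clauses and the function name build_graphs are ours.

def build_graphs(clauses, variables):
    """clauses: list of lists of literals, a literal being a (variable, polarity) pair;
    variables: list giving the variable order x_1, ..., x_p.
    Returns (G_cl, G_var) as edge sets over vertices ('occ', i, j) plus 's' and 't'."""
    occ = [[('occ', i, j) for j in range(len(c))] for i, c in enumerate(clauses)]

    # G_cl: choosing one literal occurrence per clause on the way from s to t.
    g_cl = {('s', v) for v in occ[0]}
    for i in range(len(clauses) - 1):
        g_cl |= {(u, v) for u in occ[i] for v in occ[i + 1]}
    g_cl |= {(v, 't') for v in occ[-1]}

    # Occurrence lists of each literal, in clause order.
    occs_of = {}
    for i, c in enumerate(clauses):
        for j, lit in enumerate(c):
            occs_of.setdefault(lit, []).append(occ[i][j])

    # G_var: from t, traverse all occurrences of one literal per variable, then reach s.
    g_var = set()
    for k, x in enumerate(variables):
        for pol in (True, False):
            chain = occs_of[(x, pol)]            # assumes both polarities occur (Remark 4.4)
            g_var |= set(zip(chain, chain[1:]))  # consecutive occurrences of the same literal
            if k == 0:
                g_var.add(('t', chain[0]))
            else:
                for prev_pol in (True, False):   # from the last occurrence of a literal of x_{k-1}
                    g_var.add((occs_of[(variables[k - 1], prev_pol)][-1], chain[0]))
            if k == len(variables) - 1:
                g_var.add((chain[-1], 's'))
    return g_cl, g_var

clauses = [[('x', True), ('y', True), ('z', True)],
           [('x', False), ('y', True)],
           [('y', False), ('z', False)]]
g_cl, g_var = build_graphs(clauses, ['x', 'y', 'z'])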
Proof details. For the remainder of this subsection, we fix an instance of cnf-sat, presented formally as in Section 4.1: it is a finite ordered set of clauses {C_1, . . . , C_n}; each clause is a finite ordered set of literals C_i = {l_{i,1}, . . . , l_{i,m(i)}}; finally, each literal is either x or ¬x for some variable x ∈ X = {x_1, . . . , x_p}. Given this instance, we consider a set of vertices V_occ = {v_{i,j} | (i, j) ∈ I} with one vertex for each literal occurrence (thus I = {(i, j) | 1 ≤ i ≤ n, 1 ≤ j ≤ m(i)}), plus two auxiliary vertices s and t (outside the set V_occ).
Recall that a digraph is said to be acyclic when it contains no cycles. An observation to keep in mind for the three following lemmas is that a path in an acyclic digraph is always elementary.
Lemma 4.20 (see Figure 12a). From the given cnf-sat instance, one can build in polynomial time a directed graph G_cl = (V_occ ∪ {s, t}, E_cl) such that:
• G_cl is acyclic, s has no incoming edges and t has no outgoing edges;
• the paths from s to t correspond bijectively to the choices of one literal occurrence per clause, visiting exactly the corresponding vertices.
Proof. As illustrated in Figure 12a, we take E_cl to consist of the edges from s to the occurrences of the first clause, from each occurrence of a clause C_i to each occurrence of the next clause C_{i+1}, and from the occurrences of the last clause to t. It is straightforward to check that the required properties hold. For instance, the absence of cycles in G_cl is a consequence of the following fact: every edge goes from a vertex associated with some clause C_i (or from s) to a vertex associated with a later clause (or to t).

Lemma 4.21 (see Figure 12b). From the given cnf-sat instance, assuming without loss of generality that each variable has at least one positive and one negative occurrence (see Remark 4.4), one can build in polynomial time a directed graph G_var = (V_occ ∪ {s, t}, E_var) such that:
• G_var is acyclic, t has no incoming edges and s has no outgoing edges (note that the roles of t and s are reversed compared to G_cl);
• for each path from t to s, the set of intermediate vertices in G_var that it visits is the set of occurrences of the literals {x | x ∈ Y} ∪ {¬x | x ∈ X \ Y} for a unique set of variables Y ⊆ X;
• conversely, every such subset of variables corresponds to a (unique) path from t to s.
In the last two items above, one should see such a Y ⊆ X as the assignment setting the variables in Y to false and the others to true, so the vertices traversed correspond to the literals set to false.
Proof. Let us first describe what the paths starting from t will look like once we have defined the digraph. First, we have to choose l 1 ∈ {x 1 , ¬x 1 } and go to its first occurrence (first for the order induced by the clauses). Then as long as we are on an occurrence of l 1 which is not the last one, there is a single outgoing edge, and it leads to the next occurrence. Finally, once the last occurrence of l 1 is reached, we may go to the first occurrence of l 2 for some choice l 2 ∈ {x 2 , ¬x 2 }. And so on, until the last occurrence of either x p or ¬x p which finally allows us to arrive at s.
To enforce this, we define E_var (as shown in Figure 12b) to consist of all the edges:
• (v_{i,j}, v_{i′,j′}) where l_{i,j} = l_{i′,j′} and C_{i′} is the next clause after C_i containing this literal (consecutive occurrences of the same literal);
• (v_{i,j}, v_{i′,j′}) where l_{i,j} is a literal of some variable x_k (with k < p), l_{i′,j′} is a literal of the variable x_{k+1}, C_i is the last clause containing a literal equal to l_{i,j} while C_{i′} is the first clause containing l_{i′,j′};
• (t, v_{i,j}) and (t, v_{i′,j′}), where l_{i,j} = x_1, l_{i′,j′} = ¬x_1 and C_i, C_{i′} are the first clauses in which those literals appear respectively;
• (v_{i,j}, s) and (v_{i′,j′}, s) for the last occurrences l_{i,j}, l_{i′,j′} of x_p and ¬x_p.

Figure 13. Illustration of the construction of Lemma 4.22 (on graphs other than those of Figure 12, because the drawing of their "superposition" would be unreadable). The pair of paths (s → u → v → t, t → w → s) on the left corresponds to an ae-cycle on the right, whereas the "short-circuit" (s → u → w, w → s), which we want to exclude, corresponds to a cycle that is not alternating (the consecutive edges u^⊖ → w^⊕ → s_2 are both outside the matching).

Lemma 4.22 (see Figure 13). Let G_1 = (V, E_1) and G_2 = (V, E_2) be two directed graphs with the same vertex set V. Let s, t ∈ V. Assume that G_1 and G_2 are acyclic and that s (resp. t) has no incoming edge in G_1 (resp. G_2) and no outgoing edge in G_2 (resp. G_1). Then one can build in polynomial time an RB-digraph H whose ae-cycles are in bijection with the pairs (P_1, P_2) where:
• P_1 is a path from s to t in G_1;
• P_2 is a path from t to s in G_2;
• P_1 \ {s, t} and P_2 \ {s, t} are vertex-disjoint.

Proof. Our construction for H = (V_H, R_H, B_H) associates to each original vertex a matching edge: every v ∈ V \ {s, t} gives rise to two copies v^⊕ and v^⊖ joined by a matching edge, while s and t give rise to the matching edges (s_2, s_1) and (t_1, t_2) respectively. The non-matching edges are obtained from the original edges: writing E_i for the set of edges of G_i, the edges of E_1 induce a set of non-matching edges R′_1 and those of E_2 a set R′_2, so that B, R′_1 and R′_2 are pairwise disjoint. Given a pair of paths (P_1, P_2) with P_1 = s → u_1 → ⋯ → u_r → t and P_2 = t → v_1 → ⋯ → v_q → s, as specified in the lemma statement, we can build an ae-cycle in H: it follows P_1 through the copies of its vertices, crosses the matching edge (t_1, t_2), follows P_2, and closes up by crossing the matching edge (s_2, s_1). It is alternating by construction, and it is elementary because P_1 and P_2 themselves are necessarily elementary (they are paths in acyclic digraphs).
Conversely, we want to extract such a pair of paths (P_1, P_2) from any ae-cycle in H. First, we claim that the RB-digraph (V_H, R′_1, B) does not contain any ae-cycle. This is because any ae-path of length ≥ 2 in it is strictly increasing for the following order: the lexicographic product of the transitive closure of E_1 (which, by the acyclicity assumption on G_1, is a partial order) with the order on {⊕, ⊖} defined by ⊕ ≤ ⊖. Likewise, (V_H, R′_2, B) does not admit any ae-cycle either. Therefore, an ae-cycle in H must contain two edges e_1 ∈ R′_1 and e_2 ∈ R′_2. Given such an ae-cycle, let π_i be the directed subpath starting with e_i and ending with e_{3−i}. Then:
• π_1 contains a subpath v_1 → v_2 ⇒ v_3 at the point where it switches from R′_1 to R′_2, i.e. with (v_1, v_2) ∈ R′_1 and v_3 the source of an edge in R′_2. Since v_2 is the target of an edge in E_1, either v_2 = t_1 or v_2 = v^⊕ for some v ∈ V. In the latter case, we have v_3 = v^⊖, which is impossible for the source of an edge in E_2. Therefore (v_2, v_3) = (t_1, t_2).
• π_2 contains a subpath u_1 → u_2 ⇒ u_3 at the point where it switches back from R′_2 to R′_1. Similarly to the previous case, we conclude that (u_2, u_3) = (s_2, s_1).
To recapitulate: the cycle must switch at some point from edges in R′_1 to edges in R′_2, and it must also switch back at some point; it can only do the former by crossing (t_1, t_2) ∈ B and the latter by crossing (s_2, s_1) ∈ B. Therefore, this ae-cycle decomposes into an ae-path P_1 from s_1 to t_1 in (V_H, R′_1, B) and an ae-path P_2 from t_2 to s_2 in (V_H, R′_2, B), glued together by (t_1, t_2) ∈ B and (s_2, s_1) ∈ B. These paths are vertex-disjoint because the ae-cycle that they form is, by definition of ae-cycle, elementary; they can be lifted to yield the desired pair of paths in G_1 and G_2.
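To make the construction concrete, here is a sketch of one way to realize the superposition H in Python. The splitting of vertices into '+'/'-' copies and the roles of s1, s2, t1, t2 follow our reading of the proof above and of the caption of Figure 13; these conventions, as well as the encoding of graphs as edge sets, are assumptions.

def superpose(V, E1, E2, s, t):
    """One plausible realization of the superposition H of Lemma 4.22: ordinary vertices
    are split into a matched pair of copies ('+', '-'), s into s1/s2 and t into t1/t2;
    E1-edges run from '-' copies (or s1) to '+' copies (or t1), and E2-edges from '+'
    copies (or t2) to '-' copies (or s2)."""
    matching = {frozenset(((v, '+'), (v, '-'))) for v in V if v not in (s, t)}
    matching |= {frozenset(('s1', 's2')), frozenset(('t1', 't2'))}
    out1 = lambda v: 's1' if v == s else (v, '-')   # tails of E1-edges (t is never such a tail)
    in1 = lambda v: 't1' if v == t else (v, '+')    # heads of E1-edges (s is never such a head)
    out2 = lambda v: 't2' if v == t else (v, '+')   # tails of E2-edges
    in2 = lambda v: 's2' if v == s else (v, '-')    # heads of E2-edges
    non_matching = {(out1(u), in1(v)) for (u, v) in E1} | {(out2(u), in2(v)) for (u, v) in E2}
    return matching, non_matching

# A small example in the spirit of Figure 13: G1 contains s->u->v->t and s->u->w, G2 contains t->w->s.
V = {'s', 'u', 'v', 'w', 't'}
E1 = {('s', 'u'), ('u', 'v'), ('v', 't'), ('u', 'w')}
E2 = {('t', 'w'), ('w', 's')}
matching, non_matching = superpose(V, E1, E2, 's', 't')
print((('u', '-'), ('w', '+')) in non_matching, (('w', '+'), 's2') in non_matching)  # True True

With these conventions, the "short-circuit" of Figure 13 indeed fails to alternate: the edges ((u, '-'), (w, '+')) and ((w, '+'), 's2') are consecutive non-matching edges, whereas the intended pair of paths crosses the matching edges ('t1', 't2') and ('s2', 's1') when switching between the two graphs.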
We can now combine these ingredients to reduce cnf-sat to the directed ae-cycle problem.
Proof of Theorem 4.19. We apply the construction of the previous lemma to G_cl and G_var (with V = V_occ ∪ {s, t}). An ae-cycle in the resulting RB-digraph corresponds to a path P_cl from s to t in G_cl plus a path P_var from t to s in G_var that are vertex-disjoint except at s and t. We have to show that the existence of the latter is equivalent to that of an assignment satisfying all the clauses.
Suppose that we are given such an assignment. First, there exists a unique path P_var in G_var that visits all the literal occurrences set to false (Lemma 4.21). Since the assignment is satisfying, we may choose in each clause a literal set to true. This corresponds by Lemma 4.20 to a path P_cl in G_cl. If some vertex of V_occ were to appear in both P_var and P_cl, it would mean that the corresponding literal is set both to false and to true simultaneously, which is impossible; hence the two paths are vertex-disjoint except at s and t, and they yield an ae-cycle.
The converse direction proceeds by a similar reasoning.
Pomset Logic Provability is Σ^p_2-Complete.
We are now in a position to treat our main complexity result.

Theorem 4.23. The provability problem of pomset logic is Σ^p_2-complete.

As in the case of correctness, we start with the membership part, whose proof is easier than the hardness part.
Proposition 4.24. Pomset logic provability is in Σ^p_2.

Proof. The size of any pre-proof (i.e. axiom linking) is bounded by a polynomial in the size of its conclusion, and the correctness criterion is in coNP. Therefore, there is a Σ^p_2 = NP^NP algorithm that consists in first guessing non-deterministically a pre-proof whose conclusion is the input formula, then calling an NP oracle for incorrectness on this guess, and finally accepting whenever the oracle returns false.
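Schematically, this algorithm has the following shape (the function names are placeholders, and the non-deterministic guess is written here as an exhaustive search, so the sketch itself is exponential):

def pomset_provable(formula, enumerate_preproofs, incorrect_oracle):
    """Sigma^p_2 shape of the algorithm from Proposition 4.24: existentially guess a
    pre-proof (a polynomial-size witness), then ask the NP oracle `incorrect_oracle`
    whether this guess is incorrect, and accept iff some guess makes the oracle
    answer False."""
    return any(not incorrect_oracle(formula, linking)
               for linking in enumerate_preproofs(formula))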
To show the Σ^p_2-hardness of provability, we proceed analogously to the coNP-hardness proof for correctness, using a sequence of reductions: Boolean formula problem → graph-theoretic problem → pomset logic problem. Here, this pattern is instantiated with the Π^p_2-complete ∀∃-cnf-sat problem (cf. Section 4.1) on the left and unprovability in pomset logic on the right. For the auxiliary step in the middle, we use a problem formulated using "switchings" of "paired graphs", as in the standard Danos-Regnier correctness criterion for MLL+mix [DR89, FR94].
Definition 4.25.
A paired digraph is a directed graph G equipped with a set P of unordered pairs of edges, such that the pairs are disjoint (if p, p′ ∈ P and p ≠ p′, then p ∩ p′ = ∅) and paired edges have the same source (if {(u_1, v_1), (u_2, v_2)} ∈ P, then u_1 = u_2). An edge is unpaired if it is not an element of any pair in P. A switching of (G, P) is a spanning subgraph of G that contains all unpaired edges and exactly one edge in each pair.
A paired RB-digraph is a tuple (V_G, R_G, B_G, P) such that (V_G, R_G, B_G) is an RB-digraph and (V_G, R_G, P) is a paired digraph. Consistently with the above, a switching of (V_G, R_G, B_G, P) is an RB-digraph of the form (V_G, R′, B_G) where R′ ⊆ R_G contains all unpaired edges of R_G and exactly one edge in each pair.
The reader familiar with proof nets might find it strange that we are mixing perfect matchings and paired graphs together: the former already play the role of expressing the correctness criterion, so what are the latter good for? The answer is that switchings will morally correspond to choices of axiom linkings: the possible proof nets whose conclusion is a given arbitrary (not necessarily balanced) formula only differ by which pairs of dual atoms are joined by axiom links. Without further ado, let us present our reductions.
Figure 14. The paired G_var construction from the proof of Lemma 4.26, on the ∀∃-cnf-sat instance ∀x ∀y ∃z (x ∨ y ∨ z) ∧ (¬x ∨ y) ∧ (¬y ∨ ¬z) (compare with Figure 12b). The dashed edges represent the choice of a switching in which these edges are erased. The specific choice made here gives us three bits of information that can be interpreted as x = false, (x = true) ⟹ (y = true) and (x = false) ⟹ (y = false) (recall that visiting a literal corresponds to setting it to false), which entails x = y = false. The thick colored path materializes this assignment of the universal variables, and it is a prefix of all paths from t to s in this switching.

Lemma 4.26. Given a paired RB-digraph as input, it is Π^p_2-hard to decide whether every one of its switchings admits an ae-cycle.

Proof. We proceed by polynomial-time reduction from ∀∃-cnf-sat (cf. Section 4.1). Consider an instance of this problem whose universal variables are x_1, . . . , x_m (we will not need to directly manipulate the CNF or the existential variables). We reuse the constructions G_cl and G_var of Lemmas 4.20 and 4.21, applying them to the CNF part of the input. Strictly speaking, the digraph G_var depends on the order of the variables, and here we shall consider for convenience that the universal variables come before the existential ones, with the i-th variable being x_i for i ∈ {1, . . . , m}.

The key idea is to distinguish a set P of disjoint edge pairs in G_var (see Figure 14 for an example). Let t and s be its distinguished source and sink vertices. For i ∈ {1, . . . , m}, we write:
• u_i (resp. v_i) for the vertex that corresponds to the first (resp. last) occurrence of x_i;
• u^¬_i (resp. v^¬_i) for the vertex that corresponds to the first (resp. last) occurrence of ¬x_i.
(Here "first" and "last" mean the same thing as in the construction of G_var (Lemma 4.21): they are defined with respect to an arbitrary order on the set of clauses. To build G_var, each variable must occur at least once positively and once negatively; this can be assumed without loss of generality for instances of ∀∃-cnf-sat. It is possible that u_i = v_i, but for i ≠ j, we have u_i ≠ v_j.) We take P to consist of the pair {(t, u_1), (t, u^¬_1)} together with, for each i ∈ {1, . . . , m−1}, the two pairs {(v_i, u_{i+1}), (v_i, u^¬_{i+1})} and {(v^¬_i, u_{i+1}), (v^¬_i, u^¬_{i+1})}. The reader may check that all edges that appear in P are indeed edges in G_var. Furthermore, it follows from the definition that the pairs are disjoint and that any two paired edges have the same source. Therefore, (G_var, P) is a paired digraph in the sense of Definition 4.25.
The point is that, as illustrated in Figure 14, for any switching S of (G_var, P), exactly one of v_m and v^¬_m is reachable from t, and the path to it is unique; let us call it π_S. Each switching thus induces an assignment {x_1, . . . , x_m} → {true, false}, which assigns x_i to false when π_S visits the vertices associated to x_i (among which are u_i and v_i), and to true otherwise (in which case the path necessarily visits u^¬_i and v^¬_i); and the paths from t to s in the switching correspond to extending this assignment with values for the other (existential) variables. Moreover, this map from switchings to assignments of the universal variables is surjective (more precisely, each of the 2^m assignments is induced by exactly 2^{m−1} switchings among the total of 2^{2m−1} possible switchings).
Let us consider next the correspondence given by Lemma 4.21 between the paths from t to s and the assignments of all variables (both universal and existential). One can see that given such a path ρ and a switching S, the following are equivalent: • the edges of ρ exist in S; • π S is a prefix of ρ; • the assignment that corresponds to S (by the above discussion) is the restriction to the universal variables of the one that corresponds to ρ (by Lemma 4.21).
From these observations, one can deduce, by a similar reasoning to the proof of Theorem 4.19, that the given ∀∃-cnf-sat instance is positive if and only if, for every switching S of (G var , P), there exists a path from s to t in G cl and a path from t to s in S that do not share any intermediate vertex.
To conclude, it suffices to reduce this to the desired problem on paired RB-digraphs by a suitable adaptation of Lemma 4.22, whose precise formulation is left to the reader.
Theorem 4.27. Unprovability in pomset logic is Π^p_2-hard.

Proof. We reduce the problem shown to be Π^p_2-hard by Lemma 4.26 to pomset logic unprovability by extending proofification to handle edge pairs. An example is given in Figure 15.
Let (V_G, R_G, B_G, P) be a paired RB-digraph (we write G = (V_G, R_G, B_G)). We assume without loss of generality that for every edge pair {(u, v), (u, w)} ∈ P, the opposite edges (v, u) and (w, u) are not present in the graph (otherwise, break (v, u) up into a path of length three involving two new vertices, add the middle edge of this path to the matching, and proceed similarly for (w, u)). Let us define a flat sequent Γ, which is not balanced in general; the description deliberately mirrors that of Π(G) in Definition 4.15, where (using again the notation of Definition 2.12):
• {e_1, . . . , e_k} is the set of unpaired edges (with e_i = (u_i, v_i)) and {p_1, . . . , p_l} = P, all these enumerations being non-repeating;
• the atoms involved in Γ are created as follows:
  - for each {u, v} ⊆ V_G such that (u, v) ∈ R_G (by the assumption made above without loss of generality, both (u, v) and (v, u) are then unpaired), we give the names a_{u,v} and a_{v,u} to a fresh pair of dual atoms;
  - for each edge (u, v) ∈ R⊳_G which is unpaired, we generate two fresh atoms a_{u,v} and a_{v,u} that are not dual;
  - for each pair p = {(u, v), (u, w)} ∈ P, we create two fresh atoms a_{u,p}, b_p and we define a_{v,u} = a_{w,u} = b_p (observe however that we do not define a_{u,v} nor a_{u,w});
• for u ∈ V_G, C_u is the par (⅋) of all the atoms a_{u,v} and a_{u,p}, where v (resp. p) ranges over the vertices (resp. pairs) such that a_{u,v} (resp. a_{u,p}) is defined according to the above.
This sequent Γ that we just defined contains, for each pair p = {(u, v), (u, w)} ∈ P, exactly two occurrences of b_p, designated as a_{v,u} and a_{w,u}, and exactly two occurrences of b⊥_p in D′_p, that we will also name c_{p⊳} and c_p. So each pre-proof of Γ induces a bijection between {a_{v,u}, a_{w,u}} and {c_{p⊳}, c_p}. Conversely, such a choice of bijection for each edge pair uniquely determines a corresponding axiom linking.
Let ℓ be a pre-proof of Γ. Let s, t : P → V_G be defined by ℓ(c_p) = a_{t(p),s(p)} for all p ∈ P. We define the RB-digraph S_ℓ to be the switching in which the edge (s(p), t(p)) has been deleted in each pair p. We can reformulate what we observed in the previous paragraph as the fact that ℓ ↦ S_ℓ is a bijection from axiom linkings for Γ to switchings of our paired RB-digraph.
In the cographic RB-prenet ρ(Γ, ℓ), the vertex for c_p has no incident non-matching edge, so it cannot be involved in any ae-cycle. Since it is matched with a_{t(p),s(p)}, the latter also cannot occur in any ae-cycle. So the witnesses of incorrectness in ρ(Γ, ℓ), i.e. its chordless ae-cycles, must be entirely contained in the induced sub-RB-digraph that excludes c_p and a_{t(p),s(p)} for all p ∈ P. Thanks to the correspondence between pseudo-subformulas and induced subgraphs (Proposition 3.5), one can write (an isomorphic copy of) this sub-RB-digraph as ρ(Γ′, ℓ′), where Γ′ is obtained from Γ by substituting the aforementioned atom occurrences by I, and ℓ′ is a restriction of ℓ; so (Γ, ℓ) is incorrect iff (Γ′, ℓ′) is.
The key property, which one can check from the definitions, is that Γ′ is balanced and equal, up to atom renaming, to the proofification Π(S_ℓ) (note for instance that if (u, v) ∈ p ∈ P and (u, v) ∈ R_{S_ℓ}, then the formula D′_p in Γ becomes D_{(u,v)} in Π(S_ℓ) after substituting c_p by I). Thus, by Theorem 4.17, (Γ, ℓ) is incorrect if and only if S_ℓ contains an ae-cycle. By definition of provability, and using the surjectivity of ℓ ↦ S_ℓ, it follows that Γ is unprovable if and only if every switching of the initial paired RB-digraph admits an ae-cycle. According to Lemma 4.26, it is Π^p_2-hard to test the latter condition. Since Γ can be computed from our original paired RB-digraph in polynomial time, this concludes the proof.
Back to the Sequent Calculus
In this section we come back to the problem of finding a sequent calculus for BV and pomset logic, as this problem was the starting point for much of the research done on these two logics. The current state of the art on this topic is: (1) Retoré [Ret93] presented a sequent calculus for pomset logic, for which he could show soundness, but neither completeness nor cut elimination. (2) This difficulty motivated Guglielmi [Gug07] to develop system BV, which has a cut-elimination proof in deep inference, but not in the sequent calculus. Then Tiu [Tiu06] showed that "deepness" is necessary for BV and that therefore there cannot be a sequent calculus for BV (see also Remark 4.12).
(3) However, the formulas used by Tiu [Tiu06] to defy the sequent calculus can be proved in the cut-free version of Retoré's [Ret93] sequent calculus, using the entropy rule. (4) In Remark 4.12 we also observed that our complexity results pose a much harder obstacle to a sequent calculus for pomset logic than the mere need of "deepness". (5) But recently, Slavnov [Sla19] presented a cut-free sequent calculus that is sound and complete for pomset logic.
This is, of course, confusing, as it seems contradictory. Does (3) mean that Tiu's [Tiu06] result is wrong? Does (5) mean that Retoré [Ret93] just did not try hard enough and that the problem is solved? Does it mean that our complexity result about the Σ^p_2-completeness of pomset logic is wrong, or that NP = coNP?
The answer to all these questions is, of course, "No". The purpose of this section is to clarify the confusion and work out the subtleties of possible sequent calculi for pomset logic and BV. More precisely, we present the following results: • We show that Retoré's sequent calculus with cut (the variant presented in [Ret21]) is equivalent to SBV. It is therefore a sequent calculus for BV and not for pomset logic. • We present a formula that is provable in BV but not in the cut-free version of Retoré's [Ret93] sequent calculus. Therefore, that sequent calculus does not admit cut elimination. • We refine Tiu's [Tiu06] argument about the need for "deepness" in BV such that Retoré's entropy rule is no longer enough to prove the formulas used in the argument. • We use our results from the previous section to make a complexity-theoretic argument showing that a "standard" sequent calculus for pomset logic is impossible. • Finally, we discuss Slavnov's [Sla19] sequent calculus and exhibit how it circumvents this complexity-theoretic obstacle.
Retoré's Sequent Calculus with Cuts is Equivalent to BV.
Recall that a pomset logic sequent can be seen as a ⊗-free generalized formula over a set of formula occurrences (see Proposition 2.8). Hence, we can define the graph of a sequent Γ as in Definition 2.17. Let G_Γ = (V_Γ, R_Γ) and G_{Γ′} = (V_{Γ′}, R_{Γ′}) be the graphs of sequents Γ and Γ′, respectively. Then both are series-parallel orders (see Proposition 2.31), and we write Γ′ ⊑ Γ when V_{Γ′} = V_Γ and R_{Γ′} ⊆ R_Γ, i.e., both sequents contain the same formula occurrences, and the order induced by Γ′ is contained in the order induced by Γ. We are now ready to discuss Retoré's sequent calculus, shown in Figure 16. The ⅋-introduction rule and the ⊳-introduction rule simply express the correspondence between the structural (sequent-level) connectives and the logical (formula-level) connectives. Note that the ⊗-introduction rule and the cut rule can only be applied in flat contexts (but Γ and ∆ can be non-flat sequents). The standard mix rule is derivable using the dimix and entropy rules.
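The inclusion check underlying the entropy rule can be made explicit; a minimal sketch, assuming sequents are given as a set of occurrence names together with the binary relation of their series-parallel order (the encoding is ours):

def entropy_ok(gamma, gamma_prime):
    """gamma, gamma_prime: pairs (occurrences, order) where `occurrences` is a set of
    formula-occurrence names and `order` a set of pairs (x, y) meaning x comes before y.
    Check the relation described above: same occurrences, and the order of gamma_prime
    is contained in the order of gamma."""
    (v1, r1), (v2, r2) = gamma, gamma_prime
    return set(v1) == set(v2) and set(r2) <= set(r1)

# The order of the second (flat) sequent is contained in that of the first, ordered one:
print(entropy_ok(({'A', 'B', 'C'}, {('A', 'B')}), ({'A', 'B', 'C'}, set())))  # True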
Before embarking on the equivalence of this sequent calculus with BV, let us illustrate the added power of cuts. For this purpose, we recall an example sequent (5.2) considered by Retoré and the second author. The corresponding formula of this sequent has been shown to be provable in BV in Figure 5, and its cographic RB-prenet may be found in [Ret21, Figure 10].

Proposition 5.1. The sequent (5.2) is not provable in Retoré's sequent calculus without cut.

The reason for this lies in the fact that in order to have a cut-free sequent proof, the ⊗-rule has to be applied to one of the two ⊗-formulas eventually. But the ⊗-rule is too weak to split the context correctly.
However, with cut, the sequent is provable, as shown in Figure 17. We have therefore shown the following.
Corollary 5.2. Cut-elimination does not hold for Retoré's sequent calculus.
Let us now establish the equivalence between SBV and the sequent calculus in Figure 16. For this, we resort again to SBVu (see Proposition 2.68). We begin by showing that every formula that is provable in SBVu is also provable in Retoré's sequent calculus with cut. For the derivation on the right, the validity of the entropy rule depends on an inclusion between series-parallel orders which can be mechanically verified.

Proof of Lemmas 5.4 and 5.5. For any formula A, we prove the corresponding sequent for each rule (for the switch rule, the derivation involves only flat sequents and can therefore be carried out in MLL+mix). This argument applies to all inference rules (except ai•↓, which has no premise and therefore does not fit the pattern in the lemma statement); for each of these rules, it therefore suffices to treat the case with atomic formulas and trivial context.
For the ≡ ′ -rule, we can assume without loss of generality that each instance is an application of exactly one of the equalities in (2.5) (as each general instance of the ≡ ′ -rule can be replaced by a finite sequence of these special instances).
We have thus reduced the desired conclusion to a bounded search for cut-free proofs that we leave to the reader.
Theorem 5.7. If a formula is provable in SBVu then it is also provable in Retoré's sequent calculus with cuts.
Proof. A proof of a formula A in SBVu must have the form of a derivation whose topmost formula is a ⅋ a⊥ (the conclusion of the premise-free rule ai•↓); by Lemma 5.6, the sequent corresponding to each inference step of this derivation is provable in Retoré's sequent calculus. Furthermore, a ⅋ a⊥ also has a sequent calculus proof (an axiom rule followed by a ⅋-intro). By composing all these sequent proofs with the cut rule, one gets a proof of A.
For the converse, observe that the axiom, dimix and ⅋/⊳-intro rules are easy to simulate in BVu. The treatment of ⊗-intro is as usual (see, e.g., [Str03b, Section 3.3] or [Gug07, §5]), and cut is a ⊗-intro followed by i↑ (which is admissible in SBVu). Furthermore, it is an immediate consequence of Theorem 2.76 that SBVu can also simulate the entropy rule. This is enough to prove the following theorem: Theorem 5.8. If a sequent Γ is provable in Retoré's sequent calculus with cuts, then a formula A that corresponds (see Definition 2.9) to Γ is provable in SBVu.
Corollary 5.9. Retoré's sequent calculus with cuts is equivalent to BV.
A Refinement of Tiu's Argument.
In [Tiu06], Tiu presents a sequence of formulas S_0, S_1, S_2, . . . with the following properties: (1) For each n, the formula S_n is provable in BV.
(2) In order to prove S_n, a subformula at depth 2n has to be accessed first. From this it follows that there can be no shallow (i.e., such that all inference rules have a fixed maximum depth) cut-free proof system equivalent to BV. As most standard sequent calculi are shallow in that sense, the argument can be used to claim that there cannot be a cut-free sequent calculus for BV. However, Tiu's formulas also have the property: (3) For each n, the formula S_n is ⊗-free. Since every S_n contains only the ⅋ and ⊳ connectives, it can be proved in Retoré's calculus by first applying the ⅋ and ⊳ rules to transform the formula into a sequent with only atomic formulas. This sequent can then be derived from the axioms by only dimix instances and a single instance of the entropy rule. This is not a contradiction, as the entropy rule is a deep rule (i.e., not shallow) in the sense above. However, it raises the question of whether we can have a refinement of Retoré's sequent calculus that is complete for BV and obeys cut elimination, as this is no longer ruled out by Tiu's argument.
What we will show next is a sequence of formulas R_0, R_1, R_2, . . ., following the spirit of Tiu's construction and having properties (1) and (2) above, but not (3). Consequently, for proving them, a ⊗-subformula has to be accessed at arbitrary depth, without the possibility of globally splitting the context. This entails that a proper deep inference system is indeed needed, and that a proof system in the sequent calculus layout is insufficient.
We start with the index set I = {0, 1}*, i.e., the set of all finite words over the symbols 0 and 1. Then, our formulas are built from the propositional variables {a, b, c, y, z} × I, written as a_i, b_i, c_i, y_i, z_i with i ∈ I.
For an index i ∈ I and two formulas A and B, we now define inductively the formula ξ_n(i, A, B) for each n ∈ ℕ, and we set R_n = ξ_n(0, I, I). These formulas have the following properties. Claim 5.10. For each n, the formula R_n is provable in BV.
Proof. For every n ∈ ℕ and i ∈ I, we have a suitable derivation, constructed in the same way as in the proof of Lemma 7.4 in [Tiu06], using a derivation that is similar to the one in Figure 5.
Claim 5.11. The formulas R n are not provable in Retoré's sequent calculus without cuts.
Proof. The formula R 0 is a variation of the corresponding formula of the sequent (5.2), and the same argument applies.
Finally, we have: Claim 5.12. A deep proof system is needed to prove the formulas R n .
Proof. This is proved by the same argument as in [Tiu06]. In order to prove R_n, a ⊗-subformula at depth 2n has to be accessed first, in order to remove the variable y_i (or z_i) with an atomic interaction.
This strengthens Tiu's argument by using formulas that also involve ⊗, showing that rules like entropy are not enough to obtain a cut-free sequent-style calculus for BV.
A Complexity-Theoretic Obstacle to Sequent Calculi for Pomset Logic.
Let us now turn to pomset logic. We propose here a similar (and somewhat informal) complexity-theoretic explanation, in the spirit of Remark 4.12, for why a cut-free sequent calculus for pomset logic is difficult to find.
As described in [Ret21, Section 5], the origins of pomset logic in the coherence space semantics of linear logic suggest that ⊳ is meant to be a multiplicative connective, just like the multiplicative conjunction ⊗ and the multiplicative disjunction ⅋ (which give their names to the fragments MLL and MLL+mix of linear logic). Therefore, we would like to leverage some proof-theoretic property related to multiplicative connectives. Recall that the standard sequent calculus rules for ⊗ and for its additive counterpart & correspond to the two possible introduction rules for the conjunction ∧ in classical logic: the multiplicative one, which infers Γ, Δ ⊢ A ∧ B from Γ ⊢ A and Δ ⊢ B, and the additive one, which infers Γ ⊢ A ∧ B from Γ ⊢ A and Γ ⊢ B. The difference is that the multiplicative rule splits the context of the conclusion into disjoint parts among the premises, while the additive rule copies the same context into both premises. The same kind of management of the context occurs in the rules for the so-called "generalized multiplicative connectives" in the French-Italian linear logic tradition [DR89, AM20]. This leads us to the following definition.
Definition 5.13. A multiplicative introduction rule for an n-ary connective Φ is a rule whose instances obey the following property: there exist formulas A_1, . . . , A_n and a multiset of formulas M such that
• the multiset of formulas occurring in the premises (i.e. the sum of the multisets of formulas in each premise) is equal to {A_1, . . . , A_n} + M;
• the multiset of formulas occurring in the conclusion equals Φ(A_1, . . . , A_n) + M.
A multiplicative structural rule is one in which the premises (taken together) and the conclusion have the same formulas, taking multiplicity into account; this is an equality of multisets similar to the above.
A sequent calculus is multiplicative when all its rules are either an axiom rule, a multiplicative introduction rule or a multiplicative structural rule.
For this to make sense, there must be a way to associate to a sequent its multiset of formulas. This works both for flat sequents -which are already multisets -and for ordered sequents, and other kinds of generalized sequents are conceivable. The standard MLL+mix sequent calculus, as well as Retoré's calculus in the previous section and Slavnov's calculus in the next one, are all multiplicative.
The relevance of this notion to us is a consequence in the spirit of proof complexity: Claim 5.14. In a proof in a multiplicative sequent calculus, the total number of introduction rules is at most the total size of the formulas in the sequent being proved.
In many cases, there will also be a bound on the structural rules in proofs. For instance, the number of uses of multiplicative structural rules that require two or more premises, such as the mix rule, is also linearly bounded by the size of the conclusion formulas. Thus the following informal principle: a multiplicative sequent calculus with "reasonable" structural rules admits proofs with a polynomial number of inference rules.
We also expect the correctness of these proofs to hinge only on the local validity of their inferences (unlike proof nets, whose global correctness criterion makes them hard to check). By a reasoning similar to that of Remark 4.12, if such a multiplicative calculus were to capture pomset logic, then it would be impossible to verify its inference rules in time polynomial in the size of the formulas, unless NP = coNP. This is arguably a strong restriction on the design of sequent calculi for pomset logic, such as the calculus in the next section.
A Reconstruction of Slavnov's Calculus.
Let us now revisit the sound and complete proof system of [Sla19] for pomset logic in light of our above remarks. It uses decorated sequents, which are flat sequents (multisets of formulas) endowed with additional "decorations" (analogously to how Retoré's sequents can be seen as flat sequents decorated with series-parallel orders). The point is that in Slavnov's sequents, these decorations (defined in Definition 5.15 below) take up most of the space; their size may be exponentially bigger than the number of formulas in the sequent. This provides a good reason for inference rule checking to be superpolynomial in the total size of formulas -this is indeed necessary since this sequent calculus is, as we shall see, multiplicative and "reasonable" in the above sense.
For the sake of completeness, we describe briefly here Slavnov's system, specialized to pomset logic. The paper [Sla19] also introduces and deals with a related but different extension of MLL called "semicommutative MLL", and it derives its sequent calculus for pomset logic from the one for semicommutative MLL. Our exposition avoids this detour. We also aim at giving a high-level idea of what makes the system work. We decompose our presentation of the system into three "levels" (inspired by the two-level treatment of [Sla19]).
First level: pre-proofs on flat sequents. First, consider the "old-fashioned" calculus on flat sequents whose rules are those of the usual cut-free sequent calculus for MLL+mix (axiom, ⊗-intro, ⅋-intro and mix), extended with an introduction rule for ⊳ which is exactly the same as the one for ⅋. Let us call Slavnov pre-proofs the derivation trees using these rules. Obviously, some formulas that are not valid in pomset logic, such as (a ⊗ b) ⅋ (a⊥ ⊳ b⊥), may admit Slavnov pre-proofs. An important observation is that there is a canonical map from Slavnov pre-proofs to tree-like RB-prenets that preserves the conclusion sequent. It can be defined inductively by interpreting each of the inference rules as an operation on proof nets in the obvious way; for instance, the ⊗-intro rule corresponds to taking the union of two RB-prenets and adding a gadget for the newly added connective (cf. Figure 2).
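For reference, these rules can be written in standard one-sided sequent notation as follows; this is our rendering, and the precise typography in [Sla19] may differ.

% \parr and \lhd require e.g. the cmll package; this is only a sketch of the rule shapes.
\[
\frac{}{\vdash a, a^{\perp}}\;\mathsf{ax}
\qquad
\frac{\vdash \Gamma \qquad \vdash \Delta}{\vdash \Gamma, \Delta}\;\mathsf{mix}
\qquad
\frac{\vdash \Gamma, A \qquad \vdash \Delta, B}{\vdash \Gamma, \Delta, A \otimes B}\;\otimes
\qquad
\frac{\vdash \Gamma, A, B}{\vdash \Gamma, A \parr B}\;\parr
\qquad
\frac{\vdash \Gamma, A, B}{\vdash \Gamma, A \lhd B}\;\lhd
\]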
The goal of the "second level" of the proof system will then be to filter out Slavnov pre-proofs that translate into correct tree-like RB-nets.
(For MLL (resp. MLL+mix) sequent calculus proofs, i.e. Slavnov pre-proofs without the ⊳-intro and mix rules (resp. without the ⊳-intro rule), it is well known that the result of the translation is a correct MLL (resp. MLL+mix) proof net: in those cases, it corresponds to a "desequentialization" operation whose idea goes back to Girard's original paper [Gir87].)

Second level: decorations with "multi-reachability" information. At this stage, it is natural to add data that keeps track of paths in RB-prenets, in order to reflect the correctness criterion. To each Slavnov pre-proof π of a flat sequent Γ, we associate a decoration d_π, and thus the decorated sequent S_π = (Γ, d_π), as follows: first translate π into a tree-like RB-prenet G, and then let {(A_1, B_1), . . . , (A_n, B_n)} ∈ d_π whenever there exists a family of pairwise disjoint ae-paths (P_i)_{i ∈ {1,...,n}} such that for each i the path P_i goes from the conclusion vertex of G corresponding to A_i to the vertex corresponding to B_i.
The key property is now: Claim 5.16. The decorated sequent corresponding to a Slavnov-pre-proof is entirely determined by the last rule and the decorated sequents corresponding to the sub-pre-proofs of the premises.
For instance, there exists a function F such that, given a pre-proof π whose last rule is a ⊗-intro with premises ⊢ Γ, A and ⊢ ∆, B (proved by sub-pre-proofs π_1 and π_2) and conclusion ⊢ Γ, ∆, A ⊗ B, we have S_π = F(S_{π_1}, S_{π_2}, A, B). To see why this claim holds, observe that a family of disjoint ae-paths between conclusions in the tree-like RB-prenet corresponding to π consists of the union of: • such a family in the prenet for π_1, not touching A; • such a family in the prenet for π_2, not touching B; • either the empty set, or a singleton containing one of the following: - an ae-path between (the vertex for) some formula occurrence in Γ and some other in ∆, going through the RB-tree gadget for A ⊗ B; - an ae-path from some vertex of either Γ or ∆ to the conclusion vertex for A ⊗ B; - the reverse of either of the above two possibilities. An explicit expression for F can be obtained by reasoning along these lines. Similar (simpler) analyses can be carried out for the other connectives, and for the axiom and mix rules.
This allows us to lift each of the previous inference rules on flat sequents to a rule on decorated sequents; for example, the decorated version of ⊗-intro infers F(S, S′, A, B) from the decorated sequents S and S′, where A (resp. B) is a formula occurrence in S (resp. S′). These decorated inference rules can be read as a proof system, and we have: Claim 5.17. The derivation trees generated by those rules are exactly the ones that can be obtained in the following way: start from a Slavnov pre-proof (with flat sequents) and, for each node, replace its value by S_π, where π is the sub-pre-proof rooted at that node.
Let us call these derivation trees decorated pre-proofs.
Third level: side condition using the decorations. At this point, we have obtained a new proof system that, in the end, proves the same flat sequents as the former Slavnov pre-proofs. So it is still unsound with respect to pomset logic. To remedy that, it remains to leverage the additional "multi-reachability" information provided by the decorations (in fact, we use only reachability by a single ae-path between two formulas).

Claim 5.18. The tree-like RB-prenet obtained from a Slavnov pre-proof is correct if and only if, for every instance of the ⊳-intro rule in it, introducing A ⊳ B from a premise whose decorated sequent is (Γ, d), we have {(B, A)} ∉ d.

Proof. In the inductive translation of Slavnov pre-proofs to prenets, if the last rule is any other than ⊳-intro, then the sub-pre-proofs for its premises are all mapped to correct nets if and only if the translation of the whole pre-proof is itself a correct net. In fact, this is precisely why the desequentialization of MLL+mix sequent proofs into proof nets is sound, a well-known fact. However, for a ⊳-intro rule (using the above notations), the corresponding operation on tree-like RB-prenets may create new ae-cycles. From the shape of the RB-tree gadget associated to ⊳ (cf. Figure 2), one can see that such new cycles can only be composed of the directed edge of this gadget from A to B plus an ae-path from B to A in the prenet for the premise. The existence of the latter is precisely equivalent to {(B, A)} ∈ d.
Note that the assumption in the above claim only consists in purely local "side conditions" on inference rules, hence:

Definition 5.19. A decorated proof is a derivation tree in the system whose inference rules are those of decorated pre-proofs, except that the ⊳-intro rule is subject to a side condition: it can only be applied to a decorated sequent ([Γ, A, B], d) such that {(B, A)} ∉ d, where A ⊳ B is the formula being introduced.

The "if" part of Claim 5.18 can then be rephrased as the soundness of the decorated proof system for pomset logic. Let us also sketch briefly a completeness argument. Given a correct pomset logic proof net G, first consider a copy G′ where every ⊳ has been turned into ⅋. This is still a correct net (replacing ⊳ by ⅋ removes an edge, so it can destroy ae-cycles, not create them); in fact, G′ is an MLL+mix proof net. There exists an MLL+mix sequent proof whose inductive translation (desequentialization) is G′: this is the sequentialization theorem (a result involving non-trivial combinatorics, originating in [Gir87]; see [Ret03, Ngu20] for a discussion of its "equivalence" with earlier results in mainstream graph theory). Next, in this sequentialization, replace the relevant occurrences of ⅋ by ⊳; we obtain a Slavnov pre-proof (since in this system the rule for ⊳ is the same as the MLL+mix rule for ⅋), which lifts uniquely to a decorated pre-proof. Finally, one must check that the latter satisfies the side conditions; this comes from the correctness of the pomset logic proof net G that we started with, plus the "only if" part of Claim 5.18.
Conclusion
In the first paper of this series [Gug07], Guglielmi announced the task of the present one in the following way: It is still open whether the logic in this paper, called BV, is the same as pomset logic. We conjecture that it is actually the same logic, but one crucial step is still missing, at the time of this writing, in the equivalence proof. This paper is the first in a planned series of three papers dedicated to BV. [. . . ] In the third part, some of my colleagues will hopefully show the equivalence of BV and pomset logic, this way explaining why it was impossible to find a sequent system for pomset logic. Surprisingly, the hoped-for equivalence turned out to be false; in Section 3, we exhibited an explicit formula provable in pomset logic, but not in BV. What first led us to seek such a counter-example was the discovery of the complexity-theoretic hardness results of Section 4, according to which the conjectured equivalence would have implied NP = coNP. This, plus Slavnov's recent system [Sla19], put into question the established narrative about the impossibility of sequent calculi for pomset logic, so we revisited this topic in Section 5 (and showed in passing that an old sequent calculus with cuts was in fact equivalent to BV).
6.1. Related topics. As we hope that this paper may serve as a reference for readers who wish to get acquainted with BV and pomset logic (hence the lengthy Section 2), we will broadly survey here some works that are connected to these two systems, without limiting ourselves to provability or complexity-theoretic aspects.
Applications and semantics of BV and pomset logic. One of the first applications of self-dual non-commutativity was Reddy's Linear Logic Model of State [Red93]. This work, whose ultimate goal is to study mutable state in programming languages, introduces an extension LLMS of intuitionistic linear logic with some connectives, one of which is ⊳. (In the context of linear logic, "intuitionistic" means that the sequents are two-sided, with the right side limited to a single formula; the sequents in [Red93] are of the form Γ ⊢ A where Γ is what we call an ordered sequent.) LLMS comes with a semantics in coherence spaces where the interpretation of ⊳ coincides with that for pomset logic. The proof system for LLMS is a sequent calculus similar to Retoré's (§5.1). It also admits a semantics in Dialectica categories [dP14, §4].
As for the proof nets of pomset logic, they have no known notion of categorical semantics; in fact, in light of the connections between categorical logic and deep inference [Hug04], it might be argued that any presentation of pomset logic as an "initial something-category" would amount to giving a deductive proof system for it. However, the same connections make modeling BV categorically a straightforward matter, as has been done in [BPS12], where a new concrete semantics (probabilistic coherence spaces) is also given as an example of a BV-category. There have been recent works relating BV and BV-categories to quantum causality [BGI + 14, SK22].
Finally, let us mention that Retoré and his collaborators have applied pomset logic to mathematical linguistics (see [Ret21, Section 7]). This provides an alternative to the usual approach in categorial grammars [MR12], which relies on another kind of non-commutative logic that we shall cover next.
Other non-commutative variants of linear logic. Linearity and non-commutativity first appeared in the study of typed λ-calculi in the Lambek calculus [Lam58], whose introduction was motivated by the aforementioned linguistic applications. We might thus consider it retrospectively as the first non-commutative logic, even though the formulas-as-types correspondence 26 between typed λ-calculi and constructive logics was not known at the time. In the Lambek calculus, the order of arguments of a function matters. In a classical linear logic framework, where A ⊸ B may be defined as A⊥ ⅋ B, this translates into A ⅋ B ≢ B ⅋ A - a non-commutativity in the literal sense. This entails the non-commutativity of its dual connective ⊗, and we have (A ⊗ B)⊥ = B⊥ ⅋ A⊥. (In contrast, pomset logic keeps ⊗ and ⅋ commutative while adding the new connective ⊳, and the self-duality of ⊳ does not permute its arguments.) The standard system with those properties is cyclic 27
linear logic; see [Yet90] for its sequent calculus and proof nets, [DG04] for a deep inference system and [AM19] for pointers to more recent work on cyclic MLL. On λ-terms or proof nets, non-commutativity corresponds to a planarity condition; to our knowledge, this was first remarked by Girard in [Gir89, Section II.9] just after his discovery of linear logic [Gir87]. For more recent works pursuing such topological ideas, see e.g. [APR05,Abr07,Mel18]. In particular, renewed interest in the non-commutative linear λ-calculus has come from the discovery of a deep connection with the combinatorics of planar maps [ZG15], including bijective and enumerative aspects.
The aforementioned works consider proofs or λ-terms as static combinatorial objects, but they can also be seen as programs. In this perspective, unexpected computational consequences of non-commutativity in the λ-calculus have recently been uncovered in an automata-theoretic setting [NP20].
Finally, let us mention Abrusci and Ruet's logic [AR99,Rue00] where commutative and non-commutative versions of the connectives ⊗ and ⅋ coexist.
Proof nets vs denotational semantics. Pomset logic comes from trying to extract a syntactic correctness criterion from the coherence space semantics: the interpretation of a pre-proof net in coherence spaces can be defined by means of so-called experiments, and we want the result of the experiments to be a valid member of the semantics. (To be precise, we want the set of points obtained by experiments to form a clique.) For MLL+mix proof nets, Retoré showed that this condition is equivalent to correctness [Ret97b], and the correctness criterion for pomset proof nets was designed to extend this correspondence (this is discussed in [Ret97a]).
Pagani has applied a similar methodology to MELL (Multiplicative-Exponential Linear Logic) pre-proof nets: he shows in [Pag06] that the validity of coherence space experiments - using a certain "non-uniform" interpretation of the exponentials - is equivalent to a certain graph-theoretical condition, visible acyclicity, which is weaker than the usual correctness criterion for MELL+mix. This is later extended to differential interaction nets in [Pag12]; since coherence spaces are not a semantics of differential linear logic, the result of [Pag12] is formulated with respect to Ehrhard's finiteness spaces instead.
A similarity between correctness for pomset proof nets and visible acyclicity is that both involve directed edges and cycles. Thanks to this, it is straightforward to show that visible acyclicity is coNP-hard, by adapting the proof for pomset logic; however, we do not know whether, conversely, it is in coNP.
Let us also mention Tranquilli's hypercorrectness criterion for MALL (Multiplicative-Additive) pre-proof nets, coming from their semantics in hypercoherences [Tra08]. Here again the condition obtained is weaker than the usual correctness criterion - so there are hypercorrect MALL pre-proof nets that are not sequentializable. (Coherence spaces would allow even more non-sequentializable pre-proof nets, for instance a version of Berry's famous "Gustave function".)

Extensions of BV. Given that BV and pomset logic are "multiplicative" logics, it is natural to make extensions with other primitives, like the additives and exponentials of linear logic, or other modalities or quantifiers. This has indeed been done, but so far only for BV. The first such extension was adding the exponential of linear logic to BV, leading to the logic NEL, which is studied in the fourth and fifth paper of this series [SG11,GS11].
In [Rov16], Roversi adds a self-dual binder to BV, in order to establish a correspondence to the linear λ-calculus, in the spirit of the formulas-as-types paradigm.
The next natural extension was adding the additives, leading to the logic MAV [Hor15], which has since been extended with nominal quantifiers (and standard first-order quantifiers) [HTAC19,HT19] in order to simulate private names in process algebras, such as the π-calculus. This continues the line of research of Bruscoli [Bru02], who used BV to simulate reductions in CCS, following the formulas-as-processes paradigm.
Beyond formulas. The formulas-as-processes paradigm has recently motivated another line of research. The restrictions on digraphs that define dicographs corresponding to formulas (see Definitions 2.26, 2.27 and Theorem 2.28), and that therefore make proof theory possible in the first place, are also an obstacle to the formulas-as-processes paradigm: there are processes that do contain the forbidden configurations in (2.2) of Definition 2.25, and therefore do not correspond to formulas. This suggests defining a proof system directly on the graphs instead of the formulas, and using the modular decomposition tree instead of the formula tree. This idea (first briefly mentioned in [NS18]) has been explored in [AHS20b,AHS20a] and [CDW20] for undirected graphs and in [AHMS22] for digraphs. It turns out that if we drop the cograph/dicograph condition, there is a much larger space of possible proof systems that still remains to be explored.

6.2. Open problems. For a long time, it was believed that there was a canonical extension of MLL+mix with the connective ⊳ that had both a deductive proof system (BV) and proof nets with a simple correctness criterion (pomset logic). Now that we have refuted this, several questions arise:
• We might want to design truly well-behaved deductive proof systems for pomset logic - given the obstructions that we have seen in this paper, this looks challenging. Slavnov's sequent calculus is a start, but it is not clear to us whether a cut-elimination procedure can be defined directly on it without going through a translation into proof nets. And even without insisting on the proof system being deductive, the requirement of tractable proof checking rules out proof nets by themselves (Remark 4.14).
• Our results might also be interpreted as suggesting that, of the two logics, BV was the "right" one all along. Then it would be desirable to have a system of proof nets for BV.
Perhaps it suffices to extend the correctness criterion of pomset logic so that it excludes more pre-proofs. If that were the case, then the problem of "BV-correctness" of pre-proof nets would be equivalent to the BV-provability problem for balanced formulas. This also raises the question of the complexity of the latter: NP-completeness would rule out coNP criteria of the sort "there does not exist some bad structure (e.g. some kind of cycle) in the pre-proof net". Alternatively, the right notion of proof net could involve not just the formula tree and axiom linking, but some extra structure too (maybe an order on the axiom linking?); there are some precedents for this in the theory of MLL proof nets with units (with many variants, recapitulated in [Hug12, Table 1]).
• More generally, now that we have two logics that (i) are built from the connectives ⊗, ⊳, ⅋, (ii) are conservative over MLL+mix, and (iii) admit cut elimination, the question arises whether these are the only two or whether there is a hierarchy of such logics with increasing proof complexity.
During the research for this paper, another interesting question arose. We conjecture the following generalization of the construction of the formula in Section 3: given a balanced tautology of classical logic, one can always "make the axiom links directed" in some way (cf. Remark 3.15) to get a provable formula in pomset logic. | 32,433.8 | 2022-09-16T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Car painting process scheduling with harmony search algorithm
Automotive painting programs use robotic power to paint car bodies, improving efficiency in the production system. The production system becomes even more efficient when the sequence of car orders is scheduled with the body shape of each car taken into account. Flow shop scheduling is a scheduling model in which all jobs to be processed flow in the same product direction/path. Scheduling problems arise when there are n jobs to be processed on the machines: it must be specified which job is done first and how jobs are allocated to the machines to obtain a scheduled production process. The Harmony Search Algorithm is a music-inspired metaheuristic optimization algorithm, based on the observation of musicians searching for perfect harmony; this search for musical harmony is analogous to the search for an optimum in an optimization process. Based on the tests that have been performed, an optimal car sequence with a minimum makespan value was obtained.
Introduction
The industrial world has experienced continuous growth in both the quality and the quantity of production. Industry has moved from relying on skilled human labor to using machines in production activities, thereby reducing the possibility of mistakes caused by human error. In developed countries, almost all industries carry out their production activities with machine power, so that humans only operate the machines.
Even though machinery already powers their production activities, industrial companies keep trying to make those activities produce optimum results. Scheduling is one of the efforts companies make toward this goal: production scheduling aims to obtain an effective job assignment at each work station so that no job stacking occurs, reducing idle time and the waiting time before subsequent processing [1].
Car factories in developed countries already use machines to paint car bodies. However, the painting process is not yet scheduled optimally: waiting time still occurs before the next painting stage when more than one variation of car body shape is painted on the same painting machine.
Flow shop scheduling is a scheduling model in which all jobs to be processed flow in the same product direction/path; in other words, the jobs share the same routing. Scheduling problems often arise when there are n jobs to be processed on m machines: it must be specified which job is done first and how jobs are allocated to the machines so that a scheduled production process is obtained [2].
Problem identification
When cars with different body shape variations are painted on the same painting machine, waiting time remains a problem. Therefore, an optimal solution is required to achieve effective and efficient scheduling of the car sequence in the painting process: how can an effective and efficient car order be obtained, so that a minimum makespan value is reached when solving the flow shop scheduling problem?
Previous Research
Flow shop scheduling, in the search for the best optimization, has been used to solve many kinds of everyday problems, from small problems to complex ones, with a variety of solution methods.
One flow shop scheduling problem concerns the movement time of a job, which sometimes still includes free time when moving from one machine to another [3]. In that work, an ant colony algorithm is used to find the smallest makespan.
Soukhal [4] solves a flow shop scheduling problem using a polynomial-time algorithm. The problem examined by Soukhal concerns trucks carrying goods to be delivered to customers, taking truck capacity and transport time into account.
Boukef [5] studies the flow shop scheduling problem of minimizing the total cost of the production and delivery processes in the pharmaceutical and food industries. Boukef uses a genetic algorithm to solve the problem.
Aulia [6] studies the permutation flow shop scheduling problem using the harmony search algorithm; the resulting algorithm solves the problem well when compared with the lower bound.
Methodology
The Harmony Search Algorithm (HSA) is a music-inspired metaheuristic optimization algorithm. It is based on the observation of musicians searching for perfect harmony; this search for musical harmony is analogous to finding the optimum in an optimization process. The optimization search can be compared to the process of improvising jazz music. On the one hand, the perfection of a harmony is judged by aesthetic sound standards, and a musician always strives to produce a piece with perfect harmony. On the other hand, an optimal solution to an optimization problem is the best solution with respect to the objective function and the constraints. Both processes aim to produce the best, or optimum, result [7].
Figure 1. Improvisation of music analogy
The analogy between musical improvisation and optimization techniques is illustrated in Figure 1: each music player (saxophonist, double bassist and guitarist) is analogous to a decision variable (x1, x2, x3), and the tone range of each instrument (saxophone = {Do, Re, Mi}; double bass = {Mi, Fa, Sol}; guitar = {Sol, La, Si}) is analogous to the value range of each variable (x1 = {100, 200, 300}; x2 = {300, 400, 500}; and x3 = {500, 600, 700}). If the saxophonist plays Re, the double bassist plays Mi and the guitarist plays Si, the three together create a new harmony (Re, Mi, Si). If this new harmony is better than the previous harmony, the new harmony is kept. Likewise, a new solution vector (200 mm, 300 mm, 700 mm) is kept if its objective function value is better than that of the previous one [8].
In accordance with the above concept, HSA consists of five stages, namely:
c) Pitch Adjustment Rate (PAR)
PAR is a continuous value used as an improvisation parameter once the HMCR criterion has been met. Its value satisfies 0 ≤ PAR ≤ 1.
d) Stop Criteria
The stop criterion is the value used to stop the repetition of new harmony improvisations.
B. Harmony Memory Initialization
In the Harmony Memory initialization phase, solution vectors fill the harmony memory up to the number HMS. Each solution vector is generated from randomly generated decision variables; here, a randomly generated solution vector is a sequence of jobs in the flow shop. The makespan value of each solution vector is then calculated, so that every solution vector in memory is stored together with its makespan. An example of data to be solved as a flow shop problem is shown in Table 1.

A) Memory Consideration

The use of memory consideration is very important; it plays the same role as choosing the best individuals in a genetic algorithm. Memory consideration ensures that the best harmonies are carried over into the new harmony memory.
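Returning to the initialization step described above, a minimal sketch could look as follows. It is not taken from the paper: the processing-time matrix, parameter values and function names are made-up assumptions used only for illustration.

```python
import random

def makespan(sequence, proc_times):
    """Completion time of the last job on the last machine in a permutation
    flow shop; proc_times[j][m] is the processing time of job j on machine m."""
    n_machines = len(proc_times[0])
    completion = [0] * n_machines
    for job in sequence:
        for m in range(n_machines):
            earliest = completion[m - 1] if m > 0 else 0   # job just left machine m-1
            completion[m] = max(completion[m], earliest) + proc_times[job][m]
    return completion[-1]

def init_harmony_memory(hms, n_jobs, proc_times, rng=random):
    """Fill the harmony memory with `hms` random job sequences and their makespans."""
    memory = []
    for _ in range(hms):
        seq = list(range(n_jobs))
        rng.shuffle(seq)
        memory.append((seq, makespan(seq, proc_times)))
    return memory

# Hypothetical data: 7 cars (jobs) painted in 3 stages (machines).
proc = [[4, 3, 2], [2, 5, 1], [3, 2, 4], [5, 1, 3], [1, 4, 2], [2, 2, 5], [3, 3, 3]]
harmony_memory = init_harmony_memory(hms=5, n_jobs=7, proc_times=proc)
for seq, cmax in harmony_memory:
    print(seq, cmax)
```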
B) Pitch Adjustment
Pitch adjustment is controlled by a pitch bandwidth b_range and a pitch adjusting rate r_pa (PAR). Although in music pitch adjustment is a tool for changing frequency, in HSA it corresponds to moving to a slightly different solution. In theory, the adjustment can be arranged linearly or nonlinearly, but in practice a linear adjustment is used, so that x_new = x_old + b_range * ε, where x_old is the existing pitch or solution taken from the harmony memory and x_new is the new pitch after the pitch adjusting action.
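A minimal sketch of this linear adjustment rule (our own illustration; drawing ε uniformly from [-1, 1] is a common convention and an assumption here, not necessarily the authors' choice):

```python
import random

def pitch_adjust(x_old, b_range, rng=random):
    """Linear pitch adjustment: x_new = x_old + b_range * eps, with eps in [-1, 1]."""
    eps = rng.uniform(-1.0, 1.0)
    return x_old + b_range * eps
```

For a discrete job sequence, as used later in this paper, the analogous adjustment is to move to an adjacent decision variable (a neighbouring job) rather than to add a numeric offset.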
C) Random Selection
Random selection is useful for expanding the diversity of solutions. Pitch adjustment plays a similar role, but it is limited to local adjustments and thus to local search; random selection explores a wider variety of solutions in order to find the global optimum. In this research, the improvisation process is described as follows:

Iteration 1

The random number generated is a1 = 0.432. This random number is compared with HMCR = 0.9. Because a1 < HMCR, the decision variable is chosen randomly; suppose the selected value is x1' = 2. Next, a random number is generated and compared with PAR = 0.3. Suppose the generated random number is a2 = 0.512. Because a2 > PAR, the decision variable x1' = 2 is retained.
Iteration 2
The random number generated is a1 = 0.925. This random number is compared with HMCR = 0.9. Because a1 > HMCR, the decision variable is chosen randomly from X1; suppose the selected value is x1' = 5.
Iteration 3
A random number is generated, e.g. a1 = 0.276, and compared with the value of HMCR = 0.9. Because the condition a1 < HMCR is fulfilled, the decision variable is selected randomly, e.g. x1' = 3. The next step is generating a random value to be compared with PAR = 0.3. For example, if this value, denoted a2, is 0.114, the rule a2 < PAR is fulfilled and the decision variable x1' = 3 is adjusted to the next variable, which is x2' = 4.
Iteration 4
The random number generated is a1 = 0.456. This random number is compared with HMCR = 0.9. Because a1 < HMCR, the decision variable is chosen randomly; suppose the selected value is x1' = 6. Next, a random number is generated and compared with PAR = 0.3. Suppose the generated random number is a2 = 0.212. Since a2 < PAR, the decision variable x1' = 6 is adjusted to the adjacent variable, i.e., x2' = 7.
Iteration 5
The random number generated is a1 = 0.941. This random number is compared with HMCR = 0.9. Because a1 > HMCR, the decision variable is chosen randomly from X1; suppose the selected value is x1' = 6.
Iteration 6
The random number generated is a1 = 0.672. This random number is compared with HMCR = 0.9. Because a1 < HMCR, the decision variable is chosen randomly; suppose the selected value is x1' = 1. Next, a random number is generated and compared with PAR = 0.3. Suppose the generated random number is a2 = 0.322. Because a2 > PAR, the decision variable x1' = 1 is retained.
Iteration 7
The random number generated is a1 = 0.731. This random number is compared with HMCR = 0.9. Because a1 < HMCR, the decision variable is chosen randomly; suppose the selected value is x1' = 3. Next, a random number is generated and compared with PAR = 0.3. Suppose the generated random number is a2 = 0.843. Because a2 > PAR, the decision variable x1' = 3 is retained.
From the above improvisation process, a new solution vector is obtained, namely X = [2 5 4 7 6 1 3]. The objective function of this car sequence is then calculated to determine its makespan.
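The seven iterations above can be condensed into a single improvisation routine. The sketch below is a hedged reconstruction: the standard HSA convention (memory consideration when the draw falls below HMCR, random selection otherwise) is assumed, the "adjacent variable" adjustment is modelled as moving to the next job index, and the repair step that turns the result into a valid permutation of jobs is omitted for brevity.

```python
import random

def improvise(harmony_memory, n_jobs, hmcr=0.9, par=0.3, rng=random):
    """One HSA improvisation step for a job sequence (hedged reconstruction)."""
    new_seq = []
    for pos in range(n_jobs):
        if rng.random() < hmcr:                     # memory consideration
            seq, _ = rng.choice(harmony_memory)
            job = seq[pos]
        else:                                       # random selection
            job = rng.randrange(n_jobs)
        if rng.random() < par:                      # pitch adjustment
            job = (job + 1) % n_jobs                # move to the adjacent job
        new_seq.append(job)
    return new_seq
```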
D. Harmony Memory Update
After the improvisation process is completed, a new car sequence is obtained and its makespan is calculated. This makespan is then compared with the makespan values contained in HM. If the makespan obtained at the improvisation stage is smaller than a makespan contained in HM, the corresponding car sequence and makespan value in HM are replaced by the new car sequence and its makespan. However, if the makespan obtained at the improvisation stage is not smaller than the makespans contained in HM, the car sequences and makespans in HM are kept unchanged.
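A sketch of this update step, reusing the `makespan` helper from the initialization sketch above and following the usual HSA convention of replacing the worst-stored harmony (the text does not state explicitly which stored harmony is replaced, so this is an assumption):

```python
def update_memory(harmony_memory, new_seq, proc_times):
    """Replace the worst harmony if the new sequence has a smaller makespan."""
    new_cmax = makespan(new_seq, proc_times)
    worst = max(range(len(harmony_memory)), key=lambda i: harmony_memory[i][1])
    if new_cmax < harmony_memory[worst][1]:
        harmony_memory[worst] = (new_seq, new_cmax)
    return harmony_memory
```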
E. Stop Criteria
The stop criterion is the condition under which the improvisation process stops. If the stop criterion has not yet been met, the improvisation process is repeated until it is satisfied.
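Putting the pieces together, a minimal main loop (again a sketch reusing the helpers and example data from the sketches above; the iteration budget as stop criterion and the validity guard on candidate sequences are our assumptions) could read:

```python
def harmony_search(proc_times, n_jobs, hms=5, hmcr=0.9, par=0.3, max_iter=1000):
    """Full HSA loop for the flow shop car-sequencing problem (sketch)."""
    memory = init_harmony_memory(hms, n_jobs, proc_times)
    for _ in range(max_iter):                        # stop criterion: iteration budget
        candidate = improvise(memory, n_jobs, hmcr, par)
        if len(set(candidate)) == n_jobs:            # keep only valid job permutations
            memory = update_memory(memory, candidate, proc_times)
    return min(memory, key=lambda h: h[1])           # best (sequence, makespan) found

best_seq, best_cmax = harmony_search(proc, n_jobs=7)
print(best_seq, best_cmax)
```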
F. Objective Functions
The objective function of this research is the makespan, i.e., the total completion time of all jobs, determined by the accumulated processing times on the machines. The smallest makespan represents the best combination of machines and jobs and ensures that every job is completed from start to finish. Makespan is calculated with the completion-time recurrence of the permutation flow shop; the calculation of the makespan value is described in Table 2. | 2,921.8 | 2018-02-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Evidence of a new metabolic capacity in an emerging diarrheal pathogen: lessons from the draft genomes of Vibrio fluvialis strains PG41 and I21563
Background Vibrio fluvialis is an emerging diarrheal pathogen for which no genome is currently available. In this work, draft genomes of two closely related clinical strains PG41 and I21563 have been explored. Results V. fluvialis strains PG41 and I21563 were sequenced on the Illumina HiSeq 1000 platform to obtain draft genomes of 5.3 Mbp and 4.4 Mbp respectively. Our genome data reveal the presence of genes involved in ethanolamine utilization, which is further experimentally confirmed by growth analysis. Conclusions Combined in silico and growth analysis establish a new metabolic capacity of V. fluvialis to harvest energy from ethanolamine.
Background
The genus Vibrio of the class Gammaproteobacteria is an ecologically and metabolically diverse group autochthonous to marine, estuarine, and freshwater environments [1]. This genus comprises nearly 100 species, of which some members are capable of causing severe diarrheal diseases, thus posing a serious threat in the developing world [2,3]. Among these, Vibrio cholerae O1/O139 and Vibrio parahaemolyticus are considered major diarrheal pathogens and are responsible for several pandemics and epidemics [4,5]. Other members of the Vibrionaceae family, namely Vibrio mimicus and Vibrio fluvialis, are also frequently found to be associated with diarrheal outbreaks [6,7].
Vibrio fluvialis is a halophilic, polarly flagellated, Gram-negative bacterium. It was first isolated in 1975 from the stool of a diarrhea patient in Bahrain, categorized as a group F Vibrio, and rechristened as Vibrio fluvialis in 1981 [8]. Since its discovery, the organism has been implicated in several outbreaks and sporadic cases of diarrhea [9]. Between 1976 and 1977, 500 patients (mostly children and young adults) were reported to be infected with Vibrio fluvialis in Bangladesh, with symptoms marked by vomiting, abdominal pain, moderate to severe dehydration and significant fever [10]. In the United States, Vibrio fluvialis has been associated with enterocolitis in infants [11]. In Indonesia, Vibrio fluvialis has been recognized as one of the major enteric pathogens causing cholera-like diarrhea [12]. Recently, an examination of 400 non-agglutinating Vibrio isolates collected from patients with diarrhea in the period 2002-2009 in Kolkata, India identified 131 strains of Vibrio fluvialis, of which 43 strains were suggested to be the sole pathogen and the remaining 88 strains were co-pathogens with other prominent enteric pathogens [7]. In 2009, an episode of massive diarrhea broke out in coastal regions of India following the cyclone Aila. Further investigation confirmed Vibrio fluvialis as the predominant pathogen responsible for this diarrheal outbreak [13]. Clinically, Vibrio fluvialis causes diarrhea with symptoms similar to those of cholera [14]. The organism contains an El Tor-like hemolysin [15] and exhibits cytotoxic and cell-vacuolating activity on HeLa cells [16]. Collectively, the information garnered from epidemiological studies clearly establishes Vibrio fluvialis as an emerging diarrheal pathogen. The situation is further aggravated by the characterization of several multi-drug resistant clinical isolates of this species [17,18].
There is now a growing realization regarding the significance of ethanolamine (EA), a small molecule present abundantly in the host diet as well as in bacterial and epithelial cells of the vertebrate intestine, which acts as an energy source for numerous bacteria including pathogens [19]. Using Salmonella enterica serovar Typhimurium as a model organism, the process of utilization of EA as an energy source has been demonstrated previously. The eut operon contains 17 genes whose concerted action converts EA into more metabolically suitable molecules. In the case of Salmonella enterica, all essential proteins for EA metabolism are clustered into a multiprotein complex known as the metabolosome, which is reminiscent of a bacterial micro-compartment [20]. The presence of ethanolamine lyase (EutBC), a key enzyme of the EA utilization machinery (eut), has been established in about 100 bacterial genomes [21]. In a recent effort, our group has uncovered the presence of the eut operon in Vibrio alginolyticus and the capacity of this bacterium to utilize EA as a nitrogen source [22].
Taxonomically, Vibrio fluvialis belongs to the Cholerae clade. The other members of the Cholerae clade are Vibrio furnissii, Vibrio cholerae, Vibrio mimicus and Vibrio metschnikovii [23]. Genomic analysis of the members of the Cholerae clade reveals the presence of eut operon genes, thus indicating the possibility of such metabolic potential in these bacteria [22]. So far, no genome information for any strain of Vibrio fluvialis is available. This prompted us to embark on the present study to decipher the genome and examine the ability of Vibrio fluvialis to harvest energy from EA.
Genome sequencing
To pursue our interest, two Vibrio fluvialis strains, namely PG41 and I21563, clinical isolates from the 1998 and 2004 outbreaks respectively [5,7], were sequenced using the Illumina HiSeq 1000 technology (see Additional file 1). For genome analysis, library preparation was carried out according to the TruSeq DNA sample preparation protocol (Illumina, Inc., San Diego, CA) at C-CAMP, Bangalore, India. Briefly, 1 μg of bacterial DNA was sheared to an average length of 300 to 400 bp, and standard blunt ending with an "A" base (paired-end DNA sample preparation kit; Illumina, Inc.) was performed. Illumina index adapters were ligated to the ends of the fragments. After the ligation reaction and separation of non-ligated adapters, samples were amplified by PCR for 8 cycles to selectively enrich those fragments in the library having adapter molecules at both ends. The sample was quantified and the quality was tested using a Bioanalyzer. Libraries were sequenced in a paired-end 100 base run, using the TruSeq PE Cluster Kit v3-cBot-HS for cluster generation on C-bot and the TruSeq SBS Kit v3-HS for sequencing on the Illumina HiSeq 1000 platform according to manufacturer-recommended protocols. A total of 24,420,454 and 21,454,382 paired-end reads were obtained for V. fluvialis strains PG41 and I21563, respectively.
Assembly and annotation
A de novo assembly approach was used to finalize the draft genomes using CLCbio wb6. The genomes were assembled with several different parameters. The genome finishing module of CLCbio was applied to the best assembly. The contigs thus obtained were scaffolded using the SSPACE v2.0 scaffolder [24] and the gaps were filled by GapFiller v1.10 [25]. The gap-filled scaffolds thus obtained were broken at the unfilled gaps. Functional annotation was carried out by RAST (Rapid Annotation using Subsystem Technology) [26], tRNAs were predicted by tRNAscan-SE 1.23 [27] and rRNA genes by RNAmmer 1.2 [28].
Submission of genome sequence
This Whole Genome Shotgun project has been deposited at DDBJ/EMBL/GenBank under the accession ASXS00000000 and ASXT00000000 for Vibrio fluvialis PG41 and Vibrio fluvialis I21563 respectively. The version described in this paper is the first version ASXS01000000 and ASXT01000000.
Quality assurance
The genomic DNA was isolated from a pure bacterial isolate and was further confirmed by 16S rDNA gene sequencing (see Additional file 1), as well as by examining certain phenotypic characteristics, such as tolerance to high salt and the absence of gas production in glucose-rich media, which are defining characteristics of Vibrio fluvialis [16]. A Multi Locus Sequence Analysis (MLSA) tree was generated with the gene sequences of six housekeeping genes, ftsZ, mreB, pyrH, recA, rpoA and topA, for species characterization and confirmation. The concatenated sequences of these genes were aligned using PCMA [29]. A PhyML tree was built using Topali v2.5 [30] (HKY model, 100 bootstraps) (Figure 1A). The proteins of Vibrio fluvialis strain PG41 were compared by BLASTp at an E-value of 1e-5 to Vibrio fluvialis strain I21563 to determine the proteome similarity percentage between the two strains (Table 1).
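For illustration, a proteome similarity of this kind can be computed from tabular BLASTp output. The sketch below is our own: the file name, the column layout of `-outfmt 6`, and the definition of similarity as the fraction of query proteins with at least one hit are assumptions about how such a percentage might be derived, not the authors' exact procedure.

```python
def proteome_similarity(blast_tab_path, n_query_proteins, min_identity=0.0):
    """Percentage of query proteins with at least one BLASTp hit.

    Assumes hits were already filtered at search time (e.g. -evalue 1e-5) and
    that the file is in tabular format (-outfmt 6), whose first columns are
    qseqid, sseqid and pident.
    """
    hits = set()
    with open(blast_tab_path) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            qseqid, pident = fields[0], float(fields[2])
            if pident >= min_identity:
                hits.add(qseqid)
    return 100.0 * len(hits) / n_query_proteins

# Hypothetical usage: PG41 proteins queried against the I21563 proteome.
# print(proteome_similarity("PG41_vs_I21563.blastp.tsv", n_query_proteins=4800))
```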
Existence of eut-operon
It has been shown that EA, a small host-derived molecule, serves as an energy source for many bacteria including pathogens such as Salmonella enterica serotype Typhimurium and Enterohaemorrhagic Escherichia coli (EHEC) [19,20,31,32]. Recently, our group has established the potential of EA utilization in Vibrio alginolyticus [23]. We therefore examined the presence of genes related to the EA utilization pathway in the genomes of the Vibrio fluvialis strains and compared them to homologs from Vibrio alginolyticus. It has been documented that genes from the EA utilization machinery can be clustered in the form of short or long operons [21]. The genomes of both Vibrio fluvialis strains have the short operon. Only EutBCEGPR and ethanolamine permease proteins could be identified in the draft genomes. Genes corresponding to EutRBC and ethanolamine permease are in genome context (Table 2). The percentage identity was evaluated for Eut proteins in V. alginolyticus 12G01 and Vibrio fluvialis (Table 1). EutBCEG of both Vibrio fluvialis strains are > 50% identical to the respective proteins in Vibrio alginolyticus 12G01. EutPR and ethanolamine permease are ~30% identical to the proteins of Vibrio alginolyticus. We could not find any homologs of the eutD and eutQ genes in the draft genomes of Vibrio fluvialis strains PG41 and I21563. The eut operon is 100% conserved between the Vibrio fluvialis strains PG41 and I21563 and shares more than 90% sequence identity with its homologs in Vibrio furnissii NCTC 11218, the closest Vibrio for which whole genome information is available.
Vibrio fluvialis utilizes ethanolamine as an energy source
As evident from the preceding section, the genomes of the Vibrio fluvialis strains contain genes encoding proteins of the eut operon (Table 3). To ascertain the capacity of V. fluvialis to utilize EA as an energy source, two clinical isolates of Vibrio fluvialis, viz. PG41 and I21563, were subjected to growth analysis. The growth experiment was carried out in minimal media supplemented with EA as an energy source using a previously described procedure [22,31]. Briefly, overnight Luria broth-grown cultures of these strains were further diluted and grown to a bacterial OD of 1.0 at 37°C in Luria broth. The cultures were centrifuged, washed and again diluted 100-fold in M9 minimal salt medium supplemented with EA, and growth was monitored (Figure 2). Interestingly, Salmonella utilizes EA both as a nitrogen and a carbon source [19], while EHEC and Vibrio alginolyticus prefer to use it as a nitrogen source [22,31].
Future directions
Compared with other notable diarrheal pathogens of the Vibrionaceae family, the biology of Vibrio fluvialis has not been sufficiently explored. Recent works have highlighted some information on the epidemiology and pathogenic determinants of Vibrio fluvialis. In this regard, our draft genomes will serve as a good starting point to explore and obtain novel insights into the biology of this emerging diarrheal pathogen. Moreover, the regulation of the eut operon and the significance of EA in controlling virulence, as seen in other pathogens, could be examined in Vibrio fluvialis; this is likely to shed additional light on the pathogenesis and ecology of this emerging pathogen.
Availability of supporting data
This Whole Genome Shotgun project has been deposited at DDBJ/EMBL/GenBank under the accession ASXS00000000 and ASXT00000000 for Vibrio fluvialis PG41 and Vibrio fluvialis I21563 respectively. The version described in this paper is the first version ASXS01000000 and ASXT01000000.
Conclusions
Our draft genome analysis clearly reveals the existence of the eut machinery in Vibrio fluvialis, thereby highlighting a new metabolic potential of this bacterium. Furthermore, growth analysis clearly demonstrates the capacity of this organism to harvest energy from EA preferably as a nitrogen source.
Additional file
Additional file 1: Materials and methods. | 2,548 | 2013-07-29T00:00:00.000 | [
"Medicine",
"Biology"
] |
Towards Zero-Emission Refurbishment of Historic Buildings: A Literature Review
Nowadays, restoration interventions that aim for minimum environmental impact are conceived for recent buildings. Greenhouse gas emissions are reduced using criteria met within a life-cycle analysis, while energy saving is achieved with cost-effective retrofitting actions that secure higher benefits in terms of comfort. However, conservation, restoration and retrofitting interventions in historic buildings do not have the same objectives as in modern buildings. Additional requirements have to be followed, such as the use of materials compatible with the original and the preservation of authenticity to ensure historic, artistic, cultural and social values over time. The paper presents a systematic review—at the intersection between environmental sustainability and conservation—of the state of the art of current methodological approaches applied in the sustainable refurbishment of historic buildings. It identifies research gaps in the field and highlights the paradox seen in the Scandinavian countries that are models in applying environmentally sustainable policies but still poor in integrating preservation issues.
Introduction
The renovation potential of buildings in the European Union (EU) is huge. Up to 110 million buildings could be in need of renovation [1], as 35% of the EU's buildings are over 50 years old and, in Europe, there is a slow replacement rate [2].
In the existing built environment, a historic building (HB) is a single manifestation of immovable tangible cultural heritage that does not necessarily have to be a heritage-designated building [3,4]. The historic buildings (HBs) that are not listed or fully protected by countries' legislation may have a significant cultural value in identifying the form of cities, and play a significant role in providing a sense of identity to the community. However, existing materials, building structures and envelope design may limit the choice of interventions to be applied, while the restraints on thermal-performance upgrades may limit their cost-effectiveness. This means that, compared to recent buildings, these interventions are more demanding in terms of maintenance and adaptation and more challenging for energy saving during the operational stage.
Nowadays, the preservation of historic buildings is at risk, not only due to the natural weathering of their materials but also due to the convenience of rebuilding instead of restoring, or of developing renovation methods tailored to modern buildings. The topic has recently gained a lot of attention, including the first achievements in planning and executing preservation, protection, maintenance and restoration of immovable cultural heritage in a standardised way [3].
In recent years, several databases (e.g., the Odyssee database used by the European Environment Agency (EEA) [5]), assessment methods (e.g., Building Sustainability Assessment (BSA) [6]) and modelling and evaluation tools (e.g., the SURE Indicator Tool [7]) applicable to different stages of the refurbishment process have been created. In addition, different sustainability certification systems to assess building performance have been developed. The most important at the European level are:
• BREEAM (Building Research Establishment Environmental Assessment Methodology), leading in the EU market (80% of all the EU-certified sustainable buildings) but mostly used in the United Kingdom, where it was created in 1990 [8];
• LEED (Leadership in Energy and Environmental Design), developed in the USA in 1998 [9];
• HQE (High-Quality Environmental), developed in France in 1992 [10];
• Miljobyggnad (environmental buildings), created in Sweden in 2005 [11]; and
• the DGNB (German Sustainable Building Council) system, developed in Germany in 2007 [12].
These tools apply a rating method to compare different options in new, converted or renovated buildings; for example, to assess the improvements in energy and materials before and after refurbishment. However, their scoring methods are actually not applicable to the conservation of HBs, as they are not designed to rate highly: (i) the multi-value of immovable cultural heritage; (ii) the significant embodied energy savings within this building stock; and (iii) the energy performance targets achievable through refurbishment.
Decisions on conservation, restoration and retrofitting interventions in HBs need to take into account not only the aspects mentioned in the above paragraph but also a broader range of benefits, accounting for historic, artistic, cultural and social values, the preservation of authenticity and the use of materials compatible with the originals. In such a case, reversible techniques are preferable because, if proven to be inefficient or of low durability over time, they can be replaced without damaging the original material or decreasing artistic and historical value. However, reversible techniques (i.e., maintenance and preservation actions) do not always solve existing restoration problems, which may require higher levels of intervention of the irreversible type.
Is it possible to save HBs by implementing sustainable-refurbishment actions? What are the existing methods used by heritage scientists, environmental engineers and, generally, decision-makers to plan correct and effective sustainable interventions? Are the two main research communities working on these objectives? What are the gaps in knowledge?
This paper puts into the sustainability specialists' and conservators' debate the potential conflict between the need to meet environmental targets - particularly greenhouse gas emissions, e.g., the objective of a 20% energy-saving target by 2020 [13] - and the need to retain cultural heritage values and resources (Section 1 - Introduction). The aim is to clarify such issues through a systematic literature review (Section 2 - Methodology). The results indicate a need for knowing, characterizing and summarizing the existing methodological approaches to cultural heritage safeguarding and the CO2-saving potential linked to refurbishment (Section 3 - Results). Finally, in Section 4 (Discussion and Conclusions), the paper identifies the gaps in the methodological approach that must be addressed in the future. It also highlights the current situation in the Scandinavian countries, which are meritorious, and a model, in applying sustainable policies but are nonetheless poor when it comes to integrating preservation issues.
Methodology
In research studies, there is a variety of methods that can be applied during a literature review, and the choice of the appropriate one is a delicate process because the use of different methods in the same field may appear to have contradictory outcomes [14]. The topic of "sustainable refurbishment of historic buildings" involves different research communities and asks for a review of large bodies of information from different fields. For this reason, the systematic literature review method was selected and applied at the junction between the environmental sustainability and the heritage sectors, as this method guarantees a proper mapping of different areas of knowledge and of relevant research gaps and uncertainties, and highlights research needs properly [15].
Selection of Publications
Identification and counting of existing research publications in the field of sustainable refurbishment of historic buildings was done using the online Elsevier database Scopus. This platform was selected because it is the world's largest abstract and citation database of peer-reviewed literature, i.e., scientific journals, books and conference proceedings, with over 22,000 titles from more than 5000 international publishers [16]. The interests of the two main research communities involved, sustainability and refurbishment specialists, drove the choice of the two initial sets of keywords in the search, using one set for each community. The first set was created to identify the publications related to sustainable methodologies applied to historic buildings by using the keywords "sustainab*" AND "method*" AND "histor* build*", while the second retrieved results related to interventions aiming at the preservation of historic buildings by using the keywords "preserv*" AND "interven*" AND "histor* build*". The two sets have in common only the category of analysed buildings, i.e., historic buildings, while they differ for the rest. The keywords were written keeping the root of the word and adding the asterisk symbol (*) after it to include all the grammatical forms of the word. As the research topic is quite new, the search was performed for scientific publications from the year 2000 until the present day (search performed in September 2017). The search results gave a total number of 274 publications, of which 118 documents resulted from the first set of keywords (sustainability field) and 156 documents from the second set (preservation field). After a first scan, the total number was reduced to 246, removing 9 documents not written in English, 9 duplicate documents, and 10 lecturers' notes or conference proceedings' books. This final list was subject to a document analysis in terms of general characteristics as well as contents, gaps and needs. The list of the publications considered for the review is provided in the supplementary file.
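As an illustration of this selection and screening procedure, the sketch below shows how the two keyword sets could be expressed as query strings and how the screening steps could be applied to a list of retrieved records. The query syntax, record fields and document-type labels are assumptions made for the example, not the exact ones used in the study.

```python
# Scopus-style query strings for the two keyword sets (field codes are assumed).
QUERY_SUSTAINABILITY = 'TITLE-ABS-KEY("sustainab*" AND "method*" AND "histor* build*") AND PUBYEAR > 1999'
QUERY_PRESERVATION = 'TITLE-ABS-KEY("preserv*" AND "interven*" AND "histor* build*") AND PUBYEAR > 1999'

def screen(records):
    """Apply the screening steps described above to a list of record dicts with
    (assumed) keys 'title', 'language', 'doctype', 'year', 'country'."""
    seen_titles = set()
    kept = []
    for rec in records:
        if rec["language"] != "English":
            continue                                  # drop non-English documents
        if rec["doctype"] in {"Lecture notes", "Conference proceedings book"}:
            continue                                  # drop non-research items
        key = rec["title"].strip().lower()
        if key in seen_titles:
            continue                                  # drop duplicates
        seen_titles.add(key)
        kept.append(rec)
    return kept

def count_by(records, field):
    """Simple tally used for the year / country / document-type statistics."""
    counts = {}
    for rec in records:
        counts[rec[field]] = counts.get(rec[field], 0) + 1
    return counts
```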
Analysis of Publications
The first level of the analysis, i.e., the general characteristics of each document, was retrieved by reading the abstract, aiming to identify the following information: the classification of the documents regarding their discipline served as an input for the second level of analysis, i.e., the content characteristics. Within this level, the documents were grouped using the scheme in Figure 1. They were categorised according to the intervention-driving factor, i.e., sustainability or the measures to improve the performance of the building. When one document was judged to belong to more than one category, it was assigned to the most relevant field by the authors. From these two main driving factors (orange colour - Figure 1), more precise categories of contents were recognized (green colour in Figure 1), and the classes of environmental (impact) and refurbishment (process), the focus of our paper, were selected for further review. This deep review was the third and last level of analysis, i.e., the content's characteristics. This consisted of full-text readings of the papers that were assigned to the environmental and refurbishment green boxes (Figure 1), in order to understand the objectives and the authors' judgement and to track future research needs. Specifically, research products focused on methodological approaches (blue cell - Figure 1) were the ultimate objective of this review, as the base on which to build new and effective tools for planning the sustainable refurbishment interventions of HBs in Scandinavian countries.
Geography of Publications
The geographical distribution of the documents is defined taking into account the continent and the country of the first author's affiliation. By screening the entire list, it can be seen that 79% (n = 193) of the documents are published by researchers from the European continent, 10% of the documents (n = 25) are published in Asia, and each of the other continents has produced less than 5%. This result reflects the efforts and the financial availability that the European Commission is investing in the Framework Programme (FP) for Research and Technological Development in order to develop innovative and effective ways to preserve its cultural heritage. In fact, over the last few decades, the largest EU-funded research initiatives, such as the Noah's Ark [17], Climate for Culture [18], EFFESUS [19], 3EnCult [20] and MOVE [21] projects, have demonstrated valuable methodological approaches in the cultural heritage (CH) protection field.
It is interesting to examine the results within Europe. Almost half of the relevant European literature (45%) is published in Italy (n = 86), followed by the United Kingdom with 11% (n = 22), Spain and Turkey with 6% (n = 11), the Czech Republic with 5% (n = 10), and other countries with fewer than 10 publications. Regarding northern Europe, the number of publications is very low, with two documents published in the Scandinavian countries (both of them part of the European project EFFESUS [19]) and two documents published by researchers affiliated with the Baltic countries. The results show that the topic is still underexploited and that more research should be conducted on the green refurbishment of historic buildings in northern Europe. The geographical distribution is given in Table 1.
Type of Publication
The search has shown that documents were written in all forms of scientific literature, with the journal article being the most common genre (128 documents (52%)). As journal articles are expected to have top-level quality, due to rigorous peer-review processes before publication, and a larger impact on the research community, they received most attention during the literature-review process. The percentage of publications related to conferences is also considerable, with 43% (n = 109) of the documents categorised as conference papers. The other types of publications, such as books or book chapters, account for less than 5%.
Year of Publication
The sustainable refurbishment of historic buildings is a multi-disciplinary topic that has received a lot of attention among researchers in recent years. In fact, while the number of publications within this field was quite low (n = 2) in 2000, over the last few years it has increased significantly, reaching a maximum in 2015 with 38 publications, followed in 2016 with 35. Figure 2 shows the number and the categories of publications per year. The graph highlights an increased number of publications in 2008 with regard to the set of search keywords related to interventions (i.e., "preserv*" AND "interven*" AND "histor* build*"). From the data analysis, the increase in this year mainly came from publications related to the International Conference on Structural Analysis of Historic Construction (SAHC08). In addition, regarding sustainability issues, the number of publications reflects three fruitful series of conferences, the Central Europe towards Sustainable Building (CESB) events held in Prague, Czech Republic in 2010, 2013 and 2016. The 2015-2016 maximum in the number of publications is not a result of a separate event but rather the effect of the EU framework programme FP7-Environment. This EU framework, over a 6-year period (2007-2013), produced a general increase in consciousness of environmental technologies to be used in CH protection and of the necessary knowledge, which resulted in a rise in the number of publications a few years later. Publications in 2017 are counted until early September, the date when the search was concluded.
Field of Publication
The sustainable refurbishment of historic buildings has embraced researchers from different fields and disciplines. The grouping of documents according to their field of publication is reported in Figure 3. In about 34% (n = 84) of the listed documents (see supplementary file), the main driver of the publication is the refurbishment process, from maintenance (preservation, conservation), i.e., low-level interventions, to renovation and/or restoration, i.e., high-level interventions. Within this group of documents (primary driver: refurbishment), 55% (n = 47) of the publications focus on energy efficiency and the energy retrofit of historic buildings as part of the global effort to reduce energy consumption [13,20]. A wide variety of passive and active interventions were used to achieve such energy goals: e.g., passive interventions directed at the building envelope, insulation of roofs and walls, and introduction of high-performance windows, and active measures directed at energy-saving improvements linked to equipment maintenance, system controls, changes in lighting, and heating, ventilation and air-conditioning (HVAC) systems. Ten documents (i.e., 12% within this driving factor) are, instead, related to the revitalization/reuse of abandoned buildings or their change of use.
The second large sub-group (Figure 3 - yellow colours) of listed publications has sustainability issues as its main driver (n = 62, i.e., 25%), in accordance with the three main pillars of sustainability: environmental (n = 30, i.e., 12%), social (n = 23, i.e., 9%) and economic (n = 9, i.e., 4%). Although this sub-group is strictly connected with the first, this division was undertaken to maintain the focus of the paper, i.e., to analyse the union and intersection between the physical process of the intervention (sub-group 1, i.e., refurbishment) and the impact of the intervention (sub-group 2, i.e., sustainability). The environmentally sustainable-related documents mainly emphasise the reduction of greenhouse-gas emissions in the construction sector as part of worldwide action towards a decarbonised society [22]. Research in this sector is also devoted to the assessment of the impact of climate change on historic buildings, following the general increased awareness related to the topic and the call for action by the EU community in this field [17,18]. In the review, 15 documents (6%) that treat climate change-related research were identified.
The third large sub-group (i.e., Engineering in Figure 3) includes research contributions dealing with the integrity of the structure and its ability to resist natural ageing and decay. This category of publications has a predominantly engineering and technical character and includes several disciplines, such as structural engineering, geological and geotechnical issues, material sciences, and computer technologies. The number of publications listed in this category is comparable with those regarding sustainability (n = 62, i.e., 25%). The result points to two aspects:
1. Conservation and, above all, restoration interventions are conducted when HBs are in a situation of "emergency", i.e., when the risk of partial or complete loss of the building is high due to instability, leaning, rising damp, damage of building materials through moisture, corrosion, salt crystallization, etc.;
2. The value of an HB is often perceived by stakeholders, owners and users as intimately connected with the use and technical performance of the building itself [23].
The last sub-group (i.e., Hazard in Figure 3) refers to publications that intend to preserve tangible CH under natural hazards and catastrophic events (38 publications, i.e., 16% in total). Among those, 33 publications discuss the integrity of historic buildings during and/or after earthquakes. This result reflects the location of the majority of case studies in the Mediterranean Basin, which has a high risk of seismic activity. Although there is a diversity of publications concerning this topic, the majority of them discuss the strengthening interventions before the hazardous event, e.g., base isolation, fibre polymers and other non-invasive techniques, with the help of computer simulations and laboratory testing. Only a few of them (3) are focused on post-disaster interventions and efforts to restore as much as possible of the initial buildings. In the list, there are also research documents that aim at the stability of buildings during other hazardous events such as fire (2), erosion (1), floods (1) or wind (1).
Type of Contribution
Research outcomes dealing with refurbishment processes and the environmental sustainability pillar were identified with respect to four types of contributions, and are reported in Figure 4. The literature review is the least-used approach when working with sustainable interventions in heritage buildings (i.e., the smallest category, with nine listed documents). Instead, it is quite common to present research results as descriptions of the methodological approaches to be applied during restorations (the largest category, with 48 documents). It is also common (n = 34) to use the analysis of data and information gathered on specific case studies, eventually supported by computer simulation, to suggest generalized conservation and/or energy-retrofitting actions on similar buildings in comparable geographical conditions. Finally, the last type of contribution is mostly focused on the management process, including communication methods and channels used to involve different types of stakeholders (n = 23). This proves the importance, both in the heritage and sustainable sectors, of keeping decision makers, owners, and local communities involved in HB conservation projects. Concern about the social aspect from the beginning may positively influence the planning of the interventions (i.e., maintenance, preservation, and refurbishment/restoration), as well as guarantee the long-lasting and effective application of advice coming from the research community.
Methodological Contributions
Documents presenting methodological approaches (48 papers, marked in italic in the supplementary file) to apply during refurbishment processes were further screened to pinpoint achievements and gaps in the field (Figure 5a). The first document in this category was published in 2008. This shows how research into developing a methodological approach is still in its early phase and has recently gained increasing interest. About 54% of these documents (n = 26) describe methodological approaches that deal with intervention processes, while 31% of them (n = 15) focus on energy-retrofit measures and energy-efficiency evaluation after the refurbishment process (e.g., [24][25][26][27]). Four publications (8%) present conservation methods that take into account the effects of future climate-change scenarios [28,29] and the evaluation of microclimate conditions [30,31] in the building. Finally, two documents primarily focus on the carbon footprint calculation after intervention [32,33], and one publication discusses the methodology in the decision-making process [34].
The 26 documents that describe a methodological approach in maintenance and refurbishment were further categorised according to the levels of intervention (Figure 5b). Three categories were used: low (preservation and conservation), middle (refurbishment and rehabilitation), and high (renovation and restoration). The actions of the first category refer to maintenance interventions, while the middle- and high-level interventions are performed during deeper adaptation processes. From the analysis, 14 documents (54%) describe methods referring to a low level of intervention, i.e., preservation (e.g., [35,36]) and conservation (e.g., [37,38]), using the rule of minimum intervention and, as much as possible, non-destructive techniques. Five publications (19%) have mid-level interventions (i.e., refurbishment, rehabilitation) as their primary driver (e.g., [39][40][41]), while seven documents (27%) present methodological approaches applied to deeper interventions and the full restoration of decayed or abandoned buildings (e.g., [42][43][44][45]).
A further analysis was made regarding the type of methodological approach used to achieve the sustainable refurbishment of historic buildings. The results underline a huge variety of approaches used in the field in recent years. The most common approach was the multi-criteria assessment method, which was applied in buildings both for energy-efficiency improvement [46] and for interventions [35,44,47]. Decision-makers, using this assessment, have the ability to rank different interventions in order to select the most effective and appropriate actions. Criteria eventually in conflict, which create awareness about conservative interventions, can also be identified. Particular methodological approaches were: maturity matrix assessment [48], multi-attribute value theory (MAVT) [42], the methodology for energy-efficient building refurbishment (MEEBR) [25], the functionality index [39], or other methods that require the use of computer simulation or numerical methods. This diversity and heterogeneity of tools shows the importance of using cross-disciplinary, multi-criteria, multi-index, multi-level procedures to develop an effective method/tool able to plan and assess different levels of sustainable interventions depending on the conservation needs, type of building, and climate conditions.
Further Findings
Further analyses of the data gathered from the listed papers allowed the type of building and level of applied interventions to be determined, as well as the building materials subject to alterations. For example, no method was identified that can tailor sustainable interventions on buildings' façades, although in HBs the front walls often represent much of the aesthetic and architectural value and are constantly exposed to climate- and anthropic-induced decay. The majority of the methods (60%, i.e., n = 29) (e.g., [44,47]) were applied to single (whole) buildings, while the rest (40%, i.e., n = 19) to interventions at district level (e.g., [34,49]) (see Figure 6a). Regarding the occupancy of the building, about 33% focus on residential buildings (n = 16, e.g., [48,50]), 17% on religious buildings (n = 8, e.g., [45,51]), 10% on educational buildings (n = 5, e.g., [24,25]), 8% on museums (n = 4, e.g., [31,32,46]), etc. (see Figure 6b). It is also interesting to analyse the type of materials that constitute the buildings subject to intervention. More than 40% (i.e., n = 19) are brick buildings that require interventions to improve mortar and plaster conditions and to reduce energy consumption through the addition of insulation.
Sixteen documents (i.e., 33%) focus on the refurbishment of stone buildings, with interventions directed towards thermal insulation of the walls and application of chemical agents against moisture, while less than 10% (i.e., n = 3) of documents propose suggestions for the refurbishment of timber buildings. The findings are summarised in Table 2, with some examples of the most common interventions performed.
Discussion and Conclusions
This review offers insights into the state of knowledge on the sustainable refurbishment of HBs and reports how these topics are being explored globally. Its ultimate aim is to influence scholars belonging to the two communities of experts on sustainability and conservation of cultural heritage by further increasing science-based knowledge within the field and influencing decision-making in safeguarding heritage in a society that demands better energy management. This systematic review shows that such topics have been incorporated in research agendas since 2006, demonstrating growing interest with an increasing production of research papers. However, current research is geographically limited to Europe and still has some significant gaps in knowledge, as recognized and analysed in the following sub-section.
Knowledge Gap and Research Needs
First, almost all the published methodological approaches evaluate the actual performance of the buildings and suggest the application of interventions to improve their energy performance and related environmental impact. Environmentally sustainable improvements are always assessed during the operational phase, i.e., after the conclusion of interventions. No methods are proposed to assess the environmental impact of the refurbishment process itself.
This identified gap is driving our future work on the assessment of the environmental footprint of different refurbishment scenarios by developing a methodological tool that will respect conservation principles, i.e., the adoption of minimal technical interventions (avoiding unnecessary replacement of historic fabric), compatibility, and reversibility. The refurbishment scenarios, while ensuring the best preservation, have the potential to become a powerful tool in optimizing the re-use of original materials, planning the time of intervention, and reducing its cost. In fact, they can be developed to take advantage of embodied energy, to recognize areas most vulnerable to climate-induced decay, and to focus interventions on minimum waste production, thereby, on the whole, increasing a building's lifetime.
Second, all the published methods for refurbishment processes are fragmentary, with a focus on different stages or procedures and based on the partial needs of different stakeholders. From our perspective, there is a call for a multi-disciplinary, inclusive method able to confront and link different issues that can help stakeholders in:
• revealing and improving the protection of the historic, cultural, and socio-economic value of the building;
• using a life-cycle assessment (LCA) approach to find optimal combinations that maximize the reuse of materials and their lifetimes, thus reducing the carbon footprint of interventions.
Such inclusive and effective sustainable-refurbishment processes can take place given the close cooperation of professionals from different fields such as urban planners, architects, engineers, heritage scientists, conservation specialists, building owners, and decision-makers involved in heritage management. From the perspective of planning a long-term building management strategy, its use provides benefits for both the conservation of HBs and the reduction of environmental impact.
Due to the complexity of the field, the methodology will first be applied to regions with similar climatic conditions and to historic buildings with similar architectural attributes. Later, it will be further developed into a tool to be applied in different built environments and places.
Third, the research should be performed in a broader spatial context for monumental buildings, i.e., extending the method to the neighbourhood scale, as this would result in time and cost savings in adaptation processes. From a district perspective, it is more efficient and economical to categorise the buildings and give solutions for each category than to treat them one by one. Moreover, in towns and cities, buildings with no outstanding historic and architectural value by themselves may, taken as a whole, represent an important part of the country's heritage [52]. This wider-scale approach of increasing the number of buildings subject to refurbishment would enhance the achievement of ambitious energy-efficiency targets and would significantly improve the living conditions of the inhabitants. Furthermore, it would upgrade the image of the cities and increase incomes through leisure and tourism.
The Scandinavian Paradox
Finally, this review pointed to the Scandinavian paradox. In Norway, more than 300,000 buildings from before 1900 have been identified, and about 6000 buildings are protected under the Cultural Heritage Act [53]. In Denmark, the number of protected buildings as of 2016 was about 7000 [54], while in Sweden there are 1500 sites identified as protected (containing many more buildings) [55]. However, the number of papers published in international peer-reviewed journals by researchers affiliated with Scandinavian institutions was very low, and they all resulted from the EFFESUS EU project. It was in the authors' interest to underline the contribution of Scandinavian countries to the results depicted by this literature review. This accentuates the need for future research work and broader dissemination strategies to develop a methodological approach that targets zero-emission refurbishment of historic buildings.
The major publications from the Norwegian governmental institutions that deal with the preservation of cultural heritage, such as the Norwegian Institute for Cultural Heritage Research (NIKU) [56] and the Directorate for Cultural Heritage (Riksantikvaren) [57], are transmitted as reports and, therefore, cannot be traced in a Scopus database search. Moreover, some of them are written in Norwegian, which makes them not easily accessible to researchers from other countries. However, the database search has indicated that even Scandinavian research bodies have devoted very little attention to new methods to effectively maintain and refurbish historic buildings through conservative actions and/or to develop environmentally friendly, science-based tools to increase such practice. The existing publications are mainly national reports that, although they contain valuable results in the field [58], have limited dissemination potential due to the language and type of publication.
The literature review has shown that Norway is keeping to traditional, established refurbishment and maintenance methods without asking for innovative, science-based approaches. Conservators and researchers in this field want to build further knowledge about maintenance and restoration, collect information on what has been done in the course of the last few years on the usage of traditional handicrafts, and develop "new" knowledge concerning the use of different traditional materials (e.g., results from the "Stave Church Preservation Programme" funded by Riksantikvaren over the 2001-2015 period [59]).
Research on such "new" knowledge concerning the use of traditional materials is required in Scandinavia to preserve wooden historic buildings that have high maintenance demands. Detailed knowledge is required to understand (i) the properties of original, aged materials, restored materials and new/created composite materials (e.g., assembling new and aged materials); (ii) changes in building performance (e.g., air-exchange rates, thermal transmission), including the aesthetic and physical impacts on the existing structure; and (iii) alterations in decay rates or the duration of interventions.
An international research project that involves the Norwegian University of Science and Technology (NTNU), NIKU, Riksantikvaren, the Getty Conservation Institute, and the Polish Academy of Science focuses on the preservation of Stave Churches in Norway and historic wooden buildings in the Scandinavian countries. In the next few years (2018-2021), this will answer some of the questions about the sustainable management of heritage buildings with a long-term perspective [60]. On the other hand, Norway and the other Scandinavian countries are the most active countries aiming at zero emissions for new construction [61,62] or in developing energy-retrofitting measures for existing buildings, even at a large scale (e.g., district level) [19,63]. This means that:
• Sweden is one of the countries in the EU that, since 2005, has created an energy and sustainability certification scheme for commercial and residential buildings [11], while the large stock of residential buildings in Europe is not certified yet [64].
• In the Scandinavian countries, an increasing number of new constructions, residential or not, are targeted to be nearly zero-energy buildings before 2020, i.e., to balance any CO2 emission caused by the use of electricity (or other energy carriers) during the building's operation with onsite generation of renewable energy [65].
• In Norway, projects involving dozens of public and industrial partners as well as a large number of pilot projects have been funded since 2009 with industry and governmental support to enable the transition to a low-carbon society. These research centres are: the Research Centre on Zero-Emission Buildings (ZEB) 2009-2017 [61] and the Research Centre on Zero-Emission Neighbourhoods in Smart Cities (FME ZEN) 2016-2024 [62].
The energy-efficiency renovation rate in Norway is at the maximum level compared with that in the 13 countries of the European Union where data are available. It reaches 2.5% a year, while in other countries it varies in a range from 0.5% to 2.0% a year [66][67][68], with a typical figure being 1% (about 250 million m²) per year [69]. If retrofit actions are blindly applied to historic buildings without complete knowledge of the challenges involved, in a short time uncontrolled decay will increase the risk of losing valuable historic buildings and will require a huge economic effort to repair the damage caused.
Supplementary Materials: The following are available online at www.mdpi.com/2075-5309/8/2/22/s1.
Figure 1. Flow diagram for the content review of the documents. Step 1 groups the documents according to the focus of publication (orange cells), step 2 categorizes them according to the field of publication (green cells), and step 3 identifies the type of contribution (blue cells).
Figure 2. Distribution of documents by year of publication with indication of some major projects and conferences in the field that have influenced the growth of interest in this research topic. The Norwegian research centres for Zero-Emission Building (ZEB) and on Zero-Emission Neighbourhoods in Smart Cities (FME ZEN) are also highlighted.
Figure 3. Distribution of the documents by field of publication, i.e., content characteristics.
Figure 4. Distribution of the environmental and refurbishment documents by type of publication.
Figure 5. Findings from the systematic literature review: (a) categorisation of the documents presenting methodological approaches by primary driver; (b) categorisation of the documents describing a methodological approach in maintenance and refurbishment by level of intervention.
Figure 6. Findings from the systematic literature review: (a) categorisation of the scale of intervention at building (blue) or district level (orange); (b) categorisation of the building by its function.
Table 1. Distribution of the publications by continent within the two main research communities involved, i.e., the sustainability and conservation specialists.
Table 2. Categorisation of publications by primary building constructive material, number of related publications, and most common performed interventions.
Learning Accounting Courses on Digital Platforms: How Do Non-Accounting Students Accept?
The study aimed to examine the perception of non-accounting students and the factors influencing their online learning process in accounting courses. A quantitative approach with a structured questionnaire was used for data collection, and the questionnaire was distributed to the non-accounting undergraduate students who enrolled in two accounting courses, Principles of Accounting and Managerial Accounting, in the April 2020 semester. The questionnaire consists of four dimensions: learner characteristics, technology and system, interactive application, and instructor characteristics. Of the 194 non-accounting undergraduate students, 130 respondents participated in the study. Descriptive statistics and statistical inference were used to analyse the mean, standard deviation, frequency, and percentage using the Statistical Package for Social Sciences version 25.0. The findings showed a high level of acceptance among the non-accounting students towards learning accounting courses online. Results indicated that the instructor and learner characteristics were the two most influential factors for non-accounting students learning accounting courses on digital platforms. The findings of this study cannot be generalised to other universities due to different environments and situations. Nevertheless, the implications of the study are crucial for instructors, practitioners, and institutions who are planning or are currently engaged in offering online learning courses.
Introduction
The outbreak of the Covid-19 epidemic has affected various sectors, including the education sector. Many countries have called for the temporary closure of educational institutions as part of measures to prevent the spread of the Covid-19 pandemic. On 16 March 2020, the Malaysian Prime Minister, Tan Sri Muhyiddin Yassin, announced the closure of all kindergartens, government and private schools, and other primary, secondary and pre-university institutions following the implementation of the government's Movement Control Order (MCO). The temporary closure of the schools and universities was expected to cause massive disruption to teaching and learning activities.
As a result, public and private universities have activated the online learning mode for classes and postponed outdoor activities.
The Malaysia Education Blueprint 2015-2025 was introduced by the Ministry of Higher Education in 2015 to transform the higher education sector to be in line with the emergence of the Industrial Revolution 4.0 (IR 4.0). One of the shifts discussed in the blueprint is to empower online learning, particularly to widen access to good quality content, enhance the quality of teaching and learning, and lower the cost of delivery (Ministry of Education Malaysia, 2013). Hence, the global crisis has provided the opportunity to revisit, revise, and rethink the present education system to adapt to the changes brought by digital technology and to adopt a more flexible and interactive approach in teaching and learning activities. Thus, online learning is no longer an option but a requirement for both instructors and learners, particularly during the enforcement of the MCO. Many local universities, both public and private, as well as schools in Malaysia, have no choice but to use virtual teaching and learning methods to ensure that the syllabus is delivered effectively and learning sessions are not delayed.
Problem Statement
With the development of modern technology, students are significantly in need of relevant knowledge, experience, and training to stay competitive in the real world after they graduate. Non-accounting students are sometimes required to complete certain accounting courses throughout their programme, either as compulsory or elective courses. In the Faculty of Business and Accountancy, a faculty in a Malaysian university, students who are majoring in Business, Finance, Human Resource, and Marketing are required to enroll in two accounting courses, Principles of Accounting and Managerial Accounting, during their first year of study. The first course contains a strong emphasis on general journal entries, ledger accounts, financial ratios, and simple preparation of the financial statements, while the latter covers job order costing, process costing, cost-volume-profit analysis, absorption and variable costing, and budgeting.
Accounting courses require an understanding of theoretical concepts and high practicality. It cannot be denied that certain topics are quite challenging for students. Previous studies have revealed that non-accounting students perceived the accounting courses as irrelevant to their discipline, difficult to handle, and hard to score well in (Malgwi, 2006). As a consequence, many of them have lost focus and interest after failing to grasp the concepts of accounting. The non-accounting students needed a clear explanation and encouragement to increase their confidence and interest to enroll in accounting courses. Tickell, Tiong, and Balasinghan (2012) stated that it is difficult to develop a first course in accounting that is interesting, useful, and challenging to both accounting and non-accounting students. Ismail and Kasim (2011) proposed that the accounting course, particularly Management Accounting, should be offered to higher-level non-accounting students rather than being taken earlier in their study. They found that year of study has a positive impact on the non-accounting students' academic performance in the accounting course. Table 1 below shows the academic performance of the non-accounting students in the Faculty of Business and Accountancy who enrolled in the two accounting courses for the past six regular semesters. Each semester is conducted over 18 weeks, comprising weekly lectures, a mid-semester break, a revision week, and exam weeks. Based on the table, it can be seen that the non-accounting students managed to score better in the Managerial Accounting course than in the Principles of Accounting course. The former recorded a 100% passing rate for three consecutive semesters. However, about 3.2 percent to 11.1 percent of the students failed the Principles of Accounting course in the past six semesters. Salwa, Amariah Hanum, Haslin, Jamil, and Nurizzah (2013) revealed that various factors result in the high failure rate: the course was a non-preferred course for the non-accounting students, the questions in the final exam were not clear, and there were plenty of assignments during the semester. Common teaching methodology for these courses includes explanation of theory, questioning and discussion among students, problem-based learning, group learning and teamwork, and assessment of lecture materials through tests and quizzes. The course content and learning material are taught by the instructor to a group of students in the classroom. Many of the universities have provided a Learning Management System (LMS) as an online learning platform designed to facilitate the delivery of learning materials online. However, most of the lecturers used this online learning platform as a repository for students to obtain learning materials in the form of "PowerPoint" presentations and notes in PDF format. The worldwide spread of Covid-19 has led to a sharp increase in the use of online learning to replace teaching and learning activities in the classroom. Various platforms such as WhatsApp, Telegram, YouTube, Kahoot, Zoom, Skype, Google Meet, and other online web conferencing platforms are widely used to replace the traditional type of learning instruction.
With the advancement and development of information technology today, the method of teaching has changed to suit present needs. According to Amichai-Hamburger, Wainapel & Fox (2002), students are more actively involved in online discussions. Students can interact directly with the instructor if they have questions about the topics studied. There are four types of students' interactions as discussed by Said and Tahir (2013): between students and students, students and instructors, students and learning materials, and students and interfaces. However, Krishnan (2016) found that students preferred and were more comfortable interacting with their peers and the instructor in the face-to-face learning mode. Consistent with Tichavsky, Hunt, Driscoll, and Jicha (2015), students perceived online learning as lacking in social interaction with peers and with the instructors. In other words, students do not feel that online classes are akin to the lecture-based classroom.
Hence, this study tries to examine whether the preference for online learning could be predicted from some relevant factors. Specifically, the study focuses on the non-accounting students' perceptions of learning accounting courses online relative to their respective academic disciplines. When Covid-19 hit Malaysia in 2020, online learning continued to take place to replace the face-to-face classroom. The Higher Education Ministry announced that all teaching and learning activities were to be conducted through online platforms until 31 December 2020. The process seems simple, but the challenge of educating remains, not just for the instructors and students but also for parents. Thus, the study addresses the following two general research questions:
RQ 1: How do non-accounting students perceive learning accounting courses on digital platforms?
RQ 2: What is the most influential factor that can influence the non-accounting students to learn accounting courses on digital platforms?
Literature Review
Electronic learning, commonly referred to as online learning, has been around since 1999 (Hussin, Bunyarit & Hussein, 2009). According to Marianne, Linda, Yukie, and Austin (2012), the term online learning can be used to refer to a wide range of programs that use the web to provide instructional materials and facilitate interactions between teachers and students, and in some cases among students as well. Previous authors (Sujit, Marguerite & Paul, 2018; Agung & Ramdani, 2019; Rao, 2011) may refer to online learning as digital learning, virtual learning, or visual learning, using the terms synonymously and interchangeably to mean any kind of learning that involves using digital technology.
Teaching and learning activities are not limited to the traditional model, where the instructors focus these activities within the classroom; rather, they should be in line with the present development of information technology facilities. Online learning is one example of using information and communication technology to facilitate the learning and teaching process (Chear, 2017; Nordin & Singh, 2018). Games, videos, slideshows, video conferencing, and live discussions are all parts of the online learning tools utilized by the instructors during online classes. Moreover, the way of learning has changed with the development of technology. In reality, the majority of students nowadays are from Generation Z (Gen Z), who are more likely to depend on technology in their daily activities and are inseparable from it. Therefore, an appropriate shift is needed in the teaching and learning process to be in line with technological developments.
The characteristics of online learning have changed to be in line with technological advancement. Two categories of technology application can be employed in teaching and learning activities, namely synchronous and asynchronous. The first involves technology platforms that can be used in real time, such as Skype, Google Hangouts, Google Meet, YouTube Live, Zoom Meeting, and Facebook Live. The instructors can conduct their classes as usual without having to assemble in the classroom or lecture hall. During the MCO, these platforms were widely used to lower the risk of Covid-19 infection. On the other hand, the LMS, e-bulletin boards, emails, social media platforms, and learning videos are among the examples of technology that may be used asynchronously. This application allows teaching and learning activities to take place without requiring the instructors and students to be present at the same time. The LMS allows institutions to manage contents, record lectures, store learning materials, and communicate with students (Ninoriya, 2011).
Communication in conventional learning is commonly cited as one-way and teacher-centered, where students interact directly if they have any doubts about what is being taught (Shahaimi & Khalid, 2016). Online interaction, on the other hand, can open up greater space and opportunities for students to interact in discussion. A study by Zaidatun and Yap (2000) shows that the use of interactive multimedia materials can make learning easier and more understandable. Students are more inquisitive about new technology and willing to use it for online learning. Zazaleena, Nursyahidah, Mohd Norafizal, and Nor Zalina (2012) found that the acceptance level of students using online learning for teaching and learning activities is high if it can provide the same learning experience as the current education style and an interactive learning environment. The success of an online course depends on effective course design employing a student-centered model, delivery, and assessment (Mortagy & Bonghikian-Whitby, 2010).
In another study, conducted by Donnie, Bambang, Ahmed, and Syafika (2018), students had a rather positive perception of online learning and were ready for blended learning. On top of that, students were no longer relying upon their instructors to provide the educational materials. The instructors should not just deliver their syllabus but should also encourage the students to participate in online discussions and communicate with one another. Encouragement and motivation from the instructors to engage in the usage of online learning are important because students nowadays use mobile technology extensively but were unacquainted with online learning tools (Ngampornchai & Adams, 2016). In another study, by Husam and Selieman (2009), students' prior experience in using computers may influence their perceived skills and perceptions towards using computers for learning purposes. Findings from Selvi (2010) showed that the instructors' competencies, participants' attention, the online learning environment, and time management contribute to the students' motivation in an online course. In a nutshell, online learning has a positive influence and a positive impact on students' performance, with a better understanding of their registered courses (Mahajan & Kalpana, 2018).
A study conducted by Patricia, Gilvania, Neilson, and Pamella (2015) suggested that students in Business Administration, Economics, and Accounting majors had positive initial perceptions of introductory accounting classes. In comparison with the first two majors, accounting majors were the most optimistic, since the accounting classes were more relevant to their academic and professional performance. Azleen, Mohd Rushdan, Rahida, and Mohd Zulkeflee (2009) revealed in their findings that students with accounting experience felt more confident in taking the accounting course compared to students without accounting experience. Similar findings were discussed in Tickell, Tiong, and Balasinghan (2012), where the accounting major students hold significantly more positive attitudes towards the first course in accounting than do business major students. This finding supports that of Geiger and Ogilby (2000), who found that students majoring in accounting have a more favorable perception of the introductory accounting course than do other students.
The role of the instructor was perceived as very significant in making the course easily understood and enjoyable (Gois & Bras, 2013). Students' perception of an accounting course depends highly on how the instructor influences the students' opinion of the usefulness of the course. Similar findings were mentioned in Azmi, Zam, and Zulkarnain (2010), where the role of instructors was a key factor influencing the students' results, particularly for accounting courses. Besides, accuracy in answering the questions also contributes to the students' performance in accounting courses. However, non-accounting students may perceive things differently (Moriza, Wan Mustaffa, Zia, 2017). Accounting courses were perceived as difficult courses that are hard to learn, as if learning a brand-new language. As a result, this creates anxiety among the students. This can be explained by the fact that interest is a drive or a person's tendency to give attention to something, someone, or an activity. Interactive learning sessions must be conducted, and the instructors should be concerned about whether the non-accounting students feel that the accounting course is hard, a burden, and uninteresting.
Online learning might not be appropriate for every student. There are two main challenges for instructors conducting online classes, namely internet accessibility (Amiruddin & Khaizer, 2019; Norazlin & Rahaimah, 2019) and the stability of teaching platforms. Internet accessibility has become one of the important criteria for conducting online classes. The instructors work out suitable teaching and learning platforms by identifying the level of internet accessibility, whether low, medium, or high, for every student so that online classes can be conducted smoothly. Students living in rural areas with poor internet coverage must be reached through more suitable platforms like the WhatsApp, Telegram, Instagram, and Messenger applications. This is important to prevent students from being left behind and, at the same time, to stimulate students' cognitive systems continuously. For students who have medium access to the internet, platforms like YouTube and Kahoot are most suitable, while for students who have a good internet connection, face-to-face platforms like Zoom, Microsoft Teams, Skype and Google Meet are suitable to be used.
Nevertheless, the responsibility to facilitate the migration process from the traditional mode to online teaching and learning activities falls on the shoulders of both parties, namely the instructors and students. Online learning unveils new opportunities for all higher education institutions in the effort to transform the education sector towards digitalization, in line with IR 4.0 and the education blueprint. Furthermore, this is certainly the safest method to replace face-to-face interactions in the classroom during the implementation of the MCO to break the Covid-19 epidemic chain.
Research Methodology
The primary data collection process was carried out in the Faculty of Business and Accountancy at a private Malaysian university using a structured survey questionnaire which was designed and tailored to academic settings (Bibiana Lim, Hong & Tan, 2008; Songsangyos, Kankaew & Jongsawat, 2016; Winarto, Panjaitan & Tambunan, 2019; Sumarni & Zamri, 2018). The survey form consists of Section A (demographic information) and Section B (non-accounting undergraduate students' acceptance towards learning accounting courses on digital platforms). Data for Section B were gathered using five-point Likert-type scales ranging from 1 (strongly disagree) to 5 (strongly agree) and consist of 32 items assessed across four dimensions, namely learner characteristics, technology and system, interactive application, and instructor characteristics.
The questionnaires were distributed to all non-accounting undergraduate students who were enrolled in the two accounting courses, Principles of Accounting and Managerial Accounting, in the April 2020 semester using the web-based survey tool Google Forms. These two courses are among the compulsory courses for the non-accounting undergraduate students to be able to graduate from their programme. In all, 130 non-accounting undergraduate students participated in the study, and after going over the respondents' data, all data were usable for further analysis, giving a usable rate of 67%. Descriptive statistics and statistical inference were used to analyse the mean, standard deviation, frequency, and percentage using the Statistical Package for Social Sciences (SPSS) version 25.0. The results were based on the total number of respondents answering each particular question.
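The descriptive analysis itself was performed in SPSS 25.0; purely as an illustration of the same computations (item means, standard deviations, frequencies, and percentages), the sketch below uses Python with pandas. The file name and the "Q..." item column names are hypothetical placeholders, not the authors' actual variable names.

```python
# Illustrative sketch only: the study used SPSS 25.0; this reproduces equivalent
# descriptive statistics in pandas. "responses.csv" and the "Q..." column names
# are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("responses.csv")                      # one row per respondent
items = [c for c in df.columns if c.startswith("Q")]   # 1-5 Likert item columns

# Mean and standard deviation for each item.
summary = df[items].agg(["mean", "std"]).T.round(3)

# Frequency and percentage of each response option (1-5) for each item.
freq = df[items].apply(lambda col: col.value_counts().sort_index())
pct = (freq / len(df) * 100).round(1)

print(summary)
print(freq)
print(pct)
```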
Findings and Discussion
The section presents the results for the questions associated with the level of acceptance of non-accounting students towards learning accounting courses on digital platforms. Descriptive characteristics of the profile of the respondents from the survey are illustrated in Table 2. The table further indicates that the majority of the respondents preferred Google Meet (40.8%) as their online learning platform. Since Google Meet is free for everyone and easily accessible from anywhere, it is the most preferred online learning platform among students and lecturers. Meanwhile, OWC (31.5%) is an online web conferencing and learning platform provided by the private university, and the majority of the courses were delivered through this platform. Apart from these two online learning platforms, the other platforms used were Google Classroom (16.2%), Hangouts (7.7%), and Skype (3.8%). Based on the sample criterion set in this study, the majority of the respondents (76.9%) have accounting knowledge, probably from the Diploma programme, and only 30 respondents (23.1%) did not have any prior accounting knowledge.
The level of non-accounting students' acceptance and willingness to learn accounting courses on digital platforms was discussed based on the frequency, percentage, and mean. The mean score interpretation was taken from the five-point scale, with the highest value of 5.00 divided into three levels, namely low (mean = 1.00-2.49), moderate (mean = 2.50-3.49), and high (mean = 3.50-5.00). Table 3 below shows the interpretation of the mean as employed by Jamil (2002) and Wiersma (2000) in their studies. Table 4 below shows the mean scores for the four contributing factors towards the acceptance of the non-accounting students to learn accounting courses through digital platforms. The descriptive statistics showed that, among the items from the four factors, students most preferred to learn accounting courses in the classroom (mean = 4.25). Most of the students agreed that they pay careful attention to the instructors in the classroom. The result shows that students have difficulty in learning accounting courses (mean = 3.06) on digital platforms. They find it difficult to understand certain accounting topics even though many of them have basic accounting knowledge (mean = 3.42) and have confidence and motivation (mean = 3.48) in learning accounting courses online (mean = 3.36). Nevertheless, they are willing to participate and be involved (mean = 3.65) in online learning activities.
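For clarity, the band interpretation described above can be expressed as a small helper. This is only a sketch of the stated cut-offs (low 1.00-2.49, moderate 2.50-3.49, high 3.50-5.00), not the authors' actual analysis script.

```python
# Classify a mean score into the interpretation bands reported in the study:
# low (1.00-2.49), moderate (2.50-3.49), high (3.50-5.00).
def interpret_mean(mean: float) -> str:
    mean = round(mean, 2)          # bands are defined at two decimal places
    if 1.00 <= mean <= 2.49:
        return "low"
    if 2.50 <= mean <= 3.49:
        return "moderate"
    if 3.50 <= mean <= 5.00:
        return "high"
    raise ValueError("mean is outside the 1.00-5.00 Likert range")

# Examples using two of the reported values.
print(interpret_mean(4.25))   # -> "high"     (preference for classroom learning)
print(interpret_mean(3.06))   # -> "moderate" (difficulty learning accounting online)
```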
The findings also show that the online learning system allows easy access to information (mean = 3.61); hence, coursework such as quizzes, tests, tutorials, and assignments is uploaded easily to the LMS (mean = 3.68). Many of them also agreed that there is interaction between instructors and students on digital platforms. They believe that having online discussions enables students to ask questions and exchange ideas with one another (mean = 3.53). However, many of the students also agreed that the online learning system did not provide interactive applications (mean = 3.43), the design was not appropriate (mean = 3.45), and the guidance screen was not clear and was difficult to use (mean = 3.29). Sometimes, they were disconnected during the online classes (mean = 3.30) and the browsing speed was low (mean = 3.24). The instructors also did not provide relevant learning content associated with the course (mean = 3.34). The corresponding item-level results (item | SD | mean | level | rank) are:
Online learning system items:
The online learning system allows easy access to information. | 1.089 | 3.61 | High | 1
The information from the online learning system is up-to-date enough for my purposes. | 0.964 | 3.57 | High | 2
The screen layout and design are appropriate. | 1.027 | 3.45 | Moderate | 3
The online learning system can provide learning contents that are relevant to study accounting. | 1.008 | 3.34 | Moderate | 4
I am rarely disconnected during the online tutorial. | 1.111 | 3.30 | Moderate | 5
The guidance screen is clear and easy to use. | 1.060 | 3.29 | Moderate | 6
I am satisfied with the online learning system functions. | | | |
Instructor characteristics items:
The instructor encourages student interactions. | 0.887 | 3.79 | High | 1
The instructor's knowledge of using Internet technology affects the efficiency of online learning. | 0.817 | 3.75 | High | 2
The instructor is enthusiastic about teaching and explaining via the web. | 0.898 | 3.68 | High | 3
The instructor is easily contacted. | 0.857 | 3.67 | High | 4
The instructor provides fast feedback to queries in the discussion forum. | 0.793 | 3.66 | High | 5
The instructor solves emerging problems efficiently. | 0.843 | 3.65 | High | 6
The instructor provides sufficient learning resources online. | 0.923 | 3.57 | High | 7
The instructor explains how to use the website at the beginning of the semester. | 0.942 | 3.53 | High | 8
From the table above, it can be seen that the instructor characteristics are the most influential factor for the students to be motivated to participate in online learning activities. All eight items in this dimension score means above 3.50, which may be interpreted as high. The majority of the students agreed that the role of the instructors in encouraging students' interaction in online learning activities is vital (mean = 3.79). The instructors must know how to use internet technology to ensure the efficiency of online learning (mean = 3.75) and must be enthusiastic and motivated to teach online classes (mean = 3.68). The finding from the study is consistent with Abidin (2014), where the role of the instructors in obtaining digital literacy skills was considered very significant in ensuring the success of the online classes. Digital literacy is constructed based on three principles (Suleiman, 2012): knowledge and skills, ability to understand the content and its application, and skill to use digital technology.
The role of instructors in the era of IR 4.0 is not only to disseminate knowledge but also to educate and guide the students towards resilience, flexibility, lifelong learning, critical thinking, and creativity. They ought not only to impart information and knowledge to the students but also to encourage them to participate in the online discussions. During online classes, the instructors must be easily contacted (mean = 3.67), provide fast feedback (mean = 3.66), and supply sufficient learning resources (mean = 3.57).
On top of that, the instructors must show the students how to use the digital platforms at the beginning of the semester. The instructors themselves must make sure that they can apply the technology in their teaching and learning process so that they can guide the students and manage online learning effectively. Chew (2015) suggested that instructors must upgrade their ability to structure their teaching and apply it when designing the learning activities. Table 5 below shows the overall means of the four contributing factors towards the acceptance level of the non-accounting students to learn accounting courses on digital platforms. From the table, it can be concluded that instructor characteristics (overall mean = 3.66) are the most influential factor in the non-accounting students' acceptance of learning accounting courses online.
Additionally, learner characteristics also contribute a high overall mean value (3.54) to the acceptance level. Interactive application and technology and system can be said to moderately affect the acceptance level of the non-accounting students to learn accounting courses on digital platforms, with overall means of 3.48 and 3.38 respectively. From the overall means above, the study found that learning accounting courses on digital platforms was perceived positively by non-accounting students. However, if the students were offered the choice between learning accounting courses in an online-based classroom and in a physical classroom, they would prefer the latter. It was found that the instructor characteristics could influence and motivate the students to learn accounting courses online. In today's environment, the role of the instructors can easily be replaced by artificial intelligence carrying out their tasks. Thus, an appropriate shift needs to be made in the process of delivering knowledge to be in line with technological developments. In all, it can be said that the instructor will continue to play a central role in education, as a learning catalyst and knowledge navigator for students participating in online education (Olson, 2005). Hence, for at least the next few years, the institutions need to come up with online learning tools, contents, and modules for better teaching and learning delivery to make a positive impact on the students' performance (Mahajan & Kalpana, 2018).
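The factor-level overall means discussed above (Table 5) can be obtained by aggregating the item means within each dimension. The sketch below shows one plausible way to do this; the grouping of item columns into dimensions is a hypothetical placeholder, and the paper does not specify whether the overall mean pools all responses or averages the item means, so the choice here is an assumption.

```python
# Sketch: compute factor-level overall means (as in Table 5) from item responses.
# The item-to-dimension mapping and file name are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("responses.csv")   # one 1-5 Likert column per questionnaire item

dimensions = {
    "learner characteristics":    ["Q1", "Q2", "Q3", "Q4"],
    "technology and system":      ["Q5", "Q6", "Q7", "Q8"],
    "interactive application":    ["Q9", "Q10", "Q11"],
    "instructor characteristics": ["Q12", "Q13", "Q14", "Q15"],
}

# Overall mean per dimension, here taken as the mean of the item means.
overall = {name: df[cols].mean().mean() for name, cols in dimensions.items()}

for name, mean in sorted(overall.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {mean:.2f}")
```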
On the other hand, students must arrange their own learning schedules, find appropriate learning materials, and be more active and independent. Basic computer skills are important to facilitate the use of online learning. Yusri (2017) mentioned that it is important for a student to master the fundamental skills to use online learning well. Finding authentic and accurate learning materials is one of the lifelong learning skills that each student needs to develop. In another study, Ismail and Shelley (2008) found that as long as students have the appropriate skills to use online learning tools, they perceive online learning as a useful and versatile way of learning and feel motivated by and enjoy online instruction. Students should bear in mind their responsibility to ensure that the learning process continues even when there are no physical classes.
The Covid-19 crisis has brought major changes to the educational system in Malaysia. The impact can be seen where universities and colleges were forced to deliver their courses online. The adoption of online learning on various digital platforms has been unprecedented for many of the instructors as well as the students. At the same time, the pace of the industrial revolution has changed the patterns of teaching delivery and learning activities to be more digitally centered, and this is becoming the norm.
Conclusion
In conclusion, the findings of the study showed a high level of acceptance among non-accounting undergraduate students to learn accounting courses on digital platforms. It was revealed that the instructor characteristics were the most significant influential factor for non-accounting students to be motivated to learn accounting courses online. A flexible way of learning, communicating, and sharing through online discussion and instruction leads to a high level of acceptance, engagement, and enjoyment in learning accounting courses. Besides, the learner characteristics towards learning accounting courses are positive and favorable. The study revealed that the non-accounting undergraduate students felt confident and keen to learn accounting courses online as long as the skills to use the online tools were acquired. Learning accounting courses on digital platforms may be more easily accepted if it can provide a learning experience comparable to the traditional lecture-based classroom.
Although the present study drew a broader picture than previous studies of the factors influencing non-accounting students to learn accounting courses online, the small number of participants makes it impossible to generalize the acceptance level of the non-accounting students to other universities with different environments and situations. Based on the findings, some suggestions for future studies were proposed. A larger and more diverse sample is necessary so that the data will be more applicable to the increasingly diverse student population in Malaysian universities and colleges. Data could also be collected using qualitative approaches, for example semi-structured interviews, to produce a deeper understanding of the non-accounting students' perception of learning accounting courses online. The relationship between the acceptance level and the non-accounting students' attitude and behavior could also be further investigated. Nevertheless, the implications of the study are important for instructors, practitioners, and institutions that are planning or are currently engaged in offering online learning courses.
This study contributes to understanding the factors influencing non-accounting students' intention to learn accounting courses online. To the best of our knowledge, the present study is the first to examine preferences for online learning environments in terms of learning accounting courses from the perspective of non-accounting students. From an academic perspective, this paper adds a new perspective to the literature on online learning. From a practitioner perspective, the findings show that universities, and particularly instructors, must also empower themselves to ensure that online learning sessions are conducted effectively. In practical terms, this study gives possible recommendations to practitioners and institutions on how to carry out online learning sessions more effectively within the universities. In short, committed effort between students, instructors, and institutions is vital to ensure proper implementation of and adaptation to the online learning environment. | 7,001.4 | 2020-08-16T00:00:00.000 | [
"Business",
"Education",
"Computer Science"
] |
Wideband passive source localization
This paper develops a mathematical method for determining the locations of multiple transmitters from passive measurements of the signals at two or more receivers. The method applies to the case of emitters transmitting either wideband or narrowband signals.
Introduction
This paper develops a mathematical method for determining the locations of multiple transmitters from passive measurements of the signals at two or more receivers. A typical geometry is shown in figure 1. In this problem, the emitted signals are unknown, and, in particular, the time at which they originate is unknown, so ordinary echolocation and triangulation methods cannot be used.
We consider the case of emitters that are transmitting acoustic or electromagnetic energy for the purposes of sensing or communication. The receivers of such emissions must be able to identify emissions from a particular source. For example, different broadcast radio stations use different frequency bands. Similarly, a cell-phone receiver must be able to identify the signals due to a particular call; in the US, this is done by means of the code division multiple access (CDMA) protocol, in which different calls use the same frequency band but are assigned to orthogonal codes.
On the other hand, there are some emitters that interfere with others, either accidentally, as in the case of emissions from an electric motor, or intentionally, as in the case of jamming. In this paper we do not consider the case of multiple interfering emitters.
The problem of determining locations of emitters has been previously studied; see e.g. [1,3,6,7,9,12,16,18] and references in these works. Most of this work assumes the presence of a single point-like transmitter. The problem for multiple transmitters has been addressed in [10,11] which approach the problem from an imaging point of view. A detection-theoretic approach for multiple transmitters has been recently developed by the authors of [13]. The problem of passive source localization is closely connected to the problem of passive imaging, which has been studied in e.g. [5,17,20].
Efforts to locate unknown sources require decisions regarding two issues: (i) What exactly is being imaged? Much of the previous work assumes that only a single point-like source is present. Other previous work, such as [10,11], assumes a statistical distribution of sources that is incoherent, i.e. delta-correlated. In contrast, we seek to form an image of a non-statistical distribution of sources. (ii) How should we handle the fact that the emitted waveform is unknown? In the case of a single source, the unknown waveform can be eliminated. Much of the work dealing with multiple emitters simply ignores the waveform and instead focuses on the phase. In contrast, we exploit the (approximate) waveform orthogonality that the spectrum users themselves require.
One of the challenges in passive sensing is that the natural data consists of receiver signal cross-correlations. These cross-correlations are quadratic quantities, which means that the usual linear methods do not apply.
In this work, we assume that the transmitters are stationary and that at least one of the receivers is moving. The fact that one receiver is moving enables us to compute time-frequency transforms of the received signals. Specifically, we use the data from two receivers to compute the receiver cross-ambiguity function, which we first relate to the emitter auto-ambiguity function and then backproject to form a spatial image of the emitter locations. This technique combines the known efficacy of time-difference-of-arrival (TDOA) and frequency-difference-of-arrival (FDOA) methods in passive detection, with the ability of synthetic-aperture-like backprojection imaging [4] to coherently focus a weak signal. (Figure 1 caption: signals are received on two receivers, labelled γ_1 (moving) and γ_2 (stationary), and the goal is to find the locations of the sources, here shown in the lower left corner.)
We use wideband ambiguity functions [14], to allow for long integration times and emitter waveforms with high frequency diversity. Thus, our method applies to the case of emitters transmitting either wideband or narrowband signals. Relevant work on the wideband ambiguity function includes [14,19,20].
The method developed in this paper does not require the introduction of an emitter density function, and does not rely on the theory of Fourier integral operators. Rather, the method exploits a relation similar to that of [2] to derive a wideband version of a Moyal-like identity [19].
Mathematical model for the data
We consider the case in which the receivers, which are assumed to be pointlike, are moving along paths γ_m(t), m = 1, 2, . . .. For example, for a stationary receiver located at z, we have simply γ_m(t) = z.
We denote the waveform emitted from a transmitter at location y by p_y(t). For example, if the scene consists of a single cell phone at location y_0 emitting the CDMA signal p_y0(t), then we could take p_y(t) = δ(y − y_0) p_y0(t). If there is no source at a location y, then p_y(t) = 0. We assume that the transmitters are isotropic, so that the signal from each transmitter is received at each receiver. We also assume that the emitted waveforms p_y(t) are smooth in t, and we assume that the emissions from different sources (i.e. from different locations) can be disentangled in a sense that will be made more precise below.
Background on the wave equation
We assume that the field u emanating from the source satisfies the scalar wave equation (1). The corresponding free-space Green's function is given by (2), which satisfies the wave equation with an impulsive point source.
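The displayed equations themselves are not reproduced above; a minimal sketch of the standard forms presumably intended, with the sign and source-normalization conventions as assumptions rather than the paper's exact choices, is:

```latex
% Hedged sketch; signs and normalizations are assumptions, not the paper's exact equations.
\left(\nabla^2 - \tfrac{1}{c^2}\,\partial_t^2\right) u(t,\mathbf{x}) = -\,p_{\mathbf{x}}(t)
  \quad\text{(cf. (1))}, \qquad
g(t,\mathbf{x}) = \frac{\delta\!\left(t - |\mathbf{x}|/c\right)}{4\pi\,|\mathbf{x}|}
  \quad\text{(cf. (2))}, \qquad
\left(\nabla^2 - \tfrac{1}{c^2}\,\partial_t^2\right) g(t,\mathbf{x}) = -\,\delta(t)\,\delta(\mathbf{x}).
```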
The model for received data
The solution of (1) is obtained by convolving the right side of (1) with the Green's function (2). The data d_m(t) measured by receiver m is u(t, γ_m(t)), which yields the data model (5). A receiver antenna beam pattern could easily be included in (5); this would limit the region of integration in y at each t. In (5), by assuming that time is the same in all reference frames, we are neglecting any effects of special relativity.
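A hedged sketch of the resulting data model, under the same conventions assumed above (the paper's equations (3)-(5) are not reproduced in the extracted text, so this is a reconstruction, not a quotation):

```latex
u(t,\mathbf{x}) = \int \frac{p_{\mathbf{y}}\!\bigl(t - |\mathbf{x}-\mathbf{y}|/c\bigr)}{4\pi\,|\mathbf{x}-\mathbf{y}|}\, d\mathbf{y},
\qquad
d_m(t) = u\bigl(t, \boldsymbol{\gamma}_m(t)\bigr)
       = \int \frac{p_{\mathbf{y}}\!\bigl(t - |\boldsymbol{\gamma}_m(t)-\mathbf{y}|/c\bigr)}{4\pi\,|\boldsymbol{\gamma}_m(t)-\mathbf{y}|}\, d\mathbf{y}
\quad\text{(cf. (5))}.
```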
Variable counts and time scales
We note that in the case in which the sources lie on a known surface, the unknown source p_y(t) is a function of three variables, namely y_1, y_2, and t, whereas the signal d_m(t) is a function of only one variable. However, this problem involves multiple time scales. First, the speed of wave propagation is 3 · 10^8 m s^−1, which is much faster than the speed at which the receivers move, which is typically subsonic, less than about 300 m s^−1. Consequently we introduce 'fast' and 'slow' time scales with the windowing technique below.
Second, the emitted waveforms are likely to be highly oscillatory: cell phones, for example, operate at roughly 1 GHz = 10^9 Hz. We exploit the oscillatory nature of the signals by performing a type of time-frequency analysis in the fast time scale. In this manner we convert the original one-dimensional signal into a function of three variables, namely slow time (s), fast time (t or τ) and fast frequency or scale factor σ.
Ambiguity functions
For a single delta-like source at y_0, it can be shown that over short time intervals the received signal is, to a good approximation, a time-scaled and delayed copy of the emitted signal (roughly of the form p(a(t − b)) up to an amplitude factor), where a is the Doppler scale factor (also known as the Doppler stretch factor), and b is related to the time delay required for the signal to propagate from y_0 to the receiver. The time delay and Doppler scale factor cannot be determined from the data (5), because the transmitted signal is unknown. We can, however, measure the signal at two different receivers, and compare the measurements.
In fact, the usual procedure for detecting the presence of a known expected signal in a received signal contaminated by noise is to use a matched filter [15], which cross-correlates the received signal with the expected signal. In our case, we do not know the transmitted signal, but we do have two received signals, one measured at each receiver. Although both may be contaminated by noise, if both receivers record a signal due to the same transmission, we expect that the noise-free part of the signal recorded at one receiver should be a time-delayed, Doppler-scaled version of the noise-free part of the signal recorded at the other receiver. Consequently we first use the signal at one receiver as a matched filter for the signal at the other receiver. In other words, we compute the (windowed) wideband cross-ambiguity function (6) of the received signals. Here σ is the Doppler scale factor due to the fact that the receivers are in relative motion, and τ is the time delay due to the fact that the receivers are at different distances from the emitter. In (6), the time integration is limited automatically by the support of the window function h.
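Since the displayed definition (6) is not reproduced above, the following sketch implements one common discrete form of a windowed wideband cross-ambiguity function: one received signal is correlated against a delayed, time-scaled copy of the other inside a window centred at slow time s. The exact normalization used by the paper (for instance a √σ factor) and the sign conventions are assumptions.

```python
import numpy as np

def windowed_wideband_xamb(d_m, d_l, fs, s, win_len, taus, sigmas):
    """Sketch of a windowed wideband cross-ambiguity function (cf. (6)).
    d_m, d_l : sampled receiver signals (real-valued 1-D arrays)
    fs       : sample rate [Hz]
    s        : window centre (slow time) [s]
    win_len  : window length [s]
    taus     : candidate delays [s];  sigmas : candidate Doppler scale factors."""
    n0 = int((s - win_len / 2) * fs)
    n1 = int((s + win_len / 2) * fs)
    t = np.arange(n0, n1) / fs                    # fast-time samples inside the window
    h = np.hanning(n1 - n0)                       # smooth window h(t - s)
    dm_win = d_m[n0:n1] * h
    t_l = np.arange(len(d_l)) / fs                # time axis of the second signal
    chi = np.zeros((len(taus), len(sigmas)))
    for i, tau in enumerate(taus):
        for j, sig in enumerate(sigmas):
            # delayed, time-scaled copy of the second signal, obtained by interpolation
            dl_scaled = np.interp(sig * (t - tau), t_l, d_l, left=0.0, right=0.0)
            chi[i, j] = np.sum(dm_win * dl_scaled) / fs
    return chi
```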
The time window h is included in (6) in order to separate the different time scales. We refer to the time t as the 'fast' time, corresponding to the time scale on which the waveforms and wave propagation evolve, and s as the 'slow' time. The time delay and Doppler scale factor change with the geometry, which changes only over the slow time scale. The length of the time window should be chosen so that the geometry, and therefore the time delay and Doppler scale, do not change significantly over the duration of the time window.
Using (5) in (6), we obtain (7), where R_m,y(t) = |γ_m(t) − y| is the range of the emitter at y from the receiver platform m. We note that the data (7) depend quadratically on the source terms p_z(t) and p_y(t) about which we wish to obtain information.
In order to identify the Doppler effect, we perform a Taylor expansion (8) of R about the window center s, where the dot denotes differentiation with respect to the argument of R. Using (8) in (7) results in (9), where in the last line we have introduced the notation a_m,y = 1 − Ṙ_m,y(s)/c for the Doppler scale factor (and temporarily suppressed the s dependence).
In (9) we make the change of variables t → t' ≡ t a_l,z − r_l,z(s)/c, whose inverse is used to obtain (11), where we have neglected the Jacobian because it is of the form 1 + O(Ṙ/c). We assume in this work that if multiple emitters are present, the receivers must be able to distinguish the signal of each emitter from those of the others. Consequently we assume that the emitter waveforms are approximately orthogonal in the sense of (12), where h denotes a windowing function and χ_y,s denotes the windowed (wideband) emitter auto-ambiguity function for the waveform p_y. We see that (11) is also of the form of a windowed cross-ambiguity function; we can put it in the form of the left side of (12) by noting that the argument of p*_y can be rewritten as in (13). With assumption (12) about orthogonality of the transmitted waveforms, we write (11) as (14), where we have neglected the order Ṙ/c terms in the argument of the smoothly varying window h (see appendix B). In (14), we have used the notation (15). The significance of (14) is that the receiver cross-ambiguity function χ_m,l,s, which we measure, is related to the emitter auto-ambiguity function χ_y,Δs,l, about which we wish to obtain information, evaluated at arguments involving the expressions (15). The arguments of (14) can be interpreted as follows. We note that in the case of a single emitter, the integration over y in (14) disappears, and the transmitter waveform auto-ambiguity function on the right side has a maximum when σ and τ are related to y by (16). The first condition of (16) is known as the frequency difference of arrival (FDOA) condition, and the second is known as the time difference of arrival (TDOA) condition. The sets of points y determined by the TDOA condition τ = constant and the FDOA condition σ = constant are each algebraic curves (hyperbolas in the case when one receiver is stationary). When the geometry is favorable, these curves intersect in a unique point y, which is the desired source location. Such a favorable situation is indicated in figure 2. The TDOA and FDOA curves depend on the receiver positions and velocities; in unfavorable geometry, for example, the TDOA and FDOA curves could be nearly tangent. The case when both receivers are moving results in FDOA curves that can be much more complicated; see figure 3. This process can be viewed in the following alternative way. In the single-emitter case, the values of σ and τ at the cross-ambiguity maximum give the FDOA and TDOA of the emitter. In other words, the ambiguity function locates the emitter in the TDOA-FDOA coordinate system, and under favorable conditions it is possible to transform the TDOA-FDOA coordinates into the spatial coordinates y.
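As an illustration of how the TDOA and FDOA conditions (16) pin down the source, the sketch below evaluates, on a grid of candidate positions y, the delay difference τ(y) and the Doppler-factor ratio σ(y) implied by the receiver positions and velocities, and picks the grid point closest to the measured pair extracted from the cross-ambiguity peak. The specific forms τ(y) = (R_l − R_m)/c and σ(y) = (1 − Ṙ_m/c)/(1 − Ṙ_l/c) are assumptions consistent with the surrounding discussion, not the paper's exact expressions (15)-(16).

```python
import numpy as np

c = 3.0e8  # propagation speed [m/s]

def tdoa_fdoa(y, pos_m, vel_m, pos_l, vel_l):
    """Delay difference and Doppler-factor ratio for a source at y (assumed forms)."""
    r_m = np.linalg.norm(pos_m - y)
    r_l = np.linalg.norm(pos_l - y)
    rdot_m = np.dot(vel_m, pos_m - y) / r_m            # range rate of receiver m
    rdot_l = np.dot(vel_l, pos_l - y) / r_l            # range rate of receiver l
    tau = (r_l - r_m) / c                               # TDOA
    sigma = (1.0 - rdot_m / c) / (1.0 - rdot_l / c)     # FDOA (ratio of Doppler factors)
    return tau, sigma

def locate_on_grid(tau_hat, sigma_hat, grid, pos_m, vel_m, pos_l, vel_l,
                   tau_scale=1e-6, sigma_scale=1e-7):
    """Pick the grid point whose predicted (tau, sigma) best matches the measurement.
    tau_scale / sigma_scale are illustrative measurement-uncertainty scales."""
    best, best_cost = None, np.inf
    for y in grid:
        tau, sigma = tdoa_fdoa(np.asarray(y, float), pos_m, vel_m, pos_l, vel_l)
        cost = ((tau - tau_hat) / tau_scale) ** 2 + ((sigma - sigma_hat) / sigma_scale) ** 2
        if cost < best_cost:
            best, best_cost = y, cost
    return best
```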
Difficulties arise when there are multiple emitters present or when the cross-ambiguity function is ridge-like, so that the precise location of the maximum may be difficult to determine. For these circumstances, we perform an additional matched filtering step: we use the receiver auto-ambiguity function as a matched filter ('second filter') for the cross-ambiguity function.
Receiver auto-ambiguity function
When m = l in (14), the function χ_l,l,s(τ, σ) is the receiver auto-ambiguity function (17). Thus the receiver auto-ambiguity function (which we measure) is also a shifted, weighted sum of transmitter auto-ambiguity functions (about which we wish to obtain information). When only a single emitter is present, the right side of (17) attains its maximum at σ = 1, τ = T_l,l,y,s(1) = 0.
Image formation
The idea is to use a shifted version of the receiver auto-ambiguity function (17) as a matched filter for the cross-ambiguity function (14). In order to determine the correct shift, we need to find the correct arguments to use in the auto-ambiguity function. In particular, we want to choose the shifted arguments τ' and σ' so that at location y = x = z, the arguments of p*_z in (17) match those of (14). Thus we should choose σ' according to (21) and τ' according to (22); solving (22) for τ', we obtain (23). To form an image, we use the shifted auto-ambiguity function χ*_l,l,s(τ'(τ, σ, x), σ'(σ, x)) as a matched filter for the cross-ambiguity function χ_m,l,s(τ, σ) and then integrate over 'slow time' s, which gives the image (24), where the shifts τ'(τ, σ, x) and σ'(σ, x) are given by (23) and (21), respectively.
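A schematic sketch of this image-formation step (cf. (24)): for each pixel x and each slow time s, the measured cross-ambiguity surface is correlated against the appropriately shifted auto-ambiguity surface, and the results are summed coherently over s. The shift maps tau_shift and sigma_shift stand in for the paper's (23) and (21), whose explicit forms are not reproduced here; they are placeholders supplied by the caller.

```python
import numpy as np

def form_image(pixels, slow_times, xamb, aamb, taus, sigmas, tau_shift, sigma_shift):
    """Backprojection-style image (cf. (24)).
    xamb[s] : measured cross-ambiguity surface on the (taus, sigmas) grid at slow time s (dict)
    aamb[s] : measured receiver auto-ambiguity surface on the same grid (dict)
    tau_shift(tau, sigma, x, s), sigma_shift(sigma, x, s) : placeholder shift maps."""
    image = np.zeros(len(pixels), dtype=complex)
    for k, x in enumerate(pixels):
        for s in slow_times:
            acc = 0.0 + 0.0j
            for i, tau in enumerate(taus):
                for j, sig in enumerate(sigmas):
                    tp = tau_shift(tau, sig, x, s)        # shifted delay argument
                    sp = sigma_shift(sig, x, s)           # shifted Doppler argument
                    # nearest-grid lookup of the shifted auto-ambiguity value
                    ii = int(np.clip(np.searchsorted(taus, tp), 0, len(taus) - 1))
                    jj = int(np.clip(np.searchsorted(sigmas, sp), 0, len(sigmas) - 1))
                    acc += xamb[s][i, j] * np.conj(aamb[s][ii, jj])
            image[k] += acc                                # coherent sum over slow time
    return np.abs(image)
```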
Image analysis
To understand what information is provided by the image (24), we relate (24) to the transmitted waveforms. To do this, we substitute (17) and (11) into (24). After some calculations (see appendix C), we obtain the approximate result (25), where P_y denotes the Fourier transform of p_y. The t integral of (25) is the (windowed) power of the emitter waveform p_y. The ω integral of (25) has its maximum when the arguments of P_y match and when the exponent is zero. These conditions, stated in (26), are both satisfied at x = y. The integration over s provides a coherent sum similar to that of synthetic-aperture imaging. Thus we expect the image to show a peak at the correct spatial location, and the intensity of the image is determined by the signal energy.
We can understand (25) by considering each slow time s separately. In particular, we can think of the (windowed) source energy density (the t integral) as the quantity we want to image. The ω integral is then the point-spread function for that s. These individual images are then combined coherently as s ranges over the time interval for which we have measurements.
We see that each slow-time point-spread function depends not only on the receiver trajectories, but also on the waveform transmitted. We leave a systematic exploration of this point-spread function for the future. In the following section we consider a very simple special case.
Approximate analysis of (25) in a special case
In this section we carry out an approximate analysis of (25) in the case when the emitted waveform is the duration-T continuous-wave (CW) pulse (27), where the rect function is 1 when the magnitude of its argument is less than 1/2 and 0 otherwise. The Fourier transform of p is given by (28), where sinc x = (sin x)/x. The main lobe of (28) has width 2|ω − ω_0| = 4π/T, which implies that for large duration T, the CW pulse (27) has a spectrum that is concentrated around ω = ω_0.
Replacing ω by ω_0 in the denominator of the ω integration of (25) results in the ω integral being proportional to the frequency-domain expression for the (unwindowed) wideband ambiguity function [14]. For a narrowband signal, the wideband ambiguity function reduces to the narrowband ambiguity function (see appendix A). The narrowband ambiguity function for the pulse (27) is the standard triangular-sinc form [8]. When T is large, this ambiguity function is a long, thin ridge in the τ direction; in other words, the long-duration CW pulse corresponds to good resolution in the Doppler direction but poor resolution in the delay direction.
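For reference, a hedged statement of the standard expressions involved here (normalization and frequency-variable conventions vary across references and are assumptions in this sketch):

```latex
p(t) = e^{i\omega_0 t}\,\operatorname{rect}\!\left(\frac{t}{T}\right) \;\;\text{(cf. (27))},
\qquad
P(\omega) = T\,\operatorname{sinc}\!\left(\frac{(\omega-\omega_0)\,T}{2}\right) \;\;\text{(cf. (28))},
\qquad
\bigl|A(\tau,\nu)\bigr| = \left|\,(T-|\tau|)\,\operatorname{sinc}\!\left(\frac{\nu\,(T-|\tau|)}{2}\right)\right|,\;\; |\tau|\le T .
```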
If the geometry is such that we can consider the delay (TDOA) resolution and Doppler (FDOA) resolution separately (for example, in the geometry of figure 2), then for the peak-to-null resolution we have, roughly, that |τ(x, y)| is at most of order T and that the Doppler-factor ratio a_m,x a_l,y/(a_m,y a_l,x) is close to 1, as quantified in (32). These conditions give the 'thickness' of the TDOA and FDOA curves, respectively. From (26), for the geometry shown in figure 2, where one receiver is stationary and the second receiver's trajectory is perpendicular to the direction to the first receiver, the FDOA condition of (32) translates into (33), where λ_0 = 2πc/ω_0 is the wavelength corresponding to angular frequency ω_0. Since Ṙ_m,x varies with x roughly in proportion to |γ̇_m(s)|/R along the flight direction, (33) translates into resolution in the direction of the flight velocity vector, as in (34). For the example considered in appendix B, in which R = 10^4 m, |γ̇| = 100 m s^−1 and λ_0 = 0.3 m, we find FDOA resolution of ∆R ≈ 5/T m. Similarly, the TDOA conditions of (26) and (32) translate roughly into |x − y| ≲ cT. When T is large, this resolution is poor. Consequently we expect the TDOA localization to be accomplished by viewing the emitter from different locations along the synthetic aperture. A flight path encircling the emitter would then result in resolution (34).
Conclusions
We have developed an imaging algorithm for showing the locations of multiple transmitters that could be transmitting broadband, difficult-to-detect waveforms. The algorithm proceeds by first computing the receiver cross-ambiguity function and the receiver auto-ambiguity function. The receiver auto-ambiguity function, appropriately shifted, is then used as a matched filter for the receiver cross-ambiguity function in an image formation algorithm similar to standard synthetic-aperture imaging. We leave for the future the following interesting questions.
• How can the resolution of images be predicted in the case of more general geometry and more general emitted signals? Figure 3 suggests that the general case will involve not only signal processing but also algebraic geometry.
• How robust is this method in the presence of additive noise and uncertainties in receiver position and velocity?
• Can an approach to emitter localization based on sparsity be developed?
• Is it possible to develop a similar theory that would apply to interfering emitters?
Acknowledgment
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-14-1-0185 6 . We also thank the Office of Naval Research for support of NRL in-house research which contributed to this effort. We would also like to thank the anonymous reviewers for their comments, which helped us to improve the exposition.
Appendix B. Example of window length
For a situation in which the receiver is 10 km from the emitter and flies in a straight line, perpendicular to the direction to the emitter, at a speed of 100 m s^−1, we carry out the following calculation to find an appropriate window length. The quadratic term neglected in the expansion of the range gives a phase error of approximately (ω_0/2c) R̈ (t − s)^2, which is roughly 10 (t − s)^2 radians for these parameters. To make this less than, say, one-tenth of a full cycle, or less than 2π/10 radians, we should have (t − s)^2 ≲ 6 · 10^−2, or (t − s) ≲ 0.2 s. A window of length 2 · 0.2 = 0.4 s will encompass approximately 5 · 10^8 cycles of the carrier wave. A window of this length corresponds to a largest Fourier frequency of 1/0.4 = 2.5 Hz. In this example, Ṙ/c ≈ 3 · 10^−7. The term in the window argument neglected in (14) therefore corresponds to only a tiny fraction of a full cycle of the highest-frequency component of the window. 6 Consequently the US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the US Government.
"Engineering",
"Physics"
] |
New Insights into the Basic and Translational Aspects of AMPK Signaling
5'-adenosine monophosphate (AMP)-activated protein kinase (AMPK) is an enzyme regulating numerous cellular processes involved in cell survival as well as health- and lifespan [...].
5'-adenosine monophosphate (AMP)-activated protein kinase (AMPK) is an enzyme regulating numerous cellular processes involved in cell survival as well as health- and lifespan. AMPK has emerged not only as a key cellular energy sensor but also as an important integrator of signals that manages cellular energy balance. Due to this property, and being ubiquitously expressed in all mammalian cell types, AMPK has attracted interest in virtually all areas of biomedical research. In this Special Issue of Cells, several up-to-date reviews and original studies have provided new insights into the basic and translational aspects of AMPK signaling.
Basic Aspects
Visnjic et al. [1] considered an important methodological approach in AMPK research by giving an overview of the widely used AMPK activator 5-amino-4-imidazolecarboxamide ribonucleoside (AICAr). In particular, the authors described its AMPK-dependent and AMPK-independent effects covering basic aspects for both, such as metabolism, nucleotide synthesis, cell cycle, and diseases, e.g., ischemia, diabetes, and cancer.
In the other review paper, Aslam and Ladilov [2] described the pathways that link cAMP and AMPK. Since many of the physiological stresses lead to cellular cAMP elevation and are associated with increased energy consumption, it is not surprising that the activation of cAMP signaling may promote AMPK activity. The authors described several mechanisms leading to the activation of the AMP-AMPK axis and its beneficial role in mitochondrial homeostasis, lipid metabolism, and inflammation regulation, as well as in diseases such as ischemia and diabetes. An important translational message of the review is that physical activity leading to the elevation of cellular cAMP levels may be a "drug-free" approach to promoting the cAMP-AMPK axis.
Obesity has been categorized by the American Medical Association as a chronic progressive metabolic disorder and is a growing public health concern worldwide. In the paper by van der Vaart et al. [3], an important approach for fighting obesity was reviewed. The authors described the basic mechanisms of brown adipose tissue (BAT) activation and investigated how AMPK can be used as a target for BAT activation. In particular, several AMPK-mediated signaling pathways involved in BAT activation were presented in the review. These pathways control three main processes involved in BAT activation: the development of brown adipocytes, support for mitochondrial health, and increased thermogenesis.
Finally, an original study by Li et al. [4] provided new data describing the role of AMPK in the effects of C1q tumor necrosis factor-alpha-related proteins (CTRPs) on glucose and fatty acid metabolism in adult rat cardiomyocytes and H9C2 cardiomyoblasts. Among several CTRPs leading to the phosphorylation of AMPK, only CTRP2, 7, 9, and 13 induced GLUT1 and GLUT4 translocation and glucose uptake, as well as the upregulation of enzymes involved in glucose or fatty acid metabolism. Since the knockdown of the adiponectin receptor 1 abolished CTRP7-and CTRP9-mediated phosphorylation of AMPK and ACC, the authors suggest a major role of this receptor in promoting AMPK activation via CTRPs.
Translational Aspects
In a sophisticated study, Olivier et al. [5] investigated the intestinal epithelial protective role of AMPK in colitis. A deficiency of AMPK in the intestinal epithelial cells delayed epithelial repair in a mouse model of colitis. On the other hand, metformin, an anti-diabetic drug activating AMPK, accelerated intestinal repair [5,6]. Interestingly, diabetic patients using metformin to control their blood glucose level are at a lower risk of inflammatory bowel disease compared to diabetic patients using other anti-diabetic drugs [7]. To explore this phenomenon further, a number of clinical trials have been registered to use metformin as an add-on/adjuvant therapy in ulcerative colitis [8][9][10].
Non-alcoholic fatty liver disease (NAFLD) is emerging as a leading cause of chronic liver illness affecting > 30% of the European population, and even higher rates can be seen in the US population [11,12]. The hallmark finding that AMPK regulates fatty acid metabolism by inactivating acetyl CoA carboxylase (ACC) [13] is elegantly discussed in the review by von Loeffelholz et al. [14]. Chronic caloric oversupply (e.g., hyperglycemia) may lead to the suppression of AMPK activity, promoting the activation of ACC and the deposition of lipids in the liver [14][15][16]. Several clinical trials are being conducted using metformin and other AMPK activators in combination with other anti-diabetic drugs to suppress de novo lipid production in NAFLD [17][18][19].
Both transcriptional and genetic alterations in AMPK pathways may impact tumor microenvironments in a tissue-dependent manner either positively or negatively [20]. This aspect of AMPK in breast cancer is reviewed in detail by Uprety and Abrahamse [21]. The authors discuss various pharmacological agents and strategies which activate AMPK signaling and examine their effects on cancer cells. The authors particularly discuss the potential use of vanadium compounds in anti-cancer therapy. Vanadium compounds potentially inhibit protein tyrosine phosphatase 1B (PTP1B) which is an endogenous inhibitor of AMPK. Thus, by relieving the suppression of AMPK by PTP1B, vanadium compounds indirectly activate AMPK. Therefore, vanadium compounds offer an alternative to direct-AMPK activators to treat cancer and diabetes as a combination therapy with other drugs.
Lastly, Bhutta et al. [22] reviewed the potential use of drugs that modulate AMPK activity as an anti-viral therapy. The authors discussed several studies conducted in cell culture or animal models showing beneficial or detrimental effects of AMPK activators on the infection/proliferation of various viruses such as HBV, HCV, HCMV, HSV-1, KSHV, RSV, and SARS-CoV-2. The authors suggest that the use of activators or inhibitors of AMPK signaling may be beneficial depending on the type of virus. Several clinical trials are currently underway regarding the potential use of metformin as an add-on drug in HBV- and HCV-infected patients.
"Biology"
] |
Multiobjective Scheduling of an Active Distribution Network Based on Coordinated Optimization of Source Network Load
With the development of active distribution networks, the means of controlling such networks are becoming more abundant, and simultaneously, due to the intermittency of renewable energy and the randomness of the demand-side load, the operating uncertainty is becoming serious. To solve the problem of source–network–load coordination scheduling, a multiobjective scheduling model for an active distribution network (ADN) is proposed in this paper. The operating cost, renewable energy utilization rate, and user satisfaction are considered as the optimization objectives, and the distributed generation (DG) output power, switch number, and incentive price for the responsive load are set as the decision variables. Then the probabilistic power flow based on Monte Carlo sampling and the chance-constrained programming are used to deal with the uncertainty of the ADN. Moreover, the reference point–based many-objective evolutionary algorithm (NSGA3) is used to solve this nonlinear, multiperiod, and multiobjective optimization problem. The effectiveness of the proposed method is verified in the modified IEEE 33-bus distribution system. The results show that the proposed scheduling method can effectively improve the system status.
Introduction
With the availability of distributed generation (DG) and energy storage systems (ESSs), the application of advanced information communication and power electronics technologies, and the utilization of demand-side resources, the traditional distribution network is gradually developing into a multicoordinated active distribution network (ADN) [1].The important feature of the ADN is openness and interaction.Compared with the past, the power sources, networks, and demand-side loads in the ADN have changed greatly and have strong flexibility.With the cooperation of flexible power sources, the predictability and controllability of various intermittent renewable energy sources have been greatly improved.In terms of loads, flexible loads are increasingly becoming the trend of load development.By combining controllable conventional loads with various energy storage components and demand-side response means, increasing regulation demand of the power system can be adapted.The development of information technology has also facilitated the exchange of information between the source, network, and loads and has enhanced the interactions among the three.In addition, access to various power electronic devices also enhances the controllability of the power grid.All of these new changes provide convenient conditions for the coordinated operation of the source-network-load of the ADN, making it an important development trend in the future.
Measurement devices, communication equipment, and control systems make up the essential backbone of ADN operation.In order to ensure reliable operation of the ADN and effective control of the source-network-load, first it is necessary to ensure reliable monitoring and communication of DGs and ESSs.Reference [2] proposes a new solution for the remote monitoring and control of DGs and ESSs connected to distribution networks with the function of voltage regulation and power shuttering.Reference [3] develops a new interface device solution and a proper communication architecture, which allows implementing remote control of DG power production or islanding detection.Reference [4] proposes a new method considering power lines as an alternative communication medium for remote monitoring applications of smart grids for renewable energy sources.Second, the network side can realize the monitoring, control, and fast fault isolation of a distribution network by means of automated distribution technology.One way to achieve distribution automation is by implementing substation automation systems [5].For the demand-side resources, communication technologies such as advanced metering infrastructure (AMI) and supervisory control and data acquisition (SCADA) can acquire the user-side information in time, so as to formulate the corresponding incentive scheme and actively manage the user-side load, which ensures the smooth implementation of demand-side management and demand response [6].
In view of the optimal scheduling problem of the ADN, relevant research has been carried out from different angles.Aiming at the uncertainties of wind turbine (WT) and photovoltaic (PV) cell output, an energy-optimal scheduling model for an ADN with WTs, PV cells, and ESSs is proposed based on chance constrained programming in reference [7].In reference [8], the optimal scheduling operation of an ADN is considered.However, the optimization object is limited to the active and reactive "source" of the ADN and does not involve flexible topology adjustment of the network and flexible load control.To be part of the network reconfiguration, distributed generation and distribution network reconfiguration are optimized together to minimize total power loss and improve the voltage stability index in [9].
As user-side resources gradually participate in the demand response (DR), researchers began to study demand response models.The main purpose of the DR is to minimize customers' electricity payment [10] or achieve a generally uniform electricity load profile [11].During the scheduling process of the ADN, demand response is always applied in combination with DG control.The source-load coordinated optimization scheduling in the distribution network mainly includes minimizing system operation costs [12], improving the utilization ratio of renewable energy and user satisfaction [13], smoothing renewable energy outputs [14], reducing renewable generation curtailment [15], and selecting the site and size of DGs for the purpose of coordinating multiple interests [16,17].Reference [18] focuses on the optimal intraday scheduling of a distribution system that includes renewable energy generation, energy storage systems, and thermostatically controlled loads.Reference [19] proposes a multiobjective dynamic economic scheduling model considering the EVs and uncertainties of wind power to minimize the total fuel cost and pollutant emissions.Reference [20] uses multiscene technology to deal with the uncertainty of intermittent DGs and loads, and a two-step optimal scheduling model of ADN considering day-ahead and real time is established.Reference [21] proposes a multi-timescale cost-effective power management algorithm for islanded microgrid operation targeting generation, storage, and demand management.However, in our study, the DR is implemented with responsive loads that consider the uncertainty of the load participation.Moreover, the satisfaction of users participating in the demand response is modeled.
With the gradual application of ESSs, the management of demand-side loads also becomes possible [22].In Reference [23], to minimize the cost of power loss, robust optimization is used to deal with the uncertainty of electricity price and the day-ahead scheduling problem of ESSs and responsive loads.Reference [24] comprehensively considers a variety of adjustable resources in the ADN, such as DGs, ESSs, voltage regulators, switchable capacitor banks, and interruptible loads, to minimize the total operating cost of the system in the scheduling cycle.To reduce curtailment from renewable distributed generation, reference [25] chooses minimum storage sizes at multiple locations in distribution networks.Considering the space-time relationship between ESSs and flexible loads as well as the influence of power flow, a multiobjective ADN optimization scheduling model is constructed in reference [26], and the synergy is quantified by setting the priority of the scheduling units of each generation unit.
When it comes to research on optimal scheduling of source-network-load coordination, the single objective of the system operating cost is mainly considered.In order to realize the optimal control of DGs, networks, and demand-side loads, a smart distribution network optimization model with the minimum operating cost is proposed in reference [27].Reference [28] proposes a comprehensive operational scheduling model to determine optimal decisions on active elements of the network, DGs, and responsive loads, seeking to minimize the day-ahead composite economic cost.Reference [29] presents a mixed-integer dynamic optimization model for the optimal scheduling of ADNs with the objective of minimizing the daily costs of electricity purchased from distribution substations.Reference [30] develops a framework for operational scheduling of distribution systems with dynamic reconfiguration considering coordinated integration of energy storage systems and demand response programs to minimize the total costs, including cost of total loss, switching cost, cost of bilateral contract with power generation owners and responsive loads, and cost of exchanging power with the wholesale market.Reference [31] introduces a multiagent system and multilayer electricity price response mechanism to construct an optimization model of the distribution network layer, direct coordination source-load layer, and indirect coordination microgrid layer.Reference [32] proposes an optimal scheduling model aiming to find the lowest operating cost of a complete scheduling period, and taking controllable DGs and tie switches as control means.The impact of electricity price and the adjustment of tie switches on the operating cost is considered, and the energy conservation and capacity constraints of the ESSs throughout the scheduling period are guaranteed.Reference [33] proposes a source-network-load coordinated economic scheduling method in an ADN considering the electricity purchase cost, power loss cost, and demand-side management cost and taking DGs, ESSs, flexible network topologies, interruptible loads, and transferable loads as the control means.
To summarize, in the present research on optimal ADN scheduling, there are mainly the following problems:
• The abundant controllable resources and diverse scheduling means are not fully considered; most work is limited to some aspect of source-network-load, such as the source-load interaction.
• The scheduling model tends to aim for economic optimization while ignoring the uncertainty of WTs, PV cells, and loads.
• The scheduling model has only one objective and fails to fully consider the interaction between the source, network, and load, which cannot guarantee that multiple aspects will be simultaneously optimal.
This paper comprehensively utilizes the controllable elements of DGs, ESSs, switches, and interruptible loads in the ADN, fully considers the uncertainties of renewable energy and demand-side load response, and establishes a multiobjective scheduling model with coordinated source-network-load, which aims at finding the lowest operating cost, the highest renewable energy utilization rate, and the highest user satisfaction in the scheduling cycle. The NSGA3 algorithm is used to solve the three-objective scheduling model, and the performance of the algorithm is compared with another algorithm. The main contributions of the paper include the following:
• Considering the deficiency of control measures for existing scheduling strategies, the proposed method fully considers the controllable resources of the source-network-load, including the DGs, switches, responsive loads, and storage systems. The specific control method includes DG control, network reconfiguration, and demand-side management.
• Different from previous single-objective models, a three-objective scheduling model with coordinated source-network-load is proposed, considering the lowest operating cost, the highest renewable energy utilization rate, and the highest user satisfaction.
• Different from the method of transforming three objectives into one single objective by weight coefficients, this paper uses the NSGA3 algorithm to calculate the Pareto solution set of the optimization model and uses a fuzzy decision-making method to filter the solution set.
The remainder of this work is organized as follows. In Section 2, we propose the scheduling strategy of source-network-load in the ADN. In Section 3, we analyze the uncertainty of scheduling in the ADN. In Section 4, we establish the multiobjective optimization scheduling model of the ADN. In Section 5, we introduce the reference point-based many-objective evolutionary algorithm (NSGA3). Results are presented in Section 6, and conclusions are drawn in Section 7.
Scheduling Strategy of Source-Network-Load in the ADN
The ADN has a source-network-load ternary structure: "source" refers to all kinds of DGs and ESSs in the ADN, with DGs divided into controllable and intermittent types. Common controllable DGs include microturbines (MTs), diesel generators, fuel cells, etc., and intermittent DGs include WTs, PV cells, etc.; the "network" mainly includes transformers, lines, switches, and other power equipment, whose important function is to manage the power flow through a flexible network topology; "load" refers to various types of load resources on the demand side, including conventional loads, interruptible loads, etc. The control elements of ADN optimization scheduling in this paper include controllable DGs, ESSs, switches, and load resources. From the perspective of source-network-load, the ADN is a distribution system that can coordinate various types of DGs and ESSs, optimize the power flow based on the flexible topology, actively manage demand-side resources, promote the absorption of renewable energy generation, and ensure the safe and economic operation of the distribution network on the basis of meeting users' electricity demands.
The specific scheduling strategy is shown in Figure 1. The ADN dispatching center uses the corresponding communication devices to collect information of the source-network-load three-terminal equipment and then figures out the corresponding dispatching scheme based on the proposed strategy. Finally, the control system controls the three source-network-load aspects to achieve the scheduling goals.
Uncertainty of User-Side Response
According to the principles of consumer psychology, the incentive policy given by the grid has a threshold and a saturation value for each user's stimulus [34]. For distribution network users, the uncertainty can be reflected in the users' demand response participation rate, that is, the response range under certain incentive conditions.
The response model in this paper uses random numbers in a certain interval [λ_i2, λ_i1] to characterize the uncertain behavior of user participation responses [35,36], as shown in Figure 2, where λ represents the load shedding rate. The price p_d is the start incentive, representing the minimum incentive price for users to start to respond; p_c is the critical incentive making the response greater than zero considering the response uncertainty; p_m is the saturation incentive making users' responses reach the upper limit. The corresponding load shedding rate can be calculated as Equation (1).
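Since Equation (1) itself is not reproduced above, the sketch below is one plausible implementation of the described behavior, assuming piecewise-linear upper and lower response envelopes between the start incentive p_d and the saturation incentive p_m, with the realized shedding rate drawn uniformly from the uncertainty interval; the envelope shapes and the uniform draw are assumptions.

```python
import numpy as np

def load_shedding_rate(p, p_d, p_m, lam_max, rng=None):
    """Sample the uncertain load shedding rate for incentive price p (assumed model).
    p_d : start incentive, p_m : saturation incentive, lam_max : maximum shedding rate."""
    if rng is None:
        rng = np.random.default_rng()
    if p <= p_d:
        return 0.0
    if p >= p_m:
        return lam_max
    frac = (p - p_d) / (p_m - p_d)          # position between start and saturation incentive
    lam_upper = lam_max * frac              # optimistic response envelope  (lambda_i1), assumed linear
    lam_lower = 0.5 * lam_max * frac        # pessimistic response envelope (lambda_i2), assumed half
    return rng.uniform(lam_lower, lam_upper)

# Example: average response at an incentive of 0.6 (same units as p_d = 0.2, p_m = 1.0)
rates = [load_shedding_rate(0.6, p_d=0.2, p_m=1.0, lam_max=0.3) for _ in range(1000)]
print(np.mean(rates))
```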
Uncertainty of Renewable Energy Generation
Due to the influence of natural factors, the output of renewable DGs fluctuates greatly. Their output is usually determined by studying the probability model.
Sunlight intensity can be approximated as obeying the beta distribution (Equation (2)). The output power of the solar cell panel can then be expressed in terms of the light intensity (Equation (3)). Based on the formulas for the light intensity and the photovoltaic output power, the output probability density function of the solar photovoltaic can be obtained as Equation (4). The power generation of the WT is related to the wind speed. The probability density function described by the two-parameter Weibull distribution is generally used to deal with the randomness of wind speed, as in Equation (5). The power generation of the wind turbine can then be expressed as Equation (6). The bus where a WT or PV cell is installed can be simplified as a PQ node, assuming that the power factor is kept constant through the automatic switching of capacitors. The reactive power is then obtained from the active power and the constant power factor (Equation (7)).
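Since Equations (2)-(7) are not reproduced above, the following sketch uses textbook forms consistent with the description: beta-distributed irradiance with a linear PV power model, Weibull-distributed wind speed with a piecewise-linear turbine power curve, and reactive power from a fixed power factor, as would feed the Monte Carlo sampling of the probabilistic power flow. The parameter names and the specific power-curve shape are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pv(n, alpha, beta, r_max, area, eta):
    """PV output: irradiance ~ Beta(alpha, beta) scaled to r_max, P = r * A * eta (assumed)."""
    r = r_max * rng.beta(alpha, beta, size=n)           # sampled irradiance [kW/m^2]
    return r * area * eta                                # active power [kW]

def sample_wt(n, k, c, v_in, v_rated, v_out, p_rated):
    """WT output: wind speed ~ Weibull(k, scale c), piecewise-linear power curve (assumed)."""
    v = c * rng.weibull(k, size=n)                       # sampled wind speed [m/s]
    return np.where(v < v_in, 0.0,
           np.where(v < v_rated, p_rated * (v - v_in) / (v_rated - v_in),
           np.where(v < v_out, p_rated, 0.0)))

def reactive_from_pf(p, power_factor):
    """PQ-node reactive power for a constant power factor: Q = P * tan(arccos(pf))."""
    return p * np.tan(np.arccos(power_factor))

p_pv = sample_pv(5000, alpha=2.0, beta=2.5, r_max=1.0, area=600.0, eta=0.15)
p_wt = sample_wt(5000, k=2.0, c=8.0, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=500.0)
q_wt = reactive_from_pf(p_wt, power_factor=0.95)
```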
Active Distribution Network Scheduling Model
The scheduling model established in this paper considers energy storage, controllable distributed generation, switches, and demand-side loads. Therefore, the decision variables include the active power of energy storage, the active power of DGs, the state of switches, and the incentive price of demand-side loads.
Minimize Total Operating Costs
According to the general principles of optimal scheduling, the first optimization objective is to achieve the minimum total operating cost of the ADN, including the electricity purchase cost C_buy,t, DG generation cost C_DG,t, storage battery operation cost C_ESS,t, network reconfiguration cost C_SW,t, and compensation cost for users C_comp,t. The total operating cost is given by Equation (8). The electricity purchase cost C_buy,t at time t includes two parts, which can be calculated as Equation (9). The DGs in this paper include photovoltaic (PV) cells, wind turbines (WTs), and microturbines (MTs); the controllable DGs considered in this paper are MTs. The DG generation cost at time t, C_DG,t, mainly includes the fuel cost F_DG,t, operation and maintenance cost M_DG,t, and depreciation cost D_DG,t [37,38], as in Equation (10), where the efficiency η_DG,t is calculated as a function of the DG output power P_DG,t. The operation cost of the storage batteries at time t, C_ESS,t, can be expressed as a quadratic function (Equation (11)). The network reconfiguration cost at time t, C_SW,t, can be expressed as Equation (12). The compensation cost for users at time t, C_comp,t, can be expressed as Equation (13).
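A hedged restatement of the cost objective described above; that the five components are summed over every period t of the scheduling cycle without additional weights is an assumption based on the surrounding text:

```latex
\min\; C \;=\; \sum_{t=1}^{T}\Bigl( C_{\mathrm{buy},t} + C_{\mathrm{DG},t} + C_{\mathrm{ESS},t} + C_{\mathrm{SW},t} + C_{\mathrm{comp},t} \Bigr)
\qquad\text{(cf. (8))}.
```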
Maximize Utilization Rate of Renewable Energy
The output of the WTs and PV cells depends on the wind and solar energy available during each period. By rationally scheduling the controllable units, the maximum consumption of renewable energy can be realized. Taking a 24-hour day as a complete scheduling cycle, this paper measures the utilization of renewable energy by the proportion of energy generated from renewable sources, as in Equation (14).
Maximize User Satisfaction
The level of satisfaction can affect users' enthusiasm for participating in DR. From the users' point of view, satisfaction can be defined as the ratio of the actual running time of the electrical equipment after participating in the response to the initial total demand time [26]. Usually, the control of interruptible electrical equipment will directly affect users' habits. Therefore, the reduction of interruptible electrical equipment is used to describe user satisfaction after participating in DR in this paper [39,40], as in Equation (15). The charging and discharging status of the energy storage is limited not only by the capacity of the grid-connected devices but also by the state of charge (SOC) of the energy storage [41]. Assuming that the charging and discharging efficiencies remain unchanged during operation, the energy storage operation is constrained by charging/discharging power limits and SOC limits.
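Hedged sketches of the remaining objective functions and the storage constraints described above; the symbols and the exact denominators are assumptions consistent with the text rather than the paper's Equations (14)-(16):

```latex
f_2 = \frac{\sum_{t=1}^{T}\bigl(P_{\mathrm{WT},t}+P_{\mathrm{PV},t}\bigr)}
           {\sum_{t=1}^{T} P_{\mathrm{total},t}} \quad\text{(cf. (14))},
\qquad
f_3 = 1-\frac{\sum_{t=1}^{T}\Delta P_{\mathrm{cut},t}}
             {\sum_{t=1}^{T} P_{\mathrm{demand},t}} \quad\text{(cf. (15))},
\\[6pt]
E_{t+1}=E_{t}+\Bigl(\eta_{\mathrm{ch}}P_{\mathrm{ch},t}-\tfrac{P_{\mathrm{dis},t}}{\eta_{\mathrm{dis}}}\Bigr)\Delta t,
\qquad
\mathrm{SOC}_{\min}\le \frac{E_t}{E_{\mathrm{cap}}}\le \mathrm{SOC}_{\max},
\qquad
0\le P_{\mathrm{ch},t}\le P_{\mathrm{ch}}^{\max},\;\;
0\le P_{\mathrm{dis},t}\le P_{\mathrm{dis}}^{\max}.
```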
Solving Strategy Based on NSGA3
The above model is a nonlinear, multiperiod, and multiobjective optimization problem. In a multiobjective optimization problem, the relationship between optimal solutions is usually nondominated, and only in a few cases does one optimal solution dominate all other feasible solutions. Therefore, the optimal solution of the optimization problem is usually a set of solutions, called the nondominated solution set or the Pareto optimal solution set.
In the two-objective optimization problem, the nondominated sorting genetic algorithm 2 (NSGA2) with the crowding distance strategy is usually adopted [42][43][44]. However, in the face of multiobjective optimization problems with three or more objectives, if we continue to use the crowding distance of NSGA2, the convergence and diversity of the algorithm become problematic, with, for example, an uneven distribution of the Pareto solutions on the nondominated layer, resulting in the algorithm falling into a local optimum. Therefore, the reference point-based many-objective evolutionary algorithm (NSGA3) is proposed. The framework of NSGA3 is basically the same as that of NSGA2, except that the selection mechanism is different. NSGA2 uses crowding distances to select individuals with the same nondominated level, while NSGA3 uses a reference point-based approach [45,46] to select individuals.
The Basic Process of NSGA3
NSGA3 randomly generates the initial population containing N individuals, and then starts to iterate. In the tth generation, the algorithm generates the offspring population Q_t by random selection, simulated binary crossover (SBX), and polynomial mutation on the basis of the current population P_t. Both P_t and Q_t are N in size. Then the two populations P_t and Q_t are combined to form a new population R_t with a population size of 2N.
Population Classification into Nondominated Levels
In order to select the best N solutions from population R t into the next generation, R t is first divided into several different nondomination levels using a nondominated sorting method.Then, a new population S t is constructed by adding the solutions of each nondomination level to S t from level 1 until the size of S t is equal to or greater than N for the first time.Assuming that the last acceptable nondomination level is level L, the solutions in level L + 1 are discarded, and the solution FL in level L is selected as the solution in the next population P t+1 .The remaining individuals in P t+1 need to be selected from FL.
In the original NSGA2, solutions with large crowding distances in F_L are preferentially selected. However, the crowding distance is not suitable for optimization problems with three or more objectives. Therefore, NSGA3 no longer uses the crowding distance but adopts a new selection mechanism, which analyzes the individuals in S_t more systematically through the provided reference points and selects part of the solutions in F_L into P_t+1.
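A minimal sketch of the nondominated sorting and the level-by-level construction of S_t described above is given below (assuming minimization of all objectives; the quadratic-time sorting is for clarity only and is not the fast sorting used in NSGA2/NSGA3).

```python
def dominates(f, g):
    """f dominates g (minimization): no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def nondominated_levels(objs):
    """Split objective vectors into nondomination levels F1, F2, ... (lists of indices)."""
    remaining = list(range(len(objs)))
    levels = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        levels.append(front)
        remaining = [i for i in remaining if i not in front]
    return levels

def build_S_t(levels, N):
    """Add whole levels to S_t until its size first reaches N; return the
    accepted indices and the last (partially used) level F_L."""
    S_t, F_L = [], []
    for front in levels:
        S_t.extend(front)
        F_L = front
        if len(S_t) >= N:
            break
    return S_t, F_L

print(build_S_t(nondominated_levels([[1, 2], [2, 1], [3, 3], [2, 2]]), 3))
```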
Reference Point Determination on a Hyperplane
The reference points of NSGA3 are critical; the number of generated reference points depends on the dimension m of the objective vector and another positive integer H.
The number of solutions of this equation can be calculated accordingly. Assume that (x_{j,1}, x_{j,2}, ..., x_{j,m})^T is the j-th solution of the equation; then reference point λ_j can be obtained by Equation (27). Geometrically speaking, the reference points λ_1, λ_2, ..., λ_N all lie on a hyperplane, as shown in Figure 3, and H is the number of divisions along each objective axis.
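Equation (27) is not reproduced here. The construction commonly used for this step in NSGA3 is the Das-Dennis method, which enumerates all non-negative integer solutions of x_1 + ... + x_m = H and scales them by 1/H, giving C(H + m - 1, m - 1) points on the unit hyperplane; the sketch below implements that standard construction and is an assumption about the paper's exact formula.

```python
from itertools import combinations

def das_dennis_reference_points(m, H):
    """All points (x_1/H, ..., x_m/H) with non-negative integers summing to H,
    enumerated via stars-and-bars; the count is C(H + m - 1, m - 1)."""
    points = []
    for bars in combinations(range(H + m - 1), m - 1):
        prev, coords = -1, []
        for b in bars:
            coords.append(b - prev - 1)   # stars between consecutive bars
            prev = b
        coords.append(H + m - 2 - prev)   # stars after the last bar
        points.append([c / H for c in coords])
    return points

print(len(das_dennis_reference_points(3, 4)))  # 15 = C(6, 2)
```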
Population Adaptive Normalization
First, the minimum value of each dimension i of the M objective functions needs to be calculated. Assume that the minimum value attained on the i-th objective is z_i; the set of the z_i is the ideal point used in the NSGA3 algorithm.
Equation (28) is then used to translate the objectives. To find the extreme points, the achievement scalarizing function (ASF) of Equation (29) is used, where the weight vector w_j = (w_{j,1}, ..., w_{j,M})^T satisfies w_{j,i} = 0 if i = j and w_{j,i} = 1 otherwise; each zero component is replaced by a small value of 10^{-6}.
Traversing each objective function, the individuals with the lowest ASF values are found; these are the extreme points. These points and the origin (the ideal point) define three lines, which form a hyperplane, as shown in Figure 4. The intersections a_i between this surface and the three axes are the final intercepts. After finding the intercepts, normalization is carried out through Equation (30).
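The following Python sketch illustrates this normalization step under simplifying assumptions: the weight convention follows the common NSGA3 formulation (weight 1 on the axis under consideration, 10^-6 elsewhere), and the intercepts a_i are approximated by the extreme-point coordinates instead of being solved from the hyperplane through the extreme points; Equations (28)-(30) themselves are not reproduced in this extraction.

```python
def normalize_population(objs, eps=1e-6):
    """Adaptive normalization (sketch): translate by the ideal point z_i,
    locate extreme points with the achievement scalarizing function (ASF),
    and divide by approximate axis intercepts."""
    m = len(objs[0])
    ideal = [min(f[i] for f in objs) for i in range(m)]                 # ideal point z_i
    translated = [[f[i] - ideal[i] for i in range(m)] for f in objs]    # translation step

    def asf(f, axis):
        # weight 1 on the considered axis, eps on the others
        return max(f[i] / (1.0 if i == axis else eps) for i in range(m))

    extremes = [min(translated, key=lambda f: asf(f, i)) for i in range(m)]
    intercepts = [max(extremes[i][i], eps) for i in range(m)]           # simplified a_i
    return [[f[i] / intercepts[i] for i in range(m)] for f in translated]

print(normalize_population([[1.0, 5.0], [3.0, 2.0], [2.0, 4.0]]))
```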
Association Operation
After normalization, the individuals need to be associated with the reference points. The line formed by a reference point and the origin is used as the reference line. For each individual, all reference lines are traversed to find the nearest one, and the corresponding reference point and the shortest distance are recorded. The distance from a population individual to a reference line is measured by the perpendicular distance.
As shown in Figure 5, suppose u is the projection of f(x) on reference line L, d_{j,1}(x) is the distance between the origin and u, and d_{j,2}(x) is the perpendicular distance from f(x) to line L. The distances can be calculated as in [47]. After the association, each reference point has a count ρ_j of the individuals associated with it.
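A minimal sketch of the association step, assuming already-normalized objective vectors and reference points on the unit hyperplane; d_{j,1} is the projection length and d_{j,2} the perpendicular distance, following the description above. Note that in the full algorithm the niche counts ρ_j used for selection count only the already-accepted members (S_t excluding F_L); here all inputs are counted.

```python
import math

def perpendicular_distance(f, ref):
    """Distance from vector f to the line through the origin and reference point ref."""
    norm = math.sqrt(sum(r * r for r in ref))
    u = [r / norm for r in ref]                          # unit vector along the reference line
    d1 = sum(fi * ui for fi, ui in zip(f, u))            # d_{j,1}(x): projection length
    return math.sqrt(sum((fi - d1 * ui) ** 2 for fi, ui in zip(f, u)))   # d_{j,2}(x)

def associate(objs, ref_points):
    """Nearest reference line for each individual, plus niche counts rho_j."""
    assoc, rho = [], [0] * len(ref_points)
    for f in objs:
        dists = [perpendicular_distance(f, r) for r in ref_points]
        j = min(range(len(ref_points)), key=dists.__getitem__)
        assoc.append((j, dists[j]))
        rho[j] += 1
    return assoc, rho

print(associate([[0.9, 0.1], [0.2, 0.8]], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]))
```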
Niche-Preservation Operation
A reference point may have one or more population individuals associated with it, or none at all. Denote the niche count of the j-th reference point as ρ_j, and select the reference point j with the minimum ρ_j.
If ρ_j = 0, there is no solution in the already-selected population associated with this reference point. If there are solutions in the last front F_L associated with it, the one with the smallest perpendicular distance to reference point j is selected; otherwise, the reference point is excluded for the current generation.
If ρ_j ≥ 1, a solution associated with the reference point from the last front F_L is randomly selected and added to the population.
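The niching step described in the two cases above can be sketched as follows; assoc_last is assumed to map each reference-point index to the (distance, individual) pairs of the F_L members associated with it, and rho holds the niche counts from the already-accepted members. Reference points with no F_L candidates are simply skipped here, a simplification of the removal rule in the text.

```python
import random

def niche_select(K, rho, assoc_last):
    """Pick K members of the last front F_L using the niche counts rho_j."""
    rho = list(rho)
    candidates = {j: sorted(v) for j, v in assoc_last.items() if v}
    chosen = []
    while len(chosen) < K and candidates:
        j = min(candidates, key=lambda k: rho[k])        # least-crowded reference point
        if rho[j] == 0:
            _, ind = candidates[j].pop(0)                # closest F_L member to line j
        else:
            _, ind = candidates[j].pop(random.randrange(len(candidates[j])))
        chosen.append(ind)
        rho[j] += 1
        if not candidates[j]:
            del candidates[j]
    return chosen

print(niche_select(2, [0, 1], {0: [(0.1, "a"), (0.3, "b")], 1: [(0.2, "c")]}))
```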
Genetic Operations to Create Offspring Population
In NSGA3, after P_t+1 is formed, the offspring population Q_t+1 is created by randomly selecting parents from P_t+1 and applying conventional genetic operators (crossover and mutation).
Selection of Optimal Compromise Solution
In this paper, the fuzzy decision method [48] is used to select the optimal compromise solution from the Pareto optimal solution set. A membership function u_ij is first assigned to the j-th objective value f_ij of the i-th Pareto solution; the normalized membership function u_i of the i-th Pareto solution is then computed from the u_ij.
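The membership formulas are not reproduced in this extraction. The form commonly used with the fuzzy decision method of [48], written with the f_ij and N_p of Table A1, is as follows (stated here for a minimized objective such as cost; for the maximized objectives the numerator is reversed, which is an assumption about the paper's exact convention):

u_{ij} = \frac{f_j^{max} - f_{ij}}{f_j^{max} - f_j^{min}},
\qquad
u_i = \frac{\sum_{j=1}^{M} u_{ij}}{\sum_{i=1}^{N_p} \sum_{j=1}^{M} u_{ij}},

and the Pareto solution with the largest u_i is taken as the compromise solution.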
Probabilistic Power Flow Based on Monte Carlo Sampling
The power flow calculation is the basis for the optimization analysis of the ADN in this paper; owing to the probabilistic nature of renewable energy and the load response, the power flow is uncertain. To deal with this, probabilistic power flow based on Monte Carlo sampling is used.
Monte Carlo sampling is used to generate a large number of deterministic scenarios based on the probability distribution characteristics and the limits of renewable distributed power generation and controllable load response. Assume the number of sampling times is k. The random vectors of each controllable load and renewable power supply are obtained as in Equation (34). A large number of random samples are obtained under the given constraints, and deterministic power flow calculations are then carried out to obtain the probability characteristics of the node voltages and branch power flows.
The estimated probability is justified by the law of large numbers; that is, a chance constraint such as (17) or (18) holds if and only if the corresponding probability condition is satisfied.
Assume the number of scenarios that satisfy the chance constraint is k'. The more Monte Carlo scenarios are generated, the closer the estimated probability k'/k is to the probability that the actual chance constraint is satisfied.
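A minimal sketch of this probability estimation; sample_scenario and constraint_ok are hypothetical placeholders for the scenario generation of Equation (34) and the deterministic power-flow/constraint check, and the toy voltage example is purely illustrative.

```python
import random

def chance_constraint_probability(sample_scenario, constraint_ok, k=1000):
    """Monte Carlo estimate k'/k of the probability that a chance constraint holds."""
    k_ok = sum(1 for _ in range(k) if constraint_ok(sample_scenario()))
    return k_ok / k

if __name__ == "__main__":
    scenario = lambda: random.gauss(1.0, 0.02)      # e.g. a sampled bus voltage in p.u.
    ok = lambda v: 0.95 <= v <= 1.05                # voltage limit check
    print(chance_constraint_probability(scenario, ok, k=5000))
```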
A flowchart of multiobjective optimal scheduling of the ADN based on the NSGA3 algorithm is shown in Figure 6.
Discussion
The modified IEEE 33-bus distribution system shown in Figure 7 is used in this paper to carry out the analysis. The loads at buses 22 to 32 are assumed to participate in demand-side management, which can reduce the load ratio by 10%, and the schedulable time is from 08:00 to 20:00. Two PV cells of 500 kW are installed at buses 9 and 17. Two WTs of 600 kW are installed at buses 4 and 32. Two MTs of 500 kW are installed at buses 8 and 15. Two 500 kW ESSs are installed at buses 17 and 32, whose SOC range is 5%-95% [49]. The on-grid price of WTs is assumed to be 0.30 CNY/kWh, that of PV cells 0.50 CNY/kWh, and that of MTs 0.40 CNY/kWh. Other specific parameters are given in Table A1. The time-of-use price and the daily forecasting curves of loads, WTs, and PV cells are shown in Figure 8 [50].
To reveal the coordinating role of the source, network, and load in the scheduling, the Pareto front solution set is simulated and analyzed according to the scheduling model proposed in this paper. The solution set, shown in Figure 9, consists of optimal solutions. In practice, the decision-makers can choose the final best solution according to the specific expectations for the distribution network. In this paper, the fuzzy decision method is used to analyze the optimization results. The solution with the largest membership function value is chosen as the final best solution, as shown in Figure 9. This solution contains not only the information of the decision variables, including the DG output power, the switch number, and the incentive price for the responsive load, but also the objective values. The corresponding operating cost is 36811.14 CNY, the renewable energy utilization rate is 0.3909, and the user satisfaction is 0.8917.

The active power output plan of the MT and ESSs of the best solution is shown in Figure 10. ESS1 is the energy storage at the PV bus and ESS2 is the energy storage at the WT bus. The MT is used when the load is heavy. On the one hand, the local power supply can reduce the power loss; on the other hand, it can save electricity purchase cost during the peak period of high electricity prices. The ESSs are scheduled to charge during the daytime when the renewable DG output is large, discharge at the peak of high electricity prices in the evening, and charge at the low-price valley in the early morning. In this way, the ESSs can smooth the fluctuation of renewable DG sources, clip peaks and fill valleys, and provide strong support for the economic and safe operation of the ADN.

The daily plan for load reduction and network topology adjustment is shown in Table A2. Load reduction brings additional demand-side management costs, so the scheduling plan only performs a small amount of load reduction at peak electricity prices. The flexible topology of the ADN is beneficial to reduce network loss, improve voltage quality, and reduce system uncertainty.
Figure 11 depicts the voltage uncertainty range of the network at various times. It can be seen that, after adopting the optimized scheduling scheme, the voltage level in all periods is within the acceptable range and the uncertainty range is acceptable. Figure 12 shows the probability density curve of power loss under the influence of uncertainty over 24 hours in the ADN. It can be seen that, through source-network-load scheduling, the power loss stays at a low level throughout the day and its fluctuation is small.

To reveal the coordinated role of source-network-load control in ADN scheduling, the three scenarios in Table 1 are simulated and analyzed according to the proposed scheduling model. The obtained objective function values are listed in Table 2. The optimization results are contrasted from three aspects: minimum bus voltage, bus voltage uncertainty range, and maximum power loss, shown in Figures 13-15.

From Table 2, the operating cost of case 1 is the lowest due to the reasonable source-network-load scheduling. In case 2, source-load control requires more controllable DGs and responsive load to participate in scheduling, thus reducing renewable energy utilization and user satisfaction. In case 3, the absence of source-network-load control leads to a high operating cost; without the management of DGs and demand-side loads, renewable energy utilization remains at a high level and user satisfaction is not affected.
Moreover, it can be seen from the results that in cases 2 and 3, when the load is heavy, there is a certain risk of voltage violation, for instance at 13:00, and the source-network-load scheduling scheme significantly improves this. In addition, the voltage uncertainty range of case 1 is also lower than in the other two cases. The total power loss of case 1 is reduced by 30.72% compared with case 3 and by 16.80% compared with case 2. On the whole, under coordinated source-network-load scheduling, the minimum voltage profile, the range of voltage uncertainty, and the maximum power loss over 24 hours all improve on the other two cases, which shows that the operating state of the active distribution network can be effectively improved by coordinated source-network-load control and confirms the effectiveness of the proposed method.
The effectiveness of the algorithm is verified by comparing it with NSGA2. For the above example, a compromise optimal solution is obtained by the fuzzy decision method, and the corresponding voltage variation is acquired, as shown in Table 3 and Figure 16. The results show that the reference-point-based NSGA3 algorithm performs better than the crowding-distance-based NSGA2 when dealing with the three-objective optimization model.
Conclusions
To study how to make full use of the controllable resources in an active distribution network, this paper proposes a multiobjective scheduling strategy. The proposed source-network-load strategy conforms to the development trend of the ADN. The study results show that source-network-load coordinated control performs better than other control methods, such as source-load control. Moreover, the proposed scheduling method can reduce the system voltage violation risk and ensure customer satisfaction while saving operating cost. Our future work will investigate the influence of uncertainty in network impedances on the scheduling method. Another important avenue for future research is to consider other controllable resources, such as electric vehicles.
Figure 1. The scheduling strategy of source-network-load. DG: distributed generation; ESS: energy storage system.
Figure 2. User response characteristic curve under certain incentive level.
Figure 3. Reference points on a hyperplane.
Figure 4. Computing intercepts and forming the hyperplane from extreme points.
Figure 5. Association of population members with reference points.
Figure 6. Flowchart of multiobjective optimal scheduling of the active distribution network (ADN).
Figure 8. Time-of-use price curve and daily forecasting curve of loads, WTs, and PV cells.
Figure 9. Multiobjective optimization results: (a) Pareto fronts under source-network-load optimization, and (b) normalized membership functions of different Pareto solutions.
Figure 10. Daily scheduling of active power output of MTs and ESSs.
Figure 11. Uncertain range of daily voltage profile curve: (a) lower limit of daily voltage profile, and (b) upper limit of daily voltage profile.
Figure 12. Probability density curve of power loss in a day.
Figure 13. Comparison of minimum bus voltage in a day.
Figure 14. Comparison of bus voltage uncertain range in a day.
Figure 15. Comparison of maximum power loss in a day.
Figure 16. Comparison of minimum bus voltage of different algorithms.
Table 1. Classification of simulation cases.
Table 2. Comparison of objective function values of different cases.
Table 3. Compromise optimal solution comparisons of different algorithms. Columns: the uncertain range of bus voltage (p.u.) and the maximum power loss (MW).
Table A1. Simulation parameters.

Nomenclature:
…, P_DG,i,t: electricity purchased from the power grid and from DGs
c_grid,t, c_DG,i: unit electricity price from the power grid and DGs at time t
N_DG: number of DGs
F_DG,t, M_DG,t, D_DG,t: fuel cost, operation and maintenance cost, and depreciation cost
c_DG, P_DG,t, η_DG,t: unit fuel cost, active power of MT, and MT operation efficiency
m_0, m_1, m_2, m_3: characteristic parameters related to efficiency and per-unit active power
P_IL,i,t: active power of interruptible loads
P_PV,t, P_WT,t, P_MT,t: total output power of WTs, PV cells, and MTs at time t
N_IL: number of responsive loads
T_il,n: reduction time of the nth interruptible load
T_Il,n: total electricity consumption time of the nth interruptible load
P_i,t, Q_i,t: injected active and reactive power at time t of bus i
U_i,t, U_j,t: voltage amplitude at time t of buses i and j
G_ij, B_ij: conductance and susceptance of the line between buses i and j
θ_ij,t: phase difference between buses i and j
I_i,t: actual current amplitude of branch i at time t
I_i,min, I_i,max: lower and upper limits of the current amplitude of a branch
V_i: voltage amplitude of node i
V_i,min, V_i,max: lower and upper limits of the voltage amplitude of a node
γ_U, γ_I: confidence levels
N_ts, N_br, N_bus, N_s: numbers of tie switches, branches, nodes, and power sources
sw_max: maximum permissible operating times for switches
P_DG,i,min, P_DG,i,max: lower and upper limits of the generation power of DGs
S_SOC,t: state of charge in period t
P_ESS,t: battery interaction power in period t
S_ESS: battery capacity
S_SOC,i,max, S_SOC,i,min: upper and lower limits of the state of charge
P_ESS,i,max, P_ESS,i,min: upper and lower limits of the battery interaction power
f_ij: the jth objective value of the ith Pareto solution
N_p: number of Pareto solutions
λ_RL, P_WT, P_PV: values of RL, WT, and PV generated by Monte Carlo sampling
Table A2. Reconfiguration and compensation schemes. | 12,597.2 | 2018-10-11T00:00:00.000 | [ "Engineering" ]
Nonlinear response of a driven vibrating nanobeam in the quantum regime
We analytically investigate the nonlinear response of a damped doubly clamped nanomechanical beam under static longitudinal compression which is excited to transverse vibrations. Starting from a continuous elasticity model for the beam, we consider the dynamics of the beam close to the Euler buckling instability. There, the fundamental transverse mode dominates and a quantum mechanical time-dependent effective single particle Hamiltonian for its amplitude can be derived. In addition, we include the influence of a dissipative Ohmic or super-Ohmic environment. In the rotating frame, a Markovian master equation is derived which includes also the effect of the time-dependent driving in a non-trivial way. The quasienergies of the pure system show multiple avoided level crossings corresponding to multiphonon transitions in the resonator. Around the resonances, the master equation is solved analytically using Van Vleck perturbation theory. Their lineshapes are calculated resulting in simple expressions. We find the general solution for the multiple multiphonon resonances and, most interestingly, a bath-induced transition from a resonant to an antiresonant behavior of the nonlinear response.
Introduction
The experimental realization of nanoscale resonators which show quantum mechanical behavior [1,2,3,4,5] is currently on the schedule of several research groups worldwide and poses a rather non-trivial task. Important key experiments on the way to this goal have already been reported in the literature [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20] and are also reviewed in this Focus Issue. Most techniques to reveal the quantum behavior so far address the linear response in form of the amplitude of the transverse vibrations of the nanobeam around its eigenfrequency. The goal is to excite only a few energy quanta in a resonator held at low temperature. To measure the response, the ultimate goal of the experiments is to increase the resolution of the position measurement to the quantum limit [11,17,18,21,22]. As the response of a damped linear quantum oscillator has the same simple Lorentzian shape as the one of a damped linear classical oscillator [23], a unique identification of the "quantumness" of a nanoresonator in the linear regime can sometimes be difficult.
One possible alternative is to study the nonlinear response of the nanoresonator which has been excited to its nonlinear regime. A macroscopic beam which is clamped at its ends and which is strongly excited to transverse vibrations displays the properties of the Duffing oscillator being a simple damped driven oscillator with a (cubic) nonlinear restoring force [24]. Its nonlinear response displays rich physical properties including a driving induced bistability, hysteresis, harmonic mixing and chaos [24,25,26]. The nonlinear response of (still classical) nanoscale resonators has been measured in recent experiments [9,10,11,16]. In the range of weak excitations, the standard linear response arises while for increasing driving, the characteristic response curve of a classical Duffing oscillator has been identified.
No signatures of a quantum behavior in the nonlinear response of realized nanobeams have been reported up to present. One reason is that a nanomechanical resonator is exposed to a variety of intrinsic as well as extrinsic damping mechanisms depending on the details of the fabrication procedure, the experimental conditions and the used materials [20,27,28,29]. Possible extrinsic mechanisms include clamping losses due to the strain at the connections to the support structure, heating, coupling to higher vibrational modes, friction due to the surrounding gas, nonlinear effects, thermoelastic losses due to propagating acoustic waves, surface roughness, extrinsic noise sources, dislocations, and other material-dependent properties. An important internal mechanism is the interaction with localized crystal defects. Controlling this variety of damping sources is one of the major tasks to be solved to reveal quantum mechanical features. Recent measurement show that in the so far realized devices based on silicon and diamond structures, damping has been rather strong at low frequencies [27,28] indicating even sub-Ohmic-type damping [23] which would make it difficult to observe quantum effects at all. However, using freely suspended carbon nanotubes [30,31] instead could reduce damping at low frequencies due to the more regular structure of the long molecules which can be produced in a very clean manner. Further experimental work is required to clarify this point and to optimize the experimental conditions. Nanoscale nonlinear resonators in the quantum regime have been investigated theoretically starting from microscopic models based on elasticity theory for the beam [32,33,34]. Carr, Lawrence and Wybourne have considered an elastic bar under static longitudinal compression beyond the Euler instability leading to two stable equilibrium positions around which the transverse vibrations of the beam occur. It turned out that quantum tunneling between the two minima is in principle possible in silicon beams and carbon nanotubes. However, the strain has to be controlled with extreme accuracy and the quantum fluctuations in position are of the order of 0.1Å. The detection of such small lengths certainly is challenging. However, a possible method to increase the resolution could be the use of the phenomenon of stochastic resonance [35] for a coherent signal amplification of the nonlinear response of nanomechanical resonators in their bistable regime [36]. Werner and Zwerger [33] have studied a similar setup close to the Euler buckling instability which occurs at a critical strain ǫ c . There, the frequency of the fundamental mode vanishes and quartic terms in the Lagrangian have to be taken into account. An effective Hamiltonian has been derived for the amplitude of the fundamental mode being the dynamical variable which moves in an anharmonic potential. Depending on the strain ǫ being below (ǫ < ǫ c ) or above (ǫ > ǫ c ) the critical value, a monostable or bistable situation can be created. The conditions for macroscopic quantum tunneling to occur have been estimated for the bistable case. In order to measure single-phonon transitions in a nanoresonator, it has been proposed to use its anharmonicity together with a second nanoresonator acting as a transducer for the phonon number in the first one [37]. In this way, the measured signal being the induced current is directly proportional to the position of the read-out oscillator.
In Ref. [34], we have considered a similar setup but restricted to the statically monostable case below the Euler instability, i.e., for ǫ < ǫ c . In addition, we have allowed for a time-dependent periodic driving force F (t) such that an effective monostable quantum Duffing oscillator arises. Possible origin of the driving can be the magnetomotive force when an ac current is applied and the beam is placed in a transverse magnetic field. Moreover, a (weak) influence of the environment has been modeled phenomenologically by a simple Ohmic harmonic bath. The nonlinear response has been determined numerically from solving a Born-Markovian master equation for the reduced density operator of the system after the bath has been traced out. We have identified discrete multiphonon transitions as well as macroscopic quantum tunneling of the fundamental mode amplitude between the two stable states in the driving induced bistability. Moreover, a peculiar multiphonon antiresonant behavior has been found in the numerical results for the damped system [38]. The discrete multiphonon (anti-)resonances are a typical signature of quantum mechanical behavior [34,38] and are absent in the corresponding classical model of the standard Duffing oscillator [24,25,26], also when thermal fluctuations are included [39].
While we have approached the problem in Refs. [34,38] by numerical means, we present in this work a complete analytical investigation of the dynamics of the quantum Duffing oscillator. We intend to elucidate the mechanism behind the reported [38] bath-induced transition from the resonant to the antiresonant nonlinear response of the nanobeam. This is achieved by solving a Born-Markovian master equation for the reduced density operator in the rotating frame. Within the rotating wave approximation (RWA), a simplified system Hamiltonian follows whose eigenstates are the quasienergy states. The corresponding quasienergies show avoided level crossings when the driving frequency is varied. They correspond to multiphonon transitions occurring in the resonator. Moreover, we include the dissipative influence of an environment and find that the dynamics around the avoided quasienergy level crossings is well described by a simplified master equation involving only a few quasienergy states. Around the anticrossings, we find resonant as well as antiresonant nonlinear responses depending on the damping strength. The underlying mechanism is worked out in the perturbative regime of weak nonlinearity, weak driving and weak damping. There, Van Vleck perturbation theory allows to obtain the quasienergies and the quasienergy states analytically. The master equation can then be solved in the stationary limit and subsequently, the line shapes of the resonant as well as the antiresonant nonlinear response can be calculated.
The problem of a driven quantum oscillator with a quartic nonlinearity has been investigated theoretically in earlier works in various contexts. In the context of the radiative excitation of polyatomic molecules [40], Larsen and Bloembergen have calculated the wave-functions for the coherent multiphoton Rabi precession between two discrete levels for a collisionless model. More recently, also Dykman and Fistul [41] have considered the bare nonlinear Hamiltonian under the rotating wave approximation. Drummond and Walls [42] have investigated a similar system occurring for the case of a coherently driven dispersive cavity including a cubic nonlinearity. Photon bunching and antibunching have been predicted upon solving the corresponding Fokker-Planck equation. Vogel and Risken [43] have calculated the tunneling rates for the Drummond-Walls model by use of continued fraction methods. Dmitriev, D'yakonov and Ioffe [44] have calculated the tunneling and thermal transition rates for the case when the associated times are large. Dykman and Smelyanskii [45] have calculated the probability of transitions between the stable states in a quasi-classical approximation in the thermally activated regime. Recently, the role of the detector (in this case, a photon detector) has been studied for the quantum Duffing oscillator in the chaotic regime [46]. The power spectra of the detected photons carry information on the underlying dynamics of the nonlinear oscillator and can be used to distinguish its different modes. However, the line-shape of the multi-phonon resonance which is the central object for studying the nonlinear response remained unaddressed so far. In addition, we start from a microscopic Hamiltonian for the bath and present a fully analytical treatment of system and environment in the deep quantum regime of weak coupling.
Our paper is structured as follows: we introduce the elasticity model for the doubly clamped nanobeam, derive the effective quartic Hamiltonian, and discuss the model for damping in Section 2. Then, we discuss the coherent dynamics of the pure system in terms of the RWA and the Van Vleck perturbation method in Section 3. The dissipative dynamics is studied in Section 4, while the observables are defined in Section 5. The solutions for the line shapes are given in Section 6 before the final conclusions are drawn in Section 7.
Model for the driven nanoresonator
We consider a freely suspended nanomechanical beam of total length L and mass density σ = m/L which is clamped at both ends (doubly clamped boundary conditions) and which is characterized by its bending rigidity µ = Y I, the product of Young's elasticity modulus Y and the moment of inertia I. In addition, we allow for a mechanical force F_0 > 0 which compresses the beam in the longitudinal direction. Moreover, the beam is excited to transverse vibrations by a time-dependent driving field F(t) = f̃ cos(ω_ex t). In a classical description, the transverse deflection φ(s, t), with 0 ≤ s ≤ L, characterizes the beam completely, and the Lagrangian of the vibrating beam follows from elasticity theory [33]. Before we study the dynamics of the driven beam, we first consider the undriven system with F(t) ≡ 0. For small deflections |φ′(s)| ≪ 1, the Lagrangian can be linearized and the time-dependent Euler-Lagrange equations can be solved by the eigenfunction expansion φ(s, t) = Σ_n φ_n(s, t) = Σ_n A_n(t) g_n(s), where the g_n(s) are the normal modes which follow as solutions of the characteristic equation. For the doubly clamped nanobeam, we have φ(0) = φ(L) = 0 and φ′(0) = φ′(L) = 0. However, this situation is closely related to the simpler case in which the nanobeam is also fixed at both ends but its ends can move such that the bending moments at the ends vanish, i.e., φ(0) = φ(L) = 0 and φ′′(0) = φ′′(L) = 0 (free boundary conditions). For free boundary conditions, the characteristic equation yields the normal modes g_n^free(s) = sin(nπs/L) and the corresponding frequency ω_n^free of the n-th mode. At the critical force F_c = µ(π/L)^2, the fundamental frequency ω_1^free(F_0 → F_c) vanishes as √ǫ, where ǫ = (F_c − F_0)/F_c is the distance to the critical force, and the well-known Euler instability occurs.
For doubly clamped boundary conditions, the characteristic equation is transcendental and cannot be solved analytically for the normal modes. However, close to the Euler instability F_0 → F_c, the situation simplifies again. After expanding, one finds an explicit expression for the fundamental frequency. Approaching the Euler instability, the frequencies of the higher modes ω_{n≥2} remain finite, while the fundamental frequency vanishes. Hence, the dynamics at low energies close to the Euler instability is dominated by the fundamental mode alone, which simplifies the treatment of the nonlinear case, see below. The fundamental mode g_1(s) can also be expanded close to the Euler instability, and one obtains to zeroth order in ǫ g_1(s) ≃ sin^2(πs/L). (3)
Effective single-particle Hamiltonian
Since the fundamental frequency vanishes when F_0 → F_c, one has to include contributions beyond the quadratic terms ∝ φ′², φ′′² of the transverse deflections in the Lagrangian. The next higher order is quartic and yields terms ∝ φ′⁴, φ′²φ′′². Inserting the normal mode expansion into the Lagrangian generates self-coupled modes Σ_k A_k^4 as well as coupling terms Σ_{k,l} A_k^2 A_l^2 between the modes. This interacting field-theoretic problem can no longer be solved exactly. However, since the fundamental mode dominates the dynamics at low energies close to the Euler instability, one can neglect the higher modes in this regime. Hence, we choose the ansatz φ(s, t) = A_1(t) g_1(s) in the regime F_0 → F_c and restrict the discussion in the rest of this work to this regime. The so-far classical field theory can be quantized by introducing the canonically conjugate momentum P ≡ −iℏ ∂/∂A_1, and the time-dependent driving force can straightforwardly be included. Note that when the driving frequency is close to the fundamental frequency of the beam, the fundamental mode will dominate also in the absence of a static longitudinal compression force. However, a compression force helps to enhance the nonlinear effects which are in the focus of this work. After all, an effective quantum mechanical time-dependent Hamiltonian results, Eq. (4), which describes the dynamics of a single quantum particle with "coordinate" X ≡ A_1 in a time-dependent anharmonic potential, with the effective mass m_eff = 3σL/8 and the nonlinearity parameter α = (π/L)^4 F_c L(1 + 3ǫ). The classical analogue of this system is the Duffing oscillator [24] (when, in addition, damping is included, see below). It shows a rich variety of features including regular and chaotic motion. In this work, we focus on the parameter regime where only regular motion occurs. For weak driving strengths, the response as a function of the driving frequency ω_ex has the well-known form of the harmonic oscillator with the maximum at ω_ex = ω_1. For increasing driving strength, the resonance grows and bends away from the ω_ex = ω_1 axis towards larger frequencies (since α > 0). The locus of the maximal amplitudes is often called the backbone curve [24]. The corresponding nonlinear response of the quantum system shows clear signatures of sharp multiphonon resonances whose line shapes will be calculated below.
Phenomenological model for damping
In our approach, we do not intend to focus on the role of the microscopic damping mechanisms, as this depends on the details of the experimental device. Instead, we introduce damping phenomenologically in the standard way [23] by coupling the resonator Hamiltonian Eq. (4) to a bath of harmonic oscillators with a spectral density characterized by the damping constant γ_s and a cut-off frequency ω_c. Our results discussed below are valid for an Ohmic (s = 1, γ_1 ≡ γ) as well as for super-Ohmic (s > 1) baths. Sub-Ohmic baths will not be considered here since the weak-coupling assumption which allows the Markov approximation no longer holds. Formally, the coefficients in the master equation would diverge in the sub-Ohmic case, see Eq. (29).
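The explicit bath Hamiltonian and spectral density are not reproduced in this extraction. A standard parametrization consistent with the stated Ohmic and super-Ohmic cases is the following, where the reference frequency ω_ph and the exact prefactor are assumptions rather than quantities taken from the source:

J(\omega) = m_{eff}\, \gamma_s\, \omega^{s}\, \omega_{ph}^{1-s}\, e^{-\omega/\omega_c},

with s = 1 (Ohmic, γ_1 ≡ γ) or s > 1 (super-Ohmic) and an exponential cut-off at ω_c.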
To proceed, we scale H_tot(t) to dimensionless quantities such that energies are in units of ℏω_1 while lengths are scaled in units of x_0 ≡ √(ℏ/(m_eff ω_1)). Put differently, we formally set m_eff = ℏ = ω_1 = 1. The nonlinearity parameter α is scaled in units of α_0 ≡ ℏω_1/x_0^4, while the driving amplitudes are given in units of f_0 ≡ ℏω_1/x_0. Moreover, we scale temperature in units of T_0 ≡ ℏω_1/k_B, while the damping strengths are measured with respect to ω_1.
Coherent dynamics and rotating wave approximation (RWA)
Let us first consider the resonator dynamics without coupling to the bath. For convenience, we switch to a representation in terms of creation and annihilation operators a and a†, such that X = x_0(a + a†)/√2. Moreover, it is convenient to switch to the rotating frame by formally performing the canonical transformation R = exp[−iω_ex a†a t]. We are interested in the nonlinear response of the resonator around its fundamental frequency, i.e., for ω_ex ≈ ω_1, and will not consider the response at higher harmonics. We further assume that the driving amplitude f is not too large, such that the nonlinear effects are small enough not to enter the chaotic regime. This suggests a rotating wave approximation (RWA) of the full system Hamiltonian H(t) in Eq. (4), as the fast oscillating terms are negligible around the fundamental frequency for weak enough driving. By eliminating all the fast oscillating terms from the transformed Hamiltonian, one obtains the Schrödinger equation in the rotating frame, H̃|φ_α⟩ = ε_α|φ_α⟩, with the RWA Hamiltonian H̃ given in Eq. (7). Here, we have introduced the detuning ω̃ = ω_1 − ω_ex, the nonlinearity parameter ν = 3α/(4ω_1²), f = f̃(8ω_1)^{-1/2}, and n̂ = a†a. In the static frame, an orthonormal basis (at equal times) follows as |ϕ_α(t)⟩ = exp[−iω_ex a†a t]|φ_α⟩. The Hamiltonian (7) has been studied in Refs. [40,41]. The quasi-energy levels for a given number N of phonons are pairwise degenerate, ε_{N−n} = ε_n for n ≤ N, for vanishing driving f → 0 and ω̃ = −ν(N + 1)/2. For a finite driving strength f > 0, the exact crossings turn into avoided crossings, which is a signature of multiphonon transitions [34,41]. A typical quasienergy spectrum is shown in Fig. 1 for the parameters ν = 10^{-3} and f = 10^{-4}. The dashed vertical lines indicate the multiple avoided level crossings, which all occur at the same driving frequency. For |ε| = |2f/[ν(N + 1)]| ≪ 1, each pair of degenerate levels interacts only weakly with the other levels and acts effectively like a two-level Rabi system [40]. The Rabi frequency is related to the minimal splitting of the levels and is calculated perturbatively with ε as a small parameter in the following section.
Van Vleck perturbation theory
Let us therefore consider the multiphonon resonance at ω̃ = −ν(N + 1)/2. In addition, we are interested in the response around the resonance and therefore introduce a small deviation ∆, formally rewriting H̃ accordingly. Let us first discuss the dynamics at resonance (∆ = 0). We divide H̃ into an unperturbed part H_0 and a perturbation εV. The unperturbed Hamiltonian is diagonal, and near the resonance its spectrum is divided into well separated groups of nearly degenerate quasienergy eigenvalues. An appropriate perturbative method to diagonalize this type of Hamiltonian is Van Vleck perturbation theory [49,50,51]. It defines a unitary transformation yielding the Hamiltonian H̃ in an effective block-diagonal form. The effective Hamiltonian has the same eigenvalues as the original one, with the quasidegenerate eigenvalues in a common block. In our case, each block is a two-by-two matrix corresponding to the subspace formed by a pair of quasienergy states forming an anticrossing. Let us consider the effective Hamiltonian H̃'_n corresponding to the involved levels |n⟩ and |N − n⟩, which are eigenstates of the harmonic oscillator. The degeneracy in the corresponding block is lifted at order N − 2n in Van Vleck perturbation theory, and the block Hamiltonian takes the form of Eq. (12). This is the lowest order of the perturbed Hamiltonian which allows one to calculate the corresponding zeroth-order eigenstates. By diagonalizing H̃'_n in Eq. (12), one finds the minimal splitting Ω_{N,n} for the N-phonon transition. For the case away from resonance, we consider a detuning ∆ = ε^N δ. Within the Van Vleck technique, only the zeroth block is influenced by this higher-order correction; the other blocks given in Eq. (12) for n ≠ 0 are not affected. The eigenvectors of the Hamiltonian H̃ at zeroth order are obtained by diagonalizing H̃'_n in Eq. (12). One finds |φ_n⟩ = |n⟩ for n ≥ N + 1, |φ_n⟩ = (|n⟩ + |N − n⟩)/√2 and |φ_{N−n}⟩ = (|n⟩ − |N − n⟩)/√2 for 0 < n < N/2, and |φ_{N/2}⟩ = |N/2⟩ if N is even. Moreover, the eigenvectors of the zeroth block are parametrized by an angle θ introduced via tan θ = −2Ω_{N,0}/[ν(N + 1)N∆].
Dissipative dynamics in the presence of the bath
Having discussed the coherent dynamics, we now include the influence of the harmonic bath coupled to the driven system. We assume that the coupling is weak enough that the standard Markovian master equation for the reduced density operator ρ(t) can be applied. The influence of the bath enters through a superoperator containing the bath correlators, whose kernels depend on the environment temperature T. Moreover, U(t, t′) = T exp(i ∫_{t′}^{t} H(t″) dt″) is the propagator, with T the time-ordering operator. Next, we project the density matrix onto the orthonormal set |ϕ_α(t)⟩ = exp[−iω_ex a†a t]|φ_α⟩ and evaluate the matrix elements and their time derivatives. For the dissipative term, we need the matrix elements X_{αβ,+1} = X*_{βα,−1} = x_0⟨φ_α|a|φ_β⟩/√2 of the destruction operator in the rotating frame, together with the corresponding quantities N_{αβ,±1}, which are defined in terms of the bath density of states J(|ε|), the bosonic thermal occupation number, and the Heaviside function θ(x). Eq. (29) illustrates why we have to restrict to (super-)Ohmic baths, since N(ε) would diverge for s < 1 at low energies. During the calculation, the τ-integration in the double integrals in Eqs. (25) and (26) has been evaluated by using the representation ∫_0^∞ dτ exp(iωτ) = πδ(ω) + iP_p(1/ω), where P_p denotes the principal part. The contributions of the principal part result in quasienergy shifts of the order of γ_s, the so-called Lamb shifts. As usual, these have been neglected here.
The ingredients can now be put together to obtain the Markovian master equation in the static frame. Next, we perform a 'moderate rotating-wave approximation', which consists in averaging the time-dependent terms in the bath part over the driving period T_{ω_ex} = 2π/ω_ex. This is consistent with the weak-coupling assumption that dissipative effects on the dynamics are noticeable only on a time scale much larger than T_{ω_ex}. Under this approximation, the master equation takes the form of Eq. (32) with the dissipative transition rates given in Eq. (33). It is instructive to compare this master equation to the one in Ref. [52], which is given in terms of the full Floquet quasienergy states. The key difference here is that the density matrix is projected onto the approximate eigenvectors exp(−iω_ex a†a t)|φ_α⟩ rather than onto the exact Floquet solutions. As a consequence of the RWA, the sums in Eq. (33) only include the n = ±1 terms, indicating that only one-step transitions are possible, where n = −1 refers to emission and n = +1 to absorption. Consistent with the RWA, we can assume |ν|, |f|, |ω_ex − ω_1| ≪ ω_1, which yields |ε_α − ε_β| ≪ ω_ex. Hence, N_{αβ,+1} is the product of the bath density of states and the bosonic occupation number at temperature T; this corresponds to the thermally activated absorption of a phonon from the bath. On the other hand, N_{αβ,−1} given in Eq. (29) contains a temperature-independent term proportional to J(ω_ex) describing spontaneous emission.
Observable for the nonlinear response
Assuming ergodic dynamics of the full system, or equivalently that there is just one eigenvector ̺^∞ of the superoperator S in Eq. (32) corresponding to a vanishing eigenvalue, and that all other eigenvalues have negative real part, the asymptotic solution of Eq. (32) is given by this stationary eigenvector. To simplify the notation, we omit the superscript ∞ in the following and refer to the stationary state simply as ̺_{αβ} ≡ ̺^∞_{αβ}. We are interested in the mean value of the position operator in the stationary state. Using Eq. (23) yields ⟨X⟩ = A cos(ω_ex t + ϕ), with the oscillation amplitude A and the phase shift ϕ, where θ denotes the Heaviside function.
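Numerically, the stationary state defined this way is simply the kernel of the superoperator. The following is a minimal sketch, assuming S has already been assembled as a matrix acting on the vectorized density matrix in a finite truncation; S_matrix and dim are placeholder names, not quantities defined in the text:

import numpy as np

def stationary_state(S_matrix, dim):
    """Return the density matrix annihilated by the superoperator S.

    S_matrix : (dim*dim, dim*dim) complex array acting on vec(rho).
    dim      : Hilbert-space dimension of the truncation.
    """
    # Eigenvector of S whose eigenvalue is closest to zero.
    vals, vecs = np.linalg.eig(S_matrix)
    idx = np.argmin(np.abs(vals))
    rho = vecs[:, idx].reshape(dim, dim)
    # The zero eigenvector is defined only up to a factor; fix it by the trace.
    return rho / np.trace(rho)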
6. Analytical solution for the lineshape of the multiphonon resonance in the perturbative regime

When the driving frequency ω_ex is varied, the amplitude A shows characteristic multiphonon resonances at those values for which the quasienergy levels form avoided level crossings [34]. While in Ref. [34] these resonances have been studied numerically, it is the central result of this work to calculate their line shape analytically by solving the corresponding master equation in the Van Vleck perturbative regime. Within the limit of validity of the RWA, i.e., |ν|, |f|, |ω_ex − ω_1| ≪ ω_1, we have |ε_α − ε_β| ≪ ω_ex. In the regime of low temperature, k_B T ≪ ω_ex, it follows from Eq. (29) that N_{αβ,−1} ≃ J(ω_ex) and N_{αβ,+1} ≃ 0 in the transition rates of Eq. (33). This approximation corresponds to considering spontaneous emission only and yields simplified dissipative transition rates. Here, we have defined A_{αβ} ≡ ⟨φ_α|a|φ_β⟩. Note that it is consistent with the previous approximation to set ω_ex/ω_1 ≈ 1. Hence, all the following results are valid for Ohmic as well as super-Ohmic baths. In the following we use these simplified transition rates to solve the master equation near the multiphonon resonances. The transition between the ground state and the N-phonon state is the narrowest; hence, it will be affected first when a finite coupling to the bath is considered. In particular, it is interesting to consider the case when the damping constant γ_s is larger than the minimal splitting Ω_{N0} between the two quasienergy states but smaller than all the minimal splittings of the other pairs, i.e., Ω_{N0} < γ_s ≪ Ω_{Nn} for n ≥ 1. In this case, we can apply a partial secular approximation: we set all off-diagonal elements to zero except for ̺_{0N} and ̺_{N0} = ̺*_{0N}. In this regime the stationary solutions are determined by the conditions 0 = Σ_β L_{αα,ββ} ̺_{ββ} + 2 L_{αα,0N} Re(̺_{0N}). For very weak damping, i.e., when γ_s is smaller than all minimal splittings (γ_s ≪ Ω_{Nn}), the off-diagonal elements of the density matrix are negligibly small and can be set to zero. Within this approximation, the stationary solution for the density matrix is determined by a simple kinetic equation. In this regime, a very simple physical picture arises: the bath causes transitions between different quasienergy states, but the transition rates are independent of the quasienergies. It is instructive to express the quasienergy solutions in terms of the harmonic oscillator (HO) states as |φ_α⟩ = Σ_n c_{αn}|n⟩ with coefficients c_{αn}. The resulting formula for the transition rates between two quasienergy states illustrates simple selection rules in this low-temperature regime: only those components of the two quasienergy states contribute to the transition rate whose energies differ by one energy quantum (n ↔ n + 1).
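As a concrete illustration of the n ↔ n + 1 selection rule, a minimal sketch of the spontaneous-emission rate between two quasienergy states expanded in the HO basis is given below; the overall prefactor is written simply as gamma_s and does not reproduce the exact prefactor of the paper's rate expression:

import numpy as np

def decay_rate(c_alpha, c_beta, gamma_s):
    """Spontaneous-emission rate |phi_alpha> -> |phi_beta> (sketch).

    c_alpha, c_beta : coefficients of the states in the HO basis,
                      |phi> = sum_n c[n] |n>.
    gamma_s         : damping constant used as the assumed prefactor.
    Only components whose phonon numbers differ by one contribute, because
    <phi_beta| a |phi_alpha> = sum_n sqrt(n+1) * conj(c_beta[n]) * c_alpha[n+1].
    """
    n = np.arange(len(c_alpha) - 1)
    element = np.sum(np.sqrt(n + 1) * np.conj(c_beta[:-1]) * c_alpha[1:])
    return gamma_s * np.abs(element) ** 2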
One-phonon resonance vs. antiresonance
Before we consider the general multiphonon case, we first elaborate on the one-phonon resonance. This, in particular, allows us to make the connection to the standard linear response of a driven damped harmonic oscillator, which is resonant at the frequency ω_1 + ν. We illustrate the mechanism by which this resonant behavior turns into an antiresonant behavior when the damping is reduced (and the driving amplitude f is kept fixed). The corresponding effective Hamiltonian H̃′_0 follows from Eq. (15) and is readily diagonalized by the quasienergy states |φ_0⟩ and |φ_1⟩, which are of zero-th order in ε and are given in Eq. (16). The master equation (39) can be solved straightforwardly in terms of the rates L_{αβ,α′β′}, for which one needs the matrix elements A_{00} = −A_{11} = sin(θ/2) cos(θ/2), A_{01} = cos²(θ/2) and A_{10} = −sin²(θ/2). The general solution follows, with Im ρ_{01} = [Ω(∆)/(L_{01,01} − L_{01,10})] Re ρ_{01}, where Ω(∆) = ε_0 − ε_1. In the following, we calculate the amplitude A according to Eq. (36) to zero-th order in ε. In Fig. 2, we show the nonlinear response for the parameter set (in dimensionless units) f = 10⁻⁵ and ν = 10⁻³. Moreover, the one-phonon resonance condition reads ω_ex = ω_1 + ν. The transition from resonant to antiresonant behavior depends on the ratio γ/Ω_{10} = γ/(2f). For the case of stronger damping, γ/(2f) = 10, we find that the response shows a resonant behavior with a Lorentzian form similar to the response of a damped linear oscillator. In fact, the corresponding standard classical result is also shown in Fig. 2 (black dashed line). The only effect of the nonlinearity to lowest order in perturbation theory is to shift the resonance frequency by the nonlinearity parameter ν. The resonant behavior turns into an antiresonant one if the damping constant is decreased. A cusp-like line profile arises in the limit of very weak damping, when the damping strength is smaller than the minimal splitting, i.e., γ/(2f) ≪ 1. Then, the response follows from the master equation (40); this antiresonance lineshape is also shown in Fig. 2 (dotted-dashed line). At resonance, ∆ = 0, we have an equal population of the quasienergy states, ρ_{00} = ρ_{11} = 1/2, and both contributions add up to a vanishing oscillation amplitude A since A_{00} = −A_{11}. Note that we also show the solution of the exact master equation containing all orders in ε, for the case γ/(2f) = 0.5 and s = 1 (blue dashed line in Fig. 2), in order to verify the validity of our perturbative treatment.
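For orientation, the weak-damping antiresonance can be reproduced with a few lines from the matrix elements quoted above; this is only a sketch that keeps the θ-dependence and drops the overall prefactor supplied by Eq. (36):

import numpy as np

def one_phonon_amplitude_weak_damping(theta):
    """Amplitude A in the very-weak-damping limit, up to an overall prefactor.

    theta encodes the detuning through the mixing angle quoted in the text.
    Uses A00 = -A11 = sin(theta/2)cos(theta/2), A01 = cos^2(theta/2),
    A10 = -sin^2(theta/2); with spontaneous emission only, the kinetic
    equation gives rho00/rho11 = |A01|^2 / |A10|^2.
    """
    s2 = np.sin(theta / 2.0) ** 2
    c2 = np.cos(theta / 2.0) ** 2
    norm = c2 ** 2 + s2 ** 2
    rho00, rho11 = c2 ** 2 / norm, s2 ** 2 / norm
    # Only the diagonal elements contribute when coherences are negligible.
    return np.abs(np.sin(theta / 2.0) * np.cos(theta / 2.0)) * np.abs(rho00 - rho11)

At resonance theta = ±π/2, so rho00 = rho11 = 1/2 and the amplitude vanishes, which is precisely the cusp-like antiresonance discussed above.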
Multiphonon resonance vs. antiresonance
In this subsection we investigate the multiphonon resonances with N > 1.
In order to illustrate the physics, we start with the simplest case at resonance and within the secular approximation.
The transition rates between states belonging to the same pair are zero, with the exception L_{(N−1)/2 (N−1)/2, (N+1)/2 (N+1)/2} = γ_s (N + 1)/8. The dynamics can be illustrated with a simple analogy to a double-well potential. Each partner of the pair |φ_n⟩ and |φ_{N−n}⟩ of quasienergy states consists of a superposition of the two harmonic oscillator states |n⟩ and |N − n⟩, which are the approximate eigenstates of the static anharmonic potential in the regime of weak nonlinearity. In our simple picture, |n⟩ and |N − n⟩ should be identified with two localized states in the two wells of the quasienergy potential; see Fig. 3 for an illustration. Note that a quasipotential can be obtained by writing the RWA Hamiltonian in terms of the two canonically conjugate variables X and P [41]. The right/left well should be identified with the internal/external part of the quasienergy surface shown in Ref. [41].
In the figure, we have chosen N = 8. Within our analogy, the states |0⟩, |1⟩, ..., |N/2 − 1⟩ are localized in one (here, the left) well, while |N⟩, |N − 1⟩, ..., |N/2 + 1⟩ are localized in the other well (here, the right). The fact that the true quasienergy states are superpositions of the two localized states is illustrated by a horizontal arrow representing tunneling.
From Eq. (44) it follows that a bath-induced transition is only possible between states belonging to two different neighboring pairs. As discussed after Eq. (41), the only contributions to the transition rates come from neighboring HO states. In our case, we consider only spontaneous emission, which corresponds to intrawell transitions induced by the bath. This is shown schematically in Fig. 3 by the vertical arrows, with their thickness proportional to the transition rates. We emphasize that the bath-induced transitions occur towards lower-lying HO states. Consequently, in our picture, spontaneous decay happens downwards in the left well but upwards in the right well.
The driving field excites the transition from |0⟩ to |N⟩, while the bath generates transitions between HO states towards lower energies according to |N⟩ → |N − 1⟩ → ... → |0⟩ when only spontaneous emission is considered.
As a consequence, the ratio of the occupation numbers of two states belonging to two neighboring pairs is simply given by the ratio of the corresponding transition rates. Hence, the unpaired state |φ_{N/2}⟩ (for N even) or the states |φ_{(N−1)/2}⟩ and |φ_{(N+1)/2}⟩ (for N odd) are the states with the largest occupation probability; the full set of populations then follows by iteration.

6.2.2. Density matrix around the resonance

So far, we have discussed the dynamics exactly at resonance. Next, we consider the situation around the resonance and for an increased coupling to the bath. Therefore, we compute the stationary solution using the conditions in Eq. (39) and the general leading-order solution for the quasienergy states given in Eq. (16). The expressions for the rates which are modified compared to before follow straightforwardly and are given in the Appendix, as are the only three equations which change compared to the previous situation. These equations can be solved straightforwardly. Away from the resonance (|θ| ≪ 1), the density matrix follows accordingly. In the limit of strong coupling (γ_s ≫ Ω(∆)), one finds the same trivial result for any θ. This nicely illustrates that when the coupling to the bath is strong enough, the possibility of resonant tunneling between |0⟩ and |N⟩ is destroyed and a trivial asymptotic state results. This is true even if tunneling transitions between the other states are possible. Moreover, it also shows that moving away from resonance suppresses multiphonon tunneling transitions. In other words, the only requirement for the multiphonon transition to occur in the stationary limit is the possibility of the tunneling transition |0⟩ → |N⟩.
6.2.3. Lineshape around the resonance

Within our partial secular approximation, the lineshape of the oscillator nonlinear response given in Eq. (36) simplifies. The leading order is given by the zero-th order expression for ̺ and the first-order expressions for A_{αα}, A_{N0} and A_{0N}. In order to compute these matrix elements, we determine the first-order eigenvectors using Van Vleck perturbation theory, where S_1 is the first-order component in the expansion of S with respect to ε given in Eq. (11). The matrix elements of its off-diagonal blocks involve the eigenenergies E_α of the unperturbed Hamiltonian H_0 given in Eq. (10). The corresponding result for the nonlinear response for N = 2 is shown in Fig. 4 for the case ν = 10⁻³ and f = 10⁻⁴ and different values of γ_s/Ω_{20}. For strong damping, γ_s/Ω_{20} = 5, the resonance is washed out almost completely. With decreasing damping, a resonant lineshape appears whose maximum is shifted compared to the resonance condition ω_ex = ω_1 + 3ν/2. Note that the dashed line refers to the result which includes all orders in ε and follows from the numerical solution of the master equation for an Ohmic bath at temperature T = 0.1 T_0. The picture which arises is the following: for weak damping (γ_s ≪ Ω_{20}), the equilibrium state is a statistical mixture of quasienergy states. At resonance, the most populated state is |φ_1⟩, which oscillates with a phase difference of −π with respect to the driving. This is due to the negative sign of A_{11} in Eq. (54). Hence, at resonance the overall oscillation of the observable occurs with a phase difference of ϕ = −π. Far away from resonance, the most populated state is |φ_0⟩, see Eq. (49), which oscillates in phase with the driving; thus, the overall oscillation occurs in phase, i.e., ϕ = 0. If no off-diagonal element of the density matrix is populated (which is the case for weak damping), the overall phase is either ϕ = 0 or ϕ = −π. Hence, upon increasing the distance from resonance, the amplitude A has to go through zero, yielding a cusp-like lineshape. This implies the existence of a maximum in the response. For slightly larger damping, the finite population of the off-diagonal elements leads to a smearing of the cusp. For larger damping, the resonance is washed out completely, as already discussed, see Eq. (50); in this regime, the oscillation is in phase with the driving. By decreasing the damping, the population of the out-of-phase state starts to increase near the resonance, resulting in a reduction of the in-phase oscillation and thus producing a minimum of the response. This mechanism is effective for a broad range of parameters, including larger N, larger ε, and larger temperature T, as also shown numerically in Refs. [34,38]. Note that for N odd a calculation of the density matrix up to first order in ε is required, since the matrix elements A_{(N±1)/2,(N±1)/2} have a zero-th order term, in order that the overall result for A is again of first order in ε. Since one obtains more complicated expressions than before, we do not present them in full length. In Fig. 5, we show the behavior for N = 3 for various damping constants γ_s/Ω_{30} for the case f = 0.5 × 10⁻⁴ and ν = 10⁻³. For a large value of γ_s/Ω_{30}, the resonance is washed out completely. When the damping is decreased, a dip appears which corresponds to an antiresonance.
Decreasing the damping further, the antiresonance turns into a clear resonance. This behavior is opposite to the case N = 1 as discussed above, but similar to the case N = 2.
Conclusions
We have studied the nonlinear response of a vibrating nanomechanical beam to a time-dependent periodic driving. Thereby, a static longitudinal compression force is included and the system is investigated close to the Euler buckling instability. There, the fundamental transverse mode dominates the dynamics, and its amplitude can be described by an effective single-particle Hamiltonian with a periodically driven anharmonic potential with a quartic nonlinearity. Damping is modeled phenomenologically by a bath of harmonic oscillators. We allow for an Ohmic as well as a super-Ohmic spectral density and have considered the regime of weak system-bath coupling. In this regime, the dynamics is captured by a Born-Markovian master equation formulated in the frame which rotates with the driving frequency. The purely driven Hamiltonian shows avoided level crossings of the quasienergies which correspond to multiphonon transitions in the resonator. In fact, a transition between a resonant and an antiresonant behavior at the avoided level crossings has been found which depends on the coupling to the bath. Concentrating on driving frequencies around the avoided level crossings, the dynamics can be simplified considerably by restricting to a few quasienergy levels. In order to illustrate the basic principles governing the resonance-antiresonance transition, we investigate the perturbative regime of weak nonlinearity and weak driving strength. Then, Van Vleck perturbation theory allows one to calculate the quasienergies and the quasienergy states, and an analytic solution of the master equation becomes possible, yielding directly the nonlinear response. For the one-phonon case, we find a simple expression for the nonlinear response which displays a Lorentzian resonant behavior for strong damping. Reducing the damping strength, an antiresonance arises. For the multiphonon transitions, an antiresonance arises first when the damping is reduced. For even smaller values of the damping constant, the antiresonance turns into a resonant peak. This is due to a subtle interplay of varying populations of quasienergy states which is affected by the bath.
Finally, a comment on the observability of this effect is in order. The amplitude A measuring the nonlinear response is of the order of the oscillator length scale x_0. This makes it challenging to measure the effect directly, since the deterministic vibrations are on the same length scale as the quantum fluctuations. In turn, more subtle detection strategies have to be worked out, for instance, the capacitive coupling of the resonator to a Cooper pair box [54,55] or to single-electron transistors [56,57], the use of squeezed states in this setup [55], the use of a second nanoresonator as a transducer for the phonon number in the first one [37], or coherent signal amplification by stochastic resonance [36]. In any case, the experimental confirmation of the theoretically predicted effects remains to be provided.
Acknowledgments
This work has been supported by the DFG-SFB/TR 12.
Appendix: Density matrix around the multiphonon resonance

For completeness, we present in this Appendix the calculation of the density matrix around the multiphonon resonance which is required for Section 6.2.2. The expressions for the rates which are modified compared to before are readily calculated.

| 10,168.2 | 2005-12-07T00:00:00.000 | ["Physics"] |
A cantilevered liquid-nitrogen-cooled silicon mirror for the Advanced Light Source Upgrade
A cantilevered liquid-nitrogen-cooled silicon mirror is described that will achieve diffraction-limited performance even under extreme power density. This mirror will serve as a robust first optic for the high-brightness undulator beamlines at the upgraded Advanced Light Source.
Introduction
A project to upgrade the Advanced Light Source (ALS) is currently underway. This project (known as the ALS-Upgrade or ALS-U) includes a new soft X-ray 'FLEXON' beamline (FLuctuation and EXcitation of Orders in the Nanoscale) optimized for photon energies between 400 and 1400 eV with full polarization control. The current design of this beamline incorporates a 4 m-long apple X-type undulator (Schmidt & Calvi, 2018), a horizontally deflecting planar first mirror (referred to in this article as M1), a vertically deflecting monochromator, a horizontally deflecting refocusing mirror and an exit slit. This photon delivery system will serve ambitious research programs relying on highly coherent beams, requiring preservation of the wavefront under conditions of exceptionally high power density in the soft X-ray range.
The fundamental challenge in the mechanical design of M1 is to integrate the cooling and mounting system such that the distortion of the mirror, quantified in this paper as the root-mean-square (r.m.s.) height error, is acceptably small for the broad range of heat loads that correspond to the operating range of the undulator. The advantageous material properties of cryogenically cooled silicon and germanium have been well known in the synchrotron light source community since at least 1986 (Rehn, 1986;Bilderback, 1986). For high power applications it has been estimated that the slope error for liquid-nitrogen-cooled silicon is approximately 100 times smaller than for water-cooled silicon (Zhang, 1993). In the 1990s, this technology was tested in monochromator crystals (Comin, 1995;Knapp et al., 1995;Rogers et al., 1995;Meron et al., 1997) and it is now used routinely.
A variety of cryogenically cooled silicon monochromator designs have been developed which generally fall into one of two categories: directly or indirectly cooled. Indirectly cooled crystals are clamped between liquid-nitrogen-cooled copper blocks with a layer of indium foil (Carpentier et al., 2001;Lee et al., 2001;Mochizuki et al., 2001;Tamasaku et al., 2002;Zhang et al., 2003;Chumakov et al., 2004;Zhang et al., 2013;Huang et al., 2014). In a directly cooled crystal the liquid nitrogen is in direct contact with the crystal, which is clamped to a coolant manifold, with the fluid being contained against the silicon by a compressed metal seal (Lee et al., 2000;Rowen et al., 2001;Liu et al., 2014).
Silicon mirrors cooled with liquid nitrogen are significantly less common, but do exist (Polack et al., 2010;Brookes et al., 2018). Considerations in the design of a successful liquid-nitrogen-cooled silicon mirror system include, but are not limited to, carbon contamination of the optical surface, thermal strain between the mounting system and the mirror, mounting system stiffness, and thermal control, the latter three of which we will now briefly discuss. For a discussion of carbon contamination the reader is referred to Yao-Leclerc et al. (2011), Risterucci et al. (2012), Pellegrin et al. (2014) and Toyoshima et al. (2015). We plan to clean the gold-coated mirror while in operation, using the proven oxygen flow technique (Risterucci et al., 2012) in which the mirror is exposed to oxygen continuously, and the action of the undulator light is to create reactive radical species that prevent carbon formation. As an alternative, or for implementation of the cryogenically cooled mirror with coatings that are not compatible with oxygen, the mirror chamber can be equipped with in situ RF plasma cleaning ports. In this case the mirror temperature would be raised for cleaning and the cleaning gas mixture would depend on the mirror coating (Pellegrin et al., 2014).
The thermal strain in silicon on cooling from 295 to 125 K is approximately 2.5 × 10⁻⁴ m m⁻¹. To the extent that the mirror mounting structure applies a reaction force to the mirror in response to this contraction, the mirror will deform. Typically, synchrotron beamline optics are kinematically mounted to permit this thermal strain with minimal reaction force. Kinematic mounting can be accomplished with spheres and V-grooves or with flexures; in either case the six degrees of freedom of the optic are exactly constrained and rigid-body motion is prevented. Additionally, differential thermal expansion can be managed by controlling the temperature of the mounting system (Saveri Silva et al., 2017). However, the stiffness of the mounting system is also important because it partially determines the positional stability of the mirror. The system has to be designed so that vibration does not cause significant intensity noise. The criterion used is that the angular deflection of the mirror is less than 2.5% of the FWHM of the angular source size at the highest energy of the beamline, corresponding to an amplitude noise of 0.1%. For the worst case at ALS-U, this corresponds to an angular amplitude of ~27 nrad r.m.s. We estimate that, to minimize sensitivity to environmental noise and stay below these vibration limits, the M1 assembly must have a first natural frequency (FNF) above 200 Hz. While this FNF requirement does not necessarily preclude kinematic mounting, designing a sufficiently stiff non-kinematic overconstrained mounting scheme is much more straightforward. Another fundamental problem in the design of a kinematically mounted cooled mirror is the set of forces applied by the coolant lines to the mirror system. These forces vary in magnitude, direction and time with coolant pressure, temperature, flow rate and potentially also mirror alignment, and therefore are not easily characterized. To prevent unwanted motion or distortion of the mirror, the sphere and V-groove based kinematic mounts are often spring-loaded with sufficiently high preload that friction at the sphere to V-groove interface renders the mount non-kinematic. Intentionally non-kinematic designs also exist, one example being the water-cooled cantilever mirror designed for use on an NSLS beamline (Ice & Sparks, 1988).
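As a quick sanity check of the numbers quoted above, the stability criterion can be evaluated directly; the angular source FWHM used below is a hypothetical value chosen only to illustrate how the ~27 nrad figure follows from the 2.5% rule, since the actual source size is not restated here:

# Assumed angular source size (FWHM) at the highest photon energy; hypothetical value.
fwhm_angular_source = 1.1e-6          # rad
allowed_pitch_jitter = 0.025 * fwhm_angular_source
print(f"allowed mirror pitch jitter ~ {allowed_pitch_jitter * 1e9:.0f} nrad r.m.s.")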
Regarding thermal control, ideally the temperature of a cryogenically cooled silicon mirror or crystal is held near 125 K, where the instantaneous coefficient of thermal expansion (CTE) of silicon is approximately zero. To a first-order approximation, the temperature drop from the hottest part of the mirror to the coolant is proportional to the absorbed heat load. In the case of M1 the heat load varies by a factor of approximately five (45 to 220 W), depending on the undulator deflection parameter K, and therefore the temperature drop from the peak mirror temperature to the coolant would also vary by a factor of approximately five. One solution is to use electric heaters near the coolant manifold; ideally the extra heat load would be applied on the reflecting surface of the mirror, which could be achieved with the incident X-ray beam itself, by opening an upstream aperture as the K of the undulator is reduced. However, as we shall show, in the case of the current M1 neither heaters nor variable apertures are needed, as the optical performance is adequate even at temperatures significantly below 125 K.
In this paper we present the design of a novel liquid-nitrogen-cooled silicon mirror (Fig. 1); this end-cooled cantilever design addresses the fundamental challenges of thermal strain, mounting stiffness, unknown coolant line forces and thermal control, as described in the previous paragraph. This paper is divided into two main parts. In the first part we describe our design, outline our analytical approach to thermal tuning and present finite-element calculations of the thermoelastic distortion of the mirror. In the second part we first describe our method for calculating the r.m.s. height error, phase error and Strehl ratio from the finite-element results, and then present wavefront propagation simulations using the deformed mirror shape. Based on these calculations we predict that with a fixed pitch adjustment, but without any higher-order (for example circular) correction, our design will achieve a Strehl ratio greater than 0.85 for the entire operating range of the beamline for two polarization modes. With a correction achieved by adjusting the focal length by 7.5 mm, the minimum Strehl ratio is calculated to be 0.988.
Description of design
In the ALS-U FLEXON M1 (Fig. 1), one end of the silicon mirror substrate is clamped to a manifold made from a nickel-iron Invar alloy. Heat is transferred through the mirror substrate, through a layer of indium foil, across an array of pins machined into the manifold and into the flow of liquid nitrogen. The mirror is clamped to the manifold by a single screw and barrel nut. The clamping preload is set to achieve the required thermal contact conductance and is maintained at cryogenic temperatures by a spring washer. Translation of the substrate relative to the manifold is prevented by a hollow dowel pin in a hole concentric with the clamping screw, while rotation is prevented by a second pin in a slot.
The idea behind this design is to confine the deformation of the mirror substrate caused by thermal strain of the mirror and its mounting system to an optically insignificant part of the mirror. In other words, the center of the X-ray beam is located sufficiently far from the manifold that strain at the manifold-substrate interface does not significantly affect the shape of the reflecting area. In so doing we are freed from the need to mount the mirror substrate kinematically and can use a comparatively stiff overconstrained mounting system. The stiffness of this mounting system, combined with the tapered shape of the mirror, means that the first natural frequency of the mirror-manifold system is at 402 Hz, a factor of two greater than the design target for ALS-U optics. Thermal control is achieved by tuning the thermal resistance between the mirror substrate and the coolant, and is described in the next section.
Source considerations
In the case of M1, both the magnitude and spatial distribution of the absorbed power vary with the undulator deflection parameter K and polarization mode (Fig. 2). To maximize the flux from 400 to 1400 eV, the first harmonic of the undulator radiation would be used from K = 2.1 to K = 1, with a switch to the third harmonic at ~874 eV and thereafter using the range K = 2.6 to 1.9. For some applications, the beamline will be used down to 230 eV, reached by using the first harmonic to a maximum K of 3. The peak power density ranges from approximately 0.2 W mm⁻² at K = 1 to 1 W mm⁻² at K = 3. Over this same K range the total absorbed power ranges from 45 to 220 W (assuming a fixed aperture dimension corresponding to ±3 standard deviations of the spatial distribution of 230 eV photons). Additionally, the polarization mode of the undulator can be changed in approximately 3 s. Storage ring, undulator and M1 parameters are given in Table 1.

Figure 2

The heat load changes in both magnitude and shape depending on the undulator deflection parameter K and the polarization mode. Between K = 1 [panels (a) and (b)] and 3 [panels (c) and (d)] the peak power density ranges from 0.2 to 1 W mm⁻², while the orientation of the power distribution rotates by 90° between the linear horizontal [panels (a) and (c)] and vertical [panels (b) and (d)] polarization modes. Note the different axes scales. These heat loads were calculated using the SPECTRA code and account for the absorption spectrum of the mirror coating and grazing angle (Tanaka & Kitamura, 2001).
Figure 1
In the ALS-U FLEXON M1, one end of the silicon mirror substrate (labeled 1) is clamped to a cooling manifold (2) with a screw (3) and barrel nut (4). Preload at cryogenic temperatures is maintained by a spring washer (6). Indium foil is compressed between the substrate and manifold. Liquid nitrogen enters and exits the manifold via the welded-in fittings (5) and flows across an array of pins (9). Movement of the substrate relative to the manifold is prevented by a pair of hollow dowel pins, one in a slot (7) and one in a hole (8). The coordinate system used for finite-element calculations is shown at the center of the beam footprint (10), drawn as a dashed line for the 6σ dimensions of 230 eV photons. The beam grazing angle is 1.25°, which is exaggerated in the drawing for clarity.
Thermal tuning
We tuned the thermal resistance of our mirror system using a simple one-dimensional thermal-resistor model (Fig. 3). In this model the thermal resistance for conduction in the mirror substrate, R_s, is determined by the length L_s, thermal conductivity and cross-sectional area A_s of the substrate. The contact resistance at the substrate-manifold interface, R_i, is set by the thermal contact conductance and the interface area A_i. Note that we assume the resistance of conduction across the indium foil to be negligible. The conduction resistance in the manifold, R_m, is determined by the length L_m, thermal conductivity and cross-sectional area A_m of the manifold between the interface and the coolant. Finally, the convection resistance R_h is set by the convection film coefficient h and the total manifold-coolant interface area A_h. The temperature drop across the mirror substrate is T_1 − T_2, that across the substrate-manifold interface is T_2 − T_3, that across the manifold to the coolant interface is T_3 − T_4, and that from the coolant interface to the coolant bulk is T_4 − T_5; each drop is the product of the heat load q and the corresponding resistance. To these 'design equations' we also add the temperature rise of the coolant flowing through the manifold at a mass flow rate ṁ, where C_p is the specific heat capacity of the coolant and T_6 is the temperature of the coolant at the manifold outlet. We can choose up to 14 of the 19 variables in these five equations; for the purpose of designing this mirror, it is convenient to choose T_1, T_2, T_4, T_5, T_6, q, the substrate conductivity, L_s, the contact conductance, A_i, the manifold conductivity, A_m, A_h and C_p, and to solve for A_s, T_3, L_m, h and ṁ. If we assume that the coolant is liquid nitrogen at T_5 = 77 K, we can choose manifold-coolant interface and manifold outlet temperatures that limit the pressure necessary to prevent vaporization of the coolant, for example T_4 = 80 K and T_6 = 79 K. Because the instantaneous coefficient of thermal expansion of silicon is approximately zero at 125 K, we choose T_1 = 125 K, and because we would like to limit the thermal gradient in the substrate we choose a similar value for the minimum temperature, T_2 = 120 K. For a given undulator K and aperture size we know the power q. Assuming the substrate is silicon and the manifold is an Invar alloy, we know the two thermal conductivities. Based on published measurements of the thermal contact conductance (Yu et al., 1992;Asano et al., 1993;Khounsary et al., 1997;Marion et al., 2004) we assume a conservatively low value of 1500 W m⁻² K⁻¹. Once the mass flow rate and convection film coefficient are found, we find the dimensions of the pin array using the correlation developed by Zukauskas (1972), in which the constants C1, C2, m and n depend on the pin-array geometry, Re is the Reynolds number, Pr is the Prandtl number, Pr_s is the pin-surface Prandtl number, the remaining quantities are the thermal conductivity of the coolant and the pin diameter d. Despite its simplicity, this one-dimensional thermal-resistor model agrees with the three-dimensional finite-element model discussed in the next section.
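The solution of the five design equations is simple enough to write out directly. The sketch below follows the choose/solve split described above, using the elementary series-resistance relations and the coolant energy balance; the function and argument names are ours, and the relations are the textbook ones we assume the omitted equations correspond to:

def size_cooled_mirror(q, T1, T2, T4, T5, T6, k_s, L_s, h_contact, A_i,
                       k_m, A_m, A_h, c_p):
    """Solve the one-dimensional resistor model for the free design variables.

    Conduction resistances are R = L / (k A), contact and convection
    resistances are R = 1 / (h A), each temperature drop is q R, and the
    coolant energy balance is q = mdot * c_p * (T6 - T5).
    Returns (A_s, T3, L_m, h, mdot).
    """
    A_s = q * L_s / (k_s * (T1 - T2))        # substrate cross-section
    T3 = T2 - q / (h_contact * A_i)          # substrate-manifold interface temperature
    L_m = k_m * A_m * (T3 - T4) / q          # manifold conduction length
    h = q / (A_h * (T4 - T5))                # required convection film coefficient
    mdot = q / (c_p * (T6 - T5))             # coolant mass flow rate
    return A_s, T3, L_m, h, mdot

# Illustrative inputs in the spirit of the text: q = 220 W, T1 = 125 K,
# T2 = 120 K, T4 = 80 K, T5 = 77 K, T6 = 79 K and a contact conductance of
# 1500 W m^-2 K^-1; the geometric and material values come from the design.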
Finite-element calculations of thermoelastic distortion
To evaluate the performance of our design we calculated the thermoelastic distortion using the finite-element code ANSYS (ANSYS Mechanical APDL, Release 19.0). In this calculation, the full three-dimensional geometry of the design is modeled, along with temperature-dependent and orthotropic material properties for single-crystal silicon. We modeled the strain at the substrate-manifold interface by fixing the positions of nodes at the interface, as if the substrate were 'welded' to a manifold with zero coefficient of thermal expansion. This assumption is conservative because it overpredicts the strain in the substrate. In reality, in cooling from room temperature the manifold contracts, the substrate slides at the interface, and while the indium layer contracts it also permits some internal shear, all of which combine to reduce the overall strain and the resulting thermoelastic distortion of the mirror. To model the strain at the substrate-barrel-nut interface we first computed the local contact pressure with a finely meshed model of the barrel nut and substrate split at the symmetry planes, and then used the resulting pressure distribution as a boundary condition in the substrate model without the meshed barrel-nut geometry. This modeling sequence reduces computational cost and increases accuracy compared with a fine-meshed contact model for the full substrate and nut geometry. We computed the thermoelastic distortion of the mirror for various load steps that simulate the assembly and operation of the mirror. In the first load step we applied a preload tension to the screw, pulling the barrel nut against the substrate. Next we applied gravity, and then cooled the assembly from 295 K to a uniform 77 K. After that we 'turned on' the X-ray beam and computed the steady-state temperature distribution and thermoelastic distortion for a range of heat loads corresponding to undulator deflection parameter K values between 1 and 3, for both linear horizontal and vertical polarization modes. An example of the temperature distribution is plotted in Fig. 4. The total power absorbed by M1 as a function of undulator deflection parameter K and the temperature of the mirror as a function of total absorbed power are plotted in Fig. 5.

Figure 3

A simple one-dimensional thermal-resistor model of the mirror system. In this model q is the heat load and T_1 to T_5 are the temperatures at various locations in the mirror system. The maximum on the mirror surface is T_1, the minimum at the cooled end of the substrate is T_2, the substrate-manifold interface is T_3, the manifold-coolant interface is T_4 and the coolant bulk is T_5. The thermal resistances are: conduction in the mirror substrate R_s, contact resistance at the substrate-manifold interface R_i, conduction in the manifold R_m and convection to the coolant bulk temperature R_h.
From the results of these simulations we conclude that the thermoelastic distortion is dominated by the cooling step from 295 to 77 K. In other words, not only do clamping and gravity contribute relatively little to the distortion (Fig. 6), but the effect of the X-ray beam power is also small compared with the effect of differential thermal expansion between the manifold and substrate (Figs. 7 and 8). This strain causes a 'cool-down' pitch in the reflecting portion of the mirror of −0.6 mrad, and this can be corrected either by a rotational stage or by the initial orientation of the mirror during beamline assembly.

Figure 4

The steady-state temperature of the optically significant portion of the mirror is between 128 and 134 K at K = 3 and linear horizontal polarization mode, as computed using the ANSYS finite-element code. Image used courtesy of ANSYS, Inc.

Figure 6

(a) The height error and (b) the slope error in the tangential plane of the mirror for gravity sag are small compared with that of clamping. In both plots the manifold-substrate interface is at the far left (x = −150 mm) and the beam center is at x = 0. In panel (a), the approximately 10 nm tall bump at x = −125 mm is caused by the compression of the barrel nut against the mirror substrate.
Estimation of height error and Strehl ratio
We post-processed the finite-element results to calculate the height error, phase error and Strehl ratio. The height error is the root-mean-square (r.m.s.) value of the surface distortion inside a window on the mirror's surface. From the height error, the grazing angle and the wavelength we compute the phase error, and from the phase error the Strehl ratio S. Based on the wavefront propagation simulations described in the next section, we determined that the correct window size for computing the r.m.s. height error is 6σ (six standard deviations) of the (approximately Gaussian) spatial distribution of photons of the wavelength of interest. This window is larger than the 2 × FWHM (or 4.7σ) window discussed elsewhere (Goldberg & Yashchuk, 2016;Cocco & Spiga, 2019) because the height error of this mirror is not random, but instead has a particular profile (Fig. 9) which mostly affects the spherical and defocus aberrations. The shape of the height error is an important factor in assessing the window size, as also pointed out by Herloski (1985). In other words, different aberrations should be weighted over different apertures. For example, for coma and astigmatism, 4.8σ and 4.7σ are, respectively, the correct window sizes. For spherical aberration the size goes up to 5.6σ. The derivation from Herloski is based on a two-dimensional radially symmetric distribution and provides a guideline for defining the proper window to use for calculating the shape error, phase error and Strehl ratio.
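The omitted formulas are, we assume, the standard grazing-incidence relations between height error, phase error and Strehl ratio; a minimal sketch in those terms is given below, with the example numbers purely illustrative rather than taken from the paper:

import numpy as np

def strehl_from_height_error(h_rms, grazing_angle, wavelength):
    """Phase error and Strehl ratio from the r.m.s. height error (sketch).

    The reflected-wavefront error at grazing incidence is 2*h*sin(theta),
    so phi = 4*pi*h_rms*sin(theta)/lambda, and the extended Marechal
    approximation gives S ~ exp(-phi**2). SI units, angle in radians.
    """
    phi = 4.0 * np.pi * h_rms * np.sin(grazing_angle) / wavelength
    return phi, np.exp(-phi ** 2)

# Example: 2 nm r.m.s. at a 1.25 degree grazing angle and 1400 eV
# (wavelength ~ 0.886 nm) gives phi ~ 0.62 rad and S ~ 0.68 (illustrative only).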
The calculated height error and Strehl ratio are plotted in Fig. 10. After removing the constant cool-down pitch angle of −0.6 mrad, the height error range is 0.5 to 3.5 nm, and the Strehl ratio range is 0.837 to 0.997. However, to reach the maximum energy of the beamline (1400 eV) the third harmonic is used at K = 1.9, where the Strehl ratio is 0.85. The peak Strehl ratio is at K = 3, which is to be expected as the mirror system is thermally tuned to be near the zero-CTE temperature of silicon at this operating point. At lower K values the mirror is colder and the shape error is concave.

Figure 7

(a) As the mirror system is cooled from 295 to 77 K the mirror contracts, resulting in a height error in the tangential plane of approximately −7000 nm. As the undulator K value increases from 1 to 3 the contraction continues because the mirror temperature is in the negative-CTE regime of silicon. (b) To a first-order approximation, the slope error of the mirror in the tangential plane is a constant −0.6 mrad for cool-down and all evaluated undulator K values. In both plots the manifold-substrate interface is at the far left (x = −150 mm), the beam center is at x = 0 and curves are given for the linear horizontal polarization mode of the undulator.
Figure 8
After removing the constant pitch of −0.6 mrad from the tangential-plane height-error curves plotted in Fig. 5 and zooming in to the central ±70 mm of the mirror, the thermoelastic distortion of the mirror for cool-down from 295 to 77 K (cold) and a range of undulator K values can be more easily compared. Because of the thermal tuning of the mirror system, the mirror temperature is entirely in the negative-CTE regime for silicon up to K = 2.75 and the mirror is therefore concave. At K = 2.75 the maximum temperature crosses 125 K, which can be seen by the small convex bump at x ≈ 40 mm. At K = 3 the mirror temperature has increased and the central portion of the mirror begins to flatten. The curves are given for the linear horizontal polarization mode of the undulator.
Wavefront propagation simulations
To validate the Strehl ratio calculations and visualize the effect of the thermally induced mirror deformation on the spot size, we simulated a few cases with the wavefront propagation code WISEr (Raimondi & Spiga, 2015) on the open-source platform OASYS (Rebuffi & Sanchez del Rio, 2017;Sanchez del Rio & Rebuffi, 2019). WISEr is a physical optics simulation package which computes the complex electromagnetic field downstream of optical elements. It works across the X-ray spectrum and with grazing angles of incidence, using spatially and temporally fully coherent sources.
We simulated the cases with the lowest Strehl ratios (K = 1.5 in first and third harmonics, both polarization modes) and a case that gave a relatively high Strehl ratio (K = 2.5, first harmonic and linear horizontal polarization). For the purpose of understanding the effect of mirror deformation on the spot, we performed the simulation in the tangential direction only. Therefore we only considered two mirrors: M1 and the downstream focusing-plane elliptical mirror. M1 is located 13.73 m from the undulator source, while the elliptical mirror is 15.75 m downstream of the flat mirror (source to mirror = 28.752 m) with a focal distance of 4.775 m. Because WISEr works with diffraction-limited beams, the dimension at the source was adapted to give the same footprint on M1 as was used in the finite-element model.
For the five cases we calculated the spot at the focal location of the elliptical mirror for three conditions: a perfectly flat M1, a thermoelastically distorted M1, and a thermoelastically distorted M1 with the focal distance of the elliptical mirror corrected to minimize the spot profile. Because M1 is concave at low K values, the minimum spot dimensions are 5 to 7.5 mm upstream of the ideal focus. We then calculated the Strehl ratio for the thermoelastically distorted M1 with and without correction by comparing the peak intensity with that of the perfectly flat mirror (Fig. 11). We also compared the FWHM of the intensity distribution, the spot size, for the perfectly flat mirror with that of the distorted M1 with and without focus correction. In all calculated cases, the ratio of the distorted FWHM to the ideal FWHM is the same as the Strehl ratio. The height errors, Strehl ratios, spot sizes and focus corrections are summarized in Figs. 12 and 13.

Figure 9

A typical distortion of the mirror over the central 200 mm is plotted, along with the beam footprint on the mirror. The mirror distortion over 6σ with the cool-down pitch removed is shown in the figure inset. The profile of the deformation and the induced wavefront aberration are mostly spherical, and therefore we used the 6σ (dashed green line) window instead of 2 × FWHM (or 4.7σ) (dashed brown lines).

Figure 10

(a) The height error and (b) the Strehl ratio as a function of undulator deflection parameter K for the first (n = 1) and third (n = 3) harmonics and the linear horizontal (p = H) and vertical (p = V) polarization modes. Note that while the distortion of the mirror depends only on the undulator K and the polarization mode, the r.m.s. window is 6σ of the spatial distribution of the photons of interest, and therefore depends on the harmonic number. The dip in the height error and the corresponding rise in the Strehl ratio at K = 1.5 for linear horizontal polarization is due to the nonlinearity of the material properties of silicon combined with the heat-load distribution.
Summary and conclusions
In this paper we have presented the novel cantilevered liquid-nitrogen-cooled silicon mirror design that is being developed as the baseline M1 (or first mirror) for the Advanced Light Source Upgrade (ALS-U). Our calculations indicate that, without correction, this design will achieve a Strehl ratio greater than 0.85 for the entire energy and polarization ranges of the beamline. With a correction achieved by moving the focus 7.5 mm upstream, the minimum Strehl ratio is 0.99. This focal distance change corresponds to about 0.16% of its original value and, if required, can be accomplished with a single-actuator mechanical bender.
Several important conclusions can be made from the results presented in this paper. First, temperatures in the mirror system can be accurately calculated from a one-dimensional thermal-resistor model, which facilitates tuning the system for specified heat loads. Second, in this case it is not strictly necessary to operate the mirror at the so-called 'sweet-spot' temperature of silicon (~125 K), which means that additional heaters or variable apertures are not necessary. Third, the appropriate window size for calculating the r.m.s. height error (and Strehl ratio) for this particular deformation is 6σ of the spatial distribution of photons of the wavelength of interest. Fourth, the Strehl ratio and the increase in spot size (the ratio between the FWHM of the ideal spot and the thermoelastically deformed spot) are in agreement, as expected, because the wavefront aberration is mostly spherical and does not change the Gaussian distribution of the beam. Fifth, although the lowest uncorrected Strehl ratio is 0.85, in reality a Strehl ratio close to or in excess of 0.9 is more than adequate for all the situations we currently envision for the ALS-U FLEXON beamline, especially because the spherical aberration of the mirror does not produce beam striation out of focus. Even for a beamline using imaging techniques, which requires a uniform beam in and out of focus, the presented cryogenically cooled mirror is the ideal solution.
Figure 13
We used a wavefront propagation simulation to compute the FWHM dimension of the spot for the uncorrected thermoelastically distorted M1 (red bars), an ideal flat mirror (orange bars) and after a focus correction (blue bars) for each of the five studied combinations of photon energy, polarization mode and undulator K (for example 620.7 eV, linear vertical polarization, K = 1.5).
Figure 12
For the thermoelastically distorted M1 without any correction, the Strehl ratio calculated directly from the finite-element analysis (FEA) results (green bars) is in agreement with that calculated using wavefront propagation (red bars) for each of the five studied combinations of photon energy, polarization mode and undulator K (for example 620.7 eV, linear vertical polarization, K = 1.5). In each case the Strehl ratio can be increased (blue bars) by correcting the shape of the elliptical M3 to move the focus −7.5 mm at K = 1.5 and −5 mm at K = 2.5, where the negative sign indicates the upstream direction. Note that 1862 eV is outside the optimal range of the FLEXON beamline, but was included in these calculations for comparison of calculation methods.

| 7,000 | 2020-08-11T00:00:00.000 | ["Physics"] |
Optimized Deep Neural Network and Its Application in Fine Sowing of Crops
Winter wheat is one of the most important food crops. Increasing food demand and limited land resources have forced agricultural production to become more refined and efficient. The most important step in agricultural production is sowing. With the promotion of precision agriculture, precision seeding has become the main component of the modern agricultural seeding technology system, and the adoption of precision seeding technology is an important means of large-scale production, cost saving and efficiency enhancement. However, current sowing technology and equipment cannot meet the accuracy requirements of wheat sowing. In this context, a differential perturbation particle swarm optimization (DPPSO) algorithm is proposed by embedding a differential perturbation into particle swarm optimization, which shows fast convergence and good global search performance. DPPSO is then used to optimize a convolutional neural network (CNN) to build an optimized CNN (DPPSO-CNN) model, which is applied to the fine sowing of crops. Finally, the experimental results show that the proposed method not only converges faster but also achieves better wheat seeding performance. This research effectively improves the accuracy and uniformity of wheat seeding, lays a foundation for improving wheat yield per unit area, and promotes the intelligent development of agriculture.
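This excerpt does not reproduce the DPPSO update rule itself, so the following is only a generic sketch of a particle swarm step augmented with a differential-perturbation term in the spirit of the abstract; the inertia weight w, acceleration coefficients c1 and c2, scale factor F and the exact form of the perturbation are assumptions, not the paper's specification:

import numpy as np

def dppso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, F=0.5, rng=None):
    """One generic DPPSO-style update for a swarm of shape (n_particles, n_dims)."""
    rng = rng or np.random.default_rng()
    n, d = x.shape
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    # Standard PSO velocity update toward personal and global bests.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Differential perturbation: add a scaled difference of two random particles.
    a, b = rng.integers(0, n, size=n), rng.integers(0, n, size=n)
    x_new = x + v + F * (x[a] - x[b])
    return x_new, v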
Introduction
Food security is an important strategic issue concerning China's economic development and social stability. As a country with a large population, China should attach great importance to food security at all times [1,2]. Since the beginning of the new century, the central government has successively issued No. 1 documents, and great achievements have been made in agriculture and rural areas. In 2020, China's grain and other agricultural products had a bumper harvest, and the total grain output reached 1,339 billion jin. At the same time, it is very difficult for farmers to feed their families by growing grain alone without relying on sideline or migrant work, so a large part of the agricultural labor force has flooded into the cities; China's food security increasingly depends on the left-behind people who struggle to make a living by growing grain, which is unsustainable [3,4].
China is a large agricultural country, and wheat is one of its most important grain crops. The population whose staple food is wheat accounts for about one third of the world's total population. Therefore, ensuring a high and stable wheat yield is of great significance to food security. Agricultural production is a necessary condition for the survival and development of human society, is closely related to social stability and economic development, and is among the most important productive activities of human beings [5]. The development of the wheat industry is directly related to food safety and social stability in China. The annual consumption of wheat products accounts for about 20% of the total food consumption in China [6].
As the key link in wheat production, sowing affects the growth and development of wheat and ultimately its yield [7]. In wheat production there are mechanical drill sowing, broadcast sowing, set sowing, and other methods. In actual production, because of the contradiction between the rice-wheat rotation system and wheat seeding in South China, production is mainly based on artificial broadcast sowing and extensive management, which affects the yield of wheat. Strengthening research on new variety breeding and cultivation technology has a significant impact on the development of wheat productivity. The first factor is the success of wheat breeding, and the corresponding breeding agronomy needs corresponding farming tools. Second, uniform plant distribution increases yield, which indicates the direction for the study of precision seeding in a plot. Precision sowing requires controlling the seed quantity, quality, and other indicators, so as to control the seeding amount and quality and achieve the purpose of precision sowing [8]. A precision seed metering device can complete the precise seeding process, but precision seeding is a complex organic combination that also includes precise control of seeding depth and seeding position. Although consistency of seeding depth can be achieved by the seeding machine, the cost is too high. To sum up, precision sowing is the result of multiple factors, and analyzing the seed metering device alone is not comprehensive. Therefore, it is necessary to combine machine-learning methods to better characterize the sowing process, which is of great significance for promoting precision sowing.
Compared with western developed countries, China's wheat production mode is relatively backward, mainly relying on traditional ways of planting, sowing, and fertilization according to manual experience [9]. Planting too densely leads to crowding of crop seedlings and insufficient light, thus increasing the labor intensity. Sowing too little leads to inadequate land use and affects crop yield. Therefore, realizing precision sowing and application for crops and promoting precision agriculture are not only of great significance for improving crop yield and reducing production costs, but also imperative. Precision agriculture is a modern agricultural production system based on modern information and space technology, which uses remote sensing technology, geographic information systems, and global positioning systems to achieve precise agricultural operations [10,11]. According to the specific conditions of each unit inside the farmland area, the soil nutrition information and the spatial status of productivity, the rational use of crop inputs determines the production target.
At present, the acquisition of crop growth information with high accuracy, high speed, high density, and low cost is still the biggest obstacle to the implementation of precision agriculture [12]. The traditional method of understanding the wheat-sowing situation is field sampling, but because sampling and experiments consume large amounts of manpower and material resources, there is a contradiction between the amount of information collected and the sampling cost. Traditional variable-rate implementation in precision agriculture is time-consuming and costly in obtaining target data, suffers from time lag, and cannot reflect the real-time sowing situation of wheat. Deep learning technology can provide timely information for agricultural production decision-making and management and provide new approaches and methods for monitoring crop growth, quality, and yield and for regional management. Deep learning technology is an important means to collect physical and chemical data of ground objects and their spatio-temporal changes [13,14]. It has been widely used, especially with the development of hyperspectral remote sensing technology. Because it can measure the main information needed for wheat seeding and fully display its growth characteristics, it can obtain richer information than conventional methods, so as to realize fine monitoring of wheat seeding. To sum up, fine sowing of wheat benefits the country and the people, and the development of deep learning brings convenience to the evaluation and analysis of the seeding effect. On this basis, it is of great significance to analyze wheat growth and its spatial variation [15].
Related Work
The water consumption of wheat from sowing to overwintering is mainly distributed in the shallow soil layer above 60 cm. The water-consuming layer moves from the shallow layer to the deep layer as the temperature increases from the rising stage to the mature stage. Water use efficiency decreases with increasing planting density. If the sowing rate is too high or too low, the soil water storage in the early stage will be overused and the water consumption of winter wheat will be reduced throughout the growth period [16,17]. If the sowing amount, the number of basic seedlings in the early stage, the total tiller number and the leaf area index are too large, the leaf area in the middle and late stages decreases compared with low sowing rates. When the sowing amount is small, the population per unit area is insufficient, resulting in low dry matter quality. The tillering capacity and material production capacity of wheat decrease when the sowing amount is large, and finally the grain quality decreases [18].
With the increase of sowing amount, the number of grains per spike and the 1000-grain weight of wheat decreased gradually relative to small sowing amounts, while the number of ears increased with the sowing amount and was highest under the largest sowing amount. Under high sowing amounts, the yield did not increase but instead decreased slightly as the sowing amount rose further. The main factor through which sowing rate affected yield was panicle number, followed by grain number per panicle and 1000-grain weight. Increasing the sowing amount effectively increased the panicle number while reducing the grain number and grain weight per panicle, and the positive effect of more panicles outweighed the negative effect of fewer and lighter grains per panicle. Nitrogen absorption efficiency and nitrogen production efficiency increased with the sowing amount. With increasing planting density, assimilate transport before anthesis decreased, but assimilate accumulation after anthesis and its contribution to the grain increased under the influence of soil moisture and sunlight, which ultimately raised the protein content. Medium and low sowing rates can not only increase yield but also significantly increase the starch and protein content of the grain, so that grain yield and quality improve synchronously. A suitable medium sowing amount increased the protein content at maturity: below the suitable sowing amount, protein content gradually increased with the sowing amount, and it decreased once the suitable amount was exceeded [19,20]. The appearance of granular fertilizer is similar to that of wheat seed, so existing control systems for the granular fertilizer application rate are an important reference for the research and development of wheat seeding-rate control systems. The current state of foreign research shows that some seeding-rate control systems are still open-loop, and even the closed-loop systems control the seeding rate only indirectly. In actual sowing operations, there has been no breakthrough in accurately monitoring the large-flow seeding rate of wheat [21]. Therefore, no seeding-rate control system that feeds back the actual seeding rate was found in the literature survey.
Through the literature review, it can be seen that domestic seeding-rate control systems are mainly open-loop. If the friction between the seeding shaft and the machine is large or there is an installation error, there is a deviation between the actual rotation speed of the seeding shaft and its theoretical value, and the rotation speed of the seeding shaft is not uniform within one revolution, which seriously affects the accuracy of seeding uniformity and the seeding amount [22].
According to domestic and foreign practical experience, advanced agricultural technology depends on the progress of agricultural production machinery. The pattern of wheat precision sowing described here is also the one most widely accepted by farmers in China [23,24]. In this pattern, the speed of the seed wheel is controlled by an intelligent speed-regulation system so as to achieve uniform sowing of wheat seed. A fixed power source is usually needed to drive the seed feeder, and the common power source is the ground wheel. However, because of the special properties of ground-wheel drive, it places certain requirements on seed size in addition to causing seed loss and ridging, and only wheat seeds that meet the requirements can be precisely sown. In particular, the poor stability of this power source has always limited seeding accuracy, so unstable plant spacing arises easily, and serious defects also appear as ridging and clumped seedlings. In actual production, for cost reasons the intelligent precision control system of this type of seeder is often missing, so the seeding-rate adjustment is not accurate and cannot meet even the most basic precision-seeding requirements. At present, air-suction seeders are mainly designed for large seeds such as beans and cotton, which are generally economic crops; among food crops they are mainly applied to corn [25,26]. However, wheat is a small seed, so air-suction seeders designed for large seeds are not suitable, and the existing small-seed seeders are mainly used for rapeseed, pepper, and other cash crops, so air-suction seeders suitable for wheat are not common. Research on precision sowing in agriculturally developed countries started earlier and can be traced back to the middle of the last century. Precision sowing can not only save seed but also improve sowing quality, thus playing an important role in improving crop yield; therefore, precision sowing became the development trend of the sowing industry as soon as it appeared. The same type of precision planter is divided into different series to meet the requirements of different row numbers, row spacings, and traction powers. For example, the NC model of the MONOSEM precision planter in the United States can sow 4-12 rows simultaneously with row spacings of 35-80 cm, or 6-24 rows simultaneously with row spacings of 45-50 cm, and different types of precision seeders can meet the requirements of different ground conditions, soil conditions, and crops by replacing working parts of different structures or specifications. Precision seeding devices are divided into mechanical and pneumatic types. Compared with the mechanical type, the pneumatic seed-metering device pushes the seeds forward by the force of airflow [27,28]; it separates seeds quickly without injuring them, and it can sow different seeds by replacing the seed-metering plate, giving it high versatility. In the 1980s, the agriculturally developed countries represented by the United States focused their attention on pneumatic precision seeders, which have since been widely used. With the development of research, many modern technologies have been applied to precision seeders; in the 1990s, Japan developed a seeder controlled by solenoid valves and an electronically controlled precision seeder.
The precision seeder has high precision and can control the seeding amount in real time, which greatly improves sowing efficiency. This study not only broadens the research ideas for researchers of fine wheat seeding but also is of great significance for the development of the wheat industry, since the CNN model proposed in this paper is a typical deep learning model that can effectively handle big-data situations.
The main contributions of this paper are as follows: (1) DPPSO-CNN is applied to the fine sowing of crops for the first time. (2) The proposed method not only has a solid theoretical foundation but also has broad application prospects.
Deep CNN Model Introduction.
In recent years, the CNN model has often been used to solve complex image recognition problems [29,30]. On the basis of the traditional fully connected neural network, a CNN adds convolution layers and pooling layers to form the deep CNN model shown in Figure 1. Figure 1 only shows a schematic diagram of the CNN used in this paper, so the exact numbers of convolutional and pooling layers cannot be read from it; in our algorithm we use two pooling layers and two convolution layers. The function of the convolution layer is to extract image features. The essence of the convolution kernel is a filter matrix, which can produce many different effects on the original image.
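As a concrete illustration of the two-convolution, two-pooling architecture described above, the following is a minimal PyTorch sketch; the channel counts, kernel sizes, and 64x64 input resolution are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Minimal sketch of the two-conv / two-pool CNN described in the text.
    Channel counts, kernel sizes, and the 64x64 input size are assumptions."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolution layer 1
            nn.ReLU(),
            nn.MaxPool2d(2),                               # pooling layer 1
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # convolution layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),                               # pooling layer 2
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),          # fully connected layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of four 64x64 RGB field images -> class scores
scores = SimpleCNN()(torch.randn(4, 3, 64, 64))
print(scores.shape)
```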
The calculation process of convolution is shown in equation (1), where u_ij is the input image, m and n are the sizes of the input image, w is the size of the convolution kernel, and b is the bias constant of the convolution kernel. CONV(ij) is the feature map output by the convolution operation.
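A direct sliding-window implementation of this computation is sketched below; only the kernel size w and the bias b are named in the text, so the kernel weights k used here are a hypothetical example.

```python
import numpy as np

def conv2d_valid(u, k, b=0.0):
    """Valid 2D convolution of an m x n input u with a w x w kernel k plus bias b,
    following the element-wise sum described for CONV(ij) in the text."""
    m, n = u.shape
    w = k.shape[0]
    out = np.zeros((m - w + 1, n - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(u[i:i + w, j:j + w] * k) + b
    return out

u = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 input image
k = np.ones((3, 3)) / 9.0                       # hypothetical 3x3 averaging kernel
print(conv2d_valid(u, k))
```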
CNN adds an activation function layer to the network and models the data better by adopting nonlinear feature mappings. The mathematical expressions of the common activation functions are introduced one by one. The sigmoid function takes the whole real line as its domain and maps it to a bounded interval; the tanh function likewise maps the real line into [−1, 1]. The full name of the ReLU function is rectified linear unit; it is one of the most commonly used activation functions, characterized by low computational complexity and the absence of exponential operations. However, it is worth noting that the ReLU function has a defect: when the data falls in its negative range, the output value is equal to 0. The Leaky-ReLU function can solve this problem.
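For reference, the standard forms of these activation functions can be written compactly as below; the 0.01 slope used for Leaky-ReLU is a common default, not a value stated in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # range (0, 1)

def tanh(x):
    return np.tanh(x)                          # range (-1, 1)

def relu(x):
    return np.maximum(0.0, x)                  # zero for negative inputs

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)       # small slope avoids a "dead" negative range

x = np.linspace(-3, 3, 7)
print(sigmoid(x), tanh(x), relu(x), leaky_relu(x), sep="\n")
```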
Therefore, the efficiency of the entire network can be improved to a certain extent. The corresponding expressions of the sigmoid and tanh functions are as given above. The output layer adopts the softmax function for normalization, and the probability value of each category is as shown in equation (7). In classification tasks, the cross-entropy (CE) loss function is often used to evaluate the gap between the predicted value and the true value; in the CE formula, ŷ_ji is the predicted value and y_ji is the true value. The error calculated from the CE function is then propagated backwards, so as to update the model parameters through back propagation.
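A minimal sketch of the softmax normalization and the cross-entropy loss referred to above (standard definitions; the variable names are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean cross-entropy between one-hot targets y_true and predicted probabilities y_pred."""
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=-1))

logits = np.array([[2.0, 0.5, -1.0]])
probs = softmax(logits)
target = np.array([[1.0, 0.0, 0.0]])
print(probs, cross_entropy(target, probs))
```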
The original form of the gradient descent method is shown in equation (9). In the experiments in the following sections, this paper also verifies that Adam converges faster than SGD. The mathematical expression of a common Adam optimizer is given in equation (10).
Therefore, the corresponding parameter-updating rule of gradient descent follows.
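As a reference sketch of the update rules discussed here, the following shows plain gradient descent and a standard Adam step; the learning rate and Adam hyperparameters are common defaults rather than values reported in the paper.

```python
import numpy as np

def sgd_step(theta, grad, lr=0.01):
    """Plain gradient descent: theta <- theta - lr * grad."""
    return theta - lr * grad

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first/second moment estimates."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.zeros(3)
m = v = np.zeros(3)
for t in range(1, 6):                  # a few toy steps on the gradient of 0.5*||theta - 1||^2
    grad = theta - 1.0
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)
```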
Optimized CNN Model.
It is worth noting that differential perturbation is used in this paper to optimize the CNN model; other optimization algorithms are feasible in principle, but they are not optimal choices. Particle swarm optimization (PSO) is simple and easy to implement, but it is prone to local extrema, low accuracy, slow convergence, and stagnation. In this section, differential perturbation is introduced into PSO to form the differential perturbation particle swarm optimization (DPPSO) algorithm, which exploits the fast convergence and good global search of the differential step, overcomes the low precision and local-optimum problems of plain PSO, and is used to build an optimized CNN model. The multiobjective optimization model is subject to constraints including
120 < x_1 < 180, where f_1 represents the energy consumption target, f_2 represents the output target, and g_1 represents the packaging quality in terms of four indicators: crushing strength, wear strength, drop strength, and compressive strength. It is worth noting that the DPPSO algorithm used in this paper optimizes the network parameters to obtain better model performance.
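The paper does not spell out the DPPSO update rules, so the following is only a plausible sketch of particle swarm optimization augmented with a differential-style perturbation of each particle; all coefficients, the perturbation scheme, and the single-objective toy problem are assumptions, whereas in the paper the search would run over CNN parameters and the multiobjective model above.

```python
import numpy as np

def dppso(objective, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, f=0.5, seed=0):
    """Sketch of PSO with a differential perturbation step (DPPSO-like); details are assumed."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))            # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        # differential perturbation: nudge each particle by a scaled difference
        # of two randomly chosen particles to escape local optima (assumed form)
        i, j = rng.integers(0, n_particles, (2, n_particles))
        x = x + f * (x[i] - x[j]) * rng.random((n_particles, 1))
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = dppso(lambda p: np.sum(p ** 2))         # toy objective
print(best_x, best_val)
```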
Based on the above discussion, the optimized deep neural network and its application to fine sowing of crops are shown in Figure 2. The workflow mainly includes data preprocessing, CNN model training, and parameter optimization based on the DPPSO model, and finally the optimal model performance is obtained.
Experimental Data Introduction.
The experimental area belongs to a semiarid to semihumid winter wheat growing region. The experimental site was hilly dryland with an average annual rainfall of about 450 mm. The test field was flat, followed a one-crop-per-year system, and its soil was a moderately alkaline clay loam. Water storage in the test area came mainly from natural precipitation, which was concentrated in October and November of 2019. The experimental variety, Linfeng no. 3, was provided by the County Agricultural Committee. The experiment used a two-factor design, with sowing method as the main factor and furrow sowing (FS), wide drilling sowing (WDS), and conventional drilling sowing (CDS) as the sowing methods compared.
In addition, this study referred to the following data sources: China Rural Statistical Yearbook (1998-2019), China Statistical Yearbook (1998-2020), and the National Agricultural Product Cost-Benefit Data Collection (1998-2020). Excel 2019 and DPS 7.05 were used for statistical collation of the data, Excel 2019 was used for plotting, and the least significant difference (LSD) method was used for significance testing at a significance level of α = 0.05.
Experimental Results Analysis.
In order to demonstrate the universality of the proposed method, the curves of different activation functions of the CNN model are presented in Figure 3. They all share the following characteristics: (1) differentiability, which is a prerequisite when gradient-based optimization algorithms are used to optimize the model; (2) monotonicity, which guarantees that a single-layer network is convex so that subsequent convex optimization can be carried out. In this case, however, the learning rate usually needs to be set to a small value, which inevitably increases the training time.
In order to examine the training behaviour of the model, different parameter-updating methods are compared in Figure 4. The left panel shows the batch gradient descent (BGD) algorithm, in which the error and gradient are computed over the whole batch at every step and the parameters are updated repeatedly until the error is zero or within the allowed range. The right panel shows the stochastic gradient descent (SGD) algorithm, in which the parameters are updated once for each training sample and the data order is shuffled before each epoch. From Figure 4 we can see that a major problem of BGD is that the whole data set has to be scanned in each gradient computation; when the data volume is large, this inevitably leads to heavy computation and low efficiency, whereas SGD only needs one sample per gradient computation and therefore has a computational advantage. Second, since the gradient computed by SGD differs considerably from the true negative gradient, it is not very stable, which also explains one of the advantages of SGD: it can jump out of local optima and thus find the true global optimum.
This is especially important in deep learning, where objective functions tend to be nonconvex. In conclusion, the SGD model not only runs faster than the BGD model in training time but also avoids the tendency of BGD to fall into local optima. Hence, SGD is used to update the model parameters in this paper.
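The difference between the two update schemes can be made concrete with a small least-squares example; this is an illustrative sketch, not the training code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)

def bgd(lr=0.1, epochs=50):
    w = np.zeros(3)
    for _ in range(epochs):                      # full-batch gradient at every step
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def sgd(lr=0.01, epochs=5):
    w = np.zeros(3)
    for _ in range(epochs):
        for i in rng.permutation(len(y)):        # shuffle, then one sample per update
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

print(bgd(), sgd())
```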
In order to verify the effective control of the proposed method over the wheat sowing range and sowing quantity, the seeding uniformity was determined according to the method recorded in national standard GB/T 9478-2005: after the seeding operation was completed, 30 sections of 10 cm each were taken and the number of seeds in each section was counted, as shown in Figure 5. The sowing uniformity was then calculated for each combination of factor levels. It can be seen from the figure that the relative frequency distribution of the wheat grain number in each section approximately follows a normal distribution.
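A simple way to turn the per-section seed counts into a uniformity figure is to compute the coefficient of variation over the 30 sections; this is a hedged sketch, since the exact formula prescribed by GB/T 9478 is not reproduced in the paper, and the counts below are hypothetical.

```python
import numpy as np

def sowing_uniformity(counts):
    """Coefficient-of-variation-based uniformity for per-section seed counts (assumed metric)."""
    counts = np.asarray(counts, dtype=float)
    cv = counts.std(ddof=1) / counts.mean()       # relative dispersion of seed counts
    return 1.0 - cv                                # higher value = more uniform sowing

rng = np.random.default_rng(1)
counts = rng.poisson(lam=55, size=30)              # hypothetical counts for 30 x 10 cm sections
print(counts.mean(), sowing_uniformity(counts))
```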
The experimental results were analyzed under the same theoretical sowing rate per hectare. Specifically, when the number of seeds per section is 50-60, both the sowing range and the sowing rate are highest, so this can be regarded as the optimal sowing amount per unit area. Figure 6 shows the computational efficiency and actual complexity of the proposed method. As can be seen from the figure, the computational complexity and efficiency of the proposed method first increase and then decrease with the number of iterations; in other words, the computational efficiency of the method reaches its maximum at about 1,000 iterations. Therefore, the proposed method has a large fault tolerance and can maintain good model performance within 1,000 iterations, which also indirectly shows that the proposed method has good generalization and extensibility.
Because the BP neural network is a shallow model, the RNN is a classic deep learning model, and the method in this paper is based on the CNN model, they are selected as comparison algorithms, and the simulation results are presented in Figure 7. As can be seen from the figure, the convergence rates of the four methods all tend to increase at first as the data volume grows, but with further increases in data volume the convergence rate of the BP neural network shows a downward trend, indicating that the BP neural network is not suitable for processing the large amount of wheat sowing data in this paper. In contrast, the convergence rates of the RNN and CNN models generally keep increasing as the data volume increases.
The main reason is that both methods are deep neural networks with the ability to process big data. However, their convergence rates are still not as high as that of the PSO-CNN method in this paper, which shows the effectiveness and practicability of the proposed method. Figure 8 shows the relationship between sowing accuracy and the number of iterations for the different methods. We can see from the figure that, as the number of iterations increases, the sowing accuracy of the three methods shows an increasing trend. At 600 iterations, the three models all reach their highest classification accuracy, namely 81%, 87%, and 98%, respectively; the PSO-CNN model in this paper therefore achieves the highest classification accuracy. In addition, even when the number of iterations is small, the proposed method has the best model performance and achieves the highest classification accuracy throughout the training process, which demonstrates its effectiveness in wheat seeding monitoring.
To better demonstrate the effectiveness of the proposed method, the monitoring results of CNN and PSO-CNN are shown in Figure 9. Specifically, it can be seen from Figure 9(b) that the PSO-CNN method not only achieved the lowest omission ratio of 18.88% but also detected abnormal sowing at the 163rd sampling point with a detection delay of 2 samples, while the corresponding delay of the CNN method was 24. This shows that the method presented in this paper can detect sowing errors quickly. In addition, once an error is detected, the statistical curve corresponding to the PSO-CNN model rarely falls below the threshold line, while the curve corresponding to the CNN method repeatedly falls back to different degrees, resulting in a high failure rate; this further demonstrates the stability and persistence of the proposed method.
Conclusions
Sowing is a key link in wheat production, and the performance of the seeding machine directly affects crop growth and yield. With the promotion of precision agriculture and the development of precision seeding technology, precision seeding has become a main component of the modern agricultural seeding technology system. Adopting precision seeding technology is an important means of large-scale production and of saving costs and increasing efficiency. Online precision measurement of the seeding amount is the key to realizing precision seeding and precise control, and it is also the basis for realizing precision seeding in the true sense.
In view of the shortcomings of existing methods, this paper proposed an optimized deep learning model, PSO-CNN, which not only achieved a better convergence rate and better model parameters but also effectively improved the sowing accuracy and sowing range of wheat, showing strong theoretical value and application potential.
This work helps to realize the fine sowing of wheat and to improve the level of agricultural automation. Although the proposed method has achieved good results, this study does not consider the effects of weather and soil during agricultural sowing; these will be the focus of future research.
Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declared that they have no conflicts of interest regarding this work. | 6,546.4 | 2022-08-21T00:00:00.000 | [
"Agricultural And Food Sciences",
"Computer Science"
] |
Bayesian model averaging for nonparametric discontinuity design
Quasi-experimental research designs, such as regression discontinuity and interrupted time series, allow for causal inference in the absence of a randomized controlled trial, at the cost of additional assumptions. In this paper, we provide a framework for discontinuity-based designs using Bayesian model averaging and Gaussian process regression, which we refer to as ‘Bayesian nonparametric discontinuity design’, or BNDD for short. BNDD addresses the two major shortcomings in most implementations of such designs: overconfidence due to implicit conditioning on the alleged effect, and model misspecification due to reliance on overly simplistic regression models. With the appropriate Gaussian process covariance function, our approach can detect discontinuities of any order, and in spectral features. We demonstrate the usage of BNDD in simulations, and apply the framework to determine the effect of running for political positions on longevity, the effect of an alleged historical phantom border in the Netherlands on Dutch voting behaviour, and the effect of Kundalini Yoga meditation on heart rate.
1. "In the abstract section, the quantification proposed by the authors supporting the innovation point of either model is crucial to solve the main problem of the average of Bayesian models, because the prior probabilities of different models greatly affect the results of the Bayesian model averaging method, and the authors please describe them in detail in this section." The reviewer raises a valid point. We have updated the following text to the description of the BNDD framework to stress that where we have used a uniform prior over the models, alternative priors may be chosen, for instance to address the problem of multiple comparisons when multiple RD analyses are run in parallel: "This approach ignores the uncertainty in the model posterior where ( ) is the prior probability of model . Ignoring the uncertainty in this distribution results in an overconfident overestimate of the effect size, and consequently of too optimistic conclusions of the efficacy of an intervention." and "Note that for now, we assume a uniform prior over the models, such that ( 0 ) = ( 1 ) = 1/2, but this may be changed, for instance to account for multiple comparisons (Guo & Heitjan, 2010)." We then return to this statement in our discussion section, where we now write: "The Bayesian model averaging procedure that we use in BNDD depends on the model probabilities ( 0 ) and ( 1 ). Here, we have assumed a uniform prior on these model probabilities, as we have no reason to prefer either the continuous null model or the discontinuous alternative. However, it should be noted that prior beliefs may be incorporated to reflect our initial assumptions on the probability of an effect, as well as to adjust for multiplicity in case many hypotheses are tested simultaneously (Guo & Heitjan, 2010) (for instance, in (Lansdell & Kording, 2019) a regression discontinuity design is used to test the causal influence between neuronal populations)." 2. "Is this article more applicable to the discontinuity model? The study of continuous models is not comprehensive and detailed, does this paper focus on the description of discontinuous models?" Our method is intended to compare two regressions: one that contains a continuous latent function, and one that contains a discontinuous one. It is possible to use discontinuous covariance functions for either, such as a white noise (or constant) covariance function. In that case, BNDD is only able to detect a difference in the means pre-and post-intervention, which essentially recovers a z-test. This could be an interesting addition in cases where it is unclear whether there is any autocorrelation in the latent function.
3. "It is noteworthy that your paper requires careful editing of the format. There are many problems in the paper format, such as the first line of the paragraph, multiple syntax errors, etc." We apologize for the formatting errors in our manuscript. We have updated the manuscript throughout (please see the updates in orange). Specifically, we have -Removed the erroneous \todo{} comment in the first line.
-Listed our department before our institute in the author affiliations section.
-Added figure titles.
-Put some equations in display mode rather than in text mode (new equations (12) and (13)).
Reviewer 2
"One minor doubt for me is the paper assume the distribution is Gaussian distribution, and thus Gaussian process is used in the paper. Is possible real world simulation is not under Gaussian distribution, and thus can not be used in Gaussian process?" The reviewer raises an interesting question that can be approached in two ways: in light of either the observation model, or the latent function. We address this with a new section in our discussion, which reads: "Throughout this paper, we have assumed a Gaussian likelihood. This conveniently leads to an analytic solution of the GP posterior, because the GP prior is conjugate to this likelihood. However, using variational inference or the Laplace approximation, BNDD can be used in combination with non-Gaussian observation models (Rasmussen & Williams, 2005). For instance, one could use a Poisson likelihood to model observed count data (Adams et al., 2009), or a Bernoulli likelihood for binary observations (Williams & Barber, 1998). Furthermore, other nonparametric priors over the latent functions may be used, such as the Student t-process (Shah et al., 2014)."
Journal requirements
1. "Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming." We have renamed our submission files and updated our manuscript to meet the PLOS ONE style requirements. Please let us know if there are any mistakes.
2. "We noticed you have some minor occurrence of overlapping text with the following previous publication(s), which needs to be addressed: Note: we have contacted the editorial office regarding this matter (Carl Williams). The previous publication here concerns a preprint of the current submission, and has not previously been published. This preprint document is identical to the one we submit here (modulo the changes due to this revision).
3. "We note that Figure 5 in your submission contain map images which may be copyrighted." Note that we have also contacted the editorial office regarding this issue as well. Fig. 5 is our own original contribution. It is based on the municipality and country borders data as provided by the Dutch Central Bureau of Statistics (CBS; 'Centraal Bureau voor Statistiek'), which are free to use if the CBS is credited. Furthermore, the election results are provided at the government website https://data.overheid.nl under the CC0 1.0 license. This has been made more explicit in the manuscript with the added section: "First, the vote distribution per Dutch municipality were collected from the Dutch government website (Kennis-en exploitatiecetrum officiële overheidspublicaties, 2018). We then manually constructed an approximation of the phantom border (see the dashed lines in Fig. 5) and used this as a function to divide the available municipalities in either above or below the border. For visualization of country and municipality borders, data from the Dutch national georegister was used (CBS, 2022)." in addition, we have incorporated this information in the figure caption, which now reads: Phantom-border effects on populist voting. Discontinuity analysis along a two-dimensional boundary (indicated by the dashed line). A. Circles indicate the observed fraction of populist votes; municipalities are shaded according to the Gaussian process predictions. B. The distribution of effect size conditioned on 1 , ( | , 1 ), along the phantom border. The shaded interval indicates one standard deviation around the mean. The country and municipality border data are available at the website of the Dutch national georegister [46], and the superimposed populist voting fractions were derived from the 2017 election results at https://data.overheid.nl. | 1,775.2 | 2022-06-30T00:00:00.000 | [
"Computer Science"
] |
Lie algebra lattices and strings on T-folds
We study the world-sheet conformal field theories for T-folds systematically based on the Lie algebra lattices representing the momenta of strings. The fixed point condition required for the T-duality twist restricts the possible Lie algebras. When the T-duality acts as a simple chiral reflection, one is left with the four cases, A1, D2r, E7, E8, among the simple simply-laced algebras. From the corresponding Englert-Neveu lattices, we construct the modular invariant partition functions for the T-fold CFTs in bosonic string theory. Similar construction is possible also by using Euclidean even self-dual lattices. We then apply our formulation to the T-folds in the E8 × E8 heterotic string theory. Incorporating non-trivial phases for the T-duality twist, we obtain, as simple examples, a class of modular invariant partition functions parametrized by three integers. Our construction includes the cases which are not reduced to the free fermion construction.
Introduction
A salient feature of string theory is that physics on different background geometries can be equivalent due to duality symmetries. This allows us to think of geometries whose coordinate patches are glued by duality transformations, as well as by ordinary general coordinate transformations [1][2][3]. They are relevant in understanding the vacua and the symmetries of string theory, and may give clues to formulations of string theories where the duality symmetries are manifest.
When such stringy geometries involve T-duality, they are called T-folds [4]. They have been studied mainly in the framework of supergravity and Double Field Theory [5]. In order to go beyond this and analyze their quantum aspects, one may need the world-sheet approach based on conformal field theory (CFT). The transitions in the target space by T-duality are represented on the world-sheet as twists by the T-duality transformations. Since they are generally left-right asymmetric, the world-sheet theories fall into a particular class of asymmetric orbifold CFTs. Such T-fold CFTs have been studied e.g. in [6][7][8][9][10][11][12][13][14][15][16][17].
As is generally the case for asymmetric orbifolds, the construction of the T-fold CFTs is not automatic. In addition to the modular invariance, there is an issue of the relative phases of the action of the T-duality twist on the left-and right-movers [7,8], which may be regarded as an analog of the discrete torsions. As an interesting consequence of the explicit construction of a class of T-fold CFTs, it is found that T-folds provide a simple setting to realize non-supersymmetric string vacua with vanishing cosmological constant at least at one loop [16]. The mechanism there is extended to a more general class of asymmetric orbifolds [17]. Through T-folds and related more general non-geometric backgrounds (monodrofolds [3]), one can also explore the possibility that the world-sheet conformal interfaces [18][19][20], which may be regarded as fundamental from the world-sheet point of view, can be applied to string theory [14]. For the applications of the conformal interfaces to string theory, see e.g. [21][22][23][24].
In spite of the developments on T-fold CFTs so far, we are still lacking their general construction, which is in contrast to the quite general analysis from the target space point of view by supergravity. The purpose of this paper is to advance a step in this direction and provide a systematic construction of the modular invariant partition functions of T-fold CFTs. A point of our construction is that we formulate the problem based on the momentum lattices in order to control the modular properties of the partition functions in the twisted sectors from the asymmetric T-duality twist. This allows us to consider the cases which are not reduced to the free fermion construction as well. 1 The condition that the T-duality twist acts in a single Hilbert space requires the background moduli of the torus compactification to be invariant under the T-duality transformation, i.e. at the fixed points, as is found also in the supergravity analysis [1,25]. Imposing this condition, we first give the modular invariant partition functions for T-folds in bosonic string theory, whose momentum lattices are associated with the Lie algebra lattices called Englert-Neveu lattices. The fixed-point condition restricts the possible Lie algebras. In the case where the T-duality acts as a simple chiral reflection in the right-mover, we are left with the four cases, $A_1$, $D_{2r}$, $E_7$, $E_8$, among the simple simply-laced algebras. Similar construction is also possible by using Euclidean even self-dual lattices. We then apply our construction to the T-folds in the $E_8 \times E_8$ heterotic string theory. Including non-trivial phases in the T-duality twists, the twisted partition functions in the originally intact left-mover are represented by the building blocks which appeared in the bosonic-string case.
As simple examples, we explicitly construct a class of the modular invariant partition functions of the T-fold CFTs parametrized by three integers. The cases including the building blocks from $A_1$ and $E_7$ are not covered by the fermionization.
The rest of this paper is organized as follows: in section 2, we summarize the toroidal compactification and T-duality in bosonic string theory, which also serves as fixing our notation and conventions. In section 3, we set up our problem and analyze the fixed-point (self-duality) condition of the T-duality transformations. In section 4, we construct the modular invariant partition functions for T-folds in bosonic string theory, based on the Lie algebra lattices. In section 5, we apply our construction to the T-folds in the heterotic string theory. In section 6, we conclude with a summary and discussion. In appendix A, we summarize the characters of the affine Lie algebras and our building blocks for the modular invariant partition functions.
Toroidal compactification and T-duality
Let us consider the bosonic string theory where $d$ coordinates are compactified on a $d$-dimensional torus $T^d$. We basically follow the conventions in [26,27]: the compactified coordinates $X^i$ ($i = 1, \ldots, d$) have the periodicity $X^i \approx X^i + 2\pi$. The constant background fields, the metric $G_{ij}$ and the anti-symmetric tensor $B_{ij}$, are organized into a matrix $E_{ij}$, and the vielbein satisfies $\sum_{a,b=1}^{d} \delta_{ab}\, e^{*i}{}_a\, e^{*j}{}_b = \tfrac{1}{2} G^{ij}$. The space-time indices $i, j$ are converted to those of the tangent space $a, b$ by $e^a{}_i/\sqrt{2}$ and $\sqrt{2}\, e^{*i}{}_a$, and they are lowered and raised by $G_{ij}, G^{ij}$ and $\delta_{ab}, \delta^{ab}$. The world-sheet Hamiltonian takes the usual form, with $p^2_{L/R} = p_{L/R\,a}\, \delta^{ab}\, p_{L/R\,b}$ and $n_i, w^j \in \mathbb{Z}$ being the momentum and winding numbers. The dependence on the background $E_{ij}$ has been indicated explicitly in the momenta $p_{L/R}$.
The remaining terms N,Ñ are the number operators for the oscillator modes, which take the values of non-negative integers. The partition function then takes the form
where $q = e^{2\pi i\tau}$ and $\tau = \tau_1 + i\tau_2$ ($\tau_1 \in \mathbb{R}$, $\tau_2 > 0$) is the modulus of the torus. The sum over the zero-modes is regarded as that over the Lorentzian lattice $\Lambda$ which is formed by the pairs of momenta $(p_L, p_R)$ and equipped with the Lorentzian inner product $(p_L, p_R)\cdot(p_L, p_R) = p_L\cdot p_L - p_R\cdot p_R$. Since this lattice is even self-dual, i.e. $(p_L, p_R)^2 \in 2\mathbb{Z}$ and $\Lambda = \Lambda^*$ (the dual lattice), the above partition function is modular invariant. The zero-mode part of $H$ is concisely expressed in terms of $v := (w^i, n_j)^t$. This in turn implies the map of the metric and that of the vielbein, up to orthogonal transformations in the tangent space. Here, we have defined $\check{O} := (O^{t})^{-1}$ for any invertible matrix $O$. The canonical map acts on the oscillators, and is valid also for the zero-modes with the level $m = 0$. By these transformation rules, the number operators are mapped accordingly.
Partition functions for T-folds
We now consider the asymmetric orbifolds by the $O(d,d,\mathbb{Z})$ T-duality transformations discussed in the previous section. In particular, we start from the target space $\mathbb{R} \times T^d \times M$, and twist the strings on it by the operator $\sigma \equiv T_{2\pi R} \otimes g$. Here $T_{2\pi R}$ stands for the shift by $2\pi R$ in $\mathbb{R}$, and $g \in O(d,d,\mathbb{Z})$ for the T-duality twist acting on $T^d$; $M$ is the remaining non-compact part. Consequently, we are considering a class of non-geometric backgrounds, i.e. T-folds, where $\mathbb{R}$ twisted by $T_{2\pi R}$ provides the 'base' circle $S^1_R$ with radius $R$, while $T^d$ is its 'fiber'.
World-sheet partition functions
In order to construct the world-sheet torus partition functions describing the strings on the above T-folds, we start with the partition function for the $\mathbb{R} \times T^d$ part with the $m$-fold temporal twist. Here, $L_0, \bar{L}_0$ and $c$ are the Virasoro generators and the central charge, respectively, and the trace is taken over the untwisted Hilbert space. The trace in the base part is evaluated in terms of $Z_{R,(w,m)}$, the partition function for a free boson on $S^1_R$ in the winding sector with spatial and temporal winding numbers $w, m \in \mathbb{Z}$, respectively; $\eta(\tau)$ is the Dedekind eta function. If the twist acted on the $\mathbb{R}$ and the $T^d$ parts independently, the partition function in the base part would be $\sum_{w,m\in\mathbb{Z}} Z_{R,(w,m)}$, giving the ordinary partition function for a compactified free boson. Denoting the fiber part as $Z^{T^d}_{(0,m)}$, the trace in (3.3) is written as $Z_{(0,m)} = Z_{R,(0,m)}\, Z^{T^d}_{(0,m)}$. The partition functions in the base part transform covariantly under the modular transformations, as in (3.6). If the fiber part satisfies the same form of the modular covariance, the modular transformations give $Z^{T^d}_{(w,m)}$ with general winding numbers. Summing them all up, the total partition function in such a case becomes modular invariant. Here, the first factor $Z_M$ is the contribution from $M$ in the background, which is assumed to be modular invariant itself. In this argument, the non-trivial step in constructing the modular invariant $Z(\tau)$ is to find the fiber partition functions with the desired covariance (3.7). We see that a formulation based on the momentum lattices is useful to control the modular properties of the fiber part for that purpose.
Fixed points of T-duality transformations
In the fiber part, the twist operator σ acts as a T-duality transformation. In general, T-duality connects different (but equivalent) world-sheet theories, and thus in order for the twist to be well-defined in a single Hilbert space, it has to be self-dual. 2 In other words, the CFTs for T-folds are defined at the fixed points of the moduli space under the T-duality transformations. This also conforms to the supergravity analysis [1,25]. Given the transformation rule (2.10), this condition is represented as
for $g$ of the form (2.11). This also implies the invariance of the metric. Denoting the momentum squared as $p^2_{L/R}$, one finds that it is separately invariant in the self-dual case. To read off the form of the $O(d,d,\mathbb{Z})$ element implementing the self-dual transformation, we rewrite the transformation (2.17) in a form with $v = (w^i, n_j)^t$ as before and the map $P(E)$. After the map in the above, one has $\hat{p}_{L/R}(E)$ instead of $\hat{p}_{L/R}(E')$ due to the self-duality. Comparing this to the map of the Hamiltonian (2.20), one finds that $g_{\rm SD}$ in the above represents the corresponding $O(d,d)$ element. Its explicit form is [15,28] $g_{\rm SD} = P^{t}\,\Gamma\, P$, using the invariance of $G$ (3.10). This gives a necessary condition on the form of the self-dual transformation. Using the invariance of $G$, one can check that $g_{\rm SD} \in O(d,d,\mathbb{R})$. Thus, if its components are integer-valued, $g_{\rm SD}$ provides a proper $O(d,d,\mathbb{Z})$ self-dual transformation.
Fiber twist
As a simple example of (3.13), we consider in this paper the case where $\gamma_L = \mathbf{1}$ and $\gamma_R = -\mathbf{1}$, i.e. a chiral reflection in the right-mover, and hence the corresponding $g_{\rm SD}$. One can explicitly check that it induces a self-dual transformation $E \to E' = E$. A sufficient condition for the integer-valuedness of $g_{\rm SD}$ is $E_{ij},\, G^{ij}/2 \in \mathbb{Z}$, which follows from the product form in (3.13). When $E_{ij}$ is triangular, e.g. $B_{ij} = G_{ij}$ ($i > j$), it is also sufficient that $E_{ij},\, G^{ij} \in \mathbb{Z}$. This is confirmed by noting that $E_{ij} \in \mathbb{Z}$ implies $2G_{ij},\, G_{ii}$ (no sum) $\in \mathbb{Z}$, and by rewriting the products of $B_{ij}$ and the matrices involved. We note that in general the above $g_{\rm SD}$ does not correspond to the $G \leftrightarrow G^{-1}$ ($R \leftrightarrow 1/R$) duality, in spite of the forms of $\gamma_L, \gamma_R$. In this case, from the transformation (2.15), i.e. $(\alpha_m, \bar{\alpha}_m) \to (\alpha_m, -\bar{\alpha}_m)$, it follows that the oscillator contribution to the twisted partition function in the fiber part takes a simple product form. In the zero-mode part, the right momenta are projected out, $p_R = 0$, which implies $n = -E^t w$, and hence we are left with the Euclidean lattice sum in the left-moving sector with this constraint. Taking into account $g^m_{\rm SD} = 1$ ($m \in 2\mathbb{Z}$) and $g^m_{\rm SD} = g_{\rm SD}$ ($m \in 2\mathbb{Z}+1$), in the untwisted sector ($w = 0$) we obtain the twisted partition functions for $m \in 2\mathbb{Z}+1$, where we have used $\theta_2\theta_3\theta_4 = 2\eta^3$ and defined the combination $\vartheta_{34}$. Below, we show that further choosing appropriate backgrounds yields the partition functions with the desired modular covariance (3.7), and thus the modular invariant total partition functions.
T-folds from lattices
In this section, we show that one can systematically construct the fiber partition functions with the desired modular covariance by choosing the background moduli E ij associated with the Lie algebra lattices, namely, sublattices of the weight lattice of a semi-simple Lie algebra. We first discuss the case of Englert-Neveu lattices [29] for simply-laced Lie algebras and then the case of Euclidean even self-dual lattices, both of which are straightforwardly realized by the momentum lattices of bosonic strings. For a review on the lattices in relation to string theory, see for example [30].
Lie algebra lattices and Englert-Neveu lattices
We consider the background with an affine symmetry $X_1$ of level one for a semi-simple simply-laced Lie algebra $X$, which is realized by (4.1) [26,31]. Here, $C_{ij}$ is the Cartan matrix of $X$, and the indices are not summed in the middle equation.
The simple roots are normalized so that their norms are equal to two. In this background, $e_i \cdot e_j = 2G_{ij} = C_{ij}$ (for any $i, j$), and thus the $e_i$ are the simple roots, whereas the duals $e^{*i}$ are the fundamental weights. Since $E_{ij} \in \mathbb{Z}$, the sum over the momenta in (2.4) becomes that over the weight lattice. Furthermore, since $p_{Ra} - p_{La} = e^a{}_j w^j$, the weights in the left- and the right-movers belong to the same conjugacy class. Up to this constraint, one can confirm by using the inverse of $P(E)$ in (3.12) that the summation reduces to independent ones in each of the left- and right-movers. This gives an explicit realization of the Lorentzian even self-dual (Narain) lattice of $(p_L, p_R)$, of the type called the Englert-Neveu lattice [29].
Thus, without twists, the relevant partition function is given by the sum of the diagonal combinations of the level-one affine Lie algebra characters $\chi^X_\alpha$ for $X$, where $r$ is the rank of $X$. The summation is taken over the weights $\lambda_\alpha$ belonging to a conjugacy class $\Lambda^{(\alpha)}_X$, i.e. an element of the coset $\Lambda^*_X/\Lambda_X$ labeled by $\alpha$, where $\Lambda^*_X$ and $\Lambda_X$ are the weight and the root lattice of $X$, respectively. A conjugacy class $\Lambda^{(\alpha)}_X$ also corresponds to an integrable representation of the affine Lie algebra $X_1$ at level one.
For our purpose, a useful fact about the Lie algebra lattices is that these characters form a finite-dimensional representation of the modular group. The modular matrices are expressed in terms of the weight vectors $\lambda_\alpha \in \Lambda^{(\alpha)}_X$ and the number of the conjugacy classes $N_c$. Now let us return to the construction of the partition functions for T-folds. First, we note that, since $E_{ij} \in \mathbb{Z}$, the constraint $wE \in \mathbb{Z}^d$ in (3.17) is automatically satisfied, and hence the summation becomes that over the root lattice. This enables us to utilize the above modular covariance to derive the partition functions in the twisted sectors.
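For reference, in one standard convention the level-one characters and their modular matrices introduced above can be written as follows; the overall phase conventions may differ from those adopted in the paper's equations (4.3)-(4.5):
\[
\chi^{X}_{\alpha}(\tau) \;=\; \frac{1}{\eta(\tau)^{r}}\sum_{\lambda\in\Lambda_X^{(\alpha)}} q^{\lambda^2/2},
\qquad
T_{\alpha\beta} \;=\; \delta_{\alpha\beta}\, e^{2\pi i\left(\frac{\lambda_\alpha^2}{2}-\frac{r}{24}\right)},
\qquad
S_{\alpha\beta} \;=\; \frac{1}{\sqrt{N_c}}\, e^{-2\pi i\, \lambda_\alpha\cdot\lambda_\beta}.
\]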
Next, we note that the condition discussed in the previous section, that $g_{\rm SD}$ in (3.13) with $\gamma_{L/R}$ in (3.14) be integer-valued, constrains the possible Englert-Neveu lattices. In particular, due to the condition that $G^{-1} = 2C^{-1}$ be integer-valued, we are left with $A_1$, $D_r$ ($r$: even), $E_7$, $E_8$ (4.6) among the simple simply-laced Lie algebras. Since $E_{ij}$ is triangular and its elements are integral, $E_{ij} \in \mathbb{Z}$, one finds that $g_{\rm SD}$ is indeed integer-valued for the algebras in (4.6) and for their products, as discussed in section 3.3. Since any background realized by (4.1) is a fixed point under some non-trivial $O(d,d,\mathbb{Z})$ transformation [26,32], one may also consider other simply-laced Lie algebras. It may also be possible to consider the $\mathbb{Z}_N$ elements of $O(d,d,\mathbb{Z})$ as in [15]. However, the corresponding twists are more involved than (3.14).
By starting from the twisted partition functions $Z^{T^d}_{(0,m)}$ given in (3.17) and using the modular properties (4.4), we can now uniquely determine the whole set of building blocks $Z^{T^d}_{(w,m)}$ for the cases (4.6), including the suitable phase factors, to achieve the modular covariance (3.7). We concisely refer to this prescription and the resultant blocks as the 'modular completions' in the arguments below. Combining these $Z^{T^d}_{(w,m)}$ with the other parts, we obtain the modular invariant partition functions of the form (3.8) for the T-fold CFTs. One can also utilize products where each factor corresponds to any of the algebras in (4.6).
We list the result of $Z^{T^d}_{(w,m)}$ in each case below. The corresponding Lie algebras are explicitly denoted there. Among the list, the cases for $D_2$ and $D_4$ appeared e.g. in [16,17]. The appearance of $A_1$ and $E_7$ may also be of interest, since such cases are not covered by the ordinary fermionization. We note that the action of $g_{\rm SD}$ in the partition functions below is $\mathbb{Z}_2$ in the untwisted Hilbert space with $a = 0$, which is in accord with the supergravity picture. In the twisted Hilbert spaces with $a \neq 0$, this is however not the case, except for $D_r$ ($r \in 8\mathbb{Z}$) and $E_8$. A related discussion on the modular covariance in the $A_1$ case is found in [34]. In the following, we denote the fiber torus corresponding to $X$ by $T^d[X]$.
Partition functions for $D_r$ ($r$: even). There are four conjugacy classes $\Lambda^{(\alpha)}_{D_r}$, which include the vacuum, vector, spinor, and conjugate-spinor representations. We label these by $\alpha = 0, v, s, c$, respectively. Representative weights in these conjugacy classes are $(0,\ldots,0)$, $(1,0,\ldots,0)$, $(\tfrac12,\ldots,\tfrac12)$, and $(\tfrac12,\ldots,\tfrac12,-\tfrac12)$, respectively.
Partition functions for $E_8$. There is only one conjugacy class, which includes the vacuum representation and which we label by $\alpha = 0$. The norms of the weights satisfy $\lambda_0^2 = 0$ (mod 2). This is an even self-dual lattice and hence the modular property is trivial up to the phases coming from the eta functions of the oscillator part, satisfying the modular covariance (3.7). The explicit form of the character $\chi^{E_8}_0(\tau)$ is given in (A.12).
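For convenience, the well-known closed form of this character in terms of the Eisenstein series $E_4$ and the Jacobi theta functions is
\[
\chi^{E_8}_{0}(\tau) \;=\; \frac{\Theta_{E_8}(\tau)}{\eta(\tau)^{8}}
\;=\; \frac{E_4(\tau)}{\eta(\tau)^{8}}
\;=\; \frac{\theta_2(\tau)^{8}+\theta_3(\tau)^{8}+\theta_4(\tau)^{8}}{2\,\eta(\tau)^{8}} ,
\]
a standard identity quoted here as a reference; the paper's (A.12) may use a different but equivalent presentation.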
Euclidean even self-dual lattices
Another class of the lattices for which the modular properties of the Euclidean lattice sum in (3.17) are well controlled is those associated with Euclidean even self-dual lattices. Precisely, we start from a Lorentzian lattice (p L , p R ) which is specified by the basis e * i of a Euclidean even self-dual lattice. The vielbein e i and the metric G ij are determined by (2.2). The matrix corresponding to the Cartan matrix is defined in this case by C ij = 2G ij , which fixes the background moduli E ij by adopting the relations (4.1). With this setting, after the T-duality twist we are left with the sum over the Euclidean even self-dual lattice in (3.17). 4 The Euclidean even self-dual lattices are allowed only for dimensions d ∈ 8Z. At d = 8, the unique lattice is the E 8 lattice, which is already discussed in the previous subsection. At d = 16, there are two. One is the E 8 × E 8 lattice and the other is the Spin(32)/Z 2 lattice. At d = 24, there are twenty four. These are called Niemeier lattices.
For an even self-dual lattice, $e^*_i$ and $e_i$ span the same lattice since it is self-dual. Moreover, $e^*_i \cdot e^*_j,\, e_i \cdot e_j \in \mathbb{Z}$ for $i \neq j$, and they are even for $i = j$, since the lattice is even. Thus, $G^{ij}/2,\, E_{ij} \in \mathbb{Z}$ and the integer-valuedness of $g_{\rm SD}$ for (3.14) is satisfied.
The even self-duality also means that the modular property of the lattice sum is trivial, as in the $E_8$ case. Thus, denoting the corresponding character by $\chi^{\rm ESD}$, the fiber partition function for a $d$-dimensional even self-dual lattice satisfies the modular covariance (3.7). The explicit forms of the $\chi^{\rm ESD}$'s are found by using the relation of these even self-dual lattices to Lie algebra lattices [30]. For example, for $d = 16$, the Spin(32)/$\mathbb{Z}_2$ lattice is realized as the $D_{16}$ sublattices with the vacuum and the spinor conjugacy classes, which determines $\chi^{\rm ESD}$ in this case. Furthermore, by the identity of the Eisenstein series $E_8(\tau) = E_4(\tau)^2$, this coincides with the character of the $E_8 \times E_8$ lattice. As in the case of the Englert-Neveu lattices, combining these with the other parts, we obtain the modular invariant partition functions of the form (3.8) for the T-fold CFTs.
The action of $g_{\rm SD}$ on $Z$ in this case is $\mathbb{Z}_2$ both in the untwisted and twisted Hilbert spaces. A typical example of Euclidean odd self-dual lattices is $\mathbb{Z}^n$, which is also unique for dimensions $n \leq 8$ (see e.g. [35]). It is also realized as the $D_n$ lattice with the conjugacy classes $(0)$ and $(1)$. However, the partition function is not compatible with the modular covariance of the form (3.6): starting with $Z^{T^n}_{(0,1)} = \vartheta_{34}^{\,n/2}\,\theta_3^{\,n}/\eta^{\,n}$ and assuming the covariance (3.6), the successive transformations $STSTST\,(=1)$ would give $Z^{T^n}_{(0,1)} = \vartheta_{34}^{\,n/2}\,\theta_4^{\,n}/\eta^{\,n}$, in contradiction. Choosing the basis so that $e_i \cdot e_j = C_{ij} = \delta_{ij}$, the integer-valuedness of $g_{\rm SD}$ is not satisfied either if we adopt (4.1) and (3.14).
Twists with phases
In acting with the T-duality transformation, the relative phase between the left- and the right-mover is not unique. Such a phase is strongly constrained when one requires that the full operator product expansion, not only the chiral one, of the vertex operators respects the invariance under the twist [7,15]. For the $A_1$ lattice, the phase in this case becomes $(-1)^{nw}$, with which the T-duality acts as an inner automorphism of $su(2)_L \oplus su(2)_R$ [7,8]. In section 5, the possibility of including such phases is explicitly discussed, when we apply our construction to the T-fold CFTs for the heterotic string. In the partition function, the above phase is implemented by the shift $\tau \to \tau + 1/2$, since $p_L^2 - p_R^2 \in 2\mathbb{Z}$. It would be an interesting problem to see whether the phases in higher-dimensional cases [15] can also be interpreted from the current algebra or the lattice point of view.
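The statement about the shift $\tau \to \tau + 1/2$ can be checked directly: the zero-mode contribution picks up exactly the phase $(-1)^{nw}$, since
\[
q^{p_L^2/2}\,\bar q^{\,p_R^2/2}\;\longrightarrow\; e^{\pi i (p_L^2-p_R^2)/2}\, q^{p_L^2/2}\,\bar q^{\,p_R^2/2}
\;=\;(-1)^{n_i w^i}\, q^{p_L^2/2}\,\bar q^{\,p_R^2/2},
\]
using $p_L^2 - p_R^2 = 2\, n_i w^i$; this is a short standard computation rather than a formula quoted from the paper.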
Application to heterotic string theory
So far, we have discussed T-folds in bosonic string theory. Our construction can be applied straightforwardly to the case of superstrings. In particular, applying the results of the Englert-Neveu lattices for D 2 , D 4 to type II superstrings reduces to the analysis in [16]. Its generalization has also been discussed [17]. A notable point in these analyses is that, combined with further twists, our T-fold CFTs simply realize the non-supersymmetric vacua with vanishing cosmological constant at least at one loop.
In this section, we apply our construction to the heterotic string theory. In our set up, the left-mover is the bosonic string with the E 8 × E 8 -lattice, while the right-mover is the superstring including the fermionic one. We focus on the supersymmetric models preserving 8 space-time supercharges. Namely, we assume that the chiral reflection acts on the right-movers along a four dimensional fiber torus, which we choose to be T 4 [D 4 ] for simplicity. We briefly comment on the case T 4 [D 2 × D 2 ] ≡ T 4 [(A 1 ) 4 ] later on. The T-fold CFTs for the heterotic string have been discussed e.g. in [6,11].
However, since we are considering asymmetric orbifolds, we still have a large variety of possibilities for the heterotic vacua: the orbifold group may act non-trivially on (i) the left-mover of T 4 [D 4 ] and (ii) the 16-dim. internal torus with the E 8 × E 8 lattice, while maintaining the modular invariance. We demonstrate how we can systematically construct the modular invariants describing a large number of such heterotic string vacua, by utilizing the modular covariant blocks (4.8), (4.10), (4.12) given in section 4. Above all, we uncover a fairly non-trivial phase factor that realizes the manifest modular covariance of the total building blocks.
Orbifold action for heterotic T-folds
Let us elaborate on a concrete construction of the models of heterotic T-folds. We start with the $E_8 \times E_8$ heterotic string compactified on $T^4[D_4]$ (the $X^{6,\ldots,9}$-directions). As in the previous section, we consider the orbifolding by $\sigma \equiv T_{2\pi R} \otimes g$. Here, $g$ acts along the $T^4$-direction as the chiral reflection of the right-movers ($i = 6, 7, 8, 9$). (Table 1 lists the algebras $X_r$.)
Action on left-mover of T 4 [D 4 ]-direction. As mentioned above, g acts as the chiral reflection (−1 R ) ⊗4 for the right-mover.
Action on the left-mover of the $T^4[D_4]$-direction. As mentioned above, $g$ acts as the chiral reflection $(-1_R)^{\otimes 4}$ for the right-mover. To specify the left-moving action, we consider the decomposition of the conjugacy classes of $D_4$ into those of $A_1^{\,r_1} \oplus X_{4-r_1}$ for a fixed integer $r_1$ ($0 \leq r_1 \leq 4$), where $\alpha = 0, v, s, c$ for $D_4$ and $\alpha = 0, 1$ for $A_1$, as in section 4.1. We also denote by $\alpha = 0$ the conjugacy class of the basic representation (the root lattice itself) for any algebra $X$, i.e. $\Lambda^{(0)}_X \equiv \Lambda_X$. We can uniquely determine the (semi-simple) Lie algebra $X_{4-r_1}$ of rank $4 - r_1$ on the right-hand side by imposing the following conditions: (i) $X_{4-r_1}$ is composed only of the irreducible components given in (4.6), that is, $A_1$, $D_r$ ($r$: even), $E_7$, $E_8$. The left-moving action of $g$ is then given by $[\rho_{A_1}]^{\otimes r_1}$, where $\rho_{A_1}$ acts on the $A_1$-currents $\{J^a\}$ ($a = 1, 2, 3$). This operator is actually interpreted as the chiral half-shift along the direction of the lattice $\Lambda^{(*)}_{A_1}$ (up to some phase factor), when the $A_1$-currents $J^a$ are bosonized in the standard fashion.
We next consider the relevant partition sum with the orbifold twist $g$ inserted. To this end, we recall that $g$ acts on the right-mover as the chiral reflection $(-1_R)^{\otimes 4}$, which leaves the sum over the root lattice in the left-mover as in (3.17). Together with the condition (ii) given above as well as the definition of $\rho_{A_1}$, it is then obvious that only the basic representation of $A_1^{\,r_1} \oplus X_{4-r_1}$ can yield non-vanishing contributions. The right-mover just gives $[\vartheta_{34}(\tau)^{1/2}]^{4}$, as already described in section 4. On the other hand, the $[\rho_{A_1}]^{\otimes r_1}$-twist in the left-mover acts as sign factors on the relevant charge lattice, while leaving the oscillator parts unchanged, which again provides $[\vartheta_{34}(\tau)^{1/2}]^{r_1}$ eventually. (See e.g. [8] for details.) In this way, we obtain the twisted trace (5.5). Here, the building blocks $\chi^{X_r}_{(0,1)}(\tau) \equiv \chi^{X_r}_0(\tau)$ from the lattice and $\tilde\chi^{A_1}_{(0,1)}(\tau) \equiv \vartheta_{34}(\tau)^{1/2}$ for the $\rho_{A_1}$-twist already appeared in (4.8), (4.10), (4.12), and are summarized in appendix A.1 explicitly. For later convenience, we have also rewritten $\vartheta_{34}^{1/2}$ in the right-mover as $\tilde\chi^{A_1}_{(0,1)}$, although it does not necessarily originate from the $A_1$-symmetry.
For example, in the case of r_1 = 1, the relevant decomposition is given in (5.6), and the trace (5.5) becomes the corresponding product of characters, where χ^{A_1}_0(τ), χ^{D_2}_0(τ) are the characters of the basic representations of (A_1)_1, (D_2)_1, respectively. (Footnote 5: ρ_{A_1} is explicitly written as ρ_{A_1} = e^{−iπℓ/2} e^{iπ J^3_0} on the integrable representation of spin ℓ/2 (ℓ = 0, 1). The phase factor e^{−iπℓ/2} is necessary to make ρ_{A_1} involutive. Note that the simpler inner-automorphism ρ̃_{A_1} ≡ e^{iπ J^3_0} is not involutive; it would instead play the role of the 'Z_4-chiral reflection' appearing in [16,17]. It is presumably an interesting possibility to extend the heterotic vacua given in this section so as to include the Z_4-action ρ̃_{A_1}, and we would like to discuss it elsewhere.)
Action on the two E_8-directions. Let us first focus on one of the E_8-factors. There, we have a unique conjugacy class, that is, the root lattice itself. We fix an integer r_2 (0 ≤ r_2 ≤ 8), and consider the decomposition of the root lattice Λ_{E_8} as in (5.8). The decomposition (5.8) is again uniquely determined by essentially the same conditions as for T^4[D_4], i.e. (i) and (ii) with D_4, X_{4−r_1} replaced by E_8, X_{8−r_2}, respectively. The resulting X_{8−r_2} is listed in table 1.^6 We then define the g-action in this sector by [ρ_{A_1}]^{⊗ r_2} associated with the lattice component [Λ_{A_1}(*)]^{r_2}. Since the relevant trace has a contribution only from the basic representation of X_{8−r_2}, we obtain the twisted trace (5.9). The g-action for the other E_8-factor is defined in the same way with an integer r_3 (0 ≤ r_3 ≤ 8).
Construction of heterotic T-folds
Now, let us discuss how to construct the full building blocks characterized by the three integers (r_1, r_2, r_3), which are modular covariant. In other words, we would like to construct the modular completions of (5.5) and (5.9). For this purpose we recall the modular covariant blocks (4.8), (4.10), (4.12), and consider their extensions to the r-dim. torus T^r[X_r] composed of their products. For the 'odd sector' with a ∈ 2Z + 1 or b ∈ 2Z + 1, they are organized into a product form in which κ_{(a,b)} denotes the phase factor assuring the modular covariance; it can be directly read off from (4.8), (4.10), (4.12), and is generally expressed as in (5.11). Note that the peculiar factor κ_{(a,b)} has an effect only for odd r. (Footnote 6: The uniqueness of the decomposition (5.8) is slightly non-trivial, even though it is almost trivial for the D_4-case (5.3). For instance, in the case of r_2 = 1, one might think that another decomposition would be allowed. However, Λ_rem there includes a conjugacy class such as Λ_{A_1}(0) ⊕ Λ_{A_1}(1) ⊕ Λ_{D_6}(s). Thus, this possibility is excluded by the condition (ii), and we obtain the unique decomposition with X_7 = E_7.)
Furthermore, it is useful to note that the resulting combination satisfies the modular covariance of the form (3.6).
Based on these facts, one can construct the building blocks with the expected modular properties as follows: (1) T^4[D_4]-sector. Fix an integer r (0 ≤ r ≤ 4), and set the block F^{[r]}_{(a,b)}(τ) accordingly. By construction, F^{[r]}_{(a,b)}(τ) is obviously modular covariant, being mapped to F^{[r]}_{(b,−a)}(τ) under the S-transformation as in (5.14). (2) E_8 × E_8-sector. For a single E_8-factor, fix an integer s (0 ≤ s ≤ 8), and define a chiral building block in the analogous way. To describe the total modular invariant, we still need the free fermion chiral block in the right-mover, which is twisted by (−1_R)^{⊗4}. This has been presented e.g. in [16,17], and can be concisely expressed, in terms of the notation adopted here, as f_{(a,b)}(τ) for a ∈ 2Z + 1 or b ∈ 2Z + 1 (5.17). Here the trivial cancellation appearing in the bracket [···] just means the existence of supersymmetry. A more explicit form of f_{(a,b)}(τ) is given in appendix A.2. The modularity of f_{(a,b)}(τ) is expressed in the analogous covariant form.
We add a few comments: • In the cases when all r i are even, only the D 2r -lattices (or the E 8 -lattice itself) come into the above construction. For these cases, our heterotic T-fold vacua can be reproduced by the free fermion construction. However, when at least one of r i is odd, our construction does not reduce to the free fermion construction.
• It is straightforward to apply the above construction to the case of the (w,m)-blocks in (5.21), and also χ^{D_4}_j(τ).
Unitarity in each winding sector
In the heterotic string vacua constructed above, the action of the orbifold twist σ = T_{2πR} ⊗ g is simple in the untwisted sector, namely the unwound sector along S^1_R, because g is involutive on the untwisted Hilbert space, g² = 1. However, the situation gets much more complicated in the twisted sectors, especially in the winding sectors with odd winding w ∈ 2Z + 1, due to the existence of the non-trivial phase factor κ_{(w,m)} given in (5.11). It is thus not so obvious whether or not the string spectrum is unitary in each winding sector, which is read off by the standard technique of the Poisson resummation with respect to the temporal winding m. Namely, after summing over m and rewriting the total partition function as a sum over the sectors labeled by (w,m), we can perform the Poisson resummation analysis in a manner following [14,16,17]. After that, we can confirm that the above heterotic vacua are indeed unitary for an arbitrary choice of (r_1, r_2, r_3). We briefly sketch how this works as follows: • For the sectors with w ∈ 2Z, it is easy to see that the spectrum is unitary. Indeed, the fermion chiral block f_{(a,b)}(τ) given in (5.17) (or (A.19) for a more explicit form) with a ∈ 2Z, b ∈ 2Z + 1 vanishes because of the cancellation within the NS-sector alone, and we find the expected unitary form. • For the sectors with w ∈ 2Z + 1, we have (5.25). The equality in the second line follows just from the covariance of the total building blocks under the modular T-transformation.
• Next, we evaluate Z^{(NS)}_w(τ)|_{even m} (w ∈ 2Z + 1) by using the Poisson resummation. The relevant computation is now straightforward, since the phase factor is relatively simple, κ_{(w,2m′)} = e^{iπ (r̄/4) w m′}, where we set r̄ ≡ Σ_i r_i. Other types of phase factors may come from χ^{X_r}_{(w,2m′)}(τ) as in (A.13), (A.14) and (A.15). In any case, however, the relevant phase factors always take the form e^{2πiαm} with some rational number α. This yields the shift of the KK momentum, n/(2R) → (n + α)/(2R), and no extra phases are left (see the illustration below). We thus obtain a q-expansion with positive coefficients belonging to (1/2)Z. • Finally, we pick up the remaining sector. As pointed out in [14], this is Poisson resummed into almost the same form as Z^{(NS)}_w(τ)|_{even m}, but with an extra minus sign in each term with the level mismatch h − h̄ ∈ 1/2 + Z. In the end, we conclude that the total partition sum for the odd winding sector (5.25) is indeed q-expanded only with coefficients belonging to Z_{≥0}.
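The mechanism invoked above can be illustrated with the generic Gaussian Poisson-resummation identity, written here in a simplified form (not with the specific lattice sums of the text): a phase linear in the temporal winding m reappears only as a shift of the dual (momentum) quantum number, so no extra phases survive in the q-expansion.

```latex
% Generic Poisson resummation with a linear phase in the summation variable m.
\begin{equation}
  \sum_{m \in \mathbb{Z}} e^{-\pi a m^2 + 2\pi i \alpha m}
  \;=\; \frac{1}{\sqrt{a}} \sum_{n \in \mathbb{Z}} e^{-\frac{\pi}{a}\,(n+\alpha)^2},
  \qquad a > 0 ,
\end{equation}
```

so that, after the resummation, the coefficients of the resulting expansion stay real and non-negative.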
Conclusions
We demonstrated that one can systematically construct the modular invariant partition functions for the T-fold CFTs by using the Lie algebra lattices. We first discussed the case of bosonic strings. Under the condition that the background moduli sit at a fixed point of a simple T-duality transformation realized as a chiral reflection, the possible Lie algebras for the Englert-Neveu lattices are restricted to the four cases listed in (4.6) among the simple simply-laced ones. Based on the fact that the characters of the level-one affine Lie algebras form a finite-dimensional representation of the modular group, the partition functions for the fiber torus part are found to satisfy the modular covariance of the form (3.7). The results are listed in (4.8), (4.10), (4.12) and (4.14). Summing these up together with the base part gives the desired modular invariants for T-folds. Similar constructions are possible also by using the Euclidean even self-dual lattices.
We then applied the above construction to the T-folds in the E 8 × E 8 heterotic string theory. As an example, we took a fiber torus representing the D 4 Englert-Neveu lattice. Incorporating the non-trivial twists/phases in the left-moving sector, we obtained a class of modular invariant partition functions of the T-fold CFTs which are labeled by three integers. In the twisted sectors, the partition functions in the left-mover are given by the building blocks obtained in the bosonic-string case, which are composed of the characters of the affine Lie algebras at level one. After the Poisson resummation, one can also check the unitarity of the spectrum. The case of the D 2 × D 2 torus was briefly discussed.
Our construction in the bosonic-string case formally resembles the truncation of the bosonic-string spectrum to the heterotic-string spectrum, which is used to study the T-duality of the latter [26,32]. Indeed, one can start with a (d + d′)-dimensional torus whose background moduli take the same form as in the truncation (6.1), with the internal indices running over 1, ..., d and µ, ν = 1, ..., d′, and proceed as in sections 3 and 4. An interesting possibility in this case is that the additional moduli A_{µk} may be incorporated in the T-fold CFTs. For this to be the case, one needs to check the fixed-point condition of the T-duality and also to confirm that the twisted partition functions with non-trivial A_{µi} indeed satisfy the modular covariance of the form (3.7). We leave these as future problems.
It is worthwhile to remark that the heterotic T-folds we constructed include novel cases which contain rather non-trivial phase factors and are not reduced to the free fermion construction. It would thus be interesting to apply our construction to building 'realistic' heterotic vacua of asymmetric orbifolds, since recent attempts so far are mainly based on the free fermion construction, e.g. as in [11] for the SUSY vacua and in [36][37][38][39][40][41][42] for the SUSY-breaking ones. In particular, it is indeed possible to extend the present construction to a variety of non-SUSY heterotic T-folds by following [16,17]. It would also be interesting to figure out the moduli space of such a class of vacua. We would like to return to these issues in a future work.
(Appendix conventions: q := e^{2πiτ}, y := e^{2πiz}; we use the abbreviations θ_i(τ) ≡ θ_i(τ, 0), with θ_1(τ) ≡ 0.)
"Physics"
] |
Energy Performance Assessment of Virtualization Technologies Using Small Environmental Monitoring Sensors
The increasing trends of electrical consumption within data centres are a growing concern for business owners, as electricity is quickly becoming a large fraction of the total cost of ownership. Ultra-small sensors could be deployed within a data centre to monitor environmental factors, lower electrical costs and improve energy efficiency. Since servers and air conditioners represent the top consumers of electrical power in the data centre, this research sets out to explore methods from each subsystem of the data centre as part of an overall energy-efficient solution. In this paper, we investigate the current trends of Green IT awareness and how the deployment of small environmental sensors and Site Infrastructure equipment optimization techniques can offer a solution to a global issue by reducing carbon emissions.
Introduction
Recent years have witnessed the continuing development of the Internet from its original communication purpose (e.g., email) and content provision (e.g., Web) to an application deployment platform, where increased computing and storage capabilities are constantly being made available to end users. In parallel, an unprecedented number of personal computers are deployed worldwide: according to a recent Gartner report, worldwide PC shipments reached 82.9 million units in the second quarter of 2010 alone, representing a 20.7% increase from the second quarter of 2009. At the same time, however, enormous energy has been wasted due to idle resources. A recent report from the NRDC [1] confirmed that idle servers consume approximately 69-97% of the power they draw when fully loaded, often even when the power management function is enabled. With energy costs increasing as the size of IT infrastructures continues to grow, it is apparent that keeping running costs down is quickly becoming a top priority for many IT-centric organisations. In this paper, we address how to integrate environmental monitoring sensors and cutting-edge virtualisation technologies to cut the power consumption of IT infrastructure.
Recently, the cloud computing paradigm has emerged as an energy-efficient approach which enables ubiquitous, on-demand network access to a shared pool of flexibly reconfigurable computing resources, including networks, servers, storage, applications, and services, that can be rapidly deployed with minimal management effort or service provider interaction. In particular, so-called virtualisation-based cloud computing platforms are becoming very popular in providing a new supplement, consumption, and delivery model for network software applications (NetApps) over the Internet. Here, virtualisation refers to the abstraction of computer resources, such as the process of running two or more operating systems on a single set of physical hardware.
Originally developed for the IBM mainframe operating systems in the 1960s, the virtualisation technology enables a system administrator to combine disparate physical computing systems into virtual machines in a maximally energy-efficient manner, thus minimizing idle hardware and hence the overall power consumption. Moreover, virtualization can assist in distributing workload in such a way that servers are either busy, or put in a low power sleep state. This has led to server consolidation, with heightened computer elasticity as well as significantly reduced electricity bills. Based on a software cloud model, a virtualized, scalable and energy-efficient resource management strategy can be developed to facilitate integration of loose-coupled resources, with significantly improved utilisation, and with the added advantage that users can be freed from the often costly administration work including software deployment and maintenance.
With 2% of the world's carbon emissions currently being produced by the IT sector according to a Gartner Press Release [2], and with further estimates that this will reach 3% by 2020 [3], it is understandable that there have been in-depth studies raising the awareness of data centre energy usage [4]. However, there has been little research on the reduction of power usage and carbon footprint through the deployment of server virtualization technologies and more efficient air flow management methods. This is rather interesting when considering that data centre electricity costs in the UK doubled between 2003 and 2007 [5].
The objective of this paper is to investigate the current trends of Green IT awareness and how the deployment of small environment monitoring sensors and Site Infrastructure equipment optimization techniques can offer a solution to a global issue by reducing carbon emissions. In this paper, we (1) use small environment monitoring sensors to explore the implications of air temperature on the power consumption of IT equipment; (2) explore how server virtualization offers a solution through two categories (Hypervisors and OS) and identify the important factors which define virtualization as a Green technology; (3) investigate the site infrastructure components of the Data Centre using small sensors and how their efficiency could significantly contribute to Green IT; (4) monitor and record the power consumption of physical servers under different processing loads; and (5) observe the implications of virtual servers on power consumption under different processing loads.
The rest of this paper is organised as follows: related work on server virtualization is presented in Section 2; the experiment system design is described in Section 3; the experimental results are analysed and discussed in Section 4; finally, the conclusion is given in Section 5.
Related Work
The term hardware virtualization refers to the process of presenting a set of logical computing resources which can be accessed and shared regardless of geographic location or physical configuration [6]. Although this technology is currently under constant exposure by the media and large organisations as a contributor towards Green IT, it was first introduced back in the 1960s by the IBM Corporation as a method of simultaneous time-sharing of mainframe computers [7].
This idea was then further developed to incorporate a hardware abstraction layer, also known as a Virtual Machine Monitor (VMM), which provides interaction between the hardware and software layers [6]. However, Szubert [8] explains that it was not until 1999, when virtualization was adopted by VMware, that the concept was finally transferred from being strictly used for mainframes to industry-standard x86 hardware. As a result, a standard x86 server could be partitioned into several virtual machines that use virtualized components. This allows the concurrent processing of different Operating Systems and software applications in an independent fashion. Although Panek and Wentworth [9] claim that the ability to run multiple VMs on a single server could reduce hardware costs and IT department overhead, Kappel et al. [10] argue that this potentially creates a single point of failure, as these VMs depend solely on the physical server to function correctly.
Goldberg [11] classifies the two types of VMMs as: Type I Hypervisor (OS Level Virtualization) and Type II Hardware Virtualizer (Hypervisor Virtualization).
OS Level Virtualization is considered one of the common methods for running several independent production VMs on the same physical server [12]. The architecture of this technique uses the Host OS, installed under the Virtualization Layer, to manage a pool of hardware resources. This architecture is also known as OS sharing, as the direct interaction with the hardware resources gives the Host OS the capability of sharing these resources among the VMs. Additionally, research [13] suggests that due to this architecture, greater flexibility is achieved as applications can run either on the Host OS or virtually on the Guest OS. However, Marinescu and Kroger [14] explain that this dependency on the Host OS represents a single point of failure (SPOF) which could cause a bottleneck, with performance up to 30% lower than in a non-virtualized environment.
Hypervisor Virtualization is increasingly becoming popular for dedicated servers with a primary purpose of running virtual servers. In contrast to the OS Level Virtualization technique, the Hypervisor Virtualization does not rely on a Host Operating System as its Virtualization Layer directly interacts with hardware resources. With the Virtualization Layer directly connected to the hardware resources, it is able to act similarly to the Host OS within the OS Level Virtualization. This means that the Virtualization layer is able to share resources such as the NIC, CPU, RAM and DISK among the VMs whilst avoiding the unnecessary overhead created by the Host OS [12]. Examples of Hypervisor Virtualization include VMware ESX/ESXi and Microsoft Hyper-V.
The literature has so far identified the different methods which could be applied within a Data Centre (DC) subsystem to promote Green IT. From the research and review, it could be concluded that server virtualization technologies and efficient air flow management could contribute to the Site Infrastructure and IT optimization.
System Design
Although the benefits of deploying server virtualization to reduce power consumption are supported by a number of studies [6], there is currently little research comparing the power consumption of different server virtualization architectures. Furthermore, there are also few studies looking into the effects of room temperature on the power consumption of virtualized and non-virtualized servers using environment monitoring sensors.
This study fills the gaps in the literature by firstly testing the difference in power consumption between physical, Operating System (OS) level virtualized and Hypervisor virtualized servers under different workloads. Secondly, the experiment is replicated within two different room temperatures to explore how site infrastructure components could affect the power consumption. The experiment contains a combination of hardware and software components, which are discussed in the following sections.
Software Components
The software components of the experiment remained constant throughout the entire period of time with the only modified variable being the workload on each server. These software components consisted of Operating Systems, Virtualization Infrastructures and Workload generator.
Firstly, it was decided that the chosen Operating System for the virtual and non-virtual machines would be Microsoft's Windows XP. Since its emergence in 2004, this operating system's speed, reliability and performance have won it huge popularity, making it currently the most deployed Operating System worldwide [15]. Thus, the use of Windows XP makes it the most suitable choice for replicating a real-world production network. Secondly, VMware technology was chosen for virtualization as it offers a wide range of products such as VMware Workstation and VMware vSphere 4. Additionally, VMware's 80% share of the server virtualization market reflects its popularity and sets it as the most widely deployed virtualization technology [16]. With the VMware Workstation software chosen to implement OS Level Virtualization, the vSphere 4 Infrastructure holds the components required for Hypervisor Level Virtualization. These components are as follows: • VMware ESX/ESXi: the virtualization platform for vSphere • VMware vCenter Server: the central point for the configuration and management of virtualized environments • VMware vSphere Client: the locally installed client interface that allows users to connect remotely to vCenter Server or ESX/ESXi • VMware vSphere Web Access: the web interface used for managing virtual machines. Finally, in order to make the virtualized experiment as close to the real world as possible, each server had to process various tasks over a period of time. Through the use of traffic generation software, a number of workloads could be configured to place the server under different states, as illustrated in Table 1.
Hardware Components
The hardware components of this experiment consist of the following:
Experiment Design
Having identified the software and hardware components required for the testing environment, the OS Level and Hypervisor Level virtualization solutions should be implemented. However, in order to fully simulate a real world environment, the VMs will be configured to automatically obtain an IP address from a DHCP server. This will be required for the testing phase which will also incorporate several network traffic processing workloads.
The experiment will initially compare the two server virtualization architectures to a standard physical architecture. This will examine the potential optimization achieved from IT equipment, and in particular hardware virtualization. Furthermore, the experiment will be conducted under two different room air conditions to examine the optimization achieved from Site Infrastructure components such as Computer Room Air Conditioner (CRAC) units. The first room condition is intended to have a high temperature, representing a DC server room suffering from over-utilized CRAC units. In contrast, the second room condition, with a cooler temperature, is intended to reflect a DC server room managed by a much more efficient CRAC unit and air flow management techniques. In order to identify the two periods of the week which reflect the intended room temperatures, a probe sensor was installed within the computer room, as seen in Figure 2. By monitoring the air temperature of the computer room for a period of seven days, it was possible to identify the days with the highest and lowest average temperatures (a small sketch of this reduction is given below). This is illustrated in Figure 3, where temperatures are recorded for each day of the week. Through this, it was clear that Wednesday experienced the highest average room temperature measured by the small sensor. This could be due to a number of factors, such as the number of students attending classes, which resulted in a higher number of computers and network peripherals being used. In contrast, Sunday experienced the lowest average room temperature, as the room remained mostly vacant due to no teaching schedules.
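A minimal sketch of how a week of probe readings can be reduced to daily averages in order to select the warmest and coolest days follows; the input format is hypothetical and not the actual SensorProbe 8 export.

```python
from collections import defaultdict
from statistics import mean

def daily_temperature_summary(readings):
    """readings: iterable of (day_name, temperature_celsius) tuples
    collected from the room probe sensor over seven days."""
    per_day = defaultdict(list)
    for day, temp in readings:
        per_day[day].append(temp)

    averages = {day: mean(temps) for day, temps in per_day.items()}
    warmest = max(averages, key=averages.get)   # e.g. Wednesday in this study
    coolest = min(averages, key=averages.get)   # e.g. Sunday in this study
    return averages, warmest, coolest
```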
Primary Data Collection
The primary data of the experiment was collected through a monitoring procedure using power plug monitors and temperature sensors. Firstly, by positioning the plug-in power monitors between the AC adapter of the server and the mains power supply, it is feasible to measure power consumption over periods of time under different workloads. Although there are software-based tools for measuring power consumption, the use of physical plug-in power meters is cost-effective [17] and offers the most accurate method of measuring the power of servers running several workloads [18]. Furthermore, by distributing temperature sensors within the computer room, it is feasible to measure the air temperature over periods of time. The sensors were directly connected to the RJ-45 sensor port switch within the SensorProbe 8, where data are collected and stored periodically. Finally, with the monitored environment successfully set up, the previously discussed workloads were performed on each server.
Monitoring
This phase of the experimental procedure encompasses the rotation of processing workloads and air conditions to measure the impact on power consumption. The experiment comprised two environments (Cool, Warm), with three devices monitored under three workload phases for a period of 90 seconds each (a sketch of this rotation is given below). This was designed to enable any trends in power consumption to be detected.
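The rotation just described can be summarised by the following sketch; the device names, phase labels and the `read_power_watts` callback are placeholders rather than the actual tooling used in the study.

```python
import time
from itertools import product

ENVIRONMENTS = ["cool", "warm"]
DEVICES = ["esxi_server", "workstation_server", "physical_servers"]
PHASES = ["idle", "workload_1", "workload_2", "workload_3"]  # idle baseline + 3 workloads
DURATION_S = 90  # length of each monitored phase, in seconds

def run_monitoring(read_power_watts, sample_interval_s=1):
    """read_power_watts(device) -> float: caller-supplied meter query,
    e.g. a wrapper around the plug-in power monitor's interface."""
    samples = []  # rows of (environment, device, phase, second, watts)
    for env, device, phase in product(ENVIRONMENTS, DEVICES, PHASES):
        for second in range(0, DURATION_S, sample_interval_s):
            samples.append((env, device, phase, second, read_power_watts(device)))
            time.sleep(sample_interval_s)
    return samples
```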
Replications
In order to verify the accuracy of the measured data, it is critical to ensure that a fair monitoring procedure is carried out. Thus, test replications will be carried out, as they offer increased confidence in the accuracy of the produced results [19]. Basili et al. [20] categorize replications into different types, as illustrated below: • Replications that do not alter the hypothesis.
• Replications that alter the hypothesis.
• Replications that reformulate the goals of the experiment.
For the purpose of this experiment, two types of replications were adopted. Firstly, replications that do not alter the hypothesis were conducted for the purpose of verifying the accuracy of the results. This includes repeating the original experiment three times as closely as possible, without any alterations to the other variables. Secondly, replications that alter the hypothesis were based on the alteration of air conditions to verify the impact of room temperature and computer processing workloads on the power consumption.
Results and Discussion
The power consumption results were split into two sections: cool air temperature data and warm air temperature data. This was designed with the intent of exploring any similar trends within each environment. In addition, this allows the comparison of results between the environments to clearly identify any effects of the variant air temperature.
Cool Air Temperature Data
Figures 4-8 show conclusively that each server draws an increasing amount of electrical power whilst under significant processing workloads. In addition, it was observed that the recorded data for each server whilst idle experienced the most stability and the lowest range of fluctuation. In comparison, the highest magnitude of fluctuation was observed once each server had been configured with Workload 3.
ESXi
The ESXi server running two VMs experienced an incremental power response to the three workloads, as illustrated in Figure 4. Whilst idle, the power consumption of the server remained consistent, averaging 101.3 Watts with a fluctuation range of 0.6 Watts. The measured data show that once the system was configured with processing workloads equating to a 10-15% utilization rate (Workload 1), the power consumption increased by more than 3.5 Watts. With the power intake now averaging 104.8 Watts, it can be seen from Figure 4 that the values still follow a similar pattern to the previous configuration, but with a higher fluctuation of 1.23 Watts.
This trend of incremental power consumption was further evident as the server experienced higher utilization rates. The measured data suggest that the highest increase in power consumption occurred as the server transitioned from Workload 1 to Workload 2. This was measured as a 6.83 Watt increase, as the server processing Workload 2 averaged 111.6 Watts over the 90-second period. Furthermore, once the server was configured with processing tasks equalling utilization rates of 25-30% (Workload 3), the measured data showed a slight increase of 2.83 Watts, now averaging 114.5 Watts.

Figure 5 shows that the Workstation server running two VMs also experienced an incremental power response to the three workloads. In addition, Figure 5 suggests that the Workstation server, representing OS level virtualization, experienced power consumption values similar to the ESXi server, which represents Hypervisor level virtualization. Figure 6 visualizes the similarity of consumption between the two server virtualization technologies as each workload was configured. Firstly, whilst idle, the measured data show that the power consumption of the Workstation server remained stable, averaging 102 Watts with a small fluctuation of 0.67 Watts. When the Workstation data were compared to the measured data for the ESXi server, a difference of 0.7 Watts was evident, demonstrating a strong degree of similarity in consumption between the two technologies. Moreover, the trend of similarity between Workstation and ESXi was also found as the Workstation server was configured with Workloads 1, 2 and 3. Workload 1 for the Workstation server displayed a power consumption averaging 103.7 Watts; compared to the ESXi server under Workload 1, a difference of 1 Watt was observed. Workload 2 for the Workstation server displayed a power consumption averaging 112.2 Watts; compared to the ESXi server under Workload 2, a difference of 0.5 Watts was observed. Furthermore, Workload 3 for the Workstation server displayed a power consumption averaging 116.1 Watts; compared to the ESXi server under Workload 3, a difference of 1.7 Watts was observed.

Figure 7 visualises the two physical servers' combined incremental power response to the three workloads. Again, the measurement data shown in Figure 7 demonstrate another strong correlation between power consumption and workload. The measured data also suggest that the physical servers experienced further similarities. Firstly, it was observed that the measured consumption whilst idle experienced the most stability. Moreover, the results also suggest that the highest increase in consumption occurred during the transition from Workload 1 to Workload 2. It is notable that the scale of consumption is the most apparent difference between the physical servers and the previously discussed server virtualization technologies. This is illustrated in Figure 8, where the difference in power consumption between the physical and virtualized servers is demonstrated. This energy inefficiency of physical servers is evident, as they almost double the power intake of virtualized servers.
Warm Environment Data
Figures 9-15 visualize the measured power draw of each server functioning within a warmer air environment than the previous set of data. Again, it is notable that each server draws an increasing amount of electrical power whilst under significant processing workloads. Similarly to Figures 4-8, it was observed that the power consumption of each server whilst idle experienced the most stability, recording the lowest range of fluctuation. However, unlike the data shown in Figures 4-8, where the highest increase in power consumption occurred in the transition from Workload 1 to Workload 2, this trend was no longer evident within this data set.

Figure 9 shows the ESXi server experiencing an incremental power response to the three workloads. Whilst idle, the measured data show that the power consumption of the server remained stable, averaging 104.2 Watts, as suggested by a low fluctuation range of 1.03 Watts. Although this stability whilst idle was also evident in the previous environment, a number of differences were observed. When the idle power consumption of the ESXi server is compared to that from the colder environment, a 2.2 Watt increase in the average consumption was observed. While this may give an indication of the effect of temperature on power consumption, it was important to identify further trends to support this hypothesis. After further examination of the remaining data, the existence of a correlation between air temperature and power consumption was further supported, as illustrated in Figure 10. A similar pattern was highlighted in the data collected for Workload 1: although the increase in power consumption was expected due to the configured workload, the server's average consumption of 111.1 Watts represented a 6.3 Watt increase over the cooler environment. This increase in power consumption was evident throughout the collected measurement data, suggesting that the increase in air temperature has certainly affected the level of power consumed by the ESXi server.

Figure 11 shows the Workstation server experiencing an incremental power response to the three workloads. Again, the measured data suggest that the server's power consumption is most stable whilst idle. The figure shows the power consumption of the Workstation server whilst idle averaging 104.8 Watts with a low fluctuation of 1 Watt. The correlation between the server's workload and power consumption was further evident in Figure 11. However, it is notable from Figure 12 that an average increase of 5.5 Watts in power consumption was observed once the data were compared to the measured power consumption from the cooler environment.
Workstation
Similar to the cooler environment, where the power consumption of the two server virtualization technologies showed very little difference, Figure 13 shows that the measured data of the Workstation within the warm environment also suggest power consumption similarities with the ESXi server. Figure 14 visualises the two physical servers' combined incremental power response to the three workloads within the warm environment. Again, similarly to the physical server within the cool environment, the measured data series demonstrate another strong correlation between power consumption and workload. Figure 15 illustrates the main difference between this data set and the previously collected data for the physical server from the cooler environment. Further comparison between the physical server in a warm environment and the physical server in a cool environment suggests that the server within the warmer environment experienced an average increase of 12.4 Watts in power consumption. With this increase evident throughout the experiment as the workloads were configured, it is clear that the difference in power consumption between the two environments has been largely influenced by the air temperature.
Discussion
This section of the paper will discuss a number of observations which were made upon the completion of the experiment and analysis of the results.
Virtualization Technologies and Physical Servers
Firstly, it is clear from the experiment that the use of server virtualization technologies improved power efficiency in comparison to the physical servers. Although these data also display a strong correlation between power consumption and workload, the results of this experiment are not considered important beyond what they contribute to the following observation: there is a 103.1 Watt difference between two virtualized servers and two physical servers whilst idle. This observation, as seen in Table 2, proves important as it demonstrates the high levels of power being wasted by inactive physical servers waiting for tasks to process. Therefore, by consolidating underutilized servers into a single physical server, significant power savings can be achieved as physical servers can be removed. For example, by virtualizing five physical servers running at 15% utilization into five virtual machines within a single physical server, the running costs of four servers would potentially be eliminated.
However, it could be argued that a single server running five virtual machines with a total utilization rate of 75% would require additional processing and therefore consume as much power as the five physical servers. This is not the case, as Central Processing Units (CPUs) consume approximately a quarter of the server's total power whilst the rest is shared among the other components [18]. This means that the CPU's 60% utilization increase, from 15% to 75%, only accounts for a quarter of the power budget, and thus an increase of roughly 18% in power consumption is expected from the CPU's additional processing. Furthermore, other components such as the fan and PSU will also experience an increase in power consumption, but this is regarded as insignificant when compared to the power being saved. This is clearly illustrated by the measured data, where the average of two virtualized servers produced a 51.7% saving in power usage compared with two physical servers processing the same workload. To demonstrate the potential savings from adopting server virtualization technologies, the following scenario can be assumed from the collected data. Running two underutilized servers at utilization rates of 10-15% (Workload 1) requires an approximate total of 418.2 Watts. Through server consolidation, the CPU usage of the single server is expected to increase to 25-30% due to the two VMs' dependence on shared resources. With the virtualized server running two VMs now averaging 121.4 Watts, the following saving could be achieved: 418.2 W - 121.4 W = 296.8 W. Assuming that the cost of electricity is around 10p per kWh, as quoted by British Gas [21], a DC that operates 24 hours a day, 7 days a week, will benefit from server virtualization (a worked estimate is given below). Furthermore, through the use of server consolidation, a number of additional savings beyond just the electricity cost of the server could be achieved. As noted by Bianchini and Rajamony [22] with respect to the cooling infrastructure, it is critical to understand the implications of inefficient servers on the CRACs. Inefficient physical servers that consume more power tend to release more heat into the server room. This creates the requirement for a larger and more sophisticated cooling infrastructure to efficiently remove the heated air from the servers. Therefore, by consolidating underutilized servers into VMs, the reduced heat output will lower the workload on the CRAC units and essentially reduce the electrical power required to cool the air. However, with virtualization now reducing the IT workload and produced heat, it is important to avoid the risk of running the cooling infrastructure with more power than required [23]. It is suggested that right-sizing in this scenario should be carefully applied, as it will reduce fixed costs and increase efficiency [23].
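Since the cost figure itself is not reproduced in the text above, the following is a reconstruction using only the stated inputs (296.8 W saved, continuous 24/7 operation, and 10p per kWh):

```latex
% Annual saving for one consolidated pair of servers, under the stated assumptions.
\begin{align}
  E_{\text{saved}} &= 296.8\,\mathrm{W} \times 24\,\tfrac{\mathrm{h}}{\mathrm{day}}
                      \times 365\,\tfrac{\mathrm{day}}{\mathrm{yr}}
                    \approx 2600\,\mathrm{kWh/yr}, \\
  \text{Cost saving} &\approx 2600\,\mathrm{kWh/yr} \times \pounds 0.10/\mathrm{kWh}
                      \approx \pounds 260 \text{ per year}.
\end{align}
```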
Conclusions
Recent advances in sensor technologies and virtualization technologies are providing exciting opportunities to make significant progress in understanding and solving the real-world challenge of reducing power usage and carbon footprint. This paper has highlighted the emergence of Green IT as a result of the increasing trends in power consumption and discussed a number of measures for efficiency improvements. With these measures divided between the IT equipment and Site Infrastructure subsystems of the DC, one of each was examined for efficiency improvements through direct experimentation. Server virtualization was chosen as part of a solution for IT equipment optimization. In comparison, the efficiency of the CRAC was used as part of a site infrastructure measure for improving the power efficiency of the DC, assessed using small environment sensors.
The measured power consumption obtained from the use of server virtualization technologies appears to save more than half of the power required by physical servers. This can potentially be attributed to the server's primary power consumers being more efficiently utilized through server consolidation. In addition, the testing of both server virtualization technologies produced very similar data, suggesting little difference between the two technologies in the context of power consumption. However, this may not hold when measuring performance levels, as the two technologies have differing virtualization architectures.
Furthermore, it appears that the efficiency of site infrastructure components such as CRACs has a direct effect on the power consumption of IT equipment. This is suggested by the comparison of server power draw between two contrasting air conditions. The test conducted within the warm environment, as detected by an environment monitoring sensor, recorded the highest power consumption. This could be attributed to the temperature-sensitive components of the server. Although the measured data suggest that temperature affects the overall consumption of the server, it can be concluded that the CRAC units are also expected to experience an increase in power as their workload increases to cool the environment.
In summary, it can be concluded that although server virtualization technologies provide a method of reducing physical space, carbon footprint and, most importantly, electrical costs, overall Green IT and cost savings can be achieved through a combination of IT equipment and Site Infrastructure optimization.
"Computer Science",
"Engineering",
"Environmental Science"
] |
Biofilm Specific Activity: A Measure to Quantify Microbial Biofilm
Microbes growing onto solid surfaces form complex 3-D biofilm structures characterized by the production of extracellular polymeric compounds and an increased resistance to drugs. The quantification of biofilm currently relies on a number of different approaches and techniques, often leading to different evaluations of the ability of the studied microbial strains to form biofilms. Measures of biofilm biomass were carried out with crystal violet (CV) and a direct reading at 405 nm, whereas the activity was assessed with the XTT (2,3-bis-(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide) method. Strains of four pathogenic species of the genus Candida (C. albicans, C. glabrata, C. parapsilosis and C. tropicalis) and of Staphylococcus aureus were employed to determine the effective relatedness among techniques and the specific activity of the biofilm, defined as the ratio between the XTT and the CV outcomes. Since the ability to form biomass and the ability to be metabolically active are not highly related, their simultaneous use allowed for a categorization of the strains. This classification is putatively amenable to further study by comparing the biofilm type and the clinical behavior of the strains.
Introduction
The microbial biofilm is a complex structure of cells, comparable not to a tissue but rather to an association, also defined as a "city of microorganisms" [1]. Biofilms are formed by sessile cells growing onto biotic and abiotic surfaces [2,3] and are typically embedded in a matrix of extracellular material [4][5][6]. A unique definition of biofilm is not possible because this term can indicate both 3-D structures growing onto solid surfaces and floating flocs of associated microbial cells without the need for a solid substrate [7]. Both bacteria and fungi can form biofilm, not only in nature [8] but also in several anthropized environments such as food industries, where the food-contact surfaces represent a major source of contamination [9,10], or hospitals, where the tissues of patients and staff, medical equipment and other available liquid and solid surfaces can be easily colonized by biofilm [11,12]. In all these environments, this structure is considered of primary importance for the diffusion of infections and for the success of the species able to form biofilms [11,13]. The specific architecture of biofilms [14], the extracellular polymeric substance (EPS) matrix with its complex structure (polysaccharides, enzymatic components, amphiphilic compounds and other macromolecules of a different nature [15]), the nutrient limitation and slow growth, the very active or overexpressed efflux
Strains and Growth Conditions
Thirty strains belonging to the four major pathogenic species of the genus Candida were the same as those previously characterized [11]; the bacterial strains were all isolated from diabetic foot at the Pisa hospital and identified by rDNA analysis and MALDI-TOF mass spectrometry.
Biofilm Assay
Biofilm presence and activity were assessed with colorimetric methods based on the crystal violet and XTT reactions [25,39], with slight modifications. Briefly, each strain was grown overnight in bottles containing the appropriate medium (YEPD for yeasts and BHI for bacteria) at 30 °C in an orbital shaker at 150-180 rpm, and then harvested and centrifuged at 3000× g for 5 min at 4 °C. The supernatant was removed, and the pellet was washed twice with Phosphate Buffered Saline (PBS). The washed cells were then resuspended in RPMI-1640 medium (Sigma Aldrich, Saint Louis, MO, USA, used for yeasts) or TSB (Tryptic Soy Broth, Biolife, Milan, Italy, used for bacteria) in order to obtain a final density of 1.0 × 10^6 cells/mL, adjusting the density after spectrophotometric readings at OD600 and calculation with the regression equation of the species-specific curves [30]. One hundred µL of these standardized cell suspensions were seeded into each selected well of a 96-well microtiter plate; one unseeded well acted as the negative background control for the subsequent steps. Three replicates for each strain were set into each plate. Two different plates for yeasts and two for bacteria were prepared. These microtiter plates were then closed, sealed and incubated for 2 h at 37 °C. After biofilm surface priming, the medium in each well was removed carefully with a multichannel pipette, taking care not to disrupt the biofilm; each well was subsequently washed three times with PBS. At this stage, it was not possible to evaluate biofilm formation spectrophotometrically because its optical density was under the detection limit of the plate reader [30]. After these washing steps, 100 µL of the appropriate medium was added to the wells. Each plate was then closed, sealed again and incubated for 24 h at 37 °C to permit biofilm development. The plates were then recovered, and the same washing procedure described above was applied. One plate for yeasts and one for bacteria were stained with a 1% solution of crystal violet (100 µL per well) for 15 min; they were then washed three times with water and dried at room temperature, and the absorbance of the adherent biofilm cells was measured with a TECAN Infinite F200 plate reader (Tecan Trading AG, Mannedorf, Switzerland) at 570 nm. The second plate prepared for yeasts and the one for bacteria were stained with an XTT/menadione solution prepared as described by Pierce and colleagues [25] (100 µL per well); the biofilm activity was then measured with a TECAN Infinite F200 plate reader at 492 nm. Each strain was tested for biofilm production in triplicate, and the assay was repeated three times.
Data Analysis
Densitometric data were recovered from the TECAN interface and transferred to MS Excel. Given the popularity of this software, we prepared a simple template to automate the analysis of each plate. The template contains three 8 × 12 layouts, reporting the disposition of strains and conditions in the 96-well plates, the readings with CV and the ones with XTT, respectively. A summary table in the same sheet reports the statistical analyses and gives an automatic definition of the ability to form biofilm based on Student's t-test. The experiments described in this paper had three technical and three biological replicates. In each plate, the three technical replicates were placed in three sectors of four columns, each including 30 strains and two control wells. The CV and XTT values were normalized according to Equations (1) and (2) to determine the normalized values of CV and XTT, CV_N and XTT_N respectively, where CV and XTT are the averages of the three readings and Cont_CV and Cont_XTT are the averages of the readings of the control wells without cells. This procedure normalizes the data, allowing for a good comparison among different experiments and taking into consideration the various effects (e.g., little variations in the dye solution, different plastics of the plates, etc.) that can influence the reading values. The data are then used to calculate the XTT over CV ratio (XCR), the Biofilm Specific Activity (BSA) and the Biofilm Metabolic Volume (BMV), according to Equations (3)-(5).
The XCR parameter is a measure of the metabolic activity relative to the biofilm biomass, but it suffers from producing very high values when CV_N is relatively low. It was therefore modified into the BSA by multiplying the XCR by the average of XTT_N and CV_N, obtaining a descriptor of the XCR weighted on the average of the CV and XTT values. Finally, the BMV combines both the activity and the biomass of the biofilm matrix in a single metric (a computational sketch of these indices is given below).
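As a minimal computational sketch of Equations (1)-(5): the exact normalization in Equations (1) and (2) is not shown in the text, so the ratio to the averaged no-cell control wells used below is an assumption (consistent with the threshold of 1 adopted later), while XCR, BSA and BMV follow the verbal definitions given above.

```python
from statistics import mean

def biofilm_indices(cv_readings, xtt_readings, cv_controls, xtt_controls):
    """cv_readings/xtt_readings: three replicate readings for one strain;
    cv_controls/xtt_controls: readings of the no-cell control wells."""
    # Assumed normalization (Eqs. 1-2): averaged sample reading / averaged control.
    cv_n = mean(cv_readings) / mean(cv_controls)
    xtt_n = mean(xtt_readings) / mean(xtt_controls)

    xcr = xtt_n / cv_n                # Eq. 3: XTT over CV ratio
    bsa = xcr * (xtt_n + cv_n) / 2    # Eq. 4: XCR weighted by the CV/XTT average
    bmv = xtt_n * cv_n                # Eq. 5: activity x biomass in a single metric
    return cv_n, xtt_n, xcr, bsa, bmv
```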
ROC Curves Calculation
Receiver Operating Characteristic (ROC) curves for each descriptor were calculated with an MS Excel template. For each chosen threshold value of the descriptors, four metrics are calculated according to Equations (6)-(9).
Sensitivity (True Positive Rate) = Σ True Positives / Σ Condition Positives; False Positive Rate = Σ False Positives / Σ Condition Negatives; Accuracy = (Σ True Positives + Σ True Negatives) / Σ Total population (8); Specificity = Σ True Negatives / Σ Condition Negatives. The Sensitivity parameter coincides with the True Positive Rate because the greater the sensitivity of the test, the better the ability to identify True Positive observations (a sketch of the corresponding threshold sweep is given below).
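The threshold sweep behind the ROC analysis can be sketched as follows; the function names and the reference classification `is_producer` are illustrative and not part of the original Excel template.

```python
def roc_point(values, is_producer, threshold):
    """values: descriptor values (e.g. CV_N) for each strain;
    is_producer: reference classification (True = biofilm producer)."""
    tp = fp = fn = tn = 0
    for v, producer in zip(values, is_producer):
        predicted_positive = v >= threshold
        if predicted_positive and producer:
            tp += 1
        elif predicted_positive and not producer:
            fp += 1
        elif not predicted_positive and producer:
            fn += 1
        else:
            tn += 1

    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate
    fpr = fp / (fp + tn) if (fp + tn) else 0.0          # false positive rate
    accuracy = (tp + tn) / len(values)                  # overall accuracy
    specificity = 1.0 - fpr                             # true negative rate
    return sensitivity, fpr, accuracy, specificity

def roc_curve(values, is_producer, thresholds):
    # One (sensitivity, FPR, accuracy, specificity) tuple per threshold.
    return [roc_point(values, is_producer, t) for t in thresholds]
```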
Crystal Violet and XTT as Independent Descriptors of the Biofilm Forming Ability
The biofilm formation of 31 strains of Staphylococcus aureus and 30 pathogenic yeasts of the genus Candida was detected using both crystal violet (CV) and XTT. The color formation was observed visually to define the biofilm-forming ability of the bacterial and yeast strains. In particular, the yeast biofilm data confirmed the results on biofilm formation previously published [11]. The same plates were read in triplicate with a TECAN plate reader at 570 and 492 nm for CV and XTT, respectively. One of the major problems when using automated readings is to establish thresholds defining when a given value of CV or XTT indicates biofilm formation. The choice of the threshold level also influences the yield of false positives and negatives, as displayed in ROC curves. For these reasons, we used the raw and normalized values of the CV and XTT readings to produce ROC curves and studied the trend of accuracy, sensitivity and specificity as the threshold value changes.

In Candida spp., crystal violet (CV) behaves as a "perfect" descriptor (i.e., a classifier able to produce a perfect discrimination between True Positives and True Negatives, without False Positives or Negatives) both with and without normalization (Figure 1a,b). However, without normalization the accuracy and sensitivity drop at very low threshold values, whereas with normalization they remain stable up to a threshold of 1.3 (Figure 1c,d). XTT, as well, is a "perfect" descriptor according to the ROC curves of both normalized and non-normalized data (Figure 1e,f). The sensitivity and specificity show a larger threshold range in normalized data, although this is less evident than for CV (Figure 1g,h).

In S. aureus, CV does not display the ROC curves of a "perfect" descriptor, although the curves are quite distant from the diagonal, indicating a very good descriptor (Figure 2a,b). The sensitivity and accuracy decreased steeply without the plateau observed in yeasts. The normalization, however, made the decrease of these two indicators less steep; in fact, with a threshold of 4, the accuracy of the non-normalized indicator was 0.2, whereas that of the normalized counterpart was 0.344. Similarly, the sensitivities at threshold 4 were 0.0 and 0.192 for the non-normalized and normalized data, respectively (Figure 2c,d). The behavior of XTT is more similar to that observed in yeasts; in fact, the ROC curves are typical of a "perfect" descriptor (Figure 2e,f), and the accuracy, sensitivity and specificity remain at their maximum up to a 1.3 threshold with normalized data, whereas the non-normalized values decreased steeply without any plateau (Figure 2g,h). These findings indicate that the normalization indeed plays a role with both CV and XTT, in either Candida or Staphylococcus. The two descriptors resulted rather independent, as demonstrated by the low Pearson correlation values: r = 0.68 and 0.72 in Candida and Staphylococcus, respectively. These figures are quite comparable with those obtained in different studies [32] and show that the two assays give different types of information on the microbial biofilm. CV is known to be a good indicator of the amount of biomass, while XTT is bound to the activity of the cells forming the biofilm.
CV and XTT Work Together with Derived Indexes
Since the two assays bring different types of information, their ratio XCR (XTT over CV Ratio) is expected to indicate how much metabolic activity is present per unit of biomass, as detected by the CV assay. Unfortunately, this index performed very poorly in both yeasts and bacteria, as shown by the ROC curves lying mostly under the diagonal line (Figure 3a,b). Furthermore, the ratio produced extremely high values when the CV was relatively low. In order to fix these problems, the XCR was multiplied by the average of the CV and XTT values, obtaining an index called BSA (Biofilm Specific Activity) with better performance and without the problems related to low CV values. In Candida, BSA showed the typical ROC curve of a "perfect" descriptor (Figure 3c). Even in Staphylococcus, the ROC was that of an almost "perfect" descriptor (Figure 3d), better than that displayed by CV alone. BSA in Candida displayed a very high specificity at any threshold, whereas the sensitivity and accuracy remained high up to the threshold 0.662 (Figure 3). In Staphylococcus instead, the specificity increased from 0.2 to 1 with thresholds of 0 and 2, respectively. The accuracy and sensitivity decreased so slowly that at a threshold of 4 they still displayed values of 0.78 and 0.73, respectively (Figure 3f). Another index combining the information of XTT and CV was the Biofilm Metabolic Volume (BMV), resulting from the product of the XTT and CV data. Once again, this index had the ROC of a "perfect" descriptor in Candida, whereas it performed less well in Staphylococcus (Figure 4a,b). In Candida, the three performance parameters remained at their maximum up to a threshold of 1.37 (Figure 4c), whereas in Staphylococcus, there was a less steep descent of the accuracy and sensitivity, which showed figures of 0.56 and 0.46 at threshold 4 (Figure 4d).
Crystal Violet and XTT Used Jointly to Classify Different Types of Biofilm
The independence between the CVN and XTTN assays gives the opportunity to classify biofilm producers into four categories according to their distribution in a Cartesian bidimensional space. Considering the performances of the two descriptors and the range of the threshold at which all three performance parameters are at their best, the threshold was set at 1, meaning that values below 1 indicate a poor if any metabolic activity (XTTN on the y axis) and a low biomass (CVN on the x axis). Since the CVN and XTTN values derive from a normalization, this threshold is not expected to suffer for different settings, as the overall background readings. The separation at threshold 1 of both axes produced four panels designated as no biofilm (panel 1: lower left), inactive producers (panel 2: lower right), active producers (panel 3: upper right) and active low producers (panel 4: upper left). In Candida, the strains that were known as no biofilm producers clustered all in panel 1 very close to the origin (Figure 5a). The rest of the strains were scattered throughout panels 2 and 3, and none was found in panel 4. Vice versa, in Staphylococcus sp., strains were present in panels 1, 3 and 4 ( Figure 5b). These data suggest that, in general, the Staphylococcus biofilm is more active.
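A minimal sketch of the four-panel classification follows, assuming the single threshold of 1 on both normalized axes described above; the strain values are hypothetical.

```python
# Minimal sketch (assumption: both axes use the threshold of 1 described above):
# classify strains into the four panels from their normalized CV (biomass, x axis)
# and normalized XTT (metabolic activity, y axis).
def classify_biofilm(cv_n, xtt_n, threshold=1.0):
    if cv_n < threshold and xtt_n < threshold:
        return "panel 1: no biofilm"
    if cv_n >= threshold and xtt_n < threshold:
        return "panel 2: inactive producer"
    if cv_n >= threshold and xtt_n >= threshold:
        return "panel 3: active producer"
    return "panel 4: active low producer"

# Hypothetical strains.
for name, cv_n, xtt_n in [("strain A", 0.4, 0.3), ("strain B", 2.1, 0.6), ("strain C", 1.8, 2.4)]:
    print(name, "->", classify_biofilm(cv_n, xtt_n))
```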
Discussion
An analysis of the microbial biofilm can be carried out with a number of different assays, including the crystal violet and XTT methods [25,39]. The detection of the biofilm in vitro can be performed visually by experienced personnel or automatically with multi-well plate readers. The former system is affected by the sensitivity and experience of the analyst, whereas the latter produces continuous absorbance data that must be interpreted. In the case of a "yes-or-no" output, a single threshold must be chosen in order to discriminate the positives over the threshold from the negatives below it. In other cases, the range of values has been divided into quartiles [32], leading to a more articulated description of the strains according to their ability to produce biomass (if CV is used) or to be metabolically active (according to the XTT assay). The lack of normalization could cause scarce reproducibility among laboratories and sometimes even among different experiments within the same laboratory. Changes in the absolute values of the readings could be due to different absorbances of the plates, to different levels of nonspecific binding of the dye (particularly CV) to both the plastic of the multi-well plate and the microbial cell wall, and to different energies of the lamp or absorbances of the filter. All these factors can significantly change the background and, therefore, the actual reading of the biofilm. One possibility is simply to subtract the background from the readings; the other is the normalization presented in this paper.
There is an obvious linear relationship between the results of the two corrections, and in both cases readings at the background level will be transformed into 0. However, the normalization has the advantage of expanding the range of values at a rate equal to the inverse of the background. For example, with a background level of 0.125, the range of the normalized values will be expanded by 1/0.125 = 8 times. This enlarges the differences between the negative and the positive results, easing the search for an optimal threshold and, in general, the analysis of the data. The fact that both CV and XTT behave as "perfect" descriptors justifies their widespread use, as witnessed by some 25,000 and 3500 citations in the literature for the two assays, respectively. However, the problem of using either of the two assays is that each of them describes the biofilm differently. This makes biofilm data not fully comparable among different papers. In fact, some papers describe the biofilm in terms of its ability to form biomass and others in terms of its metabolic activity.
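The following sketch illustrates the normalization in a form consistent with the description above (background-level readings map to 0 and the value range expands by the inverse of the background); the exact formula used by the authors is not reproduced in this extract, so the expression below is an assumption.

```python
# Minimal sketch of the normalization as described above (assumption consistent
# with the text: background-level readings map to 0 and the value range is expanded
# by 1/background), compared with simple background subtraction.
def subtract_background(reading, background):
    return reading - background

def normalize(reading, background):
    return (reading - background) / background

background = 0.125
for reading in [0.125, 0.5, 1.0]:
    print(reading,
          "subtracted:", round(subtract_background(reading, background), 3),
          "normalized:", round(normalize(reading, background), 3))
# With background = 0.125 the normalized scale is 1/0.125 = 8 times wider
# than the subtracted one, which eases the choice of a threshold.
```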
The solution proposed in this paper is to produce two derived indexes, of which one, the BSA, indicates the specific activity of the biofilm and the other, the BMV, is a sort of composite score of the biofilm's ability to grow and be active. The BSA is potentially useful when the devitalization of the biofilm is under study, i.e., in those cases in which the biomass is not supposed to vary, but the activity should decline due to the effect of the devitalizing agent. On the other hand, the BMV can be a rapid index to assess the real effectiveness of the biofilm and, therefore, its threat in the environment and in the patients. The classification into four classes presented in this paper is based on the combination of the activity and the ability to produce biomass with a single threshold value at 1.00. It can be made more complex and informative by setting more than one threshold level and, for instance, dividing the CV and XTT ranges into four quartiles, applying to each assay the procedure proposed for CV alone [32]. It is obvious that, under the pressure of a rapid diagnosis, the question is simply whether or not the cells produce a biofilm. This means that in hospital settings, only one of the two assays could be considered enough to decide on the therapy. However, the ability of the biofilm to resist drugs [6,40,41] and to be the most important factor for the survival of the species in harsh environments [11,42,43] requires a deeper knowledge of its features for a full understanding of its dynamics and, therefore, for the development of fully rational responses aiming at its limitation or management.
Conclusions
The analysis of microbial biofilm formation can be carried out with a variety of methods, most of which are based on automatic readers. In this paper, we have demonstrated that normalization, together with the introduction of synthetic indexes, can lead to significant benefits in the evaluation of biofilm biomass and activity. Furthermore, we showed that it is possible to take advantage of the relative statistical independence of the biomass (CV) and activity (XTT) evaluations by plotting the distribution of the strains with both synthetic indexes, with an improvement of the resolution obtainable among strains. Future studies could add other descriptors for a better characterization of biofilm-forming strains.
Funding: This research received no external funding. | 5,055.4 | 2019-03-01T00:00:00.000 | [
"Biology"
] |
Methodology for assessing the budgetary security of regional infrastructure provision (case study of the Komi Republic)
The aim of the study is to analyze methodological approaches to assessing the budget component of financial and budgetary security and to develop a methodology for assessing the budgetary security of a region on their basis. In the process of research, statistical and mathematical methods were used, a comparison method, an indicative method, analysis and synthesis methods. A brief description of the main approaches and methods for assessing the regional budget security is given, and the most significant indicators are selected that allow diagnosing threats to the budget component of the regional financial and budget security. The article presents the author’s methodology for assessing the budget component of the financial and budgetary security of the region, a set of indicators and threshold values for its implementation is formed. The proposed methodology has been tested on the example of the analysis of budget security indicators of the Komi Republic.
Introduction
In recent years, with the introduction of Western countries' economic sanctions against Russia, global instability and new challenges associated with the emergence of massive diseases and infections that paralyze the economy, the role of the formation and development of the economic security system has sharply increased. Under the new conditions, especially after the spread of Covid-19 in the world, which led to a reduction in the GDP of a number of countries, including the developed economies of the world, the formation of an effective system of financial and budgetary security of the state and its regions, which is an important part of economic security, is of particular importance.
One of the key components of the financial and budgetary security of the region is budgetary security, since the budget is the most important institution, without the normal functioning of which the development of the economy of the regions and the state is impossible [1] . The budgetary security of the region is a state of solvency and stability of the region, which involves the effective and balanced formation of budgets and the use of budgetary funds. The creation of an effective mechanism for ensuring budgetary security at all levels, whether it is a state, region, enterprise or an individual person, is a necessary prerequisite for the development of Russia as a great economic power, as well as the conditions for economic growth and development of the region, increasing the level of social protection of the region's inhabitants and ensuring national interests of the country.
An increase in the relevance of issues of ensuring budgetary security at the regional level is mentioned in the main document of strategic planning of Russia in the field of security, approved in the Decree of the President of Russia dated December 31, 2015 No. 683 "National Security Strategies of the Russian Federation", in which as one of the priority directions in national security called the challenge to address the risks associated with disproportion development of the Russian Federation. One of the most important ways to stimulate the independent economic development of regions is to strengthen the budgetary security of the region.
The existence of such system is impossible without creating an effective mechanism for assessing the state of the budget component of financial and budgetary security, which includes monitoring and express diagnostics of the budget component of financial and budgetary security using a system of indicators and their threshold values [1]. The state of the entire economy of the country ultimately depends on how efficiently the assessment is carried out. Early warning of the occurrence of threats allows preventing their implementation, eliminating the factors that generate them at the stage of occurrence, this significantly reduces the cost of measures to ensure financial and budgetary security.
Materials and methods
At present, in Russia there is no generally accepted method for assessing the financial and budgetary security of the region, although some scientists have attempted to create such methods.
Methods for assessing the budgetary security of a region include the following: 1) the method of assessing financial and budgetary security using a system of indicators, in which the compliance of the region's indicators with their threshold values is analyzed; 2) assessment of the region's economic growth rates based on basic macroeconomic and integrated indicators and their dynamics; 3) an expert method for ranking regions by threat; 4) assessment of the quantitative damage caused; 5) applied mathematics methods, such as multivariate statistical analysis, which provide data with a high level of reliability but require a lot of time and quite complex operations; 6) fiscal control [1]. In our opinion, it is advisable to carry out the budget security assessment using the indicative method, which allows threats to financial and budgetary security to be diagnosed most accurately. Compared to the methods of applied mathematics, it is much easier to use, and compared to the expert method, the final assessment is more objective, since statistics are analyzed during its execution, which excludes errors arising from the subjective assessment of a particular threat.
There are various approaches to the formation of a system of indicators of budgetary security. Financial security indicators are divided into 4 groups depending on the type of threat. Some indicators characterize the state of budgetary security in the region. Then, using the threshold values of the indicators and the appropriate weight of each indicator, the overall level of financial and budget security is calculated. Such indicators include: the surplus (deficit) of the consolidated budget per capita and the share of gratuitous receipts in the total amount of budget sources.
L.B. Mokhnatkina
The main threat to budget security is an imbalance of the budget system. Budget security assessment is based on an analysis of this threat. Indicators: budget surplus (deficit) in million rubles, % of GRP, The volume of public debt, billion rubles.
Criteria for choosing indicators: 1) accessibility of statistical databases; 2) relevance to the basic directions of strategic development. To assess budget security, the method of weighted average annual data is used according to the following formula:
Results
Based on the considered approaches and methods, an integrated approach was developed to assess the budget component of financial and budget security. On the basis of common requirements and developed system of indicators it appears to be possible to identify 17 indicators of budget security. It is advisable to subdivide all budget security indicators into five projections: 1) Indicators of the security of the budget of the region -characterize the ratio of income and expenditure of the region, as well as the amount of public debt; 2) Indicators of regional budget independence -characterize the independence of the region from cash receipts from other levels of the budget system; 3) Indicators characterizing the relationship of the regional budget with the level of GRP; 4) Indicators of social orientation and effectiveness of the region's budget -characterize the degree of social protection of the region's population and its provision with budget funds; 5) Indicators showing how well the budget line items for income and expenses are being implemented.
The threshold values of the indicators are also defined. The list of indicators and their threshold values is presented in Table 2.

Table 2. Budget security indicators and their threshold values.

I. Indicators of the security of the region's budget
A1  The ratio of budget revenues to expenses, %  >= 100
A2  Coefficient of covering expenses with own tax and non-tax revenues, %  >= 75
A3  The ratio of public debt to total expenditure, %  <= 30
A4  Share of expenses on servicing regional debt in the total volume of expenses, %  <= 13
A5  The ratio of public debt to own income, %  >= 20

II. Budget independence indicators
B1  The ratio of own income to total income, %  >= 75
B2  The share of taxes and fees credited to the consolidated budget in the total amount of taxes collected in the region, %  >= 50
B3  Share of gratuitous transfers from budgets of other levels in the region's total revenues, %  <= 25
B4  The share of tax revenues in total budget revenues, %  >= 50

III. The relationship of the budget with GRP
C1  The ratio of budget revenues to GRP, %  >= 38

IV. Indicators of social orientation and budget performance
D1  Share of expenses on social items in the total amount of expenses  >= 62
D2  Budget revenues per capita, thousand rubles  >= 20
D3  The growth rate of the volume of financial resources to ensure social policy, %  >= 100

V. Budget execution indicators
E1  Budget execution ratio by income  >= 100%
E2  Budget execution ratio by expenditure  >= 100%

Further, for each projection the indicator values are calculated and compared with the threshold values as follows: if an increase in the indicator value raises the level of budget security, the threshold value of the indicator is divided by the actual value of the indicator under study; if an increase in the indicator value lowers the budget security level, the actual value of the indicator under study is divided by its threshold value. Based on this comparison, the risk zone is determined from the risk assessment points (primary points), and each indicator is then assigned a score corresponding to its risk zone on a five-point scale. The correspondence of risk assessment points and risk zones is given in Table 3. The total projection score is calculated as the arithmetic mean of the indicators included in the projection. Next, using the arithmetic mean of the budget security projections, the final budget security score is determined. Table 4 presents the indicators necessary for calculating the values of the indicators of the budget component of the fiscal security of the region. Table 6 shows the correspondence of the final score to the level of budget security.
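A minimal sketch of the scoring procedure described above is given below; since Tables 3-6 are not reproduced in this extract, the risk-zone boundaries and the five-point mapping are placeholders, and the example indicator values are hypothetical.

```python
# Minimal sketch (not the authors' Excel workbook) of the scoring logic described above.
# The risk-zone boundaries and the five-point scores are placeholders, since the
# original Tables 3-6 are not reproduced here.
def primary_point(actual, threshold, higher_is_safer):
    # If a larger value raises budget security, divide threshold by actual; otherwise the reverse.
    return threshold / actual if higher_is_safer else actual / threshold

def zone_score(point):
    # Placeholder mapping of primary points to a five-point scale (assumed zone boundaries).
    if point <= 0.8:
        return 5          # stability zone
    if point <= 1.0:
        return 4
    if point <= 1.2:
        return 3          # moderate risk
    if point <= 1.5:
        return 2
    return 1              # significant risk

def projection_score(indicators):
    # `indicators` is a list of (actual, threshold, higher_is_safer) tuples.
    scores = [zone_score(primary_point(a, t, h)) for a, t, h in indicators]
    return sum(scores) / len(scores)

# Hypothetical projection with two indicators (A1 and A3 from Table 2).
projection_I = [(95.0, 100.0, True),   # revenues/expenses, % (higher is safer)
                (35.0, 30.0, False)]   # public debt / expenditure, % (lower is safer)
print(round(projection_score(projection_I), 2))
```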
Discussion
Thus, based on the analysis, it should be concluded that, in general, the level of budgetary security of the Komi Republic corresponds to the minimum values of the final score within the stability zone. There are significant risks of the level of budgetary security falling into the zone of moderate risk. The level of budgetary security lies in the stability zone for the projections "budget independence indicators", "indicators of social orientation and budget performance" and "budget execution indicators", while the level of security for the other projections corresponds to the moderate risk zone. In general, only one indicator, C1 "The ratio of budget revenues to the gross regional product", registered a significant level of risk. Thus, the main risks of the budget component of the financial and budgetary security of the Komi Republic are associated with insufficient budget revenues and non-fulfillment of the budget for expenditures, which leads to the region not fulfilling some of the measures stipulated by the plan.
In the search for data to calculate the indicators, statistical information from Rosstat, Komistat and the Federal Tax Service was used. The calculation of the indicators was performed in Microsoft Office Excel. Today, considerable scientific attention is devoted to the development of intelligent technologies for processing big data, applicable both in research on the risks and performance indicators of the development of regional economic systems, as presented in the works of a number of authors [11][12][13][14][15], and, in our opinion, usable in the process of diagnosing threats and indicators of financial and economic security at the regional and national levels.
Conclusions
The feasibility and effectiveness of the proposed methodology were confirmed by testing this approach on the example of assessing the budgetary security of a subject of the Russian Federation, the Komi Republic, using the proposed system of indicators. The analysis was performed on 17 indicators. The developed methodology can be used by state and regional authorities to assess existing threats in a timely manner, which allows measures to be taken in time to eliminate them and to prevent consequences harmful to the economy of the region.
"Economics",
"Computer Science"
] |
A Generalized Hierarchy of Combined Integrable Bi-Hamiltonian Equations from a Specific Fourth-Order Matrix Spectral Problem
The aim of this paper is to analyze a specific fourth-order matrix spectral problem involving four potentials and two free nonzero parameters and to construct an associated integrable hierarchy of bi-Hamiltonian equations within the zero curvature formulation. A hereditary recursion operator is explicitly computed, and the corresponding bi-Hamiltonian formulation is established by the so-called trace identity, showing the Liouville integrability of the obtained hierarchy. Two illustrative examples are novel generalized combined nonlinear Schrödinger equations and modified Korteweg–de Vries equations with four components and two adjustable parameters.
Introduction
Lax pairs of matrix spectral problems [1] play a central role in the study of mathematical integrability and soliton theory, providing powerful tools for understanding and solving nonlinear partial differential equations arising in physics and mathematics [2,3]. Particularly, one can construct infinitely many symmetries and conserved quantities from associated Lax pairs. Integrable models arise in various areas of physics, including classical mechanics, quantum mechanics, nonlinear optics, fluid dynamics and plasma physics. Examples of integrable models include the Korteweg-de Vries equation, the nonlinear Schrödinger equation, the sine-Gordon equation, and the Toda lattice equation, among others.
Integrable models come in hierarchies, and typical examples of integrable hierarchies are the Ablowitz-Kaup-Newell-Segur (AKNS) hierarchy [4] and its various hierarchies of integrable couplings [5]. Matrix Lie algebras are the key to formulating meaningful Lax pairs [6,7], generating integrable models. In mathematics, it has always been intriguing to identify and classify matrix spectral problems that yield integrable hierarchies. There are many examples with one or two potentials but few examples with multiple potentials. In this paper, we would like to present a new matrix spectral problem based on a specific matrix Lie algebra and construct an associated integrable hierarchy with four potentials.
It is known that the zero curvature formulation is a powerful approach for constructing integrable hierarchies, which is briefly stated as follows (see [7,8] for more details). In our discussion, we denote the spectral parameter by λ and a q-dimensional column potential vector by u = (u 1 , · · · , u q ) T . First, take a given loop matrix algebra g with the loop parameter λ, and formulate a spatial spectral matrix, where the elements h 1 , · · · , h q are linearly independent in g. We assume that the above element h 0 is pseudo-regular: Im ad h 0 ⊕ Ker ad h 0 = g, together with a condition on the commutator [Ker ad h 0 , Ker ad h 0 ], where ad h 0 denotes the adjoint action of h 0 on the Lie algebra g. This condition is helpful in determining a Laurent series solution Y = ∑ n≥0 λ −n Y [n] to a stationary zero curvature equation in the underlying loop algebra g. Second, we introduce an infinite sequence of temporal spectral matrices, where ∆ m ∈ g, m ≥ 0, as the other parts of a sequence of Lax pairs, to generate a hierarchy of integrable models via the zero curvature equations. These zero curvature equations represent the solvability conditions of the spatial and temporal matrix spectral problems. Finally, we furnish Hamiltonian formulations by the so-called trace identity, where δ/δu is the variational derivative with respect to u, and κ is a constant, independent of λ, determined for the resulting hierarchy (5). Further, a hereditary recursion operator Φ, which is determined from the recursion relation X [m+1] = ΦX [m] , enables us to establish a bi-Hamiltonian formulation and show the Liouville integrability (see, e.g., [7,9]) for the obtained hierarchy (5). Many hierarchies of Liouville integrable models have been constructed via the zero curvature formulation (see, e.g., [4][5][6][7][8][9][10][11][12][13][14][15][16]). When q = 2, namely, in the case of two potentials, we have the AKNS hierarchy [4], the Heisenberg hierarchy [17], the Kaup-Newell hierarchy [18] and the Wadati-Konno-Ichikawa hierarchy [19]. All of the corresponding spectral matrices are 2 × 2 and contain two potentials, whose spectral problems are of the second order and solvable within the theory of special functions.
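Because the explicit spectral matrices are not reproduced in this extract, the following sketch only illustrates the mechanical step behind the zero curvature formulation: forming the residual M_t − N_x + [M, N], whose vanishing is the compatibility condition of the spatial and temporal spectral problems. The 2 × 2 matrices with symbolic entries are placeholders, not the spectral pair of this paper.

```python
# Minimal sketch (placeholder matrices, not the spectral pair of this paper):
# forming the zero-curvature residual  M_t - N_x + [M, N]  with sympy, which is
# the compatibility condition of the spatial and temporal spectral problems.
import sympy as sp

x, t, lam = sp.symbols('x t lambda')
p = sp.Function('p')(x, t)      # generic potential entries (assumptions)
q = sp.Function('q')(x, t)

# Placeholder spatial and temporal matrices with symbolic entries.
M = sp.Matrix([[-sp.I * lam, p], [q, sp.I * lam]])
a = sp.Function('a')(x, t)
b = sp.Function('b')(x, t)
c = sp.Function('c')(x, t)
N = sp.Matrix([[a, b], [c, -a]])

residual = sp.diff(M, t) - sp.diff(N, x) + (M * N - N * M)
# Setting every entry of `residual` to zero gives the evolution equations on p, q
# (together with constraints on a, b, c), i.e., the zero curvature equations.
sp.pprint(sp.simplify(residual))
```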
In this paper, we would like to construct an integrable hierarchy of combined Liouville integrable models with four potentials via the zero curvature formulation.The key point is to introduce a specific 4 × 4 matrix spectral problem.The corresponding Hamiltonian formulations are established by an application of the so-called trace identity, and, further, a hereditary recursion operator is computed and used to furnish a bi-Hamiltonian formula-tion and thus show the Liouville integrability for the resulting hierarchy.Two illustrative examples of novel combined integrable nonlinear Schrödinger and modified Korteweg-de Vries models are presented, together with their uncombined reductions.The final section gives a conclusion and a few concluding remarks.An open question is how to generalize the presented four-component integrable models to six-component or more-component integrable Hamiltonian equation models.
A Matrix Spectral Problem and Its Four-Component Integrable Hierarchy
Let δ be an arbitrary constant, r an arbitrary natural number and T a square matrix of order r, whose inverse is given by its negative.Obviously, a set g of block matrices forms a matrix Lie algebra, while the matrix commutator [A, B] = AB − BA is taken as its Lie bracket.We will use a special case of this Lie algebra with r = 2 and to formulate a specific spectral matrix below.Let u = u(x, t) = (u 1 , u 2 , u 3 , u 4 ) T (x, t ∈ R) be a four-component potential vector, and α 1 , α 2 and δ 1 , δ 2 , two pairs of arbitrary constants.Assume that Motivated by recent studies on matrix spectral problems with four potentials (see, e.g., [20][21][22] by us and [23,24] by other authors), let us introduce and consider a matrix spectral problem of the form: where λ is again the spectral parameter.This spectral matrix is from the matrix Lie algebra previously defined, with r = 2 and T by (11).The spectral problem is not any reduction of the matrix AKNS spectral problem (see, e.g., [25]), but it enables us to generate an integrable hierarchy, each of which is bi-Hamiltonian and possesses a combined structure.As usual, to construct an associated Liouville integrable hierarchy, we first solve the corresponding stationary zero curvature Equation (3).A solution Y is assumed to be of the form: where all basic objects are taken to be of Laurent series type: We take a solution of the above form, because this is the form that the commutator between any matrix in g and the spectral matrix M takes.Clearly, the corresponding stationary zero curvature Equation (3) leads equivalently to These equations exactly generate the initial conditions and the recursion relations to determine the Laurent series solution Y where n ≥ 0. To compute the Laurent series solution concretely, let us take the initial data where β and γ are a pair of arbitrary constants, and assume the constants of integration to be zero Under those restrictions, one can work out that , . All these computations allow us to impose ∆ r = 0, m ≥ 0, to introduce which are the temporal matrix spectral problems within the zero curvature formulation.The conditions that guarantee the solvability of the spatial and temporal matrix spectral problems in ( 13) and ( 24) are the zero curvature equations in (6).They lead to a hierarchy of integrable models with four potentials: or more precisely, Taking advantage of the previous derivations, we can present some particular examples.The first nonlinear example is the model of combined integrable nonlinear Schrödinger equations: and the second one is the model of combined integrable modified Korteweg-de Vries equations: These provide two typical coupled integrable models, which extend the category of coupled integrable models of nonlinear Schrödinger equations and modified Korteweg-de Vries equations, presented recently (see, e.g., [21,26,27]).One interesting characteristic is that every model contains two linear derivative terms of the highest order, and so, we call them combined models.Two special cases of β = 1, γ = 0 and β = 0, γ = 1 in the obtained hierarchy are of interest and produce reduced hierarchies of uncombined integrable models.
If we take α = −δ 1 = δ 2 = 1, β = 1 and γ = 0 in the model (27), we obtain a coupled integrable nonlinear Schrödinger-type model: If we take α = −δ 1 = δ 2 = 1, β = 0 and γ = 1 in the model (27), we obtain another coupled integrable nonlinear Schrödinger-type model: Similarly, if we take α = −δ 1 = δ 2 = 1, β = 1 and γ = 0 in the model ( 28), we obtain a coupled integrable modified Korteweg-de Vries-type model: If we take α = −δ 1 = δ 2 = 1, β = 0 and γ = 1 in the model ( 28), we obtain another coupled integrable modified Korteweg-de Vries-type model: These models are different from the vector AKNS integrable models [25].In each pair, the two models just exchange the first component with the second component, carrying two sign changes, and the third component with the fourth component, carrying no sign change, in the vector fields on the right hand sides.Moreover, all those four models still commute with each other and so they are symmetries to each other.
Recursion Operator and Bi-Hamiltonian Formulation
To establish a bi-Hamiltonian formulation [28,29] and show the Liouville integrability for the resulting hierarchy (26), one can make use of the so-called trace identity (8) associated with the spatial matrix spectral problem (13).Substituting the spectral matrix M by (13) and the Laurent series solution Y determined by (14) into the trace identity engenders since we have Checking with n = 2 determines κ = 0, and consequently, one arrives at where the Hamiltonian functionals are computed as follows: This enables us to furnish the folllowing Hamiltonian formulations for the resulting hierarchy ( 26): where J 1 is the Hamiltonian operator: and H [m] are the functionals given by (36).It follows from the Hamiltonian theory that there exists an interrelation S = J 1 δH δu between a symmetry S and a conserved functional H of the same model.
It is a common characteristic property that the vector fields X [n] consitutes an abelian algebra: which can be derived from an abelian algebra of Lax operators: Such a commutative property of vector fields still holds true under reciprocal transformations [30], and more discussions about the isospectral zero curvature equations is given in [31].
Furthermore, based on the recursion relations in ( 19)-( 21), directly from the recursion relation X [m+1] = ΦX [m] , where X [m] , m ≥ 0, are defined by (25), we can derive a hereditary recursion operator Φ = (Φ jk ) 4×4 [29] for the hierarchy (26) as follows: The hereditariness of the operator Φ [32] means that Φ satisfies where the Lie derivative L X Φ is defined by in which X and Z are arbitrary vector fields.Oberve that an operator Ψ = Ψ(x, t, u, u x , • • • ) is a recursion operator of an evolution equation u t = X(u) [33] if and only if the operator Ψ needs to satisfy In the above example, we can easily verify that the autonomous operator Φ is a recursion operator of the first model u t 0 = X [0] , i.e., we have L X [0] Φ = 0.Then, based on these two facts, we can have It then follows that Φ provides a common recursion operator for all models in the obtained hierarchy (26).
With some additional analysis, we can see that J 1 and J 2 = ΦJ 1 constitute a Hamiltonian pair.This means that an arbitrary linear combination of J 1 and J 2 is again Hamiltonian, i.e., it satisfies (Z [1] ) T J ′ (u)[JZ [2] ]Z [3] dx + cycle(Z [1] , Z [2] , Z [3] where Z [1] , Z [2] and Z [3] are arbitrary vector fields, and thus the hierarchy (26) possesses a bi-Hamiltonian formulation [28]: Moreover, we can observe that the associated Hamiltonian functionals also commute with each other under the corresponding two Poisson brackets [7]: and In summary, each model in the obtained hierarchy ( 26) is bi-Hamiltonian and Liouville integrable, possessing infinitely many commuting symmetries {X Two specific examples of such novel nonlinear combined Liouville integrable Hamiltonian models are the two models in ( 27) and (28), which involve two pairs of arbitrary constants.
Concluding Remarks
A Liouville integrable hierarchy with four potentials has been derived from a specific 4 × 4 matrix spectral problem, along with its hereditary recursion operator and bi-Hamiltonian formulation.The success comes from a particular Laurent series solution of the corresponding stationary zero curvature equation.The resulting integrable models involve two arbitrary constants and contain diverse specific four-component examples of integrable models, both combined and uncombined.However, it is still open to us how to generalize the presented 4 × 4 matrix spectral problem so that integrable models with more potentials can be generated.
Integrable models and Lax pairs are closely related.There is a huge diversity of multi-component integrable models, which have close connections to various areas of mathematics, including algebraic geometry, Lie groups, Lie algebras and Riemann surfaces.Identifying and classifying multi-component integrable models from Lax pairs is crucial for advancing our understanding of complex nonlinear mathematical and physical problems.It enables us to uncover dynamical behaviors of nonlinear waves and gain insights into a wide range of nonlinear phenomena across different branches of science and mathematics.
Funding:
The work was supported in part by NSFC under the grants 12271488, 11975145, 11972291 and 51771083, the Ministry of Science and Technology of China (G2021016032L and G2023016011L), and the Natural Science Foundation for Colleges and Universities in Jiangsu Province(17 KJB 110020). | 3,186.4 | 2024-03-21T00:00:00.000 | [
"Mathematics"
] |
Estimation of Critical Components of Internet Infrastructure
Electronic communications and Internet plays a significant role in the current public life. Beside energy, transport, water supply and other sectors, Internet is considered to be an especially important infrastructure. Currently, more and more users, service providers and public institutions rely on security of Internet network. Network accessibility can indeed determine the parameters of quality service supply. A failure in network supply due to e.g. cyber attacks, results in service unavailability. As a result, the studies on the reliability and safety of Internet network infrastructure operation, and their continuity remain topical. The article [1] analyses regional Internet network as an integrated system formed of stochastically connected subnets, and suggests methods for analyzing the topology of such system. The article further analyses one of the fundamental characteristics of a network – Internet network connectivity – on the basis of network topology analysis. The methods suggested in the article are aimed at identifying the critical elements of network infrastructure. Eventually, constant monitoring of such elements would allow real-time assessment of network status.
Introduction
Electronic communications and Internet plays a significant role in the current public life.Beside energy, transport, water supply and other sectors, Internet is considered to be an especially important infrastructure.Currently, more and more users, service providers and public institutions rely on security of Internet network.
Network accessibility can indeed determine the parameters of quality service supply.A failure in network supply due to e.g.cyber attacks, results in service unavailability.As a result, the studies on the reliability and safety of Internet network infrastructure operation, and their continuity remain topical.
The article [1] analyses regional Internet network as an integrated system formed of stochastically connected subnets, and suggests methods for analyzing the topology of such system.The article further analyses one of the fundamental characteristics of a network -Internet network connectivity -on the basis of network topology analysis.The methods suggested in the article are aimed at identifying the critical elements of network infrastructure.Eventually, constant monitoring of such elements would allow real-time assessment of network status.
Problem identification
Cyber attacks have been classified by different impact aspects and some of them have a direct effect on the stability and reliability of Internet network.The number of such attacks on the Internet is increasing, which results in an increased effect on the normal network operation.The network has to process the flows generated by the attacks; and very often such attacks are targeted at the elements of network infrastructure [2].Normally, as a response to such attacks, an incident management model (a.k.a.detect-clean-recover) -Computer Emergency Response Team (CERT) -is used [3].The nature of such model operation is exceptionally reactive, i.e. an action is generated upon the fact of an attack.CERT has a shortterm effect, i.e. dealing with a specific attack, and responding to the outcomes [4,11].Due to anonymity on the Internet, the identification of the source of an attack is not always possible using CERT, therefore, attacks from the same source may recur.Therefore, we presume a need for new proactive (preventive) measures to be employed directing them rather towards protection than towards defense as in the case of using CERT.
Another very important aspect is telecommunication.Internet Service Providers (ISP) forms their network infrastructures individually according to their business objectives, network expansion possibilities and user needs.Each ISP has its own routers and inter-network formation policy.Every ISP monitors its network perimeter, and controls the network security as well as its operation reliability.Connections to other networks are also arranged under the initiative of the very ISP using Border Gateway Protocol (BGP) for compiling Autonomous System (AS) routing tables.Such inter-network connections form a hierarchical structure of the Internet network [5].The general reliability of stochastically formed Internet network segment depends on various factors, including the reliability and topology of separate AS elements.
This article is aimed at shaping the methodology for analyzing the Internet network infrastructure identifying the critical elements of the infrastructure the disturbances of which are influencing functionality of the entire network operation.
Methodology and Criteria
When analyzing the Internet network, a graph theory is usually applied [6].Works [7,8] demonstrates the adoption of graph theory for networks traffic analysis and traffic engineering while practice for Internet interconnections assessments is still lacking.
A segment of Internet network is represented by a graph G net , at the vertexes of which are Autonomous Systems (AS).A stationary network status is represented by a connected graph.Such graph contains at least one route between the i th AS and any other AS belonging to G net .The article published [1] presents the topology and the respective graph of the Lithuanian National Internet Network infrastructure.
The following elements of graph are of especially high importance: critical node -V c and critical link -E c .
The descriptions of these critical elements vary among authors.
By the strict rule, a node is critical if its removal disconnects the graph into two components. An extended characterisation of a critical node is presented in paper [9]: a node V c whose failure or malicious behaviour disconnects or significantly degrades the performance of the network.
The vague dual definition of node criticality aggravates the identification of critical nodes. In reality, the variations defined as "disconnecting or significantly degrading the performance" are identified using different methods. Therefore, the following definitions are used in this article: critical node and η-critical node.
A node shall be considered to be critical when its elimination or disturbance dissolves the original graph into two or more disconnected subgraphs.
An η-critical node is a node whose elimination significantly degrades the network performance for the majority of users.
For the nodes matching the first definition, the formal method of removing graph vertices is applied. If the elimination of the i-th AS creates separate subgraphs having no interconnection, such an AS is considered to be V c .
For the purposes of this article, and to specify the definition of an η-critical node, the criticality of a node shall be assessed in relation to the number of users A i connected to the i-th AS. The criticality index of a node is the relative value η i = A i / ΣA j , where A i is the number of users of the i-th AS and ΣA j is the total number of Internet users in the network. For convenience, η-critical nodes shall be divided into two categories: η i ≥ 0.1 and η i < 0.1. Respectively, the criticality of nodes with η i ≥ 0.1 shall be considered to be the highest in the general network infrastructure.
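As an illustration, the sketch below identifies critical nodes and links by removal (articulation points and bridges) and η-critical nodes by the user-share index defined above; the AS-level topology and user counts are hypothetical.

```python
# Minimal sketch (hypothetical AS-level topology and user counts): identifying
# critical nodes by vertex removal (articulation points) and eta-critical nodes
# by the user-share index eta_i = A_i / sum(A_j) described above.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("AS1", "AS2"), ("AS2", "AS3"), ("AS2", "AS4"), ("AS4", "AS5")])
users = {"AS1": 5000, "AS2": 120000, "AS3": 30000, "AS4": 8000, "AS5": 2000}

# Critical nodes: removing them disconnects the graph.
critical_nodes = set(nx.articulation_points(G))

# Critical links: removing them disconnects the graph (bridges).
critical_links = list(nx.bridges(G))

# Eta-critical nodes: user share of at least 0.1.
total_users = sum(users.values())
eta = {asn: a / total_users for asn, a in users.items()}
eta_critical = {asn for asn, value in eta.items() if value >= 0.1}

print("V_c:", critical_nodes)
print("E_c:", critical_links)
print("eta-critical:", eta_critical)
```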
The definitions of a Critical link E c also vary.One of the definitions is as follows: "a link AB is critical if both endpoints A and B are critical nodes".Broader E c description is the link connecting two critical nodes so that, when this link is eliminated from the graph, the graph becomes disconnected [9].
When identifying E c , G net is considered to be formed of all the ISPs operating on the Internet network corresponding to the node vertices.It is important to note the links the eliminations of which would disconnect small ISP (having no AS) from the National Internet network.
By analogy with the concepts of a critical node used in this article, the following definitions are used: critical link and N-critical link.
A link shall be considered to be critical when its elimination or disturbance forms several subgraphs having no interconnection (edges).
N-critical link shall be considered to be critical when its elimination or disturbance significantly degrades network connectivity.
Identification of E c according to the first definition is performed by the principle analogous to V c , i.e., the method of removing graph edges. If the elimination of the n-th link creates separate subgraphs having no interconnection, such a line is considered to be E c . The graph in question corresponds to the regional Internet network with N int connections [1]. N int are the links connecting the AS of the regional network with the AS of the international Internet network provider. In such a case, applying the method of removal, N int shall correspond to E c . Specifying the concept of an N-critical link, we suggest linking it with the interconnection bandwidth Δ. The maximum installed bandwidth Δ max of a link belonging to the i-th AS shall be assessed in relation to the total bandwidth ΣBw of the connections managed by the i-th AS. This relation is expressed by the capacity coefficient η AS = Δ max / ΣBw, where Δ max is the installed connection capacity of the i-th AS, in Gb/s, and ΣBw is the overall bandwidth of all connections of this particular AS, in Gb/s. The estimation of η AS shows the criticality of the link for the connectivity of the i-th AS compared to the other links of the i-th AS. N-critical links shall be divided into two categories: η AS ≥ 0.9 and η AS < 0.9. Respectively, the criticality of lines with η AS ≥ 0.9 shall be considered to be the highest for the total connectivity of the i-th AS. Essentially, the presence of the above-mentioned condition shows a disproportionate distribution of the i-th AS resources.
When analyzing N-critical links (E cN ), their traffic (bandwidth) intensity is also important to consider. The ratio of the data flow Δ traffic (Gb/s) of the n-th link (n = 1, 2, ..., E cN ) to Δ max shows the line load, expressed by the traffic coefficient λ n = Δ traffic / Δ max . It is a dynamic parameter, different from the above-mentioned parameters, which are more or less static. Δ traffic is one of the most significant network parameters and is often monitored by ISPs.
In a real network, under normal conditions, connection links are not overloaded and usually have some reserves. However, subject to data flows generated by user activity or cyber attacks, the traffic intensity may exceed the installed bandwidth. When λ n ≥ 0.8, it signals a critical level of link resource usage; the critical bandwidth limit being reached by more than one line may signal a cyber attack, which in turn may result in a significant degradation of the whole network connectivity.
Application
The above-described metrics were applied to identify the critical nodes and lines of the Lithuanian national Internet network [1].
Having completed the experiment using the method of removing vertices, 4 critical nodes (V c ) were identified, whereas the number of η-critical nodes satisfying the condition η i ≥ 0.1 was 3. Increasing η i (presented in Table 1) will result in a corresponding increase in the number of V c . It should be noted that one of those 3 nodes coincides with a respective critical node.
The identification of critical lines (E c ) in the graph representing the Lithuanian Internet network was slightly more complicated, since the E c search must take place among several hundred connection lines. Using the method of line removal, 26 critical lines were identified. The search for N-critical lines (E cN ) was performed for every ISP separately. Only 2 ISPs (independent from E c ) including E cN were identified as satisfying the condition η AS ≥ 0.9. Decreasing the level of η AS will result in an increase in the number of identified E cN .
Monitoring
We suggest monitoring the above-mentioned V c and E c in order to identify the failures of the critical elements of the network or critical levels of link traffic resources.Monitoring is very important for timely identification of the failures of the critical elements since the loss of such elements affects the whole network performance.For the troubleshooting, we shall use detectors in the subgraph G c consisting of vertices and edges E c .These detectors perform network monitoring through constant intercommunication.
The simple way to perform monitoring would be routine checks carried on network switching nodes (V c ).Those could be simple ping, tracepath, pathping or traceroute commands, which would continuously (for instance, at 1-5 minutes intervals) check the response from all the critical nodes and the process itself would be automated and screened on the network topology map.The positive characteristic of such a method is its independence, since there would be no need for agreements with router administrators regarding placement of sensors.However, the method itself lacks flexibility.In addition, some ISP prohibits reception of the said commands in their networks.
Our approach is to use for monitoring purposes the Simple Network Management Protocol (SNMP).SNMP is an application layer protocol that facilitates the exchange of management information between network devices.It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite.SNMP enables network administrators to manage network performance, find and solve network problems.As most ISPs use SNMP as de facto standard for network supervision, idea is to monitor some parts of national network identified as critical nodes of network infrastructure.
To get information about critical node functionality, a dedicated cyclical algorithm was developed and is presented in Fig. 1.
Generally, monitoring needs to follow several major steps: 1. Send request using SNMP protocol to V c (SNMP Agent).
2. Get the response from V c (SNMP Agent) to the monitoring system (SNMP Manager) using the SNMP protocol. 3. Calculate and store the data using scripts or tools on a central monitoring server with a database. We suggest selecting the Ethernet Statistics Group MIB objects necessary for the λ n evaluation at the SNMP Agent [10], where Δin is the difference between two poll cycles of collecting the SNMP ifInOctets object, which represents the count of inbound octets of traffic in bytes [10]; Δout is the difference between two poll cycles of collecting the SNMP ifOutOctets object, which represents the count of outbound octets of traffic in bytes [10]; Δ max is the speed of the interface, as reported in the SNMP ifSpeed object, in bits/s [10]; and Δt is the time period, with Δt = 60 s. The implementation follows the structural algorithm presented in Fig. 1. SNMP agents can be software-configured so that alarm messages are sent to the monitoring system not only in the case of a total failure of the line (Fig. 1) but also when the critical limit of line traffic is reached, i.e., when λ n ≥ 0.8. Thus, the monitoring is performed even more expeditiously.
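A minimal sketch of such a poll cycle follows; snmp_get is a hypothetical helper wrapping whichever SNMP library is in use, the OIDs are the standard IF-MIB objects named above, and the full-duplex rule of taking the largest of the in/out deltas follows the proposal at the end of this paper.

```python
# Minimal monitoring sketch. `snmp_get(host, oid)` is a hypothetical helper wrapping
# whatever SNMP library is available (an SNMP GET returning an integer value);
# the OIDs are the standard IF-MIB objects named in the text.
import time

IF_IN_OCTETS  = "1.3.6.1.2.1.2.2.1.10"   # ifInOctets
IF_OUT_OCTETS = "1.3.6.1.2.1.2.2.1.16"   # ifOutOctets
IF_SPEED      = "1.3.6.1.2.1.2.2.1.5"    # ifSpeed, bits/s

def poll_utilization(host, ifindex, snmp_get, interval=60):
    """Return lambda_n for one poll cycle of `interval` seconds (full duplex:
    the largest of the in/out octet deltas is used, as proposed in the text)."""
    in1  = snmp_get(host, f"{IF_IN_OCTETS}.{ifindex}")
    out1 = snmp_get(host, f"{IF_OUT_OCTETS}.{ifindex}")
    time.sleep(interval)
    in2  = snmp_get(host, f"{IF_IN_OCTETS}.{ifindex}")
    out2 = snmp_get(host, f"{IF_OUT_OCTETS}.{ifindex}")
    speed = snmp_get(host, f"{IF_SPEED}.{ifindex}")          # installed bandwidth, bits/s
    delta_bits = max(in2 - in1, out2 - out1) * 8              # octets -> bits
    return delta_bits / (interval * speed)

def monitor(critical_nodes, snmp_get, alarm_level=0.8):
    # Cyclically poll every critical node and raise an alarm when lambda_n >= 0.8
    # or when the node stops answering (treated as a failure of the element).
    for host, ifindex in critical_nodes:
        try:
            lam = poll_utilization(host, ifindex, snmp_get)
        except Exception:
            print(f"ALARM: {host} not responding")
            continue
        if lam >= alarm_level:
            print(f"ALARM: {host} ifIndex {ifindex} utilization {lam:.2f}")
```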
Conclusions
The assessment of the infrastructure of a network consisting of a large number of stochastically connected subnets (e.g., the Internet) from the reliability perspective is a difficult task due to network complexity. The metrics compiled during the study allow identifying the critical elements of such a network: critical and η-critical nodes, as well as critical and N-critical links. The analysis of these elements simplifies the above-mentioned task.
Having applied the above-described metrics to the Lithuanian Internet network infrastructure, 4 critical nodes (V c ) were identified, whereas the number of η-critical nodes satisfying the condition η i ≥ 0.1 was 3. Also, 26 critical links and 2 ISPs with N-critical links satisfying the condition η AS ≥ 0.9 were identified. Thus, we can conclude that the majority of subnets in the infrastructure of the national Internet network distribute their resources proportionally. In this way, the risk of depending on the reliability of N-critical links' operation is reduced.
We have proved that monitoring of critical network elements is possible on the basis of SNMP protocol using detectors in the critical network nodes and a monitoring system.Since SNMP is commonly used among ISP, there is no need to install a new system; an additional software installation is enough.The algorithm of network monitoring and its realization code were composed.All this allows for a real-time centralized monitoring of network status, analysis of network operation failures, etc.We suggest implementing such model, e.g. at the institutions managing electronic communication.
Table 1. Critical elements calculation results.
To calculate λ n for full-duplex connections, we propose a formula taking the largest of the in and out traffic values.
"Computer Science"
] |
Biodegradable Magnesium Biomaterials—Road to the Clinic
In recent decades, we have witnessed radical changes in the use of permanent biomaterials. The intrinsic ability of magnesium (Mg) and its alloys to degrade without releasing toxic degradation products has led to a vast range of applications in the biomedical field, including cardiovascular stents, musculoskeletal, and orthopedic applications. With the use of biodegradable Mg biomaterials, patients would not suffer second surgery and surgical pain anymore. Be that as it may, the main drawbacks of these biomaterials are the high corrosion rate and unexpected degradation in physiological environments. Since biodegradable Mg-based implants are expected to show controllable degradation and match the requirements of specific applications, various techniques, such as designing a magnesium alloy and modifying the surface characteristics, are employed to tailor the degradation rate. In this paper, some fundamentals and particular aspects of magnesium degradation in physiological environments are summarized, and approaches to control the degradation behavior of Mg-based biomaterials are presented.
Introduction
It has been a long time since metallic biomaterials gained clinical significance [1]. Biomaterials are expected to be biocompatible in the human body's internal environment containing aggressive ions. Some researchers, as a result, suggest using permanent metallic biomaterials, such as Ti-based alloys, CoCr alloys, and stainless steel [2][3][4][5]. These biomaterials are excellent choices for various medical applications, as they show high corrosion resistance, high strength [6], high hardness [7], and high fracture toughness [8]. On the other hand, the elastic modulus of most orthopedic implants made of these materials is greater than that of the natural bone, resulting in the stress-shielding phenomenon [9,10]. Several ions released from permanent biomaterials can also deteriorate biocompatibility. They may either be removed through a second surgery or remain in the human body; accordingly, several permanent biomaterials used in the market do not meet the requirements of the patient, leading to the development of degradable biomaterials [11].
Nowadays, degradable biomaterials play a crucial role in therapeutics, as they offer a steady resorption rate and, consequently, the best healing process. After providing adequate biomechanical support, resorbable biomaterials degrade gradually with no residues [12][13][14]. They fulfill the mission of promoting the healing process before being replaced by the host tissue [15,16]. No secondary operation is required, thereby eliminating the morbidity of the patient, extra costs, and the risk of new symptoms [17]. The reduction of mechanical support following the degradation process leads to transferring the loads from the orthopedic implants to the bones, thereby plummeting the risk of the reduction in bone density [18]. Even though bioresorbable polymers are candidate materials in tissue engineering and drug delivery, biodegradable metallic biomaterials offer an enhanced alternative for load-bearing Bioengineering 2022, 9, 107 2 of 20 applications [19,20]. Therefore, biodegradable metallic biomaterials are much more suited for use in load-bearing medical devices [21].
The most well-known biodegradable metals are iron (Fe), zinc (Zn), and magnesium (Mg), all of which are essential nutrients for human health [22,23]. The mechanical properties of Fe are the closest to that of a traditional permanent metallic implant, and its degradation rate is remarkably slow. Much as the degradation rate of Zn is moderate, the ductility and strength of this metal are low [24,25]. Studies following the implantation of Mg biomaterials indicate that the biocompatibility of Mg is desirable, and the degradation products of Mg can cause no disorder, inflammation, or allergic reactions to the human body [26][27][28][29][30]. However, the high corrosion rate, unexpected degradation, and structural failure of Mg-based biomaterials may trigger implant failure in some cases [31]. Numerous techniques, hence, have been utilized to alleviate such problems. The most important methods are adding non-toxic alloying elements to pure Mg and modifying the surface of these biomaterials [32][33][34][35]. By taking these methods into consideration, Mg-based biomaterials can be designed to degrade in a tailored behavior at different degradation rates to suit the requirements of a specific biomaterial for various applications [36][37][38]. This review article mainly focuses on the degradation behavior of Mg and its alloys for different biomedical applications.
Biodegradation Behavior of Magnesium-Based Materials
As a biodegradable material, magnesium oxidizes in contact with water, since its standard electrode potential of −2.372 V contributes to low corrosion resistance compared to other metals [39]. In the absence of water, an oxide film of MgO forms on the surface of Mg at room temperature (Equation (1)) [40]. Owing to this film, Mg shows higher corrosion resistance in dry air. The thickness of this film is about 2.65 nm after one minute of exposure to air [41]. Humidity can convert the MgO film into an Mg(OH)2 layer that is stable at pH values higher than 7 (Equation (2)) [42]:

Mg + 1/2 O2 → MgO (1)
MgO + H2O → Mg(OH)2 (2)

Both of these films on the surface of Mg are partly soluble in water; for this reason, they cannot protect the surface of Mg in acidic and neutral solutions. In contrast to MgO, Mg(OH)2, which is slightly soluble, precipitates on the surface of Mg and causes an alkaline pH shift of the solution. Magnesium degradation in aqueous media begins with an anodic partial reaction: Mg loses two electrons to form Mg2+ (Equation (3)). As electrons are neither created nor destroyed in a chemical reaction, H2O gains these electrons to produce hydrogen gas and hydroxide ions (Equation (4)), resulting in the production of gas cavities and an increase in the pH of the solution in the surrounding tissues. Note that the overall reaction, Equation (5), yields one molecule of H2 for each atom of Mg dissolved:

Anodic reaction: Mg → Mg2+ + 2e− (3)
Reduction reaction: 2H2O + 2e− → H2 + 2OH− (4)
Overall reaction: Mg + 2H2O → Mg(OH)2 + H2 (5)

Finally, following this chemical reaction, a partially protective film forms on the surface of Mg, which limits the further migration of ions [43,44]. However, the production of hydrogen gas at the corrosion sites triggers the detachment of the deposited Mg(OH)2 precipitates from the surface and therefore prevents the formation of a uniform Mg(OH)2 film on the surface of Mg. The degradation of Mg is not, as a result, self-inhibited, and it continues until the complete degradation of the substrate [40,45,46].
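As a simple illustration of the stoichiometry implied by the overall reaction (one molecule of H2 per atom of Mg dissolved, as stated above), the sketch below estimates the hydrogen volume evolved per mass of dissolved magnesium; the ideal-gas molar volume at standard conditions is used as an approximation.

```python
# Minimal sketch (assumed ideal stoichiometry from the overall reaction above,
# Mg + 2H2O -> Mg(OH)2 + H2): estimating the hydrogen volume evolved per gram
# of dissolved magnesium. Molar mass of Mg = 24.305 g/mol; molar volume of an
# ideal gas at 0 degC and 1 atm ~ 22.4 L/mol (approximation).
MOLAR_MASS_MG = 24.305      # g/mol
MOLAR_VOLUME_GAS = 22.4     # L/mol

def hydrogen_volume_from_mg_loss(mg_mass_g):
    """Liters of H2 evolved for a given mass of Mg dissolved (1 mol H2 per mol Mg)."""
    moles_mg = mg_mass_g / MOLAR_MASS_MG
    return moles_mg * MOLAR_VOLUME_GAS

# Example: complete dissolution of a 0.5 g Mg specimen.
print(round(hydrogen_volume_from_mg_loss(0.5), 3), "L of H2")
```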
This metal is significantly susceptible to corrosion in most inorganic acidic, neutral, and slightly alkaline solutions, at a rate that decreases as the pH level increases [47]. In other words, Mg has a high affinity to react with H2O over a wide range of pH values. At low pH levels, the corrosion potential lies in the region where hydrogen is stable, resulting in the production of hydrogen gas [48]. At a pH level between 8.5 and 11.5, a protective layer of oxide or hydroxide forms on the surface of Mg. In alkaline solutions, the metal is covered by an Mg(OH)2 layer, which protects it from rapid corrosion. In fact, the corrosion resistance of magnesium and its alloys is closely linked to this passive layer [49].
As mentioned above, the formed magnesium hydroxide layer cannot preserve the surface of Mg from rapid corrosion, especially in an environment that contains a considerable amount of chloride ions. The reason for this is that Mg(OH)2 is converted into the more soluble MgCl2, and the dissolution of the Mg(OH)2 film accelerates the dissolution process [45,50]. These reactions can be expressed as:

Mg(OH)2 + 2Cl− → MgCl2 + 2OH−
Mg2+ + 2Cl− → MgCl2

It is noteworthy that, in a solution containing HCO3− and HPO4 2−, the corrosion products also include Mg/Ca carbonates and phosphates, which can increase the precipitates on the surface of Mg, thereby decreasing the degradation rate of Mg-based materials. Representative precipitation reactions are:

Mg2+ + HCO3− + OH− → MgCO3 + H2O
Ca2+ + HCO3− + OH− → CaCO3 + H2O
3Mg2+ + 2HPO4 2− + 2OH− → Mg3(PO4)2 + 2H2O
3Ca2+ + 2HPO4 2− + 2OH− → Ca3(PO4)2 + 2H2O

The distribution of the degradation products of Mg is hardly uniform during the degradation process: whereas Ca3(PO4)2 tends to appear preferentially at certain sites, Mg3(PO4)2 tends to distribute homogeneously over the corroding surface. The main reason for this is that a large concentration of Mg ions suppresses the nucleation of Ca3(PO4)2 [51,52]; it is, as a consequence, easier for Mg3(PO4)2 to precipitate all over the surface. Once the Mg surface is covered with a protective layer of Mg3(PO4)2, the nucleation of Ca3(PO4)2 occurs, and a non-uniform distribution of Ca3(PO4)2 forms in the product layer [51]. Finally, the overall degradation of Mg is governed by the equilibrium between the production and dissolution of degradation products, together with the conversion of the active layer into a passive one [25].
Mg Corrosion in Simulated Body Environments
One of the most important factors in evaluating the degradation behavior of magnesium-based biomaterials is finding a suitable physiological fluid, as the degradation rate of these biomaterials differs significantly in various types of simulated body fluids. To simulate a human body environment, different media, notably physiological saline (0.9% NaCl) solution, Ringer's solution (RS), phosphate-buffered saline (PBS), simulated body fluid (SBF), Hank's balanced salt solution (HBSS), Earle's balanced salt solution (EBSS), and Dulbecco's Modified Eagle medium (DMEM), are widely used. Each simulated body solution contains a specific amount of components that can trigger the formation of different degradation products, pathways, and mechanisms [53]. By way of illustration, the degradation product layer formed on the surface of Mg exposed to Ringer's solution mainly consists of magnesium calcite and brucite, as opposed to the layer formed on the surface of Mg immersed in Hank's solution, which includes calcium phosphate, calcite, and brucite [54]. However, an XPS investigation carried out on the surface of Mg revealed that the same components, including MgO, Mg(OH)2, and MgCO3, were formed after exposure to SBF, HBSS, and DMEM [55].
By and large, a suitable simulated body solution ought to consist of three main parts: inorganic salts, buffering systems, and organic elements. To measure the degradation behavior of Mg and its alloys, physiological saline (0.9% NaCl) solution was used in several studies, most of which showed a striking difference between in vitro and in vivo results [56], compared to SBF and HBSS that indicated more reliable results [57]. RS is a solution with at least three different recipes: with lactate, with HCO 3 − , and without HCO 3 − [58]. The composition of this solution is not well-defined for corrosion testing of metallic implants [59], resulting in substantially different corrosion resistance. In most cases, the corrosion rate would be high due to the insufficient inorganic ions in Ringer's solution, as opposed to interstitial and human body fluids. In the case of magnesium, the corrosion rate would decelerate owing to the combination of HCO 3 − , Ca 2+ , and alkaline pH at the Mg interface, which forms CaCO 3 [60].
Despite the fact that PBS has been extensively used as the corrosion testing medium of Mg and its alloys [61][62][63][64], it is not generally a suitable solution to simulate or predict the in vivo degradation behavior of Mg, since phosphate with Mg 2+ can create insoluble precipitation on the surface of the metal, which can produce inaccurate results [53,65]. Mena-Morcillo et al. [66] investigated the degradation of AZ31 and AZ91 Mg alloys in SBF, Hanks', and Ringer's solutions. They found out that the corrosion products precipitated on the surface of Mg alloys in Hanks' media showed higher stability compared to SBF and Ringer's solutions; as a result, those Mg alloys exposed to Hanks' media were less affected. SBF, HBSS, and EBSS mainly include similar inorganic components with slightly different concentrations [67]. Although SBF has been used to test the apatite-forming ability of biomaterials [68,69], the absence of organic compounds makes it difficult to obtain accurate results, in that the degradation performance of Mg and its alloys is considerably different under the cell culture environment [70,71]. Moreover, in different studies in which the corrosion rate of pure Mg was assessed in SBF, radically different results were obtained [48,[72][73][74][75], reducing the popularity of this solution for corrosion testing. HBSS is reported to be simple compared to DMEM, which contains organic components [76]. In a recent study, pure Mg was exposed to SBF, HBSS, and DMEM under cell culture conditions with CO 2 gassing. The results indicated that SBF-and DMEM-based media indicated a higher buffering capacity than HBSS, and the degradation rate of Mg was highest in HBSS [76]. In another research study, the corrosion rate of pure Mg exposed to HBSS was very high [77].
EBSS has been used widely for in vitro testing of Mg and its alloys [78][79][80][81]. It is believed that the degradation rate of Mg biomaterials in EBSS is comparable to in vivo conditions [82][83][84][85][86]. Walker et al. [87] immersed pure Mg and five Mg alloys in EBSS, MEM, and MEM-containing BSA (MEMp) and implanted the samples in Lewis rats. After 21 days, the results indicated that the corrosion rate of samples immersed in EBSS buffered with sodium bicarbonate was similar to that obtained in vivo. In addition to EBSS, cell culture media, such as DMEM and MEM, are preferable to investigate the corrosion behavior of Mg-based biomaterials [88][89][90].
Another crucial factor in simulated body solutions is the buffering system. A natural buffer system, which consists of plasma protein buffers, HPO4 2−, and HCO3−, controls the pH level in the human body [91,92]; by the same token, an appropriate buffering system can control the pH of a buffer solution. NaHCO3/CO2 buffer, 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES), and Tris-HCl (Tris hydrochloride) are the most frequently used buffers for in vitro studies of Mg [37,84,93]. HEPES buffer increases Mg corrosion by a factor of up to four compared to NaHCO3 buffering alone in DMEM, EBSS, and simple salt solutions under the same conditions [94]. HEPES in testing solutions affects the nucleation process and reduces the formation of carbonate and phosphate in the degradation layer; in this way, the protective layer on Mg is destabilized, a less dense degradation layer is produced, and the progressive diffusion of aggressive ions is allowed [95,96]. Besides that, HEPES is reported to cause selective dissolution of Ca-containing phases in glass-ceramics. When pure Mg is exposed to Tris-HCl buffer in SBF, it is more sensitive to pitting corrosion. For one thing, Tris-HCl prevents the formation of corrosion products on the surface of the Mg alloy; for another, Tris increases the degradation rate of pure Mg by a factor of ten during early-stage exposure. The presence of Tris-HCl buffer in simulated body fluid therefore makes pure Mg extremely susceptible to pitting corrosion [93].
Unlike Tris and HEPES, the HCO 3 − /CO 2 buffering system is preferred for in vitro assays on the grounds of the similarity to the regulation of the pH of the body. CO 2 in the testing system not only promotes the formation of carbonate on the surface of Mg but also triggers a stable pH through the equilibrium of HCO 3 − /CO 2 . A carbonated film formed in the presence of CO 2 under aqueous conditions is thicker than an Mg(OH) 2 film formed in the absence of CO 2 , thereby decelerating the degradation rate [53]. Törne et al. [97] compared the effect of HEPES and HCO 3 − /CO 2 on the degradation of Mg. They found out that m-SBF(HEPES) increased the corrosion rate of Mg, whereas the corrosion mechanism of Mg in m-SBF(CO 2 ) was similar to in vivo corrosion mechanism.
A number of cell culture media with small molecule organic compounds and proteins have been designed to evaluate the corrosion behavior of Mg. With the appearance of these compounds in the solutions, the complexity of corrosive media increases because the corrosive media resembles the real body fluid more closely. The corrosion resistance of Mg, in most cases, could increase [60]. Yan et al. [98] evaluated the synergistic effects of protein and glucose on the degradation of Mg. They reported that the degradation of Mg was inhibited significantly, as the synergistic effect of protein and glucose limited the adsorption of aggressive Cl − to a certain extent.
An investigation assessed the stress-corrosion-cracking susceptibility of Mg-1Zn alloy in PBS, bovine calf serum (BCS), modified simulated body fluid (m-SBF), and DMEM as a case in point [99]. It was reported that the samples immersed in PBS showed serious pitting corrosion, whereas those exposed to BCS and DMEM indicated higher resistance to corrosion. In another study, Mei et al. examined the corrosion of Mg exposed to albumin-containing HBSS. It was demonstrated that the presence of BSA resulted in rapid corrosion of pure Mg, as the formation of the protective film on the surface of corroded Mg decelerated during the first hours of immersion [90]. One of the reasons behind these results may be the influence of organic compounds on the degradation product layer. Hou et al. [100] chose fetal bovine serum (FBS), L-alanyl-L-glutamine (L-Ala-L-Gln), L-glutamine (L-Gln), and L-ascorbic acid (L-AA) to illustrate the influence of organic molecules on the degradation behavior of pure magnesium under cell culture conditions. It was found that organic components have a major influence on the formation of the degradation layer. In the "inner" layer, the addition of organic components promoted the formation of phosphate (Mg-PO4 and Ca-P salts) during immersion; conversely, in the "outer" layer, these components assisted the precipitation of nesquehonite rather than hydromagnesite. However, the effects of many other organic compounds and proteins on the degradation behavior of Mg have yet to be explored.
Current Status of Mg-Based Biomaterials
Biomaterials, ideally, ought to degrade following tissue healing, and the biodegradation process should have no adverse effects on human health. Magnesium, as a biodegradable material, can play an important role in the biomedical field. Be that as it may, untreated Mg in the physiological environment shows a high degradation rate, hydrogen evolution, and an increase in the pH of local tissues, which could harm the surrounding tissues [101][102][103][104]. Accordingly, Mg resorption must be controlled, normally by introducing particular alloying elements into magnesium and by modifying the surface of the biomaterials. Using these techniques, modified Mg-based devices can be utilized for cardiovascular [105][106][107][108], musculoskeletal, and orthopedic applications [109][110][111]. They can also be used in oral and other general applications [112].
Selection of Alloying Elements for Controlling the Degradation Behavior
The addition of alloying elements has a direct influence on the degradation behavior of Mg biomaterials. A case in point is the degradation rate of ZJ41 Mg alloy, which is very fast compared to that of AZ31 Mg alloy [113]. By and large, the design of Mg-based biomaterials needs meticulous care. For one thing, alloying elements might react with magnesium and create intermetallic phases, which dissolve in the Mg matrix or distribute along the grain boundaries, leading to different microstructures and degradation rates [114]. For another thing, the metallic ions released from Mg alloys must be biocompatible. Considering these two factors, the most popular alloying elements for Mg are calcium (Ca), zinc (Zn), manganese (Mn), strontium (Sr), lithium (Li), zirconium (Zr), yttrium (Y), and aluminum (Al). The effect of these alloying elements on the degradation of Mg is summarized in Table 1.

Table 1. Summary of the effect of the most common alloying elements on the degradation behavior of Mg alloys.
Ca: Concentration in magnesium alloys should be less than ~1 wt.%; excessive addition of calcium to pure magnesium deteriorates corrosion resistance. [115,116]
Zn: Improves the corrosion resistance of Mg alloys, mostly at contents below ~5 wt.%. [117][118][119]
Mn: Improves corrosion resistance by decreasing impurities when a small quantity (less than ~1 wt.%) of Mn is added. [120]
Sr: Improves corrosion resistance; optimum content below ~2 wt.%. [121]
Li: Improves corrosion resistance at concentrations less than ~9 wt.% in pure Mg; reduces corrosion resistance at higher Li additions. [122]
Zr: Contents below ~2 wt.% improve corrosion resistance. [123]
REEs: Generally enhance the corrosion resistance of Mg alloys; the corrosion resistance of Mg-light REE alloys is normally better than that of Mg-heavy REE alloys. [124][125][126]
Al: With increasing Al content (the maximum effect is reached at the solubility limit of 12.7 wt.% Al), the corrosion rate of the homogeneous α-phase decreases. [127]

Ca is a main constituent of human bone and is vital for the life of human beings [128]. Ca is mainly found in bones and teeth [129][130][131]. The release of calcium ions regulates the activation of osteoclasts and osteoblasts, thereby facilitating bone regeneration in vitro and in vivo [132,133]. The addition of this element to magnesium alloys can enhance the corrosion resistance, mechanical properties, microstructure, and electrochemical behavior of Mg-Ca alloys [134][135][136]. Ca has an impact on the development of texture during rolling or extrusion, producing weaker textures without a strong alignment of basal planes. Such textures are known to show lower anisotropic mechanical behavior and higher ductility [137]. The in vitro and in vivo degradation behavior of binary Mg-xCa alloy (x = 0.5 or 5.0 wt.%) was determined by Makkar et al. [116]. The in vitro study showed that the degradation rate varied linearly with the Ca content, with higher degradation, increased pH, and more hydrogen gas evolution in the Mg-5.0Ca alloy. Moreover, in vivo studies revealed rapid degradation, prolonged inflammation, and a higher initial corrosion rate in Mg-5.0Ca compared to Mg-0.5Ca alloy.
Zinc is an essential trace element that people need to stay healthy. This element supports the normal function of many enzymes, the normal growth of the gonads, the treatment of bacterial infections, the improvement of cognitive abilities, neurotransmission, and synapse formation [138][139][140]. Studies have indicated that Mg-Zn alloys possess good mechanical properties, biocompatibility, and higher corrosion resistance [141]. Apart from that, the addition of Zn to Mg alloys can significantly reduce H2 evolution [142,143]. However, depending on the Zn content in binary Mg-Zn alloys and the phase distribution, the corrosion resistance of Mg-Zn alloys differs widely. Zhang et al. [144] implanted Mg-6Zn alloy rods in rabbits. The results indicated that the Mg alloy could be gradually absorbed in vivo at a degradation rate of 2.32 mm/yr, obtained by the weight-loss technique, with no disorders of the heart, liver, kidney, or spleen. In addition, six weeks after implantation, the subcutaneous H2 gas that had accumulated from degradation of the alloy disappeared without discernible adverse effects.
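For orientation, weight-loss corrosion rates such as the 2.32 mm/yr value above are commonly computed with an ASTM G31-type relation. The expression below is a generic sketch rather than the exact calculation used in ref. [144], with mass loss ΔW in g, exposed area A in cm², immersion time t in h, and density ρ ≈ 1.74 g/cm³ for Mg:

$$
CR\,(\mathrm{mm/yr}) = \frac{8.76 \times 10^{4}\,\Delta W}{A\, t\, \rho}
$$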
In the human body, Mn is required for the normal functioning of the brain, nervous system, enzymes, and cellular homeostasis [145][146][147]. In Mg alloy implants, Mn plays the role of enhancing the corrosion resistance of the alloys without deteriorating their mechanical integrity [148]. Yu et al. [149] investigated the texture, microstructure, and mechanical properties of Mg-3Mn alloys. It was indicated that the samples showed weakened basal texture, refined microstructure, good yield strength, and high tensile elongation.
Strontium is considered one of the potential candidates for orthopedic applications in that this element can promote the growth of osteoblast cells [150][151][152]. A certain amount of Sr in Mg alloys can enhance the corrosion resistance [153] and mechanical strength of the alloys [154]. Jiang et al. [155] examined the degradation performance and biocompatibility of four binary Mg-Sr alloys (Mg-xSr, x = 0.2, 0.5, 1, and 2 wt.%), together with four ternary Mg-Ca-Sr alloys (Mg-1Ca-xSr, x = 0.2, 0.5, 1, and 2 wt.%), through direct culture with bone-marrow-derived mesenchymal stem cells (BMSCs). It was indicated that the Mg-1Sr and Mg-2Sr alloys showed the lowest degradation rates in comparison with the other binary Mg-Sr and ternary Mg-Ca-Sr alloys. The ternary Mg-Ca-Sr alloys revealed enhanced BMSC adhesion on their surfaces in comparison with the binary Mg-Sr alloys, except for the Mg-1Ca-0.2Sr alloy. Furthermore, the Mg-1Sr, Mg-1Ca-0.5Sr, and Mg-1Ca-1Sr alloys presented the best degradation and BMSC performance among the above-mentioned alloys.
Chen et al. [156] prepared Mg-2Sr-Zn and Mg-2Sr-Ca alloys and then investigated their degradation behavior. In this study, the addition of Zn and Ca improved the in vitro and in vivo corrosion resistance compared to the binary Mg-2Sr alloy. The in vivo corrosion rates for Mg-2Sr-Zn and Mg-2Sr-Ca were 0.85 and 1.10 mm/year, respectively, compared with 1.37 mm/year for Mg-2Sr. The degradation of these rods, followed via three-dimensional reconstruction of the femora with implants and two-dimensional cross-sectional micro-CT images, is shown in Figure 1. As demonstrated, one week after implantation, localized degradation of the biomaterials at the surface of the rod can be seen in both trabecular and cortical bone areas. In the bone-marrow-cavity region, more rapid degradation occurred compared with the distal regions. Moreover, the in vivo degradation of rods made of Mg-2Sr-Ca alloy was faster than that of Mg-2Sr-Zn alloy rods.
Although lithium is not officially considered a micronutrient [157], it is remarkably effective against a wide spectrum of bacteria and has potent immune-stimulating capabilities [158]. It is said that lithium can be utilized as a promising bioactive element to promote the osteogenesis process, because Li-based scaffolds could improve bone regeneration and stimulate the osteogenesis of bone-marrow mesenchymal stem cells [159]. This element is used as augmentation therapy for depression and as a typical mood stabilizer for the treatment of bipolar disorder [160]. While low Li could reduce life expectancy, cause behavioral problems, impair the reproductive function of the organism, and slow down cell growth, high doses might trigger intoxication and result in pathological functional changes of individual organs or body systems [161]. The addition of Li to Mg alloys facilitates the activation of prismatic slip and enhances the microstructure of Mg-Li alloys [36,162]. The most prominent properties of Mg-Li alloys are their superior ductility and formability, which make them a great candidate for cardiovascular stent applications. Zhou et al. [163] studied Mg-3.5Li and Mg-8.5Li binary alloys to evaluate their degradation behavior for cardiovascular stent applications. However, the strength of the Mg-Li binary alloys was not adequate, owing to the presence of Li. Accordingly, Al and REEs were added to produce Mg-Li-Al ternary and Mg-Li-Al-RE quaternary alloys. The results of cytotoxicity tests revealed that the Mg-3.5Li-2Al-2RE, Mg-3.5Li-4Al-2RE, and Mg-8.5Li-2Al-2RE alloys suppressed vascular smooth-muscle cell proliferation five days post-incubation, whereas the Mg-3.5Li, Mg-8.5Li, and Mg-8.5Li-1Al alloys did not cause any problems. In the case of human umbilical vein endothelial cells, the Mg-Li-based alloys indicated no considerable reduction in cell viability except for the Mg-8.5Li-2Al-2RE alloy, with no clear contrasts in cell viability between the various culture periods.

Figure 1. Three-dimensional reconstruction of the femora with implants and two-dimensional cross-sectional micro-CT images of Mg-2Sr-Zn and Mg-2Sr-Ca alloy rods [156]. Localized degradation of the biomaterials at the surface of the rod can be seen in both trabecular and cortical bone regions one week after implantation. In the bone-marrow-cavity area, more rapid degradation was found in comparison with the distal areas, and the in vivo degradation of Mg-2Sr-Ca alloy rods was faster than that of Mg-2Sr-Zn alloy rods. Reprinted with permission from Ref. [156]. Copyright 2020, KeAi.
In a number of studies, it has been shown that Zr presents desirable osteocompatibility, biocompatibility, corrosion resistance, and low ionic cytotoxicity [164][165][166]. The addition of Zr to Mg alloys can effectively refine the Mg grain size [164]. Mg alloys containing Zr often show good damping properties, a lower hot-cracking tendency, corrosion resistance, and good mechanical properties [167]. Sayari et al. [168] investigated the effect of a 0.7 wt.% Zr addition on the superplastic behavior and microstructure of extruded Mg. They found that the Mg-0.7Zr alloy indicated superplastic behavior after moderate deformation imposed by the extrusion process, along with improved strength. They also reported that a bimodal microstructure developed and the grain size decreased due to the addition of Zr.
Extensive exposure to REEs is reported to affect human health adversely [169]; however, several studies have shown the antibacterial and antifungal activities of these elements [170,171]. In Mg alloys, REEs have indicated great potential in improving formability, enhancing ductility, weakening sharp basal textures, and refining grains [172]. REEs can also improve the corrosion resistance of Mg alloys, as a stable corrosion product layer can form on the surface of Mg [173]. Azzeddine et al. [174] studied the corrosion behavior of Mg-1.43La, Mg-1.44Nd, Mg-0.63Gd, Mg-0.41Dy, and Mg-0.3Ce (wt.%) alloys. It was shown that the corrosion resistance of the alloys decreased in the following order: Mg-0.41Dy, Mg-0.63Gd, Mg-0.3Ce, Mg-1.44Nd, and Mg-1.43La. It is reported that the presence of a high fraction of the Mg12La phase, which acted as an anodic phase along the grain boundaries in the Mg-1.43La alloy, triggered severe pitting corrosion, while the formation of Dy2O3 oxide inhibited pitting corrosion in the Mg-0.41Dy alloy and led to high corrosion resistance. In another study, Liu et al. [125] individually added sixteen types of REEs into pure Mg to compare the impact of each type of REE on the corrosion behavior, mechanical properties, microstructure, and biocompatibility of Mg materials. The results indicated that the addition of various REEs at suitable concentrations into Mg could enhance the general behavior of Mg in several respects. The corrosion resistance of Mg-light REE alloys was enhanced compared to Mg-heavy REE alloys. The mechanical properties of the Mg-RE binary alloys were significantly adjusted, and the Mg-RE sample alloys indicated no cytotoxic influence on MC3T3-E1 cells.
While Al was long believed to be non-toxic, recent studies indicate that this metal can negatively affect human health and has been associated with brain diseases such as multiple sclerosis, Parkinson's disease, and Alzheimer's disease [175][176][177]. Moreover, it can disrupt the pro-oxidant/antioxidant balance in tissues, resulting in physiological and biochemical dysfunction owing to excessive generation of reactive oxygen species [178]. Metallurgically, however, Al has a very favorable influence on Mg alloys: it can enhance corrosion resistance, fatigue strength, castability, and hardness [179][180][181][182].
Surface Treatment for Controlling the Biodegradation Behavior of Mg and Its Alloys
Surface modification is a major approach to decelerating the degradation of Mg alloys for cardiovascular applications [101,183]. A good example is AZ31 coronary stents that were laser-cut, acid-pickled, and dip-coated in a solution of PCL with 1% TiO2. In this research, the degradation rate of the uncoated AZ31 control stents was higher than that of the coated AZ31 stents: while the uncoated stents in flowing Hank's solution lost ∼27% of their weight, the coated stents lost only ∼9% after four weeks of dynamic degradation [184]. For cardiovascular applications, drug-eluting coatings might reduce the incidence of restenosis and optimize the corrosion profile of the Mg substrate. Tang et al. [185] applied paclitaxel incorporated in poly(trimethylene carbonate) on the surface of Mg. This coating, which was uniform, gradually degraded from the surface inward and provided long-term protection; as a result, it could be a good candidate as a drug-eluting coating for Mg-based stents. In another study, an asymmetric coating consisting of an inner PEI single layer and an outer sirolimus-loaded PLGA/PEI double layer was developed on the surface of a WE43 Mg-alloy stent. It was shown that the PEI coating layer had desirable adhesion to the surface of the substrate and significantly enhanced the in vitro endothelial cell compatibility and the corrosion resistance of the Mg alloy, whereas the PLGA/PEI double-coating layer ensured a stable surface morphology and a low release rate of sirolimus during the drug-release process; therefore, this system could have the potential to suppress in-stent restenosis and improve re-endothelialization in vascular stent applications [186]. Chen et al. [187] applied a rapamycin-eluting polymer coating on the surface of biodegradable Mg-Nd-Zn-Zr alloy stents. An in vivo test of the optimized coated stents was performed in the iliac artery of New Zealand white rabbits with quantitative coronary angiography, optical coherence tomography, and micro-CT observation at one-, three-, and five-month follow-ups (Figure 2). According to the angiography exams, neither early in-scaffold restenosis nor thrombus was observed, and the coated stents allowed for arterial healing and supported the vessel effectively before degradation. Regarding optical coherence tomography, strut embedding into the vessel wall and endothelialization occurred at one month post-implantation. The following optical coherence tomography observation indicated that the attenuations of signal around the edges of the struts remained sharp and the lumen area had increased by three months. As can be seen in micro-computed tomography scanning of the entire scaffolded-segment vessels, the degradation of the coated stent was insignificant at one month, whereas, after five months, the mechanical integrity was lost and the stent had degraded significantly. Finally, these results revealed that the degradation of this stent proceeded layer by layer from the outside to the inside.
Generally, an ideal stent needs to fulfill not only anti-restenosis and fast endothelialization but also anti-inflammation and suitable durability. By way of illustration, Ye et al. [188] fabricated a multifunctional stent by using atorvastatin calcium (ATVC) loaded into the surface-eroding poly (1,3-trimethylene carbonate) (PTMC) on the surface of AZ31 wire to obtain vascular remodeling, target drug delivery, and well-controllable degradation performance. They indicated that the degradation rate of the coated Mg was reduced in the microfluidic-chip, electrochemical, in vitro, and in vivo tests. The in vivo rat test showed that the PTMC-ATVC coating reduced intimal hyperplasia and inflammation and regulated endothelial and smooth muscle cells. Moreover, the target atorvastatin delivery demonstrated a promising dual-function coating for enhancing the early endothelialization and the durability of these stents.
Having the ability to promote in vivo bone healing and regeneration, and mechanical properties similar to those of bone, Mg alloys with suitable coatings have the potential for use as biodegradable orthopedic implants [189][190][191]. These materials, coated with calcium phosphate coatings based on hydroxyapatite and its various chemical analogues, can further enhance the biocompatibility [192], bioactivity [193], wear resistance [194], bone conduction, bone induction, and degradation resistance of Mg biomaterials [195]. Gao et al. [196] deposited a calcium phosphate coating containing dicalcium phosphate dihydrate on an AZ60 alloy via the chemical conversion technique. The in vitro and in vivo results indicated that this coating significantly improved the biocompatibility and biodegradation behavior of the Mg alloy. To provide a solid basis for further clinical translation, the safety and effectiveness of Mg-Nd-Zn-Zr alloy screws coated with a Ca-P coating for the treatment of medial malleolar fractures were evaluated [197]. In this study, these modified Mg screws were used to treat nine patients with medial malleolar fractures (Figure 3). Postoperative radiography showed that obvious degradation occurred twelve months postoperatively and all patients achieved good medial malleolar fracture alignment. No one experienced malunion, failure of internal fixation, infection, or breakage of the screws before fracture healing. These results confirm that Ca-P-coated Mg-Nd-Zn-Zr alloy has excellent prospects for clinical translation and can be an alternative internal fixation device for fracture treatment. In a study, Husak et al. [198] applied hydroxyapatite coatings on the surface of an Mg alloy with contents of Mg (96.25 wt.%), Al (1.85 wt.%), Nb (1.25 wt.%), and Zr (0.65 wt.%). The in vitro and in vivo results indicated that the number of adherent cells on the surface of the uncoated Mg alloy was significantly lower than that on the surface of the hydroxyapatite-coated samples, and the degradation rate of this Mg alloy was decreased by the hydroxyapatite coating. It is reported that the efficiency of hydroxyapatite-coated Mg alloys can be further improved by using an antimicrobial agent along with hydroxyapatite [199].

Figure 2. Left side (A): the distribution of the diameter along the iliac artery and (d-f,j-l,p-r) OCT photographs in the scaffolded segment, showing the complete endothelialization and strut embedding into the vessel wall after one month of implantation. By three months, the attenuations of signal around the edges of the struts remained sharp and the area of the lumen increased. White arrows demonstrate the bright-dark-bright three-layered appearances corresponding to intima, media, and adventitia. The asterisks show the homogeneous signal-rich regions corresponding to fibrous plaques.
The double arrows indicate the degraded implant, normal arterial structures, and some calcific plaques after five months. Right side (B): µ-CT images. (a,b) One month after implantation, degradation was insignificant. (c,d) By three months, minimal volume loss could be seen. (e,f) At five months, the OPT stent had degraded considerably. Reprinted with permission from Ref. [187]. Copyright 2019, Elsevier.
Other ceramic coatings could effectively suppress the rapid degradation of magnesium alloys. Lin et al. [200] used the Ti and O dual-plasma ion immersion implantation (PIII) method to fabricate a multifunctional TiO2-based nano-layer on ZK60 Mg alloy to improve the antimicrobial activity, osteoconductivity, and corrosion resistance of the Mg alloy. The in vitro study indicated that this TiO2/MgO nano-layer could control the degradation rate of the Mg alloy, and the in vivo assay showed that, at eight weeks post-surgery, 94% of the implant volume was still maintained, proving that this nano-layer could not only regulate implant-to-bone integration effectively but also control the degradation of the Mg alloy. To stimulate bone formation and enhance the osteogenic activity, osteocompatibility, and corrosion resistance of Mg-based implants, Xiong et al. [201] introduced a novel coating on the surface of Mg-1Ca, employing bioactive Ca- and Sr/P-containing silk fibroin layers on the surface of the Mg alloy.

Figure 3. (a) Preoperative and postoperative radiographs of a young female patient with a trimalleolar fracture. Two Mg-Nd-Zn-Zr alloy screws coated with a Ca-P coating (white arrows) were implanted for the treatment of the medial malleolar fracture. Neither screw showed signs of failure before fracture healing, as they maintained their morphology. The radiographs also indicated the degradation process seventeen months post-surgery. (b) Preoperative and postoperative radiographs of a middle-aged female patient with a medial malleolar fracture. The patient's radiographs indicated radiolucent zones around the screws one month postoperatively, which had almost disappeared twelve months postoperatively. L and R indicate the left and right medial malleolus. Reprinted with permission from Ref. [197]. Copyright 2021, Elsevier.
Conclusions and Future Aspects
The biodegradability and biocompatibility of Mg-based materials make them suitable for biomedical applications. Most of the currently researched Mg-based implants, however, degrade sooner than expected. Should we therefore conclude that Mg is not the best choice as a biodegradable biomaterial and focus instead on another biodegradable metal? The major drawback in this field is the lack of accurate data. As is well known, numerous factors influence the corrosion rate and, therefore, the degradation of magnesium. Some of these factors relate to the environment in which corrosion testing is performed; it is therefore essential, first and foremost, to mimic the real body environment for observations and measurements. A case in point is the absence of organic components, which have a dramatic effect on the degradation of this metal, from most simulated body solutions used for corrosion and degradation testing. On the other hand, it is believed that designing a suitable composition and surface modification can significantly control the degradation process. Concerning control of the degradation rate, numerous Mg alloys and surface-modification techniques have been introduced for different applications, making the field of biodegradable Mg biomaterials significantly more advanced. While a great deal of research is still required to demonstrate the in vivo and clinical efficacy of these modified Mg alloy biomaterials, the world is still waiting for the introduction of new methods that can control the degradation of Mg-based biomaterials and offer novel functions at the same time.
"Materials Science",
"Medicine"
] |
Severity of Ionized Hypercalcemia and Hypocalcemia Is Associated With Etiology in Dogs and Cats
Background: Calcium disorders are common in small animals, but few studies have investigated the etiology of ionized hypercalcemia and hypocalcemia in large populations. This study aimed to determine the incidence of ionized calcium disorders in dogs and cats treated at a tertiary referral clinic and to describe the associated diseases. Methods: An electronic database of electrolyte analyses conducted at the Cornell University Hospital for Animals from 2007 to 2017 was searched. Dogs and cats with ionized hypercalcemia or hypocalcemia were identified based on institution reference intervals. Duplicate case entries were removed. Medical records were reviewed to identify the cause of the calcium abnormality. Chi-squared analysis with Bonferroni adjustment was performed to compare frequencies of disease processes between mild and moderate-severe disturbances. Results: The database included 15,277 dogs and 3,715 cats. Hypercalcemia was identified in 1,641 dogs and 119 cats. The incidence of canine and feline hypercalcemia was 10.7 and 3.2%, respectively. Hypocalcemia was identified in 1,467 dogs and 450 cats. The incidence of canine and feline hypocalcemia was 9.6% and 12.1%, respectively. The most common pathologic causes of hypercalcemia in dogs were malignancy-associated (12.9%), parathyroid-dependent (4.6%) and hypoadrenocorticism (1.7%). In cats, malignancy-associated hypercalcemia (22.7%), kidney injury (13.4%) and idiopathic hypercalcemia (12.6%) were most common. Dogs presenting with moderate-severe hypercalcemia vs. mild hypercalcemia were significantly more likely to have hyperparathyroidism, malignancy-associated hypercalcemia or hypervitaminosis D, whereas cats were significantly more likely to have malignancy-associated hypercalcemia or idiopathic hypercalcemia. The most common pathologic causes of hypocalcemia in dogs were critical illness (17.4%), kidney injury (10.4%) and toxicity (7.5%). In cats, kidney injury (21.6%), urethral obstruction (15.1%), and critical illness (14.7%) were most frequent. Dogs presenting with moderate-severe hypocalcemia were significantly more likely to have hypoparathyroidism, kidney injury, eclampsia or critical illness, whereas cats were significantly more likely to have kidney injury, soft tissue trauma or urethral obstruction. Conclusions: Mild calcium disturbances are most commonly associated with non-pathologic or transient conditions. Malignancy-associated hypercalcemia is the most common cause of ionized hypercalcemia in dogs and cats. Critical illness and kidney injury are frequent causes of ionized hypocalcemia in both species.
INTRODUCTION
Hypercalcemia and hypocalcemia are commonly encountered in veterinary medicine with potentially life-threatening consequences. Calcium is essential for many cellular processes, including neuromuscular transmission, enzymatic reactions, blood clotting, vasomotor tone, and bone metabolism. Ionized calcium is the biologically active form and blood ionized calcium concentrations are tightly regulated through the concerted actions of parathyroid hormone (PTH), 1,25-dihydroxyvitamin D3 (calcitriol) and calcitonin. Parathyroid chief cells secrete PTH, which increases plasma calcium by mobilizing bone stores, increasing renal tubular calcium reabsorption, and increasing calcitriol synthesis. Calcitriol increases intestinal calcium uptake and enhances bone and renal calcium reabsorption. Hypocalcemia markedly increases PTH secretion with consequent increases in calcitriol concentrations that in turn inhibit PTH synthesis. Hypercalcemia suppresses PTH synthesis and stimulates calcitonin secretion thereby inhibiting osteoclastic bone resorption. Failure or disruption of these regulatory mechanisms results in hypercalcemia or hypocalcemia with accompanying clinical signs (1,2).
Hypercalcemia may lead to vomiting, depression, weakness, muscle twitching, cardiac arrhythmias, and seizures. Hypocalcemia may cause muscle tremors, facial rubbing, muscle cramping, stiff gait, seizures, restlessness, aggression, hypersensitivity, and disorientation. Hypocalcemia also contributes to sepsis-associated myocardial dysfunction and to the development of ventricular arrhythmias and refractory hypotension. Severe ionized hypocalcemia may result in coagulation abnormalities (2)(3)(4). Even in the absence of clinical signs, abnormalities in calcium concentrations may provide diagnostic clues to underlying disease.
The causes of calcium disorders are well-described (1), but the incidence of these disorders in large populations is undefined and, similarly, there is limited information regarding the relative frequencies of the underlying disorders. Previous retrospective studies of small patient populations suggest that malignancy is the most common cause of hypercalcemia in dogs, with primary hyperparathyroidism, kidney disease and hypoadrenocorticism also prominent (5)(6)(7). There is little consistency among reports regarding common causes of hypercalcemia in cats, but idiopathic hypercalcemia, neoplasia, acute kidney injury (AKI) and urolithiasis are reported (8,9). Commonly reported causes of hypocalcemia include hypoparathyroidism, chronic kidney disease (CKD), acute pancreatitis and eclampsia. It should be noted that much of the calcium disorder literature discusses only total calcium concentrations, which are known to be unreliable indicators of the concentrations of biologically active ionized calcium (10,11). The severity of the calcium disorder is associated with mortality in both dogs and cats (12,13). Ionized calcium concentrations have non-linear U-shaped associations with case fatality rates in dogs and cats, wherein concentrations clustered around the RI midpoint had the lowest case fatality rates, while progressively abnormal concentrations were associated with proportionately increased risk of non-survival (12,13).
Abbreviations: AKI, acute kidney injury; CKD, chronic kidney disease; DKA, diabetic ketoacidosis; PTH, parathyroid hormone; PTHrp, parathyroid hormone-related protein.
The present study aimed to: determine the incidence of ionized calcium disorders in dogs and cats presenting to a tertiary referral facility; determine the frequency of the disease processes associated with ionized calcium disorders in these patients; evaluate the association between calcium disorder severity and the causal disease process. It was hypothesized that mild disturbances in calcium are associated with nonpathologic, transient or inconsequential causes, whereas moderate or severe disturbances are associated with pathologic causes such as paraneoplastic hypercalcemia and primary hyperparathyroidism for hypercalcemia, and eclampsia and primary hypoparathyroidism for hypocalcemia.
Electrolyte Analyses
Blood ionized calcium concentration measurements were conducted using a point-of-care analyzer (RapidPoint 405, Siemens, Malvern, PA, USA) equipped with ion-selective electrodes using blood samples collected into 1 mL syringes heparinized with dry balanced lithium/zinc heparin (Westmed Inc., Tucson, AZ, USA). Samples were run immediately following collection. The sampling device used remained constant throughout the study period. The analyzer performs an automatic quality control (QC) analysis on a schedule without operator intervention. An onboard QC cartridge contains three levels of QC material to monitor system performance and provides target ranges for each QC material level. During automatic QC analysis, the system compares the results to the ranges for each parameter and identifies results that are out of range. Parameters that fail QC are turned off. Repeat QC analysis (with within range values) is required to turn on failed parameters. At our institution the automatic QC is set to run levels one and three daily at 6 a.m. and levels one and two daily at 10 p.m. Local reference intervals (RIs) for this analyzer were previously generated (2007) from healthy animals (20 dogs and 20 cats) that were not part of the study population. Those animals were considered healthy on the basis of history, physical examination, and the results of complete blood count and serum chemistry profiles. The relevant RI for ionized calcium for dogs was 1.18-1.37 mmol/L and for cats was 1.07-1.47 mmol/L.
Case Selection and Database Compilation
An electronic database of blood gas and electrolyte analyses conducted in the emergency room or intensive care unit at the Cornell University Hospital for Animals between 05/31/2007 and 01/03/2017 was searched for results from dogs and cats. Some data from this database have been previously reported (12)(13)(14)(15). The database was visually inspected and manually curated to remove samples from species other than dogs and cats, samples with missing, erroneous or untraceable case numbers, analyses from sample types other than blood (e.g., abdominal fluid) and analyses with missing data. Institution electronic medical record (EMR) systems were searched for data on patient signalment, presenting complaint, final diagnosis, outcome, and hospitalization dates to create databases containing electrolyte data, point-of-care analyses, biochemistry analyses, and case demographics. A custom application (Visual Basic, Microsoft Visual Studio for Windows, Microsoft, Redmond, WA, USA) was written to search each database via the unique patient identifier and create a final composite database combining data from all of the separate databases corresponding to the time and date stamp from the electrolyte analyses. In patients for which multiple analyses were identified, only the first recorded measurement was used. The final database was then manually checked for accuracy by cross-referencing the database entries with the parent data sources for a randomized selection of cases, spanning the entire range of case numbers, and representing 0.1% of the total case entries.
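The record-linkage step can be pictured as a series of keyed joins followed by de-duplication. The sketch below is a hypothetical pandas rendering of that logic (the study used a custom Visual Basic application); column names such as patient_id and timestamp are assumptions, not fields from the actual databases.

```python
# Illustrative sketch only, not the study's Visual Basic application.
import pandas as pd

def build_composite(electrolytes: pd.DataFrame,
                    poc: pd.DataFrame,
                    biochem: pd.DataFrame,
                    demographics: pd.DataFrame) -> pd.DataFrame:
    """Join auxiliary tables onto the electrolyte analyses by patient ID
    (and timestamp where applicable), then keep each patient's first analysis."""
    composite = (
        electrolytes
        .merge(poc, on=["patient_id", "timestamp"], how="left", suffixes=("", "_poc"))
        .merge(biochem, on=["patient_id", "timestamp"], how="left", suffixes=("", "_biochem"))
        .merge(demographics, on="patient_id", how="left")
    )
    # "In patients for which multiple analyses were identified, only the
    # first recorded measurement was used."
    composite = (
        composite.sort_values("timestamp")
                 .drop_duplicates(subset="patient_id", keep="first")
    )
    return composite
```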
From the parent database, cases with hypercalcemia and hypocalcemia were segregated for further analyses. The patients' EMR were manually reviewed to verify the accuracy of the final diagnosis and identify the etiology of the calcium abnormality. All cases were classified by the same individual (MC), allowing for consistency throughout the process. Detailed EMR review was not conducted where the final diagnosis (as adjudicated by the original attending clinician) indicated in the EMR was consistent with a known cause of a calcium disturbance. Detailed EMR review was conducted when the final diagnosis was not indicated, was open, or was not consistent with a known etiology of calcium disturbance. In these instances, one individual (MC) reviewed the EMR thoroughly by reading available records including medical history, physical examination findings, daily structured assessments, through independent assessment of laboratory and pathology data, review of treatments performed and analysis of client and referring veterinarian communications. These case reviews were continued until it was apparent how the etiology of the calcium disturbance should be classified. In cases where the situation remained unclear discussions between authors (MC/RG) occurred to determine the best way to classify the case.
Each entry was classified into a previously reported cause of hypercalcemia or hypocalcemia per Schenck et al. (1). Additional diseases previously associated with hypocalcemia were also considered: critical illness (including sepsis and systemic inflammatory response syndrome), diabetes mellitus (including diabetic ketoacidosis, DKA) (2), and feline urethral obstruction (1). In the present study, critical illness was defined as any potentially life-threatening multisystem disorder that, in the absence of medical intervention, would be expected to result in mortality or significant morbidity (16,17). Primarily this category included patients with evidence of systemic inflammatory response syndrome and sepsis. Patients with acute kidney injury, pancreatitis, toxicity or trauma were classified separately.
Non-pathologic and transient or inconsequential causes of hypercalcemia were combined into a single category. Patients were classified in this category only if no other cause of hypercalcemia was identifiable, if the hypercalcemia resolved on subsequent electrolyte analysis, or by the time of followup re-evaluation at the hospital. Animals under 1 year of age were also categorized into this group. This category therefore encompasses such instances as young growing animals (increased bone turnover), spurious results, hemoconcentration, and hyperproteinemia.
Cases of hypercalcemia or hypocalcemia were classified as "undetermined" if diagnostic tests were not performed, if follow-up was performed by the primary care veterinarian, or if the patient died or was euthanized prior to diagnosis. For example, a dog with lymphocytosis and hypercalcemia where the calcium disorder was suspected to be paraneoplastic but follow-up diagnostic testing was performed only by the primary care veterinarian and not recorded in the EMR would have been classified as "undetermined." Cases where the EMR indicated a suspected cause of the calcium disturbance that could not be otherwise categorized were classified as "other." When two potential causes of the calcium disorder were applicable to one patient, the disease process deemed most likely to be the cause following extensive EMR review was selected. Hypercalcemia entries were classified as malignancy-associated if the final diagnosis was cancer, if the patient's blood PTH-related peptide concentration was increased, or if the patient had a documented or suspected cancer known to be associated with hypercalcemia in the absence of an alternative cause.
Once all calcium disorders had been categorized based on cause, they were labeled as mild or moderate-severe. No definitions for mild or moderate-severe calcium disorders were identified in the veterinary literature and hence we developed a categorization that seemed reasonable based on our reference ranges and on the data set. Mild hypercalcemia was defined as an ionized calcium >1.37 mmol/L but <1.5 mmol/L in dogs and >1.47 mmol/L but <1.6 mmol/L in cats. Moderate-severe hypercalcemia was defined as an ionized calcium ≥1.5 mmol/L in dogs and ≥1.6 mmol/L in cats. Mild ionized hypocalcemia was defined as an ionized calcium ≤1.17 mmol/L but ≥1.00 mmol/L in dogs and ≤1.06 mmol/L but ≥1.00 mmol/L in cats. Moderate-severe hypocalcemia was defined as an ionized calcium <1.00 mmol/L in both dogs and cats.
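Expressed procedurally, the species-specific cut-offs above translate into a simple decision rule. The sketch below is illustrative only and is not code from the study; the thresholds are taken directly from the text and from the reference intervals given earlier.

```python
# Illustrative classification of ionized calcium results (mmol/L); cut-offs
# follow the definitions stated in the text, not any code used in the study.
def classify_ionized_calcium(species: str, ica: float) -> str:
    limits = {
        # (lower RI, upper RI, moderate-severe hypo cut-off, moderate-severe hyper cut-off)
        "dog": (1.18, 1.37, 1.00, 1.50),
        "cat": (1.07, 1.47, 1.00, 1.60),
    }
    low, high, severe_low, severe_high = limits[species]
    if ica > high:
        return "moderate-severe hypercalcemia" if ica >= severe_high else "mild hypercalcemia"
    if ica < low:
        return "moderate-severe hypocalcemia" if ica < severe_low else "mild hypocalcemia"
    return "normocalcemia"

# Example: a dog with an ionized calcium of 1.55 mmol/L
print(classify_ionized_calcium("dog", 1.55))  # moderate-severe hypercalcemia
```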
Statistical Analysis
Statistical analysis was performed using commercially available software (SPSS Statistics 23, IBM, Armonk, NY and Prism 7.0e, GraphPad, La Jolla, CA). Chi-squared analysis was used to compare the frequencies of each disease classification in patients with mild ionized calcium disturbances compared to those in patients with moderate-severe ionized calcium disturbances. Fisher's exact tests were used to compare frequencies of specific categories of underlying disorder between dogs and cats for both hypercalcemia (11 comparisons), and hypocalcemia (12 comparisons). For all analyses, post-hoc Bonferroni adjustments were applied to account for multiple comparisons. Alpha was set at 0.05.
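For reference, the comparisons described above can be reproduced with standard contingency-table tests. The sketch below uses SciPy rather than the SPSS and Prism software actually employed, and the counts shown are placeholders rather than study data.

```python
# Hypothetical example of the frequency comparisons; counts are placeholders.
from scipy.stats import chi2_contingency, fisher_exact

alpha = 0.05

# 2x2 table: rows = mild vs moderate-severe, columns = with vs without a given etiology
mild = [40, 460]
moderate_severe = [90, 410]
chi2, p, dof, _ = chi2_contingency([mild, moderate_severe])

# Bonferroni adjustment across the number of etiology categories compared,
# e.g. 11 comparisons for the hypercalcemia dog-vs-cat analysis
n_comparisons = 11
significant = p < alpha / n_comparisons

# Dog vs. cat comparison for a single category with Fisher's exact test
odds_ratio, p_fisher = fisher_exact([[212, 27], [1429, 92]])
```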
Incidence of Calcium Disturbances in Dogs
For dogs, 44,366 records were identified across the 9.5-year period. After removal of subsequent or follow-up analyses, the initial analyses from 15,277 individuals remained. Of these, 1,643 electrolyte profiles documented hypercalcemia. After removal of two profiles from dogs that were dead-on-arrival there were 1,641 profiles with hypercalcemia, corresponding to an incidence of canine hypercalcemia of 10.7% (1,641/15,277). Hypocalcemia was documented in 1,468 profiles. After removal of one profile from a miscoded Fennec fox there were 1,467 profiles with hypocalcemia, corresponding to an incidence of canine hypocalcemia of 9.6% (1,467/15,277).
Incidence of Calcium Disturbances in Cats
For cats, 9,992 records were identified across the 9.5-year period. After removal of subsequent or follow-up analyses, the initial analyses from 3,715 individuals remained. Of these, 119 electrolyte profiles documented hypercalcemia, corresponding to an incidence of feline hypercalcemia of 3.2% (119/3,715). Hypocalcemia was documented in 450 profiles, corresponding to an incidence of feline hypocalcemia of 12.1% (450/3,715).
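As a quick arithmetic check, the reported incidence figures follow directly from the counts above; the snippet below simply recomputes them.

```python
# Recomputing the incidence percentages from the counts reported in the text.
cases = {
    "canine hypercalcemia": (1641, 15277),
    "canine hypocalcemia": (1467, 15277),
    "feline hypercalcemia": (119, 3715),
    "feline hypocalcemia": (450, 3715),
}
for label, (n, total) in cases.items():
    print(f"{label}: {100 * n / total:.1f}%")  # 10.7%, 9.6%, 3.2%, 12.1%
```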
Hypercalcemia in Dogs
Causes of hypercalcemia in dogs are summarized in Table 1.
Hypercalcemia in Cats
Causes of hypercalcemia in cats are reported in Table 1. Hypercalcemia was classified as "other" in 12 cats, including 2 cats that were over-supplemented with intravenous calcium, 2 cats with concurrent urolithiasis, and 8 cats whose hypercalcemia was suspected to be secondary to lactulose administration. The cause of hypercalcemia could not be determined in 11 cases. As was the case for dogs, there were significantly more cats with non-pathologic causes of hypercalcemia where the ionized calcium concentrations were mildly increased compared to when the concentrations were moderately-to-severely increased. Ionized hypercalcemia was significantly more likely to be moderate-severe than mild in cats with idiopathic hypercalcemia or hypercalcemia of malignancy. Again, like dogs, there were significantly more cases of hypercalcemia of undetermined cause in cats with moderate-severe hypercalcemia.
Hypocalcemia in Dogs
Causes of hypocalcemia in dogs are summarized in Table 2. There were significantly more dogs with non-pathologic hypocalcemia where the ionized calcium concentrations were mildly decreased compared to when the concentrations were moderately-to-severely decreased. In contrast, there were significantly more dogs with parathyroid-hormone dependent hypocalcemia, kidney injury, eclampsia, and critical illness in cases where ionized calcium concentrations were moderately-to-severely decreased compared to when they were only mildly decreased. Again, significantly more cases of hypocalcemia were classified as undetermined in dogs with moderate-severe hypocalcemia.
Hypocalcemia in Cats
Causes of hypocalcemia in cats are summarized in Table 2. In contrast, there were significantly more cats with kidney injury, urethral obstruction, and soft tissue trauma in cases where the ionized concentrations were classified as moderately-to-severely decreased compared to when they were classified as mildly decreased. There were significantly more cases of hypocalcemia classified as undetermined in cats with moderate-severe hypocalcemia.
Canine vs. Feline Comparisons
Comparisons of frequencies of causes of hypercalcemia between dogs and cats determined that non-pathologic causes of hypercalcemia were significantly more common in dogs than in cats, while kidney injury, idiopathic hypercalcemia, and "other" were significantly more common in cats than in dogs (all P < 0.001 after correction for multiple comparisons) ( Table 1). Within causes of hypocalcemia, non-pathologic causes, nutritional or gastrointestinal causes and eclampsia were significantly more common in dogs than in cats, while urethral obstruction and kidney injury were significantly more common in cats than in dogs (all P < 0.01 after correction for multiple comparisons) ( Table 2).
Hypercalcemia
In dogs, the most common pathologic causes of ionized hypercalcemia were malignancy, primary hyperparathyroidism, hypoadrenocorticism and kidney injury. These findings are consistent with previous reports (5-7). In cats, malignancy, kidney injury, idiopathic hypercalcemia and hyperparathyroidism were most common. These findings are consistent with a study of 71 cats with total hypercalcemia (8).
A recent review suggested that idiopathic hypercalcemia should be considered the predominant cause in cats (18). However, that article referred to unpublished data and prior reports available only in abstract form. Potential explanations for these disparities include inclusion of distinct patient populations, use of different analyzers or RIs, differences in the numbers of cats included, and dissimilarities in diagnostic investigation and disease classifications. Multicenter studies pooling data from multiple patient populations with consistent analyzer measurements and common definitions might help to address this issue. Idiopathic hypercalcemia is an elusive disease generally considered a diagnosis of exclusion. In the present study, cats with idiopathic hypercalcemia more frequently had moderate-severe hypercalcemia, suggesting that this differential should be prioritized in cats presenting with marked increases in ionized calcium.
In the present study, the degree of hypercalcemia was associated with the frequency of the underlying disorder. Dogs evaluated at our institution with moderate-severe hypercalcemia were significantly more likely to have hyperparathyroidism, malignancy or hypervitaminosis D than those with mild hypercalcemia. In contrast, cats with moderate-severe hypercalcemia at our institution were significantly more likely to have malignancy or idiopathic hypercalcemia than if the hypercalcemia was mild.
Malignancy was the most frequently identified pathologic cause of both canine and feline ionized hypercalcemia in the present study. Cancer may result in hypercalcemia through various mechanisms including PTHrp, cytokines, bone resorption due to hematologic malignancies present in marrow and tumor metastasis to bone with subsequent local bone resorption (1). Lymphoma, which can cause humoral hypercalcemia, was the most common malignancy identified in both species here, consistent with previous reports (5,7). In the present study, it is possible that hypercalcemia was incorrectly attributed to malignancy in some cases. Patients were classified as malignancy-associated if they had a documented neoplastic process and increased calcium concentrations. While this strategy may have falsely increased the overall incidence of malignancy associated hypercalcemia it would not have affected comparisons between mild and moderate-severe groups or between species because the method was uniformly applied to all cases.
Primary hyperparathyroidism was a common diagnosis in both cats and dogs in the present study. Primary hyperparathyroidism is typically considered rare in cats compared to dogs, but the disorder occurred with comparable frequency in the present study. Histopathology was available for most canine cases that were predominantly due to parathyroid adenomas, a finding consistent with the literature (19). Only one cat had available histopathology, indicating parathyroid carcinoma. The similar incidence between species identified here may have resulted from an increased secondary and tertiary patient population at our institution that might bias toward unusual cases. Alternatively, our larger sample size may have enabled us to identify the true incidence of primary hyperparathyroidism in cats. Our findings will require replication in another population to confirm which of these is correct.
Kidney injury was a more frequent cause of ionized hypercalcemia in cats than in dogs. Most cases in both species were diagnosed as CKD rather than AKI. Hypercalcemia is reported to occur in ∼30% of cats with CKD and ∼10% of dogs with CKD (10, 11), through various mechanisms including renal hyperparathyroidism, decreased glomerular filtration, increased tubular reabsorption and decreased bone storage. For the CKD patients presented here, parathyroid concentrations were not available, so it is possible that renal hyperparathyroidism was present but undiagnosed. Hypercalcemia can itself contribute to development or propagation of kidney injury and hence we cannot exclude the possibility that the kidney injury identified as the cause in these patients was in fact secondary. Urolithiasis has been previously associated with ionized hypercalcemia in cats. Urolithiasis was present in 11 of 71 cats with hypercalcemia in one study (8). Hypercalciuria may predispose patients to form calcium-containing uroliths and hence it is possible that the cats with urolithiasis in this study had a separate undiagnosed cause of hypercalcemia. Lactulose administration was suspected to be the cause of hypercalcemia in 8/119 cats in this study. An experimental study in dogs identified that lactulose administration causes increased intestinal absorption of calcium and magnesium likely through alterations in intestinal pH (20), although to our knowledge a clinical association between lactulose administration and hypercalcemia has not been reported in human or veterinary medicine to date.
The association of hypervitaminosis D with moderate-severe hypercalcemia identified in this study is weak. The total number of cases of hypervitaminosis D in this study (n = 5) was small compared to malignancy (n = 212) or primary hyperparathyroidism (n = 73), for instance. As a result, the difference between the two severity categories was statistically significant, but it rested on a difference of a single case between the categories. This apparent difference should be interpreted with caution, and hypervitaminosis D remains an important differential diagnosis even in patients with mild hypercalcemia.
Hypocalcemia
The most common pathologic causes of ionized hypocalcemia identified in both dogs and cats in the present report were critical illness and kidney injury, with urethral obstruction also common in cats. Dogs presenting with moderate-severe hypocalcemia were significantly more likely to have hypoparathyroidism, kidney injury, eclampsia or critical illness, whereas cats were significantly more likely to have kidney injury, soft tissue trauma, or urethral obstruction. Fortunately, many of these differentials can be ruled out based on history and physical exam alone. Many of the disorders resulting in hypocalcemia, such as kidney injury, diabetes mellitus, pancreatitis, and critical illness, are more challenging to separate given that many of these diseases may coexist. In the present study, attempts were made to prioritize the disease that was resulting in the patient's admission to the hospital, but it is possible that overlap in disease processes may have altered the results.
Ionized hypocalcemia is very common in critical illness, occurring in over half of critically-ill people (21,22), 16-24% of critically-ill dogs (23-25) and 59-93% of cats with septic peritonitis (26,27). The mechanisms by which critical illness and hypocalcemia are associated are poorly understood and are likely multifactorial. Alterations in parathyroid hormone, vitamin D deficiency, hypomagnesemia and tissue accumulation have all been proposed (3). Critically ill patients may also be predisposed to developing hypocalcemia due to concurrent disease processes or treatments, such as blood transfusions, aggressive intravenous fluid therapy, concurrent kidney injury or pancreatitis. In the present study, critical illness accounted for 17.4 and 14.7% of hypocalcemia cases in dogs and cats, respectively. Critical illness was more frequent in dogs with moderate-severe hypocalcemia, occurring in 23.7% of cases. In people, ionized hypocalcemia is associated with illness severity but is also an independent predictor of mortality (21). Ionized hypocalcemia is associated with mortality in both dogs and cats (12,13) and with duration of ICU stay and length of hospitalization (23,24,27). Failure of ionized calcium concentrations to normalize during hospitalization is a negative prognostic indicator in cats (27).
Hypocalcemia associated with AKI was a common cause of hypocalcemia in both dogs and cats in this study. In AKI, hyperphosphatemia may occur secondary to decreased glomerular filtration rate, which in turn may result in hypocalcemia due to increased binding by phosphate (1). Hypocalcemia in urethral obstruction may be due to phosphate retention secondary to obstruction, PTH resistance, or acidbase alterations. Ionized hypocalcemia is reported in 75% of cats presenting with urethral obstruction (28). Mild ionized hypocalcemia is common in dogs and cats with DKA and may be due to osmotic diuresis, supplementation of bicarbonate or potassium phosphate, or concurrent pancreatitis or AKI. Diabetes mellitus was a frequent cause of hypocalcemia in the present study, occurring in 4.4% of hypocalcemic dogs and 6.4% of hypocalcemic cats. The majority of these patients had DKA. In a study of 127 dogs with DKA, 52% had ionized hypocalcemia, which was associated with non-survival (29). Ionized hypocalcemia is also frequently identified in cats with pancreatitis, and is considered a risk factor for mortality (30,31). The hypocalcemia in pancreatitis is hypothesized to be due to increased calcitonin, calcium sequestration by peripancreatic fat and free fatty acids, and through hypomagnesemia induced PTH resistance (1). Pancreatitis was an uncommon cause of hypocalcemia in the present study and the frequency of pancreatitis did not vary with hypocalcemia severity. This finding is consistent with the variability in calcium concentrations reported for typical acute pancreatitis populations (30).
Eclampsia, or puerperal tetany, occurs in the periparturient period in dogs or cats due to depletion of ionized calcium that occurs with lactation (32). In this study, eclampsia more frequently caused severe hypocalcemia, but only in dogs. Toxicity was also an important cause of hypocalcemia in both dogs and cats in the present study and typically resulted from furosemide administration or citrate toxicity associated with blood transfusion. Furosemide is commonly prescribed for calciuresis in hypercalcemic patients, but hypocalcemia is more commonly a side-effect of loop diuretic usage in patients with congestive heart failure. In one study of 10 dogs receiving massive transfusion, all dogs experienced ionized hypocalcemia with two dogs experiencing ionized hypocalcemia <0.7 mmol/L (33).
There are limitations to the present study. In some cases, the cause of the calcium disturbance was not clearly stated in the EMR, which required categorizing the cause based on our comprehensive EMR review and considering any other diagnoses present. By necessity therefore subjective judgments were made about diagnoses that are prone to the biases inherent to retrospective studies. Similarly, in patients with comorbidities, the disease process deemed most likely to be the cause was recorded as the etiology. This may have led to misclassification. These judgments were made uniformly across all cases however, which may limit any impact on comparisons within the patient population. It is possible that some patients in this study had alternative causes of hypercalcemia or hypocalcemia that have not been previously reported. These could not have been identified based on retrospective record review. The measurements of ionized calcium were not corrected for pH, which may have altered the overall incidence of the calcium disturbances identified (34). Ionized calcium concentrations in this study were measured on heparinized whole blood samples and it is known that ionized calcium is less stable in heparinized blood compared to plasma. It is standard practice in our institution to measure electrolyte concentrations immediately post-collection which would have minimized any impact of sample type on measured concentration. The cause of the calcium disturbance was undetermined in many cases, particularly in patients with moderate-severe disorders. In many cases this indeterminate status related to the death or euthanasia of the patient prior to establishment of a definitive diagnosis, an outcome that may have been more common in patients with moderate-severe calcium disorders.
The present study used RIs generated in 2007 based on 20 dogs and 20 cats. In 2012, the American Society for Veterinary Clinical Pathology recommended that RIs be generated from populations containing a minimum of 40 animals (35). It is possible that a new RI generated from larger numbers of healthy animals might have led to differences in the percentages of patients in some groups in the present study. Recently, RIs for the RapidPoint 500 machine (the next generation of the analyzer used in the present study) have been published (36,37). The canine ionized calcium RI published by Bachmann et al. (37) was 1.23-1.40 mmol/L based on 51 dogs. This is quite comparable to the interval used at our institution (1.18-1.37). Application of this published RI to our data would likely affect the distribution of cases across the three categories of hypercalcemia and of hypocalcemia, but it seems likely the broad conclusions would be similar. In the study by Bachmann et al. (36), an RI for ionized calcium in cats (n = 24) was not reported because their data were non-parametric. In that study, the median (min-max) values were 1.30 (1.18-1.35). This is a narrower range than that used in the present study but has a mid-point comparable to the interval used at our institution. Similar to the situation in dogs, it is probable that applying the range of values from Bachmann et al. (36) would affect the distribution of cases across the three categories of hypercalcemia and of hypocalcemia, but would leave the broad conclusions from the present study unaltered. One finding in the present study that might be impacted through use of a narrower RI in cats is the difference in the percentage of mild hypocalcemia in cats compared to dogs (48.2 vs. 85.9%). This could be due to interspecies differences, but given the disparities in the RIs it is more likely due to the wider RI in cats and the use of a fixed (>1.00 mmol/L) cutoff for defining mild hypocalcemia.
In summary, dogs presenting with moderate-severe hypocalcemia were significantly more likely to have hypoparathyroidism, kidney injury, eclampsia or critical illness, whereas cats were significantly more likely to have kidney injury, soft tissue trauma or urethral obstruction. Mild calcium disturbances are commonly associated with non-pathologic or transient conditions. Malignancy-associated hypercalcemia is the most common cause of ionized hypercalcemia in dogs and cats. Critical illness and kidney injury are frequent causes of ionized hypocalcemia in both species. In both dogs and cats with either hypercalcemia or hypocalcemia, mild disturbances were more likely to be non-pathologic. This may provide some justification to monitor and reconfirm mild calcium disturbances prior to pursuing further testing. In contrast, moderate-severe calcium disturbances warrant further investigation because of the likelihood of a pathologic cause in these patients.
DATA AVAILABILITY
Anonymized datasets generated for this study are available upon reasonable request to the corresponding author.
ETHICS STATEMENT
This study was exempt from ethics committee approval because it presents a retrospective analysis of electrolyte data collected as part of clinician-driven care provided to patients at the institution hospital. No client or patient identifying information is presented. | 6,976.2 | 2019-08-22T00:00:00.000 | [
"Medicine",
"Biology"
] |
Model evaluation by a cloud classification based on multi-sensor observations
The detailed understanding of clouds and their macrophysical properties is crucial to reduce uncertainties of cloud feedbacks and related processes in current climate and weather prediction models. Comprehensive evaluation of cloud characteristics using observations is the first step towards any improvement. An advanced observational product was developed by the Cloudnet project. A multi-sensor synergy of active and passive remote-sensing instruments is used to generate a Target Classification providing detailed information about cloud phase and structure. Nevertheless, this valuable product is only available for observations and there is yet no comparable surrogate for models. Therefore, a new cloud classification algorithm is presented to calculate a comparable classification for models by using the temperature, dew point and all hydrometeor profiles. The study explains the algorithm and shows possible evaluation methods making use of the new synthetic cloud classification. For example, the statistics of the vertical cloud distribution as well as the accuracy of cloud forecasts can be investigated regarding different cloud types. The algorithm and methods are exemplarily tested on two months of operational weather forecast data of the COSMO-DE model and compared to a Cloudnet supersite in Germany. Additionally, the cloud classification is applied to Large Eddy Simulations with a similar resolution as the observations, showing detailed cloud structures.
Introduction
Clouds and their related processes are still responsible for the highest uncertainties of current climate and weather prediction models (IPCC, 2007; Forster et al., 2007). They are of great importance for accurate weather predictions for various end users and applications like solar power forecasts (Huang and Thatcher, 2017; Antonanzas et al., 2016; Sperati et al., 2016), the aviation sector (Bolgiani et al., 2018; Gultepe et al., 2015) and many more. Nevertheless, the evaluation of the macrophysical cloud properties of current atmospheric models remains very challenging due to the complexity of the involved processes and the large variability of clouds.
Data, Methods and Cloud Classification Algorithm
The observations and derived Cloudnet products are obtained by the Leipzig Aerosol and Cloud Remote Observations System (LACROS) supersite of the HOPE campaign (Macke et al., 2017) for April and May 2013 (SAMD, 2018). This supersite was located at a sewage plant near Krauthausen, Germany (40 km west of Cologne). Various cloud types like low-level cumulus clouds, high cirrus clouds, large precipitating ice clouds and several frontal passages were captured during these two months, depicting a large variety of typical synoptic situations. The Cloudnet products have a time resolution of 30 seconds and a height resolution of 30 m. The data are available from roughly 200 m above ground up to 15 km height due to the remote-sensing measurement characteristics.
The operational COSMO-DE model of the German Meteorological Service (DWD) is evaluated (Baldauf et al., 2011). This cloud-resolving model runs at a horizontal resolution of 2.8 km and contains 51 height levels with terrain-following hybrid coordinates. The layer thickness increases with height up to the upper edge of the model at 22 km. The 1-hourly output of 12 h forecasts, starting at 00 and 12 UTC, is analysed within this study. Microphysical processes are parameterised by a 1-moment bulk formulation (Baldauf et al., 2011). The official Cloudnet algorithm differentiates between eleven categories, which are "Aerosol & insects", "Insects", "Aerosol", "Melting ice & cloud droplets", "Melting ice", "Ice & supercooled droplets", "Ice", "Drizzle/rain & cloud droplets", "Drizzle or rain", "Cloud droplets only" and "Clear sky". The aerosol and insect categories are not diagnosed by the presented cloud classification algorithm, because most atmospheric models like COSMO-DE do not provide any information about them.
Therefore, these categories are set to "Clear sky" in the observations and simulations. The remaining eight cloud classes are defined consistently with respect to the Cloudnet algorithms where possible. For example, the temperature has to be above the freezing level for liquid phases such as "Drizzle or rain", and the dew point below 273.15 K for ice classes like "Ice & supercooled droplets". The snow and graupel hydrometeors are also classified as ice because of their ice phase and characteristics. Other distinctions, e.g. between "Drizzle or rain" and "Drizzle/rain & cloud droplets", are set by the specific cloud water hydrometeor concentration QC for the cloud classification algorithm. In contrast, the Cloudnet algorithm is based on remote-sensing observations and thus uses e.g. thresholds of the cloud radar reflectivity and LiDAR attenuated backscatter coefficient to differentiate between both classes. Nevertheless, a comparable diagnosis is developed to generate a consistent synthetic cloud classification for the model with respect to Cloudnet.
The algorithm itself works on every grid box as follows. The category "Ice" is, for example, determined by a dew point below the freezing point and a dominant concentration of cloud ice, defined by an ice concentration QI larger than a certain threshold and a cloud water concentration QC below another fixed threshold. The "Melting ice" category is defined by a dew point greater than the freezing point and QI greater than the critical hydrometeor concentration. All rules are compiled together in the flowchart in Figure 2. The order of the case selection statements is crucial to obtain physically consistent results. If, for example, a grid box were examined for "Drizzle or rain" (temperature above the freezing level and QR larger than a certain threshold) before being checked for "Drizzle/rain & cloud droplets" (which additionally requires QC above a threshold), all "Drizzle/rain & cloud droplets" cases would be classified only as "Drizzle or rain". Similar situations exist for other case selection statements like "Melting ice" and "Melting ice & cloud droplets". The algorithm can easily be extended by additional case differentiations, including for example information about the surrounding grid boxes or aerosols and dust, if they are provided by the model. For the statistical evaluation, the categories "Drizzle/rain & cloud droplets" and "Cloud droplets" are merged into a new category named "Liquid clouds". The categories "Ice", "Ice & supercooled droplets", "Melting ice" and "Melting ice & cloud droplets" are combined into "Ice clouds". This merging is justified because "Ice clouds" mainly consists of "Ice" and "Liquid clouds" mainly of "Cloud droplets only". The categories "Clear sky" and "Drizzle or rain" are not modified. The time-averaged frequency-of-occurrence profiles for all four cloud categories are depicted in Figure 4.
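The ordered case selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold value, the variable names and the exact rule set are assumptions, and the authoritative definition is the flowchart in Figure 2.

```python
# Minimal sketch of the per-grid-box case selection (thresholds are assumed).
T_FREEZE = 273.15   # K
Q_MIN = 1e-8        # kg/kg, assumed critical hydrometeor concentration

def classify_gridbox(t_dew, qc, qi, qr, qs, qg):
    """Return a Cloudnet-like category for one model grid box.

    t_dew: dew point [K]; qc, qi, qr, qs, qg: specific cloud water, cloud ice,
    rain, snow and graupel contents [kg/kg]. Snow and graupel count as ice.
    """
    ice = qi + qs + qg
    # Order matters: combined categories must be tested before their simpler
    # counterparts, otherwise they would never be reached.
    if t_dew < T_FREEZE and ice > Q_MIN and qc > Q_MIN:
        return "Ice & supercooled droplets"
    if t_dew < T_FREEZE and ice > Q_MIN:
        return "Ice"
    if t_dew >= T_FREEZE and ice > Q_MIN and qc > Q_MIN:
        return "Melting ice & cloud droplets"
    if t_dew >= T_FREEZE and ice > Q_MIN:
        return "Melting ice"
    if t_dew >= T_FREEZE and qr > Q_MIN and qc > Q_MIN:
        return "Drizzle/rain & cloud droplets"
    if t_dew >= T_FREEZE and qr > Q_MIN:
        return "Drizzle or rain"
    if qc > Q_MIN:
        return "Cloud droplets only"
    return "Clear sky"
```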
The analysis points out model biases, seen for example in a continuous overestimation of "Ice clouds" by up to 30 % above 3 km and of "Liquid clouds" by up to 3 % between 1 and 3 km, which would not be feasible to determine by looking only at the cloud fraction statistics. The "Ice cloud" overestimation explains the differences found for the "Clear sky" category, which shows an underestimation of a similar magnitude of roughly 30 %. This also confirms the previously stated results of the qualitative comparison. Nevertheless, especially at high altitudes, the lower sensitivity of the remote-sensing instruments has to be considered, which could reduce the observed frequency of occurrence of "Ice clouds". The choice of merging the categories for "Ice clouds" does not affect the overall conclusions because of the small number of occurrences of the other categories, e.g. only 251 "Drizzle/rain & cloud droplets" points out of 47,576 samples.
The profile of "Drizzle or rain" fits well to the observations, which was also seen in the qualitative comparison. Precipitation, or at least drizzle, was present for roughly 20 % of the time during the two months. The missing "Drizzle or rain" within the lowest layers of the observations is due to the remote-sensing instruments, which start measuring roughly 200 m above ground.
COSMO-DE captures this category with a frequency of occurrence of 20 % down to the ground. Overall, the generally good statistics of the COSMO-DE, except for the mismatches found for "Ice clouds", is shown by the similar shape of the distributions of the four distinct categories.
Point-to-Point Verification
Precise cloud predictions for the right time and place are of high importance for various applications like general weather forecasts, and even for the radiative budget and all related quantities of the model itself. The cloud classification contains, for every height interval of each time step, information about the cloud phase, which can be directly compared between the observed and the modelled classification. This enables analysing whether the model predicts the correct cloud type at the same time and location as the measurements, even though an exact match is not expected due to the chaotic characteristics of the atmosphere.
The evaluation of the cloud composition is crucial, e.g., to investigate radiative properties and derived quantities. Certain mismatches between the multiple cloud classes can furthermore indicate model shortcomings; for example, "Cloud droplets" of the model that are classified as "Drizzle or rain" by the observations suggest problems in the rain formation process. As an example, the COSMO-DE cloud classification is compared pointwise with the LACROS observations, and the results are discussed. In total, 47,576 points with 8 different cloud categories are analysed, which results in an 8 x 8 contingency table (Tab. 1). The table depicts how often the model and the observations contain the same category at the same time, place and height, respectively how often another category is predicted than measured. Therefore, hits on the diagonal are the optimum, where the model matches the observations.
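A minimal sketch of how such a contingency table and the per-category hit rates can be computed is shown below; the array names and category ordering are assumptions for illustration only.

```python
# Build an 8 x 8 contingency table from paired (observed, modelled) categories
# and derive per-category hit rates.
import numpy as np

CATEGORIES = ["Clear sky", "Cloud droplets only", "Drizzle or rain",
              "Drizzle/rain & cloud droplets", "Ice", "Ice & supercooled droplets",
              "Melting ice", "Melting ice & cloud droplets"]

def contingency_table(obs, mod):
    """obs, mod: equally long sequences of category labels (observed, modelled)."""
    idx = {c: i for i, c in enumerate(CATEGORIES)}
    table = np.zeros((len(CATEGORIES), len(CATEGORIES)), dtype=int)
    for o, m in zip(obs, mod):
        table[idx[o], idx[m]] += 1   # rows: observed category, columns: modelled category
    return table

def hit_rates(table):
    # Hits lie on the diagonal; normalise by the number of observed events per category.
    observed_totals = table.sum(axis=1)
    return np.divide(np.diag(table), observed_totals,
                     out=np.zeros(len(CATEGORIES)), where=observed_totals > 0)
```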
The high number of "Clear sky" cases and the simplicity of diagnosing this category by the algorithm result in the best agreement of 84.6 %, found for "Clear sky". The overestimation of "Ice" above 3 km is reflected in a hit rate of only 75.3 %.
In addition, 12 % of the modelled "Ice" points are thus observed as "Clear sky".
General issues of atmospheric models, like the correct representation of the melting layer, are identifiable by this analysis. For example, a lower melting layer due to a warmer temperature profile leads to an earlier melting of ice hydrometeors, which is seen in the 52.5 % of observed "Melting ice" points already modelled as "Drizzle or rain". Difficulties in correctly distinguishing between the eight cloud categories are visible, e.g., in the 31 % of observed "Drizzle/rain & cloud droplet" points that are categorised as "Drizzle or rain" in the modelled classification. The underestimation of "Liquid clouds" (Sect. 3.1) is confirmed by 55.3 % of the modelled "Clear sky" points being measured as "Cloud droplets". Only a few observations were made of rare categories like "Drizzle/rain & cloud droplets" with 251 points or "Ice & supercooled droplets" with 200 points out of 47,576. Therefore, correctly capturing those categories is very difficult for the model because of the small sample size, and thus only low hit rates are obtained, e.g. 17 % for "Ice & supercooled droplets".
Fuzzy Verification
Small displacements or time lags of cloud and precipitation forecasts often induce large errors if the model output is compared pointwise with the observations. Depending on the specific interest, for example to evaluate the statistics of the cloud forecasts, fuzzy verification methods are more appropriate, allowing the model to be uncertain in time and/or space or further dimensions like the cloud phase. In addition, the large variability of clouds, the multi-dimensional and categorical nature of the classification, as well as the different resolutions of the observations and the model cause problems when using standard point-to-point verification metrics like BIAS and RMSE. Therefore, fuzzy verification techniques are more appropriate for the analysis of the cloud classification, assessing a small shift in time or place of a cloud still as a correct prediction.
For each of the fuzzy analyses, e.g. being fuzzy in space by one grid box, an 8 x 8 contingency table can be calculated, which would be difficult to interpret due to the large amount of numbers. For that reason, the focus at this point is on the hit rates of the same cloud phase between the model and the measurement, to evaluate the overall accuracy of the cloud forecasts including the cloud type. Further comparisons with a random forecast or a bootstrapping can be applied to the cloud classification to investigate the statistical significance of the results. The fuzzy verification of the cloud classification is exemplarily tested on the two months of the COSMO-DE and LACROS observation dataset.
For the fuzzy verification in time, the hour before and after the observation is included, shown by the "Time" column in Tab. 2. For being fuzzy in space, first one grid box around the centre ("Space 1" column), then three grid boxes surrounding the middle ("Space 3" column), as well as the whole extracted area of 18 x 17 grid points (50 x 48 km, "Space Full" column) are considered for the evaluation. Only the hit rates, normalised by the total number of observed events for each category separately, are regarded. As a benchmark, we calculated a hit rate from randomly chosen observational and model grid points 10,000 times and computed the average. The statistical significance of the point-to-point comparison is tested by a bootstrapping with replacement applied to the cloud classification with 10,000 iterations. The bootstrapping assumes that the climatology is captured correctly, which is indicated by the good agreement of the frequencies of occurrence seen in Section 3.1.
The standard deviation of the bootstrapping provides an uncertainty estimation of the analysis.All results are compiled in table 2.
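A rough sketch of the fuzzy hit-rate idea, the random-forecast benchmark and the bootstrap uncertainty estimate is given below. The data layout (a 1-D observed series plus a model grid centred on the supersite), the window handling and all names are assumptions; the per-category normalisation used in the paper is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_hit_rate(obs, mod, dt=1, dx=1):
    """obs: 1-D NumPy array of observed categories per time step;
    mod: (time, y, x) array of modelled categories, supersite at the grid centre."""
    nt, ny, nx = mod.shape
    cy, cx = ny // 2, nx // 2
    hits = 0
    for t in range(nt):
        window = mod[max(0, t - dt):t + dt + 1,
                     max(0, cy - dx):cy + dx + 1,
                     max(0, cx - dx):cx + dx + 1]
        hits += obs[t] in window   # hit if the observed class occurs anywhere in the window
    return hits / nt

def random_benchmark(obs, mod_point, n=10_000):
    # Mean hit rate obtained by randomly pairing observed and modelled points.
    return np.mean([np.mean(rng.permutation(mod_point) == obs) for _ in range(n)])

def bootstrap_std(obs, mod_point, n=10_000):
    # Standard deviation of the point-to-point hit rate under resampling with replacement.
    m = len(obs)
    rates = [np.mean(obs[idx] == mod_point[idx])
             for idx in (rng.integers(0, m, m) for _ in range(n))]
    return float(np.std(rates))
```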
The easier prediction of classes like "Clear sky" compared to rarer categories is obvious from the hit rate of roughly 72.7 % for "Clear sky" already obtained by the comparison with a random forecast. Therefore, the added value for "Clear sky" is not as high as suggested by the large hit rate of the real forecast of 84.6 %. The improved forecast skill is especially visible for rare categories like "Drizzle or rain", where an increase of 44 % and a factor of eight is found compared to the random forecast. The reliability of the results, based on the bootstrapping, is higher for more frequently occurring categories like "Clear sky", with a small standard deviation of less than 0.2 % compared to 2.3 % for "Ice & supercooled droplets" with only 200 observed points.
Overall, standard deviations of less than 2.3 % support the statistical significance of the presented results.
LACROS Observations
Overall, as expected, by being fuzzy in time or space the hit rates increase as more grid boxes, respectively time steps, are considered. Thus, including the whole region with 18 x 17 grid points ("Space Full" column) shows the highest hit rates. Larger variability within three hours of the model ("Time" column) is observable compared to the 9 grid boxes of being fuzzy by one additional grid box ("Space 1" column), seen by higher hit rates for being fuzzy in time than in space, except for "Ice & supercooled droplets". The opposite is found when regarding three or more surrounding grid boxes, except for "Clear Sky". The biggest rise in the hit rates for the fuzzy verification is seen for rare categories like "Cloud droplets": for this category, an increase by a factor of four, from 8.7 % to 40.5 %, is visible when comparing the point-to-point verification with the fuzzy verification including all extracted points. The presented results point to possible model improvements and are a good starting point for further in-depth analysis. In general, a similar cloud structure and phase is found comparing the modelled cloud classification to the observed one. Nevertheless, differences, for example between the categories "Drizzle or rain" and "Drizzle/rain & cloud droplets", can be identified, which provide detailed insights into the model's microphysics. The example shows the usability of the developed cloud classification algorithm for other models, even at LES resolution. The other introduced cloud evaluation methods can thus also be applied, which is shown exemplarily by the frequency of occurrence for the four cloud classes (Fig. 6).
Rain events of the ICON model seem to be too rare and are underestimated by roughly 15 %, as also found by Heinze et al. (2017). This results in an overestimation of the "Clear Sky" frequency of occurrence. The profile of "Ice clouds" is overall in good agreement with the observations. Nevertheless, an overestimation of "Ice clouds" by roughly 10-20 % above 7 km is found, which might arise from the lower sensitivity of the remote-sensing instruments and thus less observed ice at high altitudes. The low "Liquid cloud" layer during the afternoon, seen as "Drizzle or rain" by the observations, leads to an overestimation of the frequency of occurrence of this category by the model, indicating issues in the rain formation process. The accuracy of cloud forecasts for the right time and place is investigated by direct comparisons of the modelled and measured cloud classification, for which also fuzzy verification methods are applied. The potential of this new evaluation approach is shown by comparisons with the operational COnsortium for Small-scale MOdelling (COSMO) model for the German domain (COSMO-DE) as well as with first ICOsahedral Non-hydrostatic (ICON) Large Eddy Simulations (LES) (section 4). Results are finally concluded and discussed (section 5).
Figure 1 .
Figure 1. Schematic illustration of the cloud classification approach to generate a comparable Cloudnet target classification on the basis of an atmospheric model output. The original target classification is obtained from observations, using cloud radar, LiDAR and microwave radiometer measurements.
The COSMO-DE model gets its initial and hourly boundary conditions from the COSMO model setup with 7 km grid spacing covering Central Europe (COSMO-EU).
The 1-moment bulk scheme contains five different hydrometeor classes (specific cloud water content QC, specific cloud ice content QI, specific rain water content QR, specific snow content QS and specific graupel content QG). The closest grid point to the LACROS supersite is selected for the point-to-point comparisons, and a region of roughly 50 x 50 km² between Aachen and Düsseldorf is extracted from the model output for the fuzzy verification. The new cloud classification algorithm combines the temperature, dew point and all hydrometeor information of an atmospheric model to generate a consistent cloud classification corresponding to the Cloudnet Target Classification. The cloud classes are determined based on physical principles from the model output by consecutive case selections, as depicted in Fig. 2. Every grid box of the model is assessed independently by the algorithm.
Figure 2 .
Figure 2. Flowchart of the cloud classification algorithm using the output of an atmospheric model to generate a comparable Cloudnet target classification, differentiating eight cloud categories.
Figure 3 .
Figure 3. Cloudnet Target Classification of the LACROS observations (a, c) and of the cloud classification algorithm applied to the COSMO-DE (b, d) for April (a, b) and May (c, d) 2013. The output is provided on the 51 height levels of the COSMO-DE with an hourly resolution. The observations are adapted to the common grid by using the most frequent category.
3.1 Frequency of Occurrence of Cloud Classification
The frequencies of occurrence of the cloud classes are calculated for the observations and the model to quantify differences between both datasets and to investigate qualitative findings in more detail. Therefore, the cloud statistics of the model can be evaluated considering the multiple cloud phases of the classification. The mean vertical cloud profile for the cloud classes highlights e.g. certain biases or model shortcomings for specific cloud types like ice clouds and thus allows for an in-depth analysis of the cloud microphysics. The cloud type climatology of the model for specific locations can be assessed using a long time series. To illustrate the potential of this method, the frequency of occurrence is calculated for the COSMO-DE dataset. The eight cloud categories are merged to four because of the small number of occurrences of most categories, as well as for reasons of clarity and comprehensibility. With respect to the model hydrometeors of cloud water QC, cloud ice QI, snow QS, graupel QG and rain QR, the cloud classification categories are merged to "Clear Sky", "Ice Clouds", "Liquid Clouds" and "Drizzle or rain".
Figure 4 .
Figure 4. Frequency of occurrence (0-12 km) for April and May 2013 of the Clear Sky (a), Drizzle or Rain (b), Liquid Clouds (c) and Ice Clouds (d) categories for the LACROS Cloudnet target classification (black solid line) and for the COSMO-DE cloud classification (black dashed line). Mind the different axes.
Hit rates of the eight different cloud categories of the observed LACROS Cloudnet target classification and of the COSMO cloud classification for April and May 2013. The hit rates are normalized by the total number of observed events (first column) of each category. The point-to-point comparison results are the first numbers in the first rows of the second column. The numbers in the second rows of the second column are the mean hit rates of randomly chosen points from 10,000 iterations. The second numbers in the first rows of the second column are the standard deviations calculated by a bootstrapping with 10,000 iterations. The hit rates in the third column are determined by being fuzzy in time by one hour. The hit rates of being fuzzy in space are given in the fourth to sixth columns. For more details, see text.
The applicability of the presented cloud classification algorithm to other atmospheric models is tested by a case study with the ICON LES model (Dipankar et al., 2015). Realistic Germany-wide LES simulations were performed within the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)²) project with a horizontal resolution down to 156 m and an output frequency of 9 seconds for specific locations like the Cloudnet supersites (Heinze et al., 2017). The output is averaged to the 30 s resolution of Cloudnet using the most frequent cloud class for each time interval. The ICON LES model consists of 151 terrain-following levels up to 22 km with layer thickness increasing with altitude. The initial and boundary conditions are provided by the COSMO-DE analysis, and the boundary conditions are updated every hour. The simulations were done with the two-moment microphysics of Seifert and Beheng (2001), which has 6 hydrometeor categories (cloud water, cloud ice, rain water, snow, hail and graupel). According to Jerger (2014), hail can be assigned to ice as well, which allows applying the same cloud classification algorithm (Fig. 2) to the LES output as for the COSMO-DE simulations. The algorithm is applied for a case study to the nearest grid box of the LACROS supersite for 26 April 2013, showing a frontal passage during noon (Fig. 5).
Figure 6 .
Figure 6. Frequency of occurrence (0-12 km) for 26 April 2013 of the Clear Sky (a), Drizzle or Rain (b), Liquid Clouds (c) and Ice Clouds (d) categories for the LACROS Cloudnet target classification (black solid line) and for the ICON-LES cloud classification (black dashed line).
Nevertheless, the high spatial and temporal resolution of the ICON output shows the modelled cloud structure and development in unprecedented detail, which demonstrates, among other things, the added value of these realistic large eddy simulations. Additionally, the cloud classification offers a very intuitive and comprehensive view of the modelled clouds and makes big datasets of large LES accessible, which is crucial e.g. to find model issues or to screen the simulations for interesting physical processes.
5 Conclusions and Discussion
The proposed cloud classification algorithm uses the temperature, dew point and hydrometeor profiles of a numerical atmospheric model to generate a cloud classification similar to the observation-based Cloudnet Target Classification. This observational product is a comprehensive tool for the detailed evaluation of cloud phase, composition and structure, but has so far only been used to derive model quantities like cloud fraction or ice water content from the observations. The modelled surrogate therefore makes a direct comparison of the extensive cloud classification product possible, allowing for an in-depth cloud evaluation. The modelled cloud classification provides, in addition, an easily accessible first impression of the vertical cloud structure. The qualitative comparison with the observations indicates how well the clouds are represented by the model, giving a hint about the overall model performance and the underlying cloud microphysics. The comparably designed cloud classification can consequently serve as an ideal basis for various statistical analyses and further derived cloud properties. For example, the frequency of occurrence can be calculated to evaluate the mean vertical cloud distributions for the multiple cloud types. Model biases of certain cloud types or other shortcomings of the model can thus be identified. Furthermore, the prediction of the right cloud type at the same time and location can be investigated by comparing the cloud phase of every height level for each time step with the observations. Fuzzy verification techniques are more appropriate to assess cloud forecast statistics due to the different time and spatial resolutions of the observations and the model, as well as due to the large variability of clouds. This is especially worthwhile because of the multi-dimensional and categorical dataset of the cloud classifications. The fuzzy verification allows the model to be uncertain in time and space, which prevents a misinterpretation of the evaluation due to, e.g.,
a simple time lag or displacement of the model. The cloud classification algorithm and evaluation methods are exemplarily tested by comparing the COSMO-DE model data with two months, and the ICON LES model with one day, of observations from a mid-latitude Cloudnet supersite. The results show the value of the classification to point out, for example, certain model shortcomings. The calculated frequency of occurrence of "Ice clouds" shows a significant overestimation above 3 km of up to 30 % for the COSMO-DE. Correspondingly, a too low frequency of occurrence of "Clear sky" conditions by up to 30 % is seen in the middle and upper atmosphere. The pointwise comparison of the cloud classification reveals, e.g., modelled "Drizzle or rain" cases which are observed as "Melting ice" points. The earlier phase transition to rain indicates possible issues with a too warm temperature profile or with the cloud microphysics. Allowing the model to be uncertain in time or space leads, as expected, to higher hit rates. For the COSMO-DE, the hit rates of the correct cloud category are higher when being uncertain by one hour than when being uncertain by one grid box surrounding the centre; considering three or more cells, the opposite is the case. The application of the new cloud classification algorithm to the ICON LES provides very detailed information about the cloud structure and phase at a similar resolution as the observations.
Table 1 .
Pointwise comparison (contingency table) of the eight different cloud categories of the LACROS Cloudnet target classification and of the COSMO-DE cloud classification for April and May 2013. The COSMO cloud classification results are based on the 00/12 UTC analyses with the hourly forecasts for the hours in between. The absolute numbers of the contingency table are normalized by the total number of observed events of each category.
Table 2 .
Hit rate table of the | 6,258 | 2018-11-02T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Dynamical Aspects of an Equilateral Restricted Four-Body Problem
The spatial equilateral restricted four-body problem (ERFBP) is a four-body problem in which a mass point of negligible mass moves under the Newtonian gravitational attraction of three positive masses, called the primaries, which move on circular periodic orbits around their center of mass fixed at the origin of the coordinate system, such that their configuration is always an equilateral triangle. Since the fourth mass is small, it does not affect the motion of the three primaries. In our model we assume that the two masses of the primaries, m2 and m3, are equal to μ and the mass m1 is 1 − 2μ. The Hamiltonian function that governs the motion of the fourth mass is derived; it has three degrees of freedom and depends periodically on time. Using a synodical system, we fix the primaries in order to eliminate the time dependence. Similarly to the circular restricted three-body problem, we obtain a first integral of motion. With the help of the Hamiltonian structure, we characterize the region of possible motions and the surfaces of fixed level in the spatial as well as in the planar case. Among other things, we verify that the number of equilibrium solutions depends upon the masses, and we show the existence of periodic solutions by different methods in the planar case.
Introduction
Dynamical systems with few bodies (three) have been extensively studied in the past, and various models have been proposed for research aiming to approximate the behavior of real celestial systems. There are many reasons for studying the four-body problem besides the historical ones, since it is known that approximately two-thirds of the stars in our Galaxy exist as part of multistellar systems. Around one-fifth of these are part of triple systems, while a rough estimate suggests that a further one-fifth of these triples belong to quadruple or higher systems, which can be modeled by the four-body problem. Among these models, the configuration used by Maranhão [1] and Maranhão and Llibre [2], where three point masses form at any time a collinear central configuration (Euler configuration, see [3]), is of particular interest not only for its simplicity but mainly because in the last 10 years an increasing number of extrasolar systems have been detected, most of them consisting of a "sun" and a planet or of a "sun" and two planets.
We study the motion of a mass point of negligible mass under the Newtonian gravitational attraction of three mass points of masses $m_1$, $m_2$, and $m_3$, called primaries, moving in circular periodic orbits around their center of mass, which is fixed at the origin of the coordinate system. At any instant of time, the primaries form an equilateral equilibrium configuration of the three-body problem, which is a particular solution of the three-body problem given by Lagrange (see [4] or [3]). Two of these primaries have equal masses and are located symmetrically with respect to the third primary.
We choose the unit of mass in such a way that $m_1 = 1 - 2\mu$ and $m_2 = m_3 = \mu$ are the masses of the primaries, where $\mu \in (0, 1/2)$. Units of length and time are chosen in such a way that the distance between the primaries is one.
To study the position of the infinitesimal mass $m_4$ in the plane of motion of the primaries, we use either the sidereal system of coordinates or the synodical system of coordinates (see [5] for details). In the synodical coordinates, the three point masses $m_1$, $m_2$, and $m_3$ are fixed at $(\sqrt{3}\mu, 0, 0)$, $\bigl(-\tfrac{\sqrt{3}}{2}(1-2\mu), \tfrac12, 0\bigr)$, and $\bigl(-\tfrac{\sqrt{3}}{2}(1-2\mu), -\tfrac12, 0\bigr)$, respectively. In this paper, the equilateral restricted four-body problem (shortly, ERFBP) consists in describing the motion of the infinitesimal mass $m_4$ under the gravitational attraction of the three primaries $m_1$, $m_2$, and $m_3$. Maranhão's PhD thesis [1] and the paper [2] by Maranhão and Llibre studied a restricted four-body problem where three primaries rotating in a fixed circular orbit define a collinear central configuration.
In the ERFBP, the equations of motion of $m_4$ in synodical coordinates $(x, y, z)$ are of the form $\ddot{x} - 2\dot{y} = \Omega_x$, together with the corresponding equations for $y$ and $z$, where $\Omega$ denotes the effective potential. Our paper is organized as follows: Section 2 is devoted to describing the most important dynamical phenomena that govern the evolution of asteroid movement and states the problem under consideration in the present study. In Section 3, reductions of the problem are discussed and a comprehensive treatment of streamline analogies is given. Section 4 is devoted to the principal qualitative aspect of the restricted problem, the surfaces and curves of zero velocity, several uses of which are discussed. The regions of allowed motion and the location and properties of the equilibrium points are established, and we describe the Hill region. The description of the number of equilibrium points is given in Section 5, and in the symmetrical case (i.e., μ = 1/3) we describe the kind of stability of each equilibrium. In Section 6, the planar case is considered; there, we prove the existence of periodic solutions as a continuation of periodic Keplerian orbits, and also when the parameter μ is small and when it is close to 1/2. Finally, in Section 8 we present the conclusions of the present work.
Next, we mention some four-body problems that have been considered in the literature. Cronin et al. [6, 7] considered models of four bodies where two massive bodies move in circular orbits about their center of mass or barycenter. In addition, this barycenter moves in a circular orbit about the center of mass of a system consisting of these two bodies and a third massive body. It is assumed that this third body lies in the same plane as the orbits of the first two bodies. The authors studied the motion of a fourth body of small mass which moves under the combined attractions of these three massive bodies. This model is called the bicircular four-body problem. Considering this restricted four-body problem consisting of Earth, Moon, Sun, and a massless particle, the problem can be used as a model for the motion of a space vehicle in the Sun-Earth-Moon system. Several other authors have considered the study of this problem, for example, [8-11] and references therein. The quasi-bicircular problem, a restricted four-body problem where three masses (Earth, Moon, Sun) revolve in a quasi-bicircular motion (i.e., a coherent motion close to bicircular), has also been studied; see [12] and references therein. The restricted four-body problem with radiation pressure was considered in [13], while the photogravitational restricted four-body problem was considered in [14].
Statement of the Problem
It is known that equilateral configurations of three bodies with arbitrary masses $m_1$, $m_2$, and $m_3$ in the same plane, moving with the same angular velocity, form a relative equilibrium solution of the three-body problem (see, e.g., [4] or [3]). More precisely, we consider three particles of masses $m_1$, $m_2$, and $m_3$ (called primaries), each describing, at any instant, a circle around their center of mass (which is fixed at the origin) with the same angular velocity ω, and such that their configuration at any instant is an equilateral triangle (see Figure 1). Now, we consider an infinitesimal particle $m_4$ attracted by the primaries $m_1$, $m_2$, and $m_3$ according to Newton's gravitational law. Let $\mathbf{r}$ be the position vector of $m_4$.
The equations of motion can be written as $\ddot{\mathbf{r}} = \nabla U$ (2.1), where the dot denotes the derivative with respect to $t$ and $\mathbf{r}_1(t)$, $\mathbf{r}_2(t)$, and $\mathbf{r}_3(t)$ represent the positions of the primaries. To remove the time dependence of the system (2.1), we introduce an orthonormal moving frame; this frame corresponds to the synodical system. Then (2.1) can be written in rotating coordinates, where $\mathbf{r}_j(t) = e^{i\omega t}\zeta_j$, with $\zeta_j = \alpha_j + i\beta_j$ for $j = 1, 2, 3$. Applying the above notation, we can write $\mathbf{r} = (x_1 + i x_2)e_1 + x_3 e_3$, $\mathbf{r}_1 = \zeta_1 e_1$, $\mathbf{r}_2 = \zeta_2 e_1$, $\mathbf{r}_3 = \zeta_3 e_1$, and so $\mathbf{r} - \mathbf{r}_j = (x_1 + i x_2 - \zeta_j)e_1 + x_3 e_3$ for $j = 1, 2, 3$. We perform the reparametrization of time $d\tau = \omega\,dt$; then the system (2.4) is transformed into a new system, where the dot denotes the derivative with respect to $\tau$ and the potential $W$ is given by (2.8).
If we define $\mu_1 = m_1/M$, $\mu_2 = m_2/M$, and $\mu_3 = m_3/M$, where $M = m_1 + m_2 + m_3$, the equations of motion (2.4) become (2.9). For simplicity, we consider an equilateral triangle of side 1, and so we obtain $M/\omega^2 = 1$.
Equations of Motion and Preliminary Results
From (2.9), we deduce that the equations of motion of the ERFBP in synodical coordinates are given by the system of differential equations (3.1)-(3.3).
Analogously to the circular restricted three-body problem, one can verify that the system (3.1) possesses a first integral of Jacobi type, given by (3.4). Thus we have the following result.
Proposition 3.1. The Jacobi-type function (3.4) is a first integral of the ERFBP for any value of μ.
Proof. Differentiating (3.4) with respect to time and using (2.9), the resulting expression reduces to zero. Hence C is a constant of motion.
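The explicit expressions of the synodical equations of motion (3.1)-(3.3), the effective potential Ω, and the Jacobi-type integral (3.4) were lost in the extraction of this text. The following is a hedged reconstruction of their presumable standard form, consistent with the positions of the primaries given above and with the zero-velocity relation $-C = \Omega(x, y, z)$ used in the next section; it should be checked against the original paper.

```latex
% Presumable synodical equations of motion and effective potential (reconstruction):
\ddot{x} - 2\dot{y} = \Omega_x, \qquad
\ddot{y} + 2\dot{x} = \Omega_y, \qquad
\ddot{z} = \Omega_z,
\qquad\text{with}\qquad
\Omega(x,y,z) = \frac{x^{2}+y^{2}}{2}
  + \frac{1-2\mu}{\rho_{1}} + \frac{\mu}{\rho_{2}} + \frac{\mu}{\rho_{3}},

% and the Jacobi-type integral (3.4), in the convention for which the
% zero-velocity set is -C = Omega:
C = \tfrac{1}{2}\bigl(\dot{x}^{2} + \dot{y}^{2} + \dot{z}^{2}\bigr) - \Omega(x,y,z),
```

where $\rho_i$ denotes the distance from $(x, y, z)$ to the $i$-th primary.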
In order to write the Hamiltonian formulation of the ERFBP, we introduce new (momentum) variables $X$, $Y$, $Z$. It is then verified that system (3.1) is equivalent to an autonomous Hamiltonian system with three degrees of freedom; the associated Hamiltonian system is (3.9). The phase space where the equations of motion are well defined is given in (3.10), where the points that have been removed correspond to binary collisions between the massless particle and one of the primaries.
Additionally, the spatial ERFBP admits the planar case as a subproblem; that is, $z = Z = 0$ is invariant under the flow defined by (3.9).
On the other hand, we see that there are two limiting cases in the ERFBP, which we describe below.
(a) If μ = 0, we obtain a central force problem, with the body of mass $m_1 = 1$ at the origin of coordinates.
Note that μ = 1/3 corresponds to the symmetric case, that is, where the masses of the primaries are all equal to 1/3.
It is easily seen that the equations of motion (3.9) are invariant under a symmetry: if $\psi(\tau) = (x(\tau), y(\tau), z(\tau), X(\tau), Y(\tau), Z(\tau))$ is a solution of the system (3.9), then its image under this symmetry is also a solution. We note that this symmetry corresponds to a symmetry with respect to the xz-plane. In the planar case, the symmetry corresponds to symmetry with respect to the x-axis.
Permitted Regions of Motion
In this section, we will see that the function $\Omega(x, y, z)$ allows us to establish regions in $(x, y, z)$ space where the motion of the infinitesimal particle can take place. We will use ideas similar to those developed in [15, 16].
By using (3.4), the surface of zero velocity is defined, and the corresponding set of permitted motions is the so-called Hill region. We note that, when C imposes no restriction, the region of all possible motions is given by the whole phase space, and so the infinitesimal particle is free to move; in particular, escape solutions are permitted.
In the spatial case, the surfaces that separate allowed and non-allowed motions are called zero-velocity surfaces, and in the planar case the set that separates allowed and non-allowed motions is called the zero-velocity curve. The shape and size of the zero-velocity sets {−C = Ω(x, y, z)} depend on C and μ. They correspond to the boundary of the Hill regions. The zero-velocity set ∂R_C is defined by this equation only for C < 0 and any value of μ. Next, we give a list of all possible situations that may appear when this condition is fulfilled.
(1) z → ±∞ on ∂R_C, in which case x² + y² → −2C; this means that, around the z-axis, the variables (x, y) must be asymptotic to a circle of radius √(−2C). (3) For |C| very large, this implies that (x, y) can be sufficiently close to one of the primaries, or the infinitesimal mass is close to infinity.
(4) Since x² + y² is a factor of Ω on ∂R_C, small values of −C are not allowed.
The Planar Case
As we mentioned in the last section, the set {z = Z = 0} is invariant under the flow, and so the motion of the infinitesimal body lies on the xy-plane that contains the primaries. In Figure 7, we show the evolution of the function Ω in the planar case for different values of the parameter μ.
Next we show the evolution of the Hill regions as well as the zero-velocity curves for μ = 1/3 and several values of the Jacobi constant C; the permissible areas are shown shaded in Figures 8, 9, 10, and 11.
In Figure 12, we show the behavior of level curves in the planar case for some values of μ and for different energy levels.
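A short way to generate pictures of this kind, reusing the assumed Omega from the earlier sketch (the values of μ and C below are samples, not those of the paper's figures): shade the Hill region Ω(x, y, 0) ≥ −C and draw the zero-velocity curve Ω = −C.

```python
import matplotlib.pyplot as plt

mu, C = 1.0 / 3.0, -1.9               # sample values only; recall that C < 0
xs = np.linspace(-2.5, 2.5, 400)
X, Y = np.meshgrid(xs, xs)
W = np.vectorize(lambda a, b: Omega((a, b, 0.0), mu))(X, Y)

plt.contourf(X, Y, W, levels=[-C, W.max()], alpha=0.3)   # Hill region: Omega >= -C
plt.contour(X, Y, W, levels=[-C], colors="k")            # zero-velocity curve
plt.gca().set_aspect("equal")
plt.show()
```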
Equilibrium Solutions
It is verified that the equilibrium solutions of the system (3.9), or equivalently (3.1), are given by the critical points of the function Ω = Ω(x, y, z); that is, they are the solutions of the following system of equations:
5.1
From the last equation we see that the coordinate z must be zero, so the critical points are restricted to the xy-plane and are given by the solutions of the first two equations. It is known (see [17]) that the number of equilibrium solutions of the system (5.1) is 8, 9, or 10, depending on the values of the masses m_1, m_2, and m_3, which must be positive. Six of them lie off the symmetry axis (i.e., off the x-axis); therefore on the axis of symmetry we must have 2, 3, or 4. From the analysis done it follows that the number of equilibrium solutions depends on the parameter μ. This implies that finding the critical points is a nontrivial problem, and this is one of the main differences with the problem studied by Maranhão in his doctoral thesis [1], because there the number of critical points did not depend on the parameter μ.
The critical points on the axis y = 0 are the zeros of the function F_μ given in (5.2). An explicit computation shows that, in the limiting-case problems, the number of equilibrium points corresponding to the system (5.1) is as follows.
(a) The function (5.2) with μ = 0 reduces to a function whose zeros are x = −1 and x = 1, and so there are two equilibrium points.
From numerical simulations we get that the number of critical points along the x-axis is as given in Table 1. Observe that μ* := 0.266318 is the bifurcation value.
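A hedged numerical substitute for tabulating these counts, using the assumed Ω of the earlier sketch instead of the closed-form F_μ of (5.2), is to locate sign changes of ∂Ω/∂x along the axis; the output can be compared against Table 1 and the bifurcation value μ* ≈ 0.266318 quoted above.

```python
def F(x, mu, h=1e-7):
    # x-derivative of the assumed Omega restricted to the axis y = z = 0
    return (Omega((x + h, 0.0, 0.0), mu) - Omega((x - h, 0.0, 0.0), mu)) / (2 * h)

def count_axis_critical_points(mu, a=-3.0, b=3.0, n=20001):
    xs = np.linspace(a, b, n)
    x_sing = primary_positions(mu)[0, 0]        # only m1 lies on the x-axis
    vals = np.array([F(x, mu) if abs(x - x_sing) > 1e-3 else np.nan for x in xs])
    prod = vals[:-1] * vals[1:]
    return int(np.sum(prod < 0))                # NaN products (near m1) never count

for mu in (0.05, 0.20, 0.266318, 0.30, 1.0 / 3.0):
    print(mu, count_axis_critical_points(mu))
```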
In the symmetric case, when all the masses are equal (i.e., μ = 1/3), the graph of F_{1/3} is similar to the one shown in Figure 13. As a consequence, there are exactly 4 equilibrium solutions on the x-axis, and therefore there are exactly 10 equilibrium solutions. Of course, (0, 0, 0) is an equilibrium solution.
In general, for any equilibrium solution of the form (x_0, y_0, 0), the linearization of the system (3.9) in the planar case yields a characteristic polynomial whose roots are expressed through quantities ρ_±, with a = W_xx(x_0, y_0, 0), b = W_xy(x_0, y_0, 0), and c = W_yy(x_0, y_0, 0). A very simple result is the following.
5.9
Now, we remark that W_zz(x_0, y_0, 0) < 0 at any such equilibrium. Consequently we have the following result. Corollary 5.2. In the spatial ERFBP, for any equilibrium solution (x_0, y_0, 0) there are at least two purely imaginary eigenvalues associated with the linear part, given by λ_± = ±√(−W_zz(x_0, y_0, 0)) i.
From this corollary we deduce that studying the nonlinear stability, in the Lyapunov sense, of each equilibrium solution of the spatial ERFBP is not a simple problem, because we need to take into account the existence or not of resonances in each situation. Leandro [17] studied the spectral stability in some situations according to the location of the equilibrium solution along the symmetry axis. In a future work we intend to study the Lyapunov stability of each equilibrium.
5.11
Therefore, we have the following result.
Corollary 5.3. In the symmetric spatial ERFBP the equilibrium solution (0, 0, 0) is unstable in the Lyapunov sense.
In the symmetric case, it is possible to show that the equilibrium solutions on the x-axis, apart from the origin, are x_1 = −0.9351859666722429, x_2 = −0.23895830919534947 and x_3 = 1.1799984048894328, and by symmetry it follows: Corollary 5.4. In the symmetric spatial ERFBP all the equilibrium solutions are unstable in the Lyapunov sense.
According to [17] we have the following corollary.
Corollary 5.5. In the symmetric planar ERFBP all the equilibrium solutions are unstable in the Lyapunov sense.
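The instability statements in Corollaries 5.3-5.5 can be probed numerically with the assumed Ω of the earlier sketch: form the 4×4 linearization of the planar system at an equilibrium and inspect its eigenvalues (the second derivatives below play the role of the coefficients a, b, c above); an eigenvalue with positive real part is consistent with instability.

```python
def planar_eigenvalues(x0, y0, mu, h=1e-5):
    """Eigenvalues of the linearized planar system at the equilibrium (x0, y0, 0),
    with central-difference second derivatives of the assumed Omega."""
    O = lambda u, v: Omega((u, v, 0.0), mu)
    a = (O(x0 + h, y0) - 2 * O(x0, y0) + O(x0 - h, y0)) / h ** 2             # ~ Omega_xx
    c = (O(x0, y0 + h) - 2 * O(x0, y0) + O(x0, y0 - h)) / h ** 2             # ~ Omega_yy
    b = (O(x0 + h, y0 + h) - O(x0 + h, y0 - h)
         - O(x0 - h, y0 + h) + O(x0 - h, y0 - h)) / (4 * h ** 2)             # ~ Omega_xy
    # Linearization of x' = vx, y' = vy, vx' = Omega_x + 2*vy, vy' = Omega_y - 2*vx
    A = np.array([[0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0],
                  [a,   b,   0.0, 2.0],
                  [b,   c,  -2.0, 0.0]])
    return np.linalg.eigvals(A)

# Symmetric case mu = 1/3: the origin and the quoted axis equilibrium x_3
print(planar_eigenvalues(0.0, 0.0, 1.0 / 3.0))
print(planar_eigenvalues(1.1799984048894328, 0.0, 1.0 / 3.0))
```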
Continuation of Periodic Solutions in the Planar Case
In this section we prove the existence of periodic solutions of the ERFBP in the planar case for μ sufficiently small and, by use of the Lyapunov Center Theorem, when μ is close to 1/2. In order to find periodic orbits of our problem we use the continuation method developed by Poincaré, one of the most frequently used methods to prove the existence of periodic orbits in the planar circular restricted three-body problem (see [15]). This method has also been used by other authors in different problems. In Meyer and Hall [5] one finds a good discussion of the Poincaré continuation method applied to different n-body problems (see also [18]).
In our approach we will continue circular and elliptic solutions of the Kepler problem with the body of mass 1 fixed at the origin of the system. We know that all orbits of the Kepler problem with zero angular momentum are collision orbits with the origin. We assume that the angular momentum is not zero and study the orbits that keep a positive distance from (−√3/2, 1/2) and (−√3/2, −1/2). In the following lemma we summarize the kind of orbits that we will consider. Lemma 6.1. For fixed a > 0 there exists only a finite number of elliptic orbits with semi-major axis a whose trajectories are periodic in the rotating system and pass through the singularities of the other primaries (−√3/2, ±1/2). The proof of this lemma can be found in [19].
Continuation of Circular Orbits
In this section we show that circular solutions of the unperturbed Kepler problem can be continued to periodic solutions of the ERFBP for small values of μ. We introduce polar coordinates x = r cos θ, y = r sin θ; thus ẋ = ṙ cos θ − r θ̇ sin θ and ẏ = ṙ sin θ + r θ̇ cos θ. So X = ṙ cos θ − r(θ̇ + 1) sin θ and Y = ṙ sin θ + r(θ̇ + 1) cos θ; consequently X² + Y² = ṙ² + r²(θ̇ + 1)² and yX − xY = −r²(θ̇ + 1). Thus, the Hamiltonian (3.8) can now be written in these variables, where
6.3
The new coordinates are not symplectic. In order to obtain a set of symplectic coordinates (r, θ, R, Θ) we define R = ṙ (the radial velocity in the sidereal system) and Θ = r²(θ̇ + 1) (the angular momentum in the sidereal system); then H takes a corresponding form in these variables. When μ = 0 we obtain the Hamiltonian of the Kepler problem in polar coordinates. So, if μ is a small parameter, the Hamiltonian (3.8) assumes the form of a perturbed Kepler Hamiltonian. For μ = 0, the associated Hamiltonian system is
6.7
Let Θ = c be a fixed constant. For c ≠ 1, the circular orbit R = 0, r = c² is a periodic solution with period |2πc³/(1 − c³)|. Linearizing the r and R equations about this solution gives ṙ = R, Ṙ = −c⁻⁶ r (6.8), which has solutions of the form exp(±it/c³), and so the nontrivial multipliers of the circular orbits are exp(±i2π/(1 − c³)), which are not 1 provided 1/(1 − c³) is not an integer. Thus we have proved the following theorem (see details in [5]).
Theorem 6.2. If c ≠ 1 and 1/(1 − c³) is not an integer, then the circular orbits of the Kepler problem in rotating coordinates with angular momentum c can be continued into the equilateral restricted four-body problem for small values of μ.
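A small, purely illustrative helper for screening values of the angular momentum c against the two hypotheses of Theorem 6.2 (the tolerance is an arbitrary choice):

```python
def continues_by_theorem_6_2(c, tol=1e-9):
    """Hypotheses of Theorem 6.2: c != 1 and 1/(1 - c^3) not an integer."""
    if abs(c - 1.0) < tol:
        return False
    q = 1.0 / (1.0 - c ** 3)
    return abs(q - round(q)) > tol

for c in (0.5, 0.9, 2.0 ** (-1.0 / 3.0), 1.2):
    print(c, continues_by_theorem_6_2(c))    # the resonant value c = 2**(-1/3) fails
```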
Continuation of Elliptic Orbits
In Section 3, we saw that the ERFBP has the S-symmetry which, when exploited properly, proves that some elliptic orbits can be continued from the Kepler problem. The main idea is given in the following lemma, which is a consequence of the uniqueness of solutions of the differential equations and the symmetry of the problem. Lemma 6.3. A solution of the equilateral restricted problem which crosses the line of syzygy (the x-axis) orthogonally at a time t = 0 and later at a time t = T/2 > 0 is T-periodic and symmetric with respect to the line of syzygy.
That is, if x(t) and y(t) is a solution of the equilateral restricted four-body problem such that y(0) = ẋ(0) = y(T/2) = ẋ(T/2) = 0, where T > 0, then this solution is T-periodic and symmetric with respect to the x-axis.
In Delaunay variables (l, g, L, G), an orthogonal crossing of the line of syzygy at a time t_0 means l(t_0) = nπ, g(t_0) = mπ, with n, m integers (6.9). These equations will be solved using the Implicit Function Theorem to yield the following theorem (see details in [5]). Proof. For μ sufficiently small one writes the Hamiltonian of the ERFBP in Delaunay coordinates together with the corresponding equations of motion.
Application of the Lyapunov Center Theorem
For μ = 1/2, we have three equilibrium solutions on the x-axis. Theorem 6.5. There exists a one-parameter family of periodic orbits of the ERFBP emanating from the Euler equilibrium (for μ = 1/2). Moreover, when approaching the equilibrium point along the family, the periods tend to 2π/√(−3 + 8√2).
Numerical Results
In Section 6, we established theorems on the continuation of periodic solutions from the Kepler problem in rotating coordinates to the ERFBP. In this section, we present some numerical experiments that illustrate the conclusion of Theorem 6.2.
To find those circular orbits we first selected an angular momentum c such that c ≠ 1 and 1/(1 − c³) ∉ ℤ. By varying c we generated a set of initial conditions for the Kepler problem in rotating coordinates, given by the system (3.9) with μ = 0. We chose y(0) = 0 and X(0) = 0 for all orbits, ensuring that we were following a family of symmetric orbits, and we took into account the fact that circular orbits satisfy r = c².
However, the continued orbit associated with c ∈ (0, 1) remains close to the circular orbit only if μ ≤ 10⁻⁴; for instance, c = 9/10 can be continued for μ small, of order 10⁻⁴, but not for larger values. The orbits obtained as a consequence of the numerical simulations are shown in Figure 15.
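A hedged version of this experiment, reusing rhs from the earlier sketch: the circular orbit of angular momentum c crosses the positive x-axis at r = c² with zero radial velocity, which in synodical variables gives ẏ(0) = 1/c − c² (equivalent to the sidereal conditions y(0) = X(0) = 0 mentioned above); integrating over one rotating-frame period |2πc³/(1 − c³)| for a μ of order 10⁻⁴ then measures the drift of the continued orbit.

```python
from scipy.integrate import solve_ivp

def circular_ic(c):
    """Synodical initial condition for the Kepler circular orbit of angular
    momentum c crossing the positive x-axis: r = c^2, zero radial velocity."""
    r = c ** 2
    return [r, 0.0, 0.0, 0.0, 1.0 / c - r, 0.0]   # ydot(0) = r*(c/r**2 - 1)

c, mu = 0.9, 1e-4                                  # mu of the order quoted above
T = abs(2 * np.pi * c ** 3 / (1 - c ** 3))         # rotating-frame period
sol = solve_ivp(rhs, (0.0, T), circular_ic(c), args=(mu,), rtol=1e-10, atol=1e-12)
print(sol.y[:, -1] - np.array(circular_ic(c)))     # drift after one period
```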
Conclusions and Final Remarks
The spatial equilateral restricted four-body problem (ERFBP) is considered. In this model of the four-body problem, three masses move in circular orbits in such a way that their configuration is always an equilateral triangle, the fourth mass being small and not influencing the motion of the three primaries. In our model we assume that two of the primaries, m_2 and m_3, have masses equal to μ and that the mass m_1 is 1 − 2μ. In a synodical system of coordinates the dynamics obeys the system of differential equations
8.2
Section 4 is devoted to the principal qualitative aspects of the restricted problem: the surfaces and curves of zero velocity, several uses of which are discussed. The regions of permitted motion and the location and properties of the equilibrium points are established, and we describe the Hill region. The number of equilibrium points is described in Section 5, and in the symmetric case (i.e., μ = 1/3) we describe the kind of stability of each equilibrium. In Section 6 the planar case is considered; here we prove the existence of periodic solutions as continuations of periodic Keplerian orbits, when the parameter μ is small and when it is close to 1/2. Finally, in Section 7 we present some numerical experiments that illustrate the theorem concerning the continuation of circular orbits of the Kepler problem into the ERFBP for μ small enough.
In a work in progress we intend to continue the study of the ERFBP in different aspects of its dynamics: for example, the behavior of the flow near the singularities (collisions), the study of the escape solutions (i.e., the unbounded solutions), the existence of chaos via the construction of a shift map, and the computation of periodic solutions by numerical methods.
Figure 1: The equilateral restricted four body problem in inertial coordinates.
Figure 2: The equilateral restricted four body problem in a rotating frame.
Figure 4: Evolution of the zero-velocity surface in the three-dimensional ERFBP for μ = 1/3. (a) C = −1; (b) and (c) C = −1.6, under different points of view.
Figure 5: Evolution of the zero-velocity surface in the three-dimensional ERFBP for μ = 1/3. All cases correspond to C = −1.7, under different points of view.
Figure 7: Evolution of the graph of Ω(x, y) on the xy-plane for different values of μ.
Figure 12: Energy level curves for some values of the parameter μ in the planar case.
Theorem 6.4. Let m, k be relatively prime integers and T = 2πm. Then the elliptic T-periodic solution of the Kepler problem in rotating coordinates satisfying the corresponding commensurability condition can be continued into the equilateral restricted four-body problem for small μ. This periodic solution is symmetric with respect to the line of syzygy.
Table 1: Number of critical points on the x-axis. | 6,225.8 | 2009-01-01T00:00:00.000 | ["Physics"] |
A Comparative Discourse on Christian and Secular Distinctive Features of Transformational Development
The primary objective of this article is to explore some distinctions between Christian and secular views of transformation, characteristics of transformational development and the holistic practitioner. To meet this aim, relevant literature has been explored. The article argues that the Christian’s development motivation, goal and process are distinctive. The affirmation of indigenous knowledge; peaceful relationships, self-worth, empowerment and spiritual development are basic characteristics of transformational development. The paper also insists that the attitudes and characteristics of a holistic practitioner play a crucial role in realising these characteristics of transformational development. Understanding the value of this could assist faith-based organization and church-based development agency staff in engaging holistically.
Introduction
The quest for a Christian theological approach to meeting human needs and community development led to the adoption of phrases such as 'holistic ministry', 'transformational development', 'integral mission', 'diaconia' and 'holistic community-based sustainable development'.3 This paper is limited to transformational development. This is not to say that the idea of Transformational Development (TD) is regarded as the best alternative strategy for community development, but rather proposed as a holistic Christian framework for addressing human and social need. In fact, the fundamental focus of transformational development is holistic change from a level of human existence that is less than the one envisioned by God to one in which a person is fully human and free to move to a state of wholeness in harmony with God, one another and the environment (Myers, 2011: 3).4 This approach insists that equity, justice, human dignity and participation are the bedrock of sustainable development. Focusing on socio-economic empowerment alone (although it is essential) may not be sufficient to meet the range of human needs, which include abstract human needs such as harmonious relationships, dignity and spiritual growth.
This paper is divided into three sections.In the first section, some distinctions between Christian/secular views of transformation are presented.This is followed by the characteristics of transformational development and the third section describes the attitudes and characteristics of a holistic practitioner.
Distinctions between the Christian and secular views of transformation
The Christian motivation for development, its goals and processes of implementation, are key foundations that need to be understood.This paper does not intend to perpetuate a dualism between the secular and the sacred; it is rather an attempt towards understanding the Christian motivation with regards to development as unique.
Motivation
Motivation is one of the key elements that determine the impact of a development project.5 In some contexts, the most essential element of secular development is its emphasis on progress, evolution and economic growth. It proposes a process of structural change in such a way that underdeveloped countries could be developed to the level of developed countries (Bowers du Toit, 2010b: 262). The Christian's idea of development, however, is based on the Old Testament concept of shalom or the New Testament concept of the kingdom of God, which are characterised by material well-being, harmony, peace and justice (Bowers du Toit, 2010b: 266).6
4 This article leans heavily on the legacy of evangelical scholars and practitioners such as Bryant Myers, Wayne Bragg, Vinay Samuel, Chris Sugden and others who have written on a Christian understanding of development as mission.
The foundation of Christian involvement in community development arises from concern for one's neighbour and the sustenance of the love of God (Dudley, 1991: 1). Thus Moffett (2012: 599) rightly observes that a commendable and sincere secular attempt toward improving human conditions cannot be equated with that of Christianity, on the basis that the Christian emphasis operates at the level of both a vertical relationship (with God) and a horizontal one (with neighbours).7 Christian development agencies may be involved in social transformation for various reasons; however, their respective ministries are virtually always a natural expression of their faith. Their ministries are in response to the great commandment: total love for God and neighbour. It can also be argued that, while secular donors may provide economic incentives as well as strategies to achieve the donor's or development agency's purpose, the Christian side of social transformation is born out of personal obedience to Jesus Christ and the desire to have others know and follow Christ (Dudley, 1991: xi-xii).
Secular agencies indeed have sound motives in terms of improving the conditions of the poor; however, the Christian motive goes beyond this.The mission and vision statements of Christian development organisations such as Tearfund and World Vision International may serve as examples of this unique motivation.The latter's mission statement (World Vision International, 2014) is summarised as follows: To follow our Lord and Saviour Jesus Christ in working with the poor and oppressed to promote human transformation, seek justice and bear witness to the good news of the kingdom of God.
That of Tearfund (n.d.) is:
To serve Jesus Christ by enabling those who share evangelical Christian beliefs to bring good news to the poor.Proclaiming and demonstrating the gospel for the whole person through support of Christian relief and development […] Seeking at all times to be obedient to biblical teaching.
The above quotes indicate a theological and contextual foundation for Christian motivation.Theologically, Christian motivation emerges from an understanding of the nature of God, humans, the fall, redemption and the kingdom of God.Contextually, it promotes self-reliance in meeting the basic needs of individuals and the community, as well as equity, justice and peace (cf. August, 2010: 48).
Furthermore, people's motivation could also be influenced by the person they pledge allegiance to.In that sense, the secular development workers could be working for the donors' benefit or for the benefit of the poor from a humanist perspective.It should be noted that this could also be true of Christian development workers (Elliston, 1989: 170).However, Christian workers ultimately pay allegiance to Christ, with whom they have had a personal encounter.The workers are accountable to the people they serve, to donors, and to God.They regard the poor as individuals with the same rights to dignity and respect regardless of ethnicity, political, or religious value (Myers, Whaites &Wilkinson 2000: 36).The dominant perspective in the secular development agency is often centred on material, technological and economic progress.In this sense, the future is bright only when there is an increase in the level of employment, a surplus in the budget, and science and technological equipment have been transferred to poor countries for the efficient production of goods and services (Sine, 1987: 3).The Christian perspective, however, is centred on both the immediate future and an eternity in terms of the Kingdom of God (cf.Bowers du Toit, 2010a: 433).
Many secular development agencies may choose to intervene on behalf of others for their benefit, but Christian involvement in social transformation is more than a choice; it is a command. For instance, Dayton (1987: 59) asserts that the world is the object of redemption and the church is the vehicle of that redemption. Moreover, even though we live in an age of progress and in a scientific and technological world, Christians do not accomplish social transformation through advances in technological and economic progress alone, as noted above, but should recognise the omniscience of the Holy Spirit. By implication, a Christian development worker believes that as human beings we cannot overcome our obstacles or achieve success by ourselves. Instead, it is through the power of the Holy Spirit, who guides and directs our affairs, that we can be victorious.
Goal
The aims and objectives for engaging in individual or community development (for present or for future purposes) have to be considered before embarking on a development project. Bowers du Toit (2010b: 267) observes that there are structures that hinder progress and rob people of the enjoyment of peace, harmony and justice as intended by God in creation. Therefore, she argues, in agreement with Bragg, that the goal of Christian social transformation is: to repel the evil structures that exist in the present cosmos and to institute, through the mission of the church, the values of the kingdom over against the values of principalities and powers. This indicates that peaceful and just relations were God's original plan for humanity. However, the entry of sin brought about social disorder. Therefore, Christian engagement in social transformation is to model the good news so as to provide everyone with the opportunity to respond to the commands of the gospel and to live in obedience to it. At the same time, it means restoring broken relationships,8 ensuring justice, peace and hope, and helping communities to recover their true identity and discover their vocation as stewards of God's resources.
Process
Christian and non-Christian development agencies share some practices and systems in the process of development activities.However, Christians have other unique characteristics that may not be found in the secular field.Some of the things Christians and non-Christian development agencies share in common include: "needs assessment, planning, funding, staffing, training, managing, evaluating, making reports, relating to other agencies, social groups, and political structures, coping with cultural and communicational differences and many others similar complex issues" (Elliston, 1989: 173, also see Myers, 2011: 181).Christian involvement in socio-economic development plays a dual role.In the process of improving the material condition of the people, Christian beliefs are introduced to the beneficiary.
For instance, a holistic Christian approach posits that it is God who provided the soil for development and that it is He who is going to help us prosper through development activities (Bornstein, 2002: 9). Moreover, Christian engagement in holistic development is fuelled by a sense of concern, responsibility and privilege (Bosch, 1991: 140). Therefore, Christian involvement in social action is not only unique in terms of evangelism and its understanding of the Kingdom, but even in the way it views and values the people and community it serves. A holistic worker understands that Christ died for the people who will benefit from their development programmes. Their relationships and lives should therefore be characterised by the fruits of the spirit: love, joy, peace, kindness, faithfulness, gentleness, goodness and self-control (cf. Galatians 5:22-23). This suggests that, in any effort, whether economic, political, social or religious, the fundamental issue is geared towards realising a fuller and richer human life. It represents a unique approach to human need by being conscious of God-human relations, justice, equity and dignity (Bragg, 1987: 42). It understands and promotes the spiritual side of human development to balance the socio-economic side. This integrated approach is observable in Bragg's argument (1987: 39), which says that "transformation is a part of God's continuing action in history to restore all creation to its rightful purposes and relations." Myers (2011: 259) affirms that God's project in the biblical story is to restore the lives of individuals and communities marred by sin so that they can be good, just and peaceful once again.
Characteristics of Transformational Development
The concept of transformation is not a mere alternative or strategy for community development.Instead, it is a Christian framework for looking at human social change.The characteristics of transformational development in this paper incorporate unique principles that could be used to measure any Christian development praxis.
Affirmation of culture, context and indigenous knowledge
Culture is the way of life and thought of people in a given environment.It can be elaborated on as an integrated system of beliefs, values, customs and institutions that express these beliefs, values and customs (Paredes, 1987: 63).In every society, the peoples' beliefs, values and customs bind them together and give them a sense of identity, security and continuity.These include beliefs about God, reality, the ultimate meaning of the values relating to what is true, good, beautiful and normative, including norms on how to behave, relate to others, talk, pray, dress, work, play, trade, farm and many others.This culture relates to institutions that express beliefs, values and customs that relate to governance, law courts, churches, families, schools, hospitals, factories, shops, unions, clubs and others (Paredes, 1987: 63).
In this regard, transformation is not about changing the way people think and behave in a community. It is about being aware that community customs are linked together, to the extent that, if there is going to be change in one area of life, many other areas will change as well (Hiebert, 1989: 86-87). Therefore, if new ideas are to be introduced, this needs to be done in such a way that they can easily be adopted by the beneficiaries. The holistic practitioner needs to plan and use resources the beneficiaries are familiar with. Anacleti (2002: 71) notes that for people to participate in decisions that affect their lives, they must start with what they know and what most people regard as their own culture and values. This is because they are guided by their cultural principles and values. As such, studying the local culture in any effort to realise community development is not optional but mandatory. Anacleti (2002: 170) adds that: "[p]eople are not developed, they develop themselves, and for people to develop themselves they have to be convinced that the changes envisaged will not be a mere experiment with their lives, but will actually mean a change for the better." This is in contrast with the modernisation way of thinking, in which indigenous culture is seen as a hindrance to development. On this note, Bragg (1987: 45) reminds us that transformation affirms indigenous culture because all cultures are part of God's gift to humanity. That Christ entered into Jewish social and religious life shows that he honours human culture. Christ's attitude here is an eye-opener to the fact that all cultures have intrinsic values that can be redeemed and used as a basis for social transformation.
This suggests that valuing people's culture in development efforts is essential to the planning and implementation of all development.These cultures will serve as instruments for promoting social justice, equality, sustainability, identity and dignity for all members of the community.Myers (2011: 205) explains that it is important for a practitioner to respect the community's story, because the history of the community could unveil the community's real problems and present a possible way forward.As for the community dwellers, they feel their story is valuable as long as someone is able to listen to it (Myers, 2011: 206).
To value and respect people's cultures and indigenous knowledge are both theological and contextual in Christian community development.Hiebert (1989: 75) reveals that being both theological and contextual are two dimensions of Christian development.It is theological because Christians' vision, mission and motivation of development emerge out of their fundamental understanding of the nature of God, humans, the fall, redemption and the kingdom of God.By being contextual the Christian finds it easier to build and encourage those they have developed, regardless of their cultural, social, religious, and racial differences.That is, development must be done with the context of the people in mind.
Just and peaceful relationships
Christian development practitioners understand and promote just and peaceful relationships as one of the means of achieving holistic development of the human community.Bragg (1987: 43), for instance, points out that transformation provides a ground where human beings, no matter their race, religion or nationality, may live a fully human life, freed from domination and oppression by other people.In addition, development is a matter of human relations and justice (right relationship with God and God's people) in which domination, oppression and exploitation are abnormal (Robinson, 1994: 318).
This means that unjust relationships are the root causes of disparities and inequalities in society.In this sense, true development is the development of people, the release of people from their enslaved conditions so that they can actively participate in the process of making decisions that affect their lives and labour (Robinson, 1994: 318-319).In the account of the fall in Genesis, the right relationship with God, fellow human beings and nature were distorted.A healthy relationship with God is the basis of all other relationships and these relationships are an integral part of holistic development (Bragg, 1987: 44).Migliore (2004: 267-268) adds that the missional activity of the church is to foster a just and inclusive community in which members use their ministerial gifts for the well-being of the whole.Myers (2011: 181) also explains the levels of these relationships as relationship with the triune God, relationship with oneself and relationship with the community.
This means that society will be good if all relationships are restored.Moltmann (1993: 134) comments that "every relationship to another life involves the future of that life, and the future of the reciprocal relationship into which one life enters with another".For instance, Christian hope is a living hope and is connected in relationships with God, oneself and others in the community.When this framework of relationships ceases to exist, "hope" will be invalid.A cordial relationship unites affection with respect.In a positive relationship, both parties experience respect, acceptance, trust, freedom and happiness.
Therefore, when human relationships are cordial, oppressive attitudes can be abolished, and when what makes a person truly humane emerges, it can be affirmed that transformation is taking place in both the individual and the community. Moltmann (1993: 116) concludes that, until then, "the new man, the true man, the free man [the transformed man] is the friend". In the same way, transformation that is founded on a just and peaceful relationship allows people to trust and live with one another as friends while sharing privileges and opportunities. Such a transformation is healing and joyous. Church educators should endeavour to facilitate the healing process in communities living in desperation by speaking and acting prophetically (cf. Brueggemann, 1984: 26).
Dignity and self-worth
Promoting human dignity and self-worth is another important goal of TD (Bragg, 1987: 42).In other words, affirmation of people's dignity and self-worth is one of the significant tasks of the transformational agenda, because people need to have self-esteem to be fully human.This is because the experiences most poor people go through tend to make them feel as if they are valueless, unproductive, hopeless and helpless.When they are transformed it will give them a sense of belonging in society.Myers (2011: 178) states that both the poor and non-poor need to know who they are and the purpose for which they were created.In that case, restoring the identity and vocation of the poor and non-poor becomes a vital indication of human transformation.
The fact that men and women, poor and non-poor, are made in God's image says something about their identity.For God to have loved them and sent the Son to come and die for them signifies their human dignity.To give them gifts that contribute to their human well-being shows the importance of their vocation (Myers, 2011: 178).Along these lines, Myers (2011:179) emphasises that: i.A transformed, dignified person transforms [his or her] environment.
ii. People, not money, or programmes, transform the world.
iii. Genuine steps to transforming people are not all about transferring resources, but include recovering lost dignity and identity.
iv. People must be shown that their human dignity and identity are intrinsically related to God in Christ.
v. God's purpose in human history is redemptive.
The point is that, on the one hand, social, political and economic transformation of the poor is essential for human well-being. On the other hand, a transformational agenda declares to people that they are made in the image of God with potential and abilities. Through the salvific work of Christ they can regain and fulfil the purpose of their creation (Myers, 2011: 180). This coincides with Burkey's (1993: 56) perception that the essence of development is to ensure that people are able to increase their level of self-confidence, pride, initiative, creativity, responsibility, and co-operation. In the absence of these elements, all efforts towards poverty alleviation will be difficult or impossible.
Similarly, theological and non-theological scholars within the discipline of development have argued that, beside the material human needs there also are abstract needs.These are not visible, yet they are very crucial for human survival.Burkey (1993: 3), for instance, refers to such human needs as emotional security and mental rest.This study holds that these factors have to do with a sense of who people are, what they are worth and the purpose for which they are created (Samuel & Sugden, 1999: 238).This is because it is possible for people to have the material things they need for sustenance but still lack self-respect, peace, love, protection, free choice and someone to listen to their story.They can be emotionally depressed because of isolation.People who have a good sense of identity and dignity know that they are not expected to treat others or be treated as less important human beings (Samuel & Sugden, 1999: 246).For this reason, Bragg (1987: 42) sees development as people gaining control over their environment and destiny with dignity and self-worth.
It is, therefore, reasonable to argue that those individuals, families and communities with a good sense of identity and dignity experience freedom, justice, love and peace.In such an environment, care is given to the poor and weak, women and children; people respect and are respected, and people love and are loved.Hughes and Bennett (1998: 79) argue that this kind of life was practised and taught by Jesus and serves as a model for his followers worldwide to break down every social barrier that robs people of a sense of identity and dignity.Christians have their identity in God through the finished work of Christ on the cross and are equipped with spiritual gifts to serve in the household of faith.
Empowerment and self-reliance
Another significant characteristic of transformation is empowerment with an attitude that supersedes the idea of superiority and inferiority. The worldly concept of power does not create such an environment; rather, "it has always been in tension with the power of the servant, the power of love and of the finer aspect of justice" (Christian, 1999: 185). As such, the desire to become "bigger, better, faster, prettier, stronger, and smarter" has always been a driving force (Oladosu, 2010: 56). By implication, powerlessness is viewed as incapability which hinders development, especially when people lack economic and social status, and political and religious power (Speckman, 2007: 249-251). In this sense, empowerment is a people-enablement process for tackling poverty. The element of empowerment therefore is the control over economic resources and the ability of the poor to articulate and assert themselves (Rahman, 1993: 205). Ajulu (2001: 142) perceives empowerment as a three-tiered integral process focusing on the individual, community and structure. The integral empowerment process addresses all human needs, be they social, material or spiritual. It also addresses the community as a complete entity for the rich, poor and those in between. Lastly, it addresses a range of structures that affect individual and community development.9 Thus, collective abilities are needed of individuals in a given community to take charge of identifying and meeting their own needs as households, communities, organisations, institutions, societies and nations (Rowlands, 1996: 90). It is not part of God's plan that power only resides with the "upper class" while the "lower class" suffers. God is not in favour of dependency, and human development should free a person from depending on others (Speckman, 2007: 253). Synder (1995: 6) explains that empowerment is the state of persons (women and men) being enabled to take their destiny into their own hands. Synder (1995: 6) outlines three elements of empowerment in development as follows: the first is economic empowerment, which has to do with access to assets like land, capital and technologies that can produce income. The second has to do with basic human necessities such as education and health services, clean water, fuel and shelter. Participation in decision-making is the third element. The above list indicates that participants must be empowered for the sustainability and effectiveness of any development programme or project.
Nevertheless, empowerment would require the development of skills and abilities to make decisions regarding community development programmes.Individuals and communities need to own the vision and value their work.As Gilchrist and Taylor (2011: 22) stress, "[e]very person has capabilities, abilities and gifts.Living a good life depends on whether those capabilities can be used, abilities expressed and gifts given.If they are, such people will feel valued, feel powerful and be wellconnected to the people around them".This belief is also held by Swanepoel and De Beer (2011: 52), who argue that people must be empowered with the necessary information and freedom to decide on what they want.
The above analysis of empowerment shows that people's power comes ultimately from self-reliance, especially when they are able to meet their material and emotional needs themselves. That is, self-reliance means confidence, reliance primarily on one's own resources, human and natural, and the capacity for autonomous goal-setting and decision making. Physical, economic, social and political incapacity makes a person dependent on others. Equal access to material goods and opportunities to fulfil basic needs are essential characteristics of development. By implication, dependency implies inequality, which is against the will of God. Interdependence is rather a biblical injunction, which is an essential aspect of human life. God intends humans to live in community, not in isolation (Robinson, 1994: 319-320). In this way, development is essentially the removal of the conditions of dependency. Nevertheless, self-reliance must lead people to acknowledge God as the source of their lives. It must lead them to put their confidence in Him whose goodness and mercy are reliable and whose promises cannot fail.10 Self-reliance also leads to sustainability. A self-reliant individual or community does not depend on external help without building capacity. Bragg (1987) reveals that it is God's intention for us to have adequate life-sustaining goods and services, and that development must enhance sustainability of the community's economic, social and environmental resources. Development projects must continue even when the change agent is no longer in the project area. Consequently, any form of development that does not lead to self-reliance must be avoided. The people should own the project, rather than feel that it was imposed on them by some authority.
Spiritual development
Whilst empowerment and self-reliance are important for human well-being, they are not complete without recognition of the spiritual dimension of life. The element of spirituality completes the idea of holistic human development. Of course, it could be expected that government and non-governmental organisations that are involved in community development would focus on people's intellectual, physical and social welfare, at times to the neglect of their spiritual condition. It would be a great disappointment if faith-based development practitioners and institutions also neglected spirituality.
Of course, the ultimate goal of development is human well-being.11 Ignorance, exploitation, socio-cultural constraints and institutional structures are major factors that rob people of freedom by holding back access to social, economic and political power and rendering them voiceless (Bragg, 1987: 43). Poor people are mostly not free in the physical, material, spiritual and social areas of their lives. Transformation therefore involves the whole person, mind, body and spirit, in any development effort.
Moreover, a transformational approach seeks to address the issue of sin, which leads to corruption, greed, conflicts, inequality and exploitation, all of which are contrary to God's plan for people (Davis, 2009: 92-93).Furthermore, evil is not only in the human heart, but also in social structures (Bowers, 2010b: 266).Within a transformational approach, unjust economic relations, political imbalances, social misappropriation and religious or cultural domination, are addressed.In this case, 'changed' people will be able to "discover their true identity as children of God and recover their true vocation as faithful and productive stewards of God's gifts for the well-being of all" (Myers, 2011: 3, 17).
Emphasis on spiritual development as a characteristic of development cannot be overemphasised. Korten (1990: 168) reminds us that people's spiritual growth and nourishment are among the important responsibilities of the church. That is because the church's role is to teach people in society the use of power, values, brotherhood, peace and the ability to live in harmony with one another. By implication, this approach can lead to the elimination of unjust structures, the tendency for corrupt practices, exploitation and misuse of power. It can be argued theologically that human beings have always been in search of meaning and purpose in their lives. This search can bring them a sacred connection. The connection gives people insight into the meaning and significance of their lives and the integration of all aspects of their being (Nash & Stewart, 2002: 17). This makes spirituality an important aspect of human need for sustainable well-being.12 Sometimes people reject development efforts because of their cultural and religious orientation. As an illustration of the importance of spirituality, at other times it could be discovered that people prefer magic charms to development efforts (Myers, 2011: 142). What people believe about the world and its relationship to their well-being shapes the way they respond to everything, including poverty (Hughes & Bennett, 1998: 133-134). This strengthens the role that spiritual sensitivity plays in the initiation and management of development programmes and projects (Nash & Stewart, 2002: 19). Therefore, when a person is transformed, spiritual growth begins, the Holy Spirit indwells the believer (John 14:16-17), and the old nature is replaced with the new (2 Corinthians 5:17). Therefore, people's spiritual transformation is crucial. Hughes and Bennett (1998: 133) elaborate: Even if a community develops economically, politically and socially, unless it encounters God in Jesus, it will still be afflicted by the poverty of not knowing God, […] this spiritual poverty will sooner or later lead to other forms of poverty as other forms of sin such as greed begin to dominate.
This quote shows that the freedom to grow in relationship with God and live out values appropriate to the Kingdom of God is crucial for sustainable development (Byworth, 2003: 109).Transformation therefore, frees people from slavery to other people, institutions, beliefs, oppressive systems and ignorance and paves the way to political, social, economic and spiritual freedom (cf.Bragg, 1987: 43).
The holistic practitioner
The holistic practitioner's attitudes and character play an important role in achieving holistic development.These unique qualities of Christian development are crucial to achieving transformation.
The attitudes of a holistic practitioner
The role of a holistic practitioner is to help in fostering good relationships in the community.Therefore, the fruits of the spirit, such as love, joy, peace, patience, kindness, goodness, faithfulness, gentleness and self-control, are key attitudes of a holistic practitioner (Galatians 5:22-23).In addition, a holistic transformational practitioner must learn to be a good neighbour, i.e. to love and to be kind to neighbours.S/he must have the willingness to be a learner.No or little transformation will take place if the worker and beneficiaries are not ready to learn from one another (see Myers, 2011: 219).
The holistic practitioner's spiritual and moral life must go hand-in-hand with professionalism.Obedience to simple instructions helps workers to maintain their integrity and ensures a true sense of stewardship.A Christian who sets apart and cleanses him/herself for God's service must maintain the evidence of inner transformation that is characterised by a life of purity, moral integrity, and holy living.His or her thoughts and actions must be expressed in an outward life of goodness and godliness (Samaan, 1989: 131).
The characteristics of a holistic practitioner
Understanding the characteristics of a holistic practitioner is crucial, since the focus of TD is on the physical and spiritual transformation of the people.Development workers may have well-designed plans and strategies on how to carry out successful projects, but that may not be adequate if they do not clearly understand what is desired of a holistic practitioner.The first important characteristic of a holistic practitioner is to be a born-again Christian.That is, he or she needs to know and experience God in an intimate relationship, growing towards Christ-likeness while doing the work of God (Samaan, 1989: 131).
Besides being born again, knowledge of the Bible is crucial for effective service.The Word of God is an infallible manual for a holistic practitioner; it teaches how to live for God and to serve him.It is only by careful, devotional Bible study that a worker will not fall into public disgrace (cf. 2 Timothy, 2: 15).Theological and biblical knowledge are key tools that a holistic practitioner can use to handle the physical and spiritual challenges that may be encountered in the project area.
However, being a Christian and a professional cannot be separated in the development approach.Professionalism can fuel a holistic practitioner to success.A Christian worker who wants to make an impact in the project area must have the technical knowhow, skills, strategies and knowledge of the dynamics of needs in the project area.In other words, since the Christian approach is holistic, the Christian worker's knowledge must be holistic too.Knowledge of social science and of scripture is important for engaging in transformational development (Myers, 2011: 225).
It takes a humble and compassionate practitioner to relocate to the project area so as to carefully study and become intimately familiar with the social and cultural system of the communities where s/he will serve.Burkey (1993: 5) echoes Chambers' analysis of the flaws of most people who are keenly concerned with rural development but who are neither rural nor poor.Often the strategies for project conception and implementation they use do not represent the aspirations and interests of the targeted communities.
Conclusion
The paper argues that holistic Christian ministry has unique characteristics. Upholding culture and indigenous knowledge is an important characteristic of transformational development. People's way of life, beliefs, values and customs, as well as the institutions that express them, must be respected. The holistic practitioner must communicate the gospel of word and deed within the context of the beneficiaries, and ensure that indigenous knowledge is valued. Just relations must be the focus of development, as unjust relations are the root causes of the inequalities in society. Thus, the affirmation of people's dignity and self-worth is mandatory for a transformational agenda. People need self-esteem, not dependency (which implies inequality), to be fully human. Therefore, all people should be empowered intellectually, physically, mentally and spiritually to enjoy the God-given blessings of creation. Individuals or communities must be free to grow in their physical, mental, social and spiritual life. | 7,757.4 | 2017-09-07T00:00:00.000 | ["Philosophy"] |
Clinical Significance of Serum Soluble TNF Receptor I/II Ratio for the Differential Diagnosis of Tumor Necrosis Factor Receptor-Associated Periodic Syndrome From Other Autoinflammatory Diseases
Objectives: Genetic analysis of TNFRSF1A can confirm the diagnosis of tumor necrosis factor receptor-associated periodic syndrome (TRAPS), but interpretation of the pathogenesis of variants of unknown significance is sometimes required. The aim of this study was to evaluate the clinical significance of serum soluble tumor necrosis factor receptor type I (sTNFR-I)/II ratio to differentiate TRAPS from other autoinflammatory diseases. Methods: Serum sTNFR-I and sTNFR-II levels were measured using an enzyme-linked immunosorbent assay in patients with TRAPS (n = 5), familial Mediterranean fever (FMF) (n = 14), systemic juvenile idiopathic arthritis (s-JIA) (n = 90), and Kawasaki disease (KD) (n = 37) in the active and inactive phase, along with healthy controls (HCs) (n = 18). Results: In the active phase, the serum sTNFR-I/II ratio in patients with s-JIA, KD, and FMF was significantly elevated compared with that in HCs, whereas it was not elevated in patients with TRAPS. In the inactive phase, the serum sTNFR-I/II ratio in patients with s-JIA and FMF was significantly higher compared with that in HCs, and the ratio was lower in TRAPS patients than in patients with s-JIA and FMF. Conclusions: Low serum sTNFR-I/II ratio in the active and inactive phase might be useful for the differential diagnosis of TRAPS and other autoinflammatory diseases.
INTRODUCTION
Tumor necrosis factor receptor-associated periodic syndrome (TRAPS) is an autosomal dominantly inherited autoinflammatory disease caused by mutations in TNFRSF1A (1). Symptoms of TRAPS include recurrent fever, abdominal pain, myalgia, exanthema, arthralgia/arthritis, and ocular involvement. Clinical features and laboratory parameters in patients with TRAPS and other autoinflammatory diseases, including systemic juvenile idiopathic arthritis (s-JIA), Kawasaki disease (KD), and familial Mediterranean fever (FMF), tend to overlap. These diseases share clinical manifestations such as fever, rash, and arthritis, as well as laboratory findings such as elevated inflammatory markers. Furthermore, there are no definitive biomarkers for these diseases, making the diagnosis difficult. Genetic analysis of TNFRSF1A can confirm the diagnosis of TRAPS, but interpretation of the pathogenesis of variants of unknown significance is sometimes required. Although the pathogenesis of TRAPS remains unknown, low levels of serum soluble tumor necrosis factor receptor type I (sTNFR-I) in TRAPS patients have been reported (1).
In this study, we aimed to demonstrate that the serum sTNFR-I/II ratio may be useful for differentiating TRAPS from other autoinflammatory diseases including FMF, s-JIA, and KD. We measured serum sTNFR-I and sTNFR-II levels in patients with these autoinflammatory diseases and compared them between each disease.
Participants
Five TRAPS patients from three families, 14 FMF patients, 90 s-JIA patients, 37 KD patients, and 18 healthy controls (HCs) were enrolled in this study. Two patients in one family, whom we reported previously (2), had a T50M (p.Thr79Met) heterozygous mutation in TNFRSF1A; two in another family had a C43R (p.Cys72Arg) heterozygous mutation; and one in a third family, previously reported (3), had a C30Y (p.Cys59Tyr) heterozygous mutation in the same gene. The initial diagnoses of two of the patients with the T50M or C30Y mutation were s-JIA, while that of one of the patients with the C43R mutation was FMF. All FMF patients had a mutation in exon 10 of MEFV (13 patients with M694I, one with M694V). The diagnosis of s-JIA was based on the International League of Associations for Rheumatology criteria (4). The diagnosis of KD was based on the classic clinical criteria as follows: fever persisting for at least 5 days; changes in extremities (acute phase: erythema of palms and soles, and edema of hands and feet; subacute phase: periungual peeling of fingers and toes in weeks 2 and 3); polymorphous exanthem; bilateral bulbar conjunctival injection without exudate; changes in lips and oral cavity (erythema, cracked lips, strawberry tongue, diffuse injection of oral and pharyngeal mucosae); and cervical lymphadenopathy (≥1.5-cm diameter) (5). The classic diagnosis of KD was based on the presence of ≥5 days of fever and ≥4 of the five principal clinical features (5).
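Purely as an illustration (not part of the study's methodology), the classic rule just quoted, at least 5 days of fever plus at least 4 of the 5 principal features, can be encoded as a simple check; the feature labels below are invented for the example.

```python
PRINCIPAL_FEATURES = (
    "extremity_changes", "polymorphous_exanthem",
    "bilateral_conjunctival_injection", "oral_mucosal_changes",
    "cervical_lymphadenopathy",
)

def meets_classic_kd_criteria(fever_days, findings):
    """Classic criteria as summarised in the text: fever >= 5 days and >= 4 of 5 features."""
    n_features = sum(bool(findings.get(f)) for f in PRINCIPAL_FEATURES)
    return fever_days >= 5 and n_features >= 4

print(meets_classic_kd_criteria(
    6, {"polymorphous_exanthem": True, "oral_mucosal_changes": True,
        "bilateral_conjunctival_injection": True, "cervical_lymphadenopathy": True}))
```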
The criteria for the active phase of TRAPS, FMF, and s-JIA are defined as follows: fever, rash, arthritis, and serositis along with increased serum C-reactive protein (CRP) levels. The criteria for the inactive phase on medication include no clinical symptoms that can be seen in the active phase as well as normal CRP levels . Serum samples were collected from three patients with TRAPS, 8 with FMF, 90 with s-JIA, and 33 with KD in the active phase. Serum samples were also collected from five patients with TRAPS, 10 patients with FMF, 33 patients with s-JIA and 6 patients with KD in the inactive phase. The clinical characteristics of these patients in the active phase are shown in Table 1. All patients with s-JIA and KD had fever, but one patient with TRAPS and one patient with FMF had no fever in the active phase. Most patients with s-JIA and KD had rash, and most patients with TRAPS and FMF had serositis. Only one patient with TRAPS was treated with a low dose of prednisone. The patients with FMF, KD, and s-JIA in the active phase received no treatments including prednisone, colchicine, immunosuppressants, and biologics.
This study was approved by the Institutional Review Board of Kanazawa University. All participants provided written informed consent. The study was performed in accordance with the ethical standards laid down in an appropriate version of the 1964 Declaration of Helsinki.
Quantification of Serum Cytokines
Sera were extracted from blood samples, divided into aliquots, frozen, and stored at −80 °C until analysis. Serum levels of sTNFR-I and sTNFR-II were measured using a commercial enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's instructions (R&D Systems, Inc., Minneapolis, MN, USA).
Statistical Analysis
Statistical analysis was performed using GraphPad Prism 7 software (GraphPad, San Diego, CA, USA). Serum sTNFR-I and sTNFR-II levels and sTNFR-I/II ratio were presented as the median and interquartile range (IQR). Comparisons between several groups were performed using one-way analysis of variance with Tukey's multiple comparisons test. A P-value of < 0.05 was considered statistically significant.
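For readers who want to reproduce this kind of group comparison, the sketch below (not the authors' GraphPad workflow) runs a one-way ANOVA followed by Tukey's multiple comparisons test in Python; the sTNFR-I values are hypothetical placeholders, not study data.

```python
# Illustrative sketch of a one-way ANOVA across disease groups followed by
# Tukey's multiple comparisons test. Values below are placeholders only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "TRAPS": np.array([900.0, 950.0, 1000.0]),
    "sJIA":  np.array([2400.0, 2900.0, 3500.0]),
    "HC":    np.array([800.0, 850.0, 1050.0]),
}

# Global test: does at least one group mean differ?
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons with Tukey's HSD (family-wise alpha = 0.05).
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```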
Comparison of Serum sTNFR-I and sTNFR-II Levels and sTNFR-I/II Ratio in TRAPS and Other Autoinflammatory Diseases in the Active Phase
We measured serum sTNFR-I and sTNFR-II levels in patients with TRAPS in the active phase and compared our findings with those observed in FMF, s-JIA, and KD patients and HCs. As shown in Figure 1A and Table 2, serum sTNFR-I levels were significantly elevated in the active phase in patients with s-JIA (median, 2,900 pg/mL; IQR 2,240-3,563) (p < 0.0001) and KD (median, 2,400 pg/mL; IQR 1,860-3,160) (p < 0.0001) compared with HCs (median, 835 pg/mL; IQR 795-1,083). Serum sTNFR-I levels were significantly elevated in the active phase in patients with s-JIA compared with FMF (median, 1,260 pg/mL; IQR 1,113-1,635) (p < 0.01) and TRAPS (median, 920 pg/mL; IQR 890-1,000) (p < 0.01). However, serum sTNFR-I levels in patients with TRAPS were not elevated compared with those in HCs and were significantly lower compared with those in patients with s-JIA. Serum sTNFR-I levels in patients with TRAPS were also lower compared with those in patients with FMF, although this was not statistically significant. As shown in Figure 1B and Table 2, serum sTNFR-II levels were significantly elevated in the active phase in patients with s-JIA (median, 6,250 pg/mL; IQR 4,550-7,963) (p < 0.0001) and KD (median, 7,250 pg/mL; IQR 4,410-9,390) (p < 0.0001) compared with those in HCs (median, 3,125 pg/mL; IQR 2,730-3,775). Serum sTNFR-II levels were significantly elevated in the active phase in patients with s-JIA (p < 0.05) and KD (p < 0.05) compared with those in FMF patients (median, 3,340 pg/mL; IQR 2,375-3,858). Serum sTNFR-II levels in patients with KD were also significantly elevated in the active phase compared with those in patients with TRAPS (median, 3,100 pg/mL; IQR 2,330-3,550) (p < 0.05).
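As an illustration of how the sTNFR-I/II ratio discussed later is formed, the snippet below simply divides the group medians reported above (active phase); note that a ratio of group medians is only indicative, since the study evaluates the ratio per patient.

```python
# Illustration only: forming the sTNFR-I/II ratio from the reported group
# medians (active phase). A ratio of medians is not the median of per-patient
# ratios; the study computes the ratio for each patient.
medians_sTNFR_I  = {"TRAPS": 920, "FMF": 1260, "sJIA": 2900, "KD": 2400, "HC": 835}
medians_sTNFR_II = {"TRAPS": 3100, "FMF": 3340, "sJIA": 6250, "KD": 7250, "HC": 3125}

for group in medians_sTNFR_I:
    ratio = medians_sTNFR_I[group] / medians_sTNFR_II[group]
    print(f"{group}: sTNFR-I/II ~ {ratio:.2f}")
```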
Comparison of Serum sTNFR-I and sTNFR-II Levels and sTNFR-I/II Ratio in TRAPS and Other Autoinflammatory Diseases in the Inactive Phase
We also measured serum sTNFR-I and sTNFR-II levels in patients with TRAPS in the inactive phase and compared these values with those obtained for patients with s-JIA, FMF, and KD and for HCs. As shown in Figure 1D and Table 2, serum sTNFR-I levels were significantly lower in the inactive phase in patients with TRAPS (median, 444 pg/mL; IQR 350-495) compared with those in s-JIA patients (median, 1,040 pg/mL; IQR 685-1,380) (p < 0.01) and KD patients (median, 1,450 pg/mL; IQR 1,238-1,698) (p < 0.001). Serum sTNFR-I levels in patients with TRAPS were also significantly lower compared with those in HCs (median, 835 pg/mL; IQR 795-1,083) (p < 0.05).
Distribution Map of Serum sTNFR-II and sTNFR-I/II Ratio
As shown in Figure 2A, in the active phase, serum sTNFR-I levels and sTNFR-I/II ratio in patients with s-JIA and KD were high. In contrast, both values in TRAPS patients were similar to those in HCs. In patients with FMF, they were mildly elevated and higher than those in patients with TRAPS.
As shown in Figure 2B, in the inactive phase, serum sTNFR-I levels and sTNFR-I/II ratio in patients with TRAPS were lower than those in HCs.
DISCUSSION
In this study, we demonstrated that in the active phase, the serum sTNFR-I/II ratio in patients with s-JIA, KD, and FMF was significantly elevated compared with that in HCs, whereas it was not elevated in patients with TRAPS. In the inactive phase, the serum sTNFR-I/II ratio in patients with FMF and s-JIA was significantly higher compared with that in HCs, but was lower in patients with TRAPS compared with FMF and s-JIA. Based on these findings, a low serum sTNFR-I/II ratio in both the active and inactive phases might be useful for the differential diagnosis of TRAPS versus other autoinflammatory diseases prior to genetic analysis for TRAPS.
TRAPS is an autosomal dominantly inherited autoinflammatory disease caused by mutations in TNFRSF1A (1). The pathogenesis of TRAPS remains unknown and is under investigation. One possible explanation is the shedding hypothesis (1). In normal conditions, after activation of the (7). The structurally altered mutant of TNFR-I failed to interact with the wild-type receptor and formed abnormal self-aggregates that were retained in the endoplasmic reticulum. Misfolding of TNFR-I in the ER induces an inflammatory response through the unfolded protein response (8), ligand-independent NFκB activation (9-11), and generation of mitochondrial reactive oxygen species (12). This misfolding hypothesis might explain how the inflammatory phenotype of TRAPS may be associated with the induction of cytokines, such as IL-1β, due to an unfolded protein response. Clinical features and laboratory parameters in patients with TRAPS often overlap with those of other autoinflammatory diseases, particularly FMF, s-JIA, and KD. Furthermore, there are no definitive biomarkers for these diseases. This situation makes the clinical diagnosis of these patients difficult. Our patients with TRAPS were initially diagnosed with s-JIA or FMF. Furthermore, the patient with TRAPS diagnosed as FMF had undetermined mutations outside of exon 10 of MEFV. Thus, genetic analysis is not always the best approach for diagnosing these diseases, particularly in patients with ambiguous genetic mutations. McDermott et al. reported that serum sTNFR-I levels were not elevated even in the active phase in patients with TRAPS (1). However, we previously reported that serum sTNFR-I levels were significantly elevated in KD and s-JIA (13). From these findings, we hypothesized that serum sTNFR-I levels might be useful for differentiating TRAPS from other autoinflammatory diseases whose clinical features are similar to those of TRAPS.
In this study, serum sTNFR-I levels in patients with TRAPS were not elevated even in the active phase. Furthermore, serum sTNFR-I levels were lower than those in HCs in the inactive phase. In contrast, serum sTNFR-II levels in patients with TRAPS did not differ from those in HCs in either the active or the inactive phase. Although we examined only patients with TRAPS with T50M, C43R, and C30Y mutations of TNFRSF1A, McDermott et al. also reported that serum sTNFR-I levels of TRAPS patients with C33Y, T50M, C88Y, and C52F mutations of TNFRSF1A in the inactive phase were lower than those in HCs, and in the active phase they were more elevated than in the inactive phase, but were not as high as levels in rheumatoid arthritis and systemic lupus erythematosus. Serum sTNFR-II levels of TRAPS patients with C33Y and C52F mutations in TNFRSF1A were similar between the active phase and the inactive phase (1). These findings indicate that low serum levels of sTNFR-I, both in the active and inactive phases, are characteristic of TRAPS. The diagnosis of TRAPS is confirmed by genetic testing, which should be considered in suspected patients whose serum sTNFR-I levels are insufficiently elevated in the active phase compared with other autoinflammatory diseases and significantly decreased in the inactive phase compared with HCs.
In this study, serum sTNFR-I levels in patients with s-JIA were significantly elevated in the active phase compared with those in FMF patients. Serum sTNFR-II levels in patients with s-JIA and KD were significantly elevated in the active phase compared with those in FMF patients. The serum sTNFR-I/II ratio in patients with FMF was significantly elevated compared with that in HCs, and there were no differences in the ratio among FMF, s-JIA, and KD. These findings indicate that both sTNFR-I and sTNFR-II are increased in patients with s-JIA and KD, whereas sTNFR-I is predominantly increased and sTNFR-II is not increased in patients with FMF. Furthermore, serum sTNFR-II levels in patients with FMF were significantly lower in the inactive phase compared with those in s-JIA and KD patients. During cell-mediated immune responses, sTNFR-II is mainly shed from stimulated monocytic cells and lymphocytes, whereas other cells responding to IFN-γ preferentially shed sTNFR-I (14), although it is unclear why. Monocytes/lymphocytes might therefore contribute more to the pathogenesis of s-JIA and KD than to that of FMF, to which neutrophils mainly contribute.
This study had some limitations. First, the number of TRAPS patients was very small, and we measured serum levels of sTNFR-I and sTNFR-II only in TRAPS patients with T50M, C43R, and C30Y mutations in TNFRSF1A; further studies evaluating these levels in TRAPS patients with other mutations in TNFRSF1A are necessary. Second, we did not perform a cost-benefit analysis. Third, in general, cytokine measurement by ELISA is limited to the laboratory level. Larger studies may help to define the true diagnostic value of sTNFR-I and the sTNFR-I/II ratio as clinical markers.
In conclusion, the serum sTNFR-I/II ratio may be a useful indicator for differentiating TRAPS from FMF, s-JIA, and KD. In particular, decreased serum sTNFR-I levels and sTNFR-I/II ratio in the inactive phase, together with the absence of an increase in the active phase, may be useful in cases of suspected TRAPS. Hence, genetic tests for TRAPS should be considered in patients with these findings.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board of Kanazawa University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
JY, MS, TT, and MY were involved in the acquisition of data and analysis and interpretation of data. JY and MS wrote the manuscript. All authors were involved in the conception, design of the study, revising it critically for important intellectual content, and read and approved the final manuscript. | 3,574.2 | 2020-10-14T00:00:00.000 | [
"Medicine",
"Biology"
] |
Determination of toxicity and chromatographic analysis of spilanthol content in in vitro culture of Spilanthes oleracea Jacq
1 Programa de PósGraduação em Biotecnologia Vegetal, Universidade Federal do Rio de Janeiro, CCS, Bloco K, Cidade Universitária, Ilha do Fundão, 21952-590 Rio de Janeiro RJ, Brazil. 2 Faculdade de Farmácia Universidade Federal do Rio de Janeiro, CCS, Bloco A, Cidade Universitária, Ilha do Fundão, 21949-590 Rio de Janeiro RJ, Brazil. 3 Instituto de Ciências BiomédicasUniversidade Federal do Rio de Janeiro, CCS, Bloco B, Cidade Universitária, Ilha do Fundão, 21949-590 Rio de Janeiro RJ, Brazil. 4 Universidade Federal do Estado do Rio de Janeiro – UNIRIO, Departamento de Botanica, Avenida Pasteur 458, 22290-040 Rio de Janeiro – RJ, Brazil.
INTRODUCTION
Spilanthes oleracea Jacq. is a plant of the Amazon region that is commonly utilized in local cuisine and folk medicine. The entire plant is claimed to have medicinal properties (Dubey et al., 2013). The leaves are eaten raw or as a vegetable by many tribes in India (Chakraborty et al., 2004) and also in the Amazon region. It is commonly known as "toothache plant" or "Paracress" (Wyk and Wink, 2009). The alkamide spilanthol, which occurs in several members of Asteraceae including S. oleracea, causes a pronounced tingling and mouthwatering effect upon ingestion. Industrial applications of this substance include oral care (Hirayama and Ikenishi, 2010), and as a flavoring and preservative in food (Miyazawa and Yamaguchi, 2010; Tanaka and Yagi, 2009). In cosmetics, it has recently been employed as an anti-ageing ingredient, among other applications (Demarne and Passaro, 2008). Besides these applications, the leaf extract showed larvicidal activity against Aedes aegypti, raising the possibility that it could be used as an important tool in the control of dengue (Ramsewak et al., 1999; Pandey et al., 2007, 2011).
The content of spilanthol is generally higher in the field-harvested flower heads, as also reported for Echinacea and Spilanthes (Perry et al., 1997; Nayak and Chand, 2002), than in other tissues, showing a tissue-specific distribution. Plant tissue culture can be used to induce quantitative and qualitative modifications in the production of plant secondary metabolites, by changing nutrient and hormone contents in the culture medium (Abyari et al., 2016; Collin, 2001). In addition, tissue culture eliminates the effect of climate conditions and diseases to which field-grown plants are subject.
Earlier studies of in vitro culture of different species showed that the accumulation of different secondary metabolites can be efficiently induced by various elicitors. Abyari et al. (2016) demonstrated that the application of casein hydrolysate and L-phenylalanine is effective for the production of scopoletin. Alkamides were induced by methyl jasmonate (MeJa) in Echinacea pallida (Binns, 2001). Romero et al. (2009) demonstrated the efficacy of E. pallida, Echinacea purpurea and Echinacea angustifolia hairy root cultures in the in vitro production of alkamides. Moreover, in vivo elicitation through foliar application of elicitors [acetylsalicylic acid, salicylic acid, and methyl salicylate, as well as the metal elicitor titanium (IV) ascorbate] on E. purpurea increased the phenolic content up to 10 times compared to the control, and also increased the biomass yield (Kuzel et al., 2009). Co-culture of different organs/species together has also been attempted, since co-culture provides the opportunity for metabolites produced by one organ/species to be excreted into the medium and taken up by another. Sidwa-Goricka et al. (2003) established a co-culture of hairy roots of Ammi majus and a cell-shoot suspension culture of Ruta graveolens to investigate possible interactions of the metabolic pathways of coumarins, whereas Wu et al. (2008) established the co-culture of ginseng (Panax ginseng) and E. purpurea adventitious roots for the production of secondary metabolites.
This study aimed to assess spilanthol production in S. oleracea in vitro culture under the influence of benzyladenine (BA) and methyl jasmonate (MeJa), and evaluate the co-cultivation effect and acute toxicity of the crude extract from field-grown plants.
Plant material and tissue culture
Seeds of S. oleracea Jacq. were collected in Belém, State of Pará, Brazil, and were identified by Dr. Ricardo Secco of the Emílio Goeldi Museum, Belém. Voucher specimens were deposited in the Emílio Goeldi Museum under catalogue number Herbarium MG 156.773. Four-week-old in vitro germinated seedlings were used as a source of explants for initiation of cultures in MS medium (Murashige and Skoog, 1962). Nodal segments excised from in vitro-cultured seedlings were inoculated in different treatments: culturing in the presence or absence of agar (30 and 90 days), under the influence of BA (2.22 and 4.44 µM).
The elicitors (2.0 µL of salicylic acid and 45 µL of methyl salicylate (Sigma-Aldrich) added to ethanol, to a final concentration of 100 ppm (Binns, 2001)) were placed on cotton pieces positioned alongside the plants (after 60 days of culture on growth regulator-free MS medium). In addition, co-cultivation of nodal segments of S. oleracea and Polygala paniculata L. was performed. P. paniculata L. in vitro culture has been shown to produce and release methyl salicylate (Victorio et al., 2011). The cultures were maintained in a growth chamber under cool-white fluorescent lighting tubes (1.6 W m-2, 23 µmol m-2 s-1 and a daily photoperiod of 16 h at 25 ± 2°C).
Spilanthol extraction and GC analysis
Freeze-dried leaves, shoots and calli were macerated for 2 days in 90% chloroform (p/v) according to Simas (2003). The extract was then filtered and the solvent volume was reduced in a rotary evaporator. The resulting dry residue was weighed to determine the yield of crude extract for each treatment (Table 1).
GC-FID
Quantitative analysis of the extracts (30 mg/mL) was performed in a gas chromatography system (Shimadzu GC-17A) equipped with a flame ionization detector (FID) and a DB-5 capillary column (30 m, 0.32 mm, 0.25 µm), and 1 µL of each sample was injected with a split-mode injector (1:6) into a flow of hydrogen carrier gas held constant at 1 mL/min. The oven temperature was programmed for an initial temperature of 100°C, increasing at 10°C/min up to 200°C, held for 20 min, then a second ramp at 3°C/min to a final temperature of 250°C, held for 5 min. The temperatures of the injector and detector were held at 250 and 200°C, respectively. The percentage content of spilanthol was calculated by integrating the areas of the corresponding signals.
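A minimal sketch (not the authors' integration software) of the area-percent calculation described above, expressing spilanthol as its peak area relative to the total integrated chromatogram area; the peak areas used here are hypothetical.

```python
# Relative area (%) of spilanthol = 100 * (spilanthol peak area / total area).
# Peak areas below are hypothetical placeholders for illustration only.
peak_areas = {"spilanthol": 1.85e6, "peak_2": 0.62e6, "peak_3": 0.53e6}

total_area = sum(peak_areas.values())
relative_abundance = 100.0 * peak_areas["spilanthol"] / total_area
print(f"Spilanthol relative area: {relative_abundance:.1f}%")
```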
Qualitative analysis was performed in a mass spectrometer (Hewlett-Packard, model HP-5971 A) coupled to a gas chromatograph, model HP-5890 A, Series II, equipped with a DB-5 capillary column (30 m, 0.32 mm, 0.25 µm). Experimental conditions were: ionization by electron impact at 70 eV, with helium as carrier gas at a flow rate of 1 mL/min. The National Institute of Standards and Technology (NIST, 1990) database was used for comparison of mass spectra.
Acute toxicology assay
Thirty female albino Swiss mice (25-30 g), two months old, were obtained from the central animal house of the Microbiology Institute/UFRJ. They were housed in standard polypropylene cages, five per cage, and kept under controlled room temperature (28 ± 2°C; relative humidity 50-55%) in a 12-h light cycle. The mice were given a standard laboratory diet and water ad libitum. Food was withdrawn 12 h before and during the experimental period. The leaf ethanolic extract from field-grown plant was dissolved in 10% DMSO and administered to the animals orally by gavage. Two doses of the ethanolic extract (300 and 3000 mg/kg) were tested through intraperitoneal administration. On the first day of treatment, the animals were observed for 3 h, for any behavioral changes or deaths. After ten days of treatment, the animals were anesthetized lightly with ether and killed by cardiac puncture, and their body and organs (liver and spleen) were weighed. All experimental protocols were approved by the institutional Animal Ethics Committee (IBCCF/UFRJ 036).
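For orientation, the short calculation below converts the stated doses (mg per kg body weight) into the absolute amount of extract per animal, assuming a 25 g mouse from the reported 25-30 g range; it is an illustrative calculation, not part of the study protocol.

```python
# Convert a dose expressed in mg/kg body weight into mg of extract per mouse.
doses_mg_per_kg = [300, 3000]
body_weight_kg = 0.025  # 25 g mouse, lower end of the stated 25-30 g range

for dose in doses_mg_per_kg:
    amount_mg = dose * body_weight_kg
    print(f"{dose} mg/kg -> {amount_mg:.1f} mg of extract per 25 g mouse")
```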
Statistical analysis
All experiments used a fully randomized block design. Each experiment consisted of 5 nodal segments/vessel and 6 replicates per treatment (plant growth regulator, elicitors). All the experiments were repeated three times. The data were subjected to one-way ANOVA, and mean values were compared by Dunnett's multiple comparison test or by the nonparametric Kruskal-Wallis test followed by Dunn's multiple comparison test as post-test, using the software GraphPad InStat, version 6.01. Rooting percent data were tested for the significance of the difference between two percentages, at the 5% significance level, using the software Statistica for Windows, version 5.0. For the quantitative analysis of phytochemical compounds, the data were collected from two independent experiments, and are presented as the mean values.
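A minimal sketch (not the authors' GraphPad/Statistica workflow) of the non-parametric comparison named above, using a Kruskal-Wallis test across treatments; the shoot counts are hypothetical placeholders, and a post-hoc test such as Dunn's would then locate which treatments differ.

```python
# Kruskal-Wallis test across three treatments (placeholder shoot counts).
from scipy import stats

control   = [2, 3, 2, 4, 3, 2]
ba_2_22uM = [5, 6, 4, 7, 6, 5]
ba_4_44uM = [6, 5, 7, 6, 8, 6]

h_stat, p_value = stats.kruskal(control, ba_2_22uM, ba_4_44uM)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")
# A post-hoc test (e.g., Dunn's multiple comparison) would follow if p < 0.05.
```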
In vitro culture of S. oleracea
Liquid or low-agar-concentration media increase the growth of certain species cultured in vitro (Casanova et al., 2008; Abdoli et al., 2007), which is caused by greater availability of water and nutrients (Debergh, 1983); however, S. oleracea showed no difference in in vitro shoot development on liquid or solid media (Table 2). Liquid cultures are suitable for bud development and shoot multiplication, with BA added to the culture medium. Consequently, liquid medium was used throughout this study. The growth of S. oleracea nodal explants was improved by the addition of BA (Figure 1a, b and Table 2), as previously reported (Haw and Keng, 2003; Saritha et al., 2002; Bais et al., 2002; Deka and Kalita, 2005; Saritha and Naidu, 2007). Furthermore, the assayed concentrations (2.22 and 4.44 µM) did not show significant differences in the explant responses, except in relation to root development. BA increased the multiplication rate (numbers of buds and nodal segments); however, rooting of BA-grown plantlets was inhibited, with no significant changes in shoot length. Basal-shoot calli were induced to the detriment of root development (Figure 1b). At the end of the cycle (multiplication phase), 70% of the plantlets had rooted. Auxin supplementation was not necessary for root formation.
The co-cultivation with P. paniculata (Figure 1c) resulted in an increase of the height of the plants, accompanied by increased bud neoformation. The significant difference in plant height between the co-culture and control reinforces the hypothesis that the methyl salicylate produced by P. paniculata may act as an allelopathic compound, since co-culturing stimulated the growth of S. oleracea. All plantlets acclimatized well, and the establishment of micropropagated plants occurred at a high rate (96%) (Figure 1d).
Gas chromatographic analysis of spilanthol content in micropropagated plants
Spilanthol content was determined in different parts of in vitro grown plants and in each tissue type analyzed (Table 3, Figure 2). The roots from in vitro plants, including acclimatized plants, contained no spilanthol (Table 3). Franca et al. (2016) determined the spilanthol content in all organs of in vitro culture of Acmella oleracea using Murashige and Skoog as basal medium and phytagel as the gelling agent. The aerial parts (leaves and stems) of plants exposed to 100 ppm salicylic acid and methyl salicylate did not contain spilanthol. However, the plants co-cultivated with P. paniculata maintained their capacity to synthesize spilanthol, with about 11.9% relative abundance (RA) (Table 3). Possibly, the concentration of elicitors may be related to the difference in responses between experiments involving methyl salicylate alone and methyl salicylate released from P. paniculata.
In order to increase the levels of alkamides in vitro, Binns (2001) used elicitors such as jasmonates. Basal calli grown in the medium exhibited spilanthol contents increasing from 2.1 to 6.0% RA in 4.44 µM BA (Figures 1c and 2). The highest spilanthol accumulation was found in MS control plants after 90 days of culture, reaching 58.5% RA of the total content, against 19.0% in a 30-day culture (Table 3). However, when these highest-accumulating plants were transferred to field conditions, the spilanthol content was low (4.0%) (Table 3). Extracts from flowers enriched with spilanthol contained 1.2% in the pentane extract, 6.17% in the methanol extract, and up to 17% in the CO2 supercritical fluid extract in different species of field-grown Spilanthes (Stashenko et al., 1996; Ramsewak et al., 1999). Data for spilanthol content specifically in leaves and stems are lacking, and the existing reports emphasize the contents in flowers or homogenized aerial parts of S. oleracea. However, Stashenko et al. (1996) calculated a spilanthol content of 21%, as measured by area percent of the gas chromatogram, in a CO2 supercritical fluid extract from leaves of S. americana.
The in vitro culture of S. oleracea showed the influence of nutrient-enriched Murashige and Skoog medium on spilanthol accumulation. This accumulation increased with culture time, being highest at 90 days of culture, and organ-specific production of spilanthol in the leaves, stems and even in the callus was observed. The results show the importance of in vitro MS culture, as compared to field culture, which resulted in much lower spilanthol accumulation. In in vitro culture, S. oleracea reached a high spilanthol content within three months, obtained from small amounts of plant material extracted by maceration in chloroform.
Acute toxicology assay
After ten days of the experiment, all animals treated intraperitoneally with the ethanol extracts of S. oleracea leaves (300 and 3000 mg/kg) were alive, with no external physical abnormalities. No notable changes were observed in the weights of liver and spleen from animals treated with the extract, in any dose tested, indicating no acute toxicity (Table 4). Chakraborty et al. (2004) conducted experiments with intraperitoneal administration of an aqueous extract of aerial parts (100, 200 and 400 mg/kg) for four hours, to evaluate the analgesic activity in albino mice. At the same time, the authors evaluated the acute toxicity of an aqueous extract of S. acmella, and observed no adverse effect or mortality in albino mice that ingested up to 3 g/kg p.o. during a 24-h observation period.
Spilanthol is one of several compounds that are extractable by ethanol from S. oleracea leaves. This was demonstrated by Molina-Torres et al. (1999) in their spilanthol extraction and purification procedure from an ethanol extract. Ethanol, as used in this study, yielded a Spilanthes extract that proved safe in the animal-model tests. The absence of subchronic toxicity was also observed in other studies, such as those of Zuluaga et al. (2008), who administered an ethanolic extract of Spilanthes americana to Swiss albino mice, and Ekor et al. (2005), who evaluated an aqueous extract of Spilanthes filicaulis.
Conclusion
The in vitro propagated S. oleracea in liquid-medium culture maintained the capacity to synthesize spilanthol in the leaves, stems and calli. The roots of in vitro-grown plantlets contained no spilanthol. Spilanthol was present in the regenerated plants, although in much lower amounts than in in vitro culture. In this study, in vitro culture proved to be an efficient method to obtain spilanthol in a liquid medium, with plant organ-specific biosynthesis of the compound and without the need for the time-consuming addition of a growth regulator. The ethanolic extract obtained from field-grown leaves was proven to be safe by an acute test in mice. The regeneration protocol and GC analysis developed here provide a new approach towards quality control of micropropagated plants. This method of producing secondary metabolites has significant implications for the production of standardized-quality phytopharmaceuticals through mass production and analysis of the active ingredients.
Figure 1 .
Figure 1. Plant regeneration from nodal segments of S. oleracea. a - Shoot development from axillary buds induced by 2.22 μM BA liquid medium after 30 days. b - Basal and friable calli development in 4.44 μM BA liquid medium after 30 days. c - Co-culture of Spilanthes oleracea and Polygala paniculata (arrow). d - Acclimatized plants.
Figure 2 .
Figure 2. GC fingerprint of plant materials of S. oleracea. A - Chromatographic profile of field-grown flowers of S. oleracea. B - Spilanthol fragmentation pattern. C to F - Chloroformic extracts of samples from in vitro culture: C - leaves; D - calli obtained from 4.4 µM BA; E - shoot; F - root.
Table 1 .
Yield of dried chloroform extracts (%, m/DM) obtained from the dry mass of in vitro plants of S. oleracea for different treatments.
Table 2 .
Spilanthes oleracea in vitro development. Effect of liquid and solid media, BA concentrations and co-culturing with Polygala paniculata.
Table 3 .
Values of relative area (%) of spilanthol content from plant organ extract for each treatment.
Table 4 .
Mean weights (g) of mouse organs obtained from the animals used (n = 5/treatment) for acute toxicology testing. The material administered was an ethanol extract of aerial parts of field-grown plants of S. oleracea.
"Chemistry",
"Environmental Science"
] |
Study on hydraulic spray atomizing system as a new resource-efficient dyeing-finishing method for wool fabric
This study introduces a hydraulic spray (HS) atomizing system as a new resource-efficient continuous dyeing-finishing method for wool fabric. Wool fabric was dyed and finished using commercial dyes and finishes through either a one-step or a two-steps HS method. Results obtained from color strength (K/S), color difference (ΔECMC) and color fastness analysis demonstrated the applicability of the HS method in dyeing wool fabrics of different GSM with different dyes. The finishing performance of the wool fabric was measured through water contact angle analysis, which showed that the HS method was sufficient to reach water contact angles as high as 145° while maintaining high fastness to washing and abrasion. Between the one-step and two-steps HS methods, the one-step method showed better performance with higher resource efficiency. Statistical analysis showed no statistically significant effect of fabric weight, type of dye, or type of finish on the performance of the new HS method, which is crucial for true-scale industrial implementation and scaling up of this process. The findings of this report are of great importance as they present a greener alternative to the conventional resource-intensive dyeing-finishing methods for wool fabric.
Atomization is essentially the process of converting a bulk liquid into small drops. It is a disruption of the consolidating influence of surface tension caused by the action of internal and external forces. Spray atomization is the transformation of a liquid into a spray of fine particles 1 . This process is widely utilized for distributing material over a controlled surface area in various fields due to its high process control, low waste generation, and easy operation. Spraying is the most widely used means of pesticide application for pest control in agriculture and forestry 2 . Recently, hydraulic spraying technology has attracted the attention of many researchers for the functionalization of textiles due to its feasibility, sustainability, and economic benefits [3][4][5] . Li, Arumugam et al.
(2020) reported a fully spray-coated organic solar cell fabricated directly onto standard polyester cotton fabric 6 . Samanta and Bordes (2020) proposed a preparation method for conductive textiles by spray coating of water-based graphene dispersions 7 . Sadanandan, Bacon et al. (2020) reported that spray coating of graphene on textile fabrics is emerging as one of the more promising techniques to overcome the limitations of the irregular and coarse structures of textile fabrics 8 . Spray coating is a potential process for realizing thinner films on textiles. On this basis, spray technology can be a recognized alternative to spin coating and a non-contact deposition process, as opposed to, for example, screen printing. Spray coating also benefits from a wider range of acceptable rheological parameters compared to digital printing, which strictly limits these properties.
The hydraulic spray atomizing system is a continuous process that sprays the desired material onto the fabric through an atomizer 9 . In this system it is no longer necessary to prepare large deposits of chemicals. Besides that, during processing, there are no physical and chemical interactions (as in conventional methods), which protects the inherent characteristics of the treated material 10 . In addition, the process reduces the discharge of waste, as fewer or no chemicals are needed compared to conventional methods, which results in a reduction in energy and other resource consumption in subsequent waste management/treatment processes 11,12 .
At present, sustainability in production processes is a serious concern in textile processing industries globally. Among the many challenges in conventional textile processes, resource intensity and waste generation are prominent.

Preparation methods of dyeing-finishing of wool fabric (HS and Conventional). In this study, both one-step and two-steps dyeing-finishing of wool fabric using the HS method were studied and compared with conventional processes. The MiniMax hydraulic spray atomising system from Imogo AB was used along with a laboratory-scale FlexDyer. The details of the MiniMax HS atomizing system and the machine parameters can be found in Sect. 1.1 of the supporting information. For all studies, samples were conditioned at 20 ± 2 °C and 55 ± 5% relative humidity for 24 h, and all parameters were chosen based on respective preliminary studies (see Sect. 1.2 of the supporting information). The processes involved in this study are presented schematically in Fig. 1 and described as follows. (a) Two-steps dyeing-finishing process of wool fabric through the HS method: In the two-steps process, wool fabrics were dyed and finished in separate processes (see Fig. 1a). The spray solution to fabric ratio was 1:0.4 with an 80% pick-up rate, followed by standard dye fixation in a laboratory autoclave (98 °C for 90 min). Spray solutions were prepared by dissolving either acid dye (Dye 1: 8.75 g/L, pH ~ 3) or reactive dye (Dye 2: 20 g/L, pH ~ 4.5) in water. After dyeing, the samples were rinsed and dried at ambient conditions before applying the commercial hydrophobic finishes (Finish 1: 80 mL/L, pH ~ 5 and Finish 2: 125 mL/L, pH ~ 4) at 80% pick-up, followed by drying (W1: 160 °C for 2 min, W2: 160 °C for 1 min) and curing (W1: 170 °C for 1 min, W2: 170 °C for 0.5 min) in a Mathis lab stenter machine. In this experiment, the samples were sprayed and finished on both sides. (b) One-step dyeing-finishing process of wool fabric through the HS method: Wool fabrics were dyed and finished at the same time at an 80% pick-up rate (see Fig. 1b). Dye and finish solutions were prepared separately and mixed to give four spray solutions involving two dyes and two commercial hydrophobic finishing agents: Acid + F1 (pH 3.5), Acid + F2 (pH 3.5), Reactive + F1 (pH 4.5), Reactive + F2 (pH 4.5). The spray was applied on both sides of the fabric before fixation in a laboratory autoclave (98 °C). After fixation, the samples were dried and cured in a Mathis lab stenter machine in the same way as in the two-steps method. (c) Conventional dyeing-finishing process of wool fabric: Wool fabrics were dyed in an exhaust dyeing machine followed by finishing through a pad-dry-cure method (see Fig. 1c). Typically, for dyeing, the liquor ratio was 1:20. The solution was prepared by dissolving either acid dye (0.35 g/L) or reactive dye (0.8 g/L), and fabrics were dyed following the respective standard dyeing curves. In total, 24 samples were prepared and comparatively investigated in this study to understand the feasibility of the HS atomizing method for dyeing-finishing of a wool fabric as an advanced resource-efficient process. A summary of all samples and their corresponding descriptions is provided in Table 1.
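To make the spray recipes above more concrete, the sketch below estimates the chemical add-on implied by a bath concentration and wet pick-up, assuming a liquor density of about 1 kg/L so that 80% pick-up corresponds to roughly 0.8 L of liquor per kg of fabric; this is an illustrative calculation, not the paper's own accounting.

```python
# add-on (g per kg fabric) ~ concentration (g/L) * pick-up (L liquor per kg fabric),
# assuming liquor density ~1 kg/L so 80% pick-up ~ 0.8 L per kg of fabric.
def add_on_g_per_kg(concentration_g_per_L: float, pick_up_fraction: float) -> float:
    return concentration_g_per_L * pick_up_fraction

print(add_on_g_per_kg(20.0, 0.80))   # reactive dye, 20 g/L at 80% pick-up -> 16 g/kg
print(add_on_g_per_kg(8.75, 0.80))   # acid dye, 8.75 g/L at 80% pick-up  -> 7 g/kg
```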
Material characterizations.
The characterization of the samples was carried out in terms of color measurements (dyeing), water contact angle (finishing) and fastness of dyes and finishes towards washing and abrasion. The as-prepared dyed and finished wool fabrics were fully characterized to understand the effectiveness of the newly introduced advanced resource-efficient HS dyeing-finishing method. To study the dyeing performance, color measurements of the samples were done with a Datacolor 500 spectrophotometer. The color data were measured in the visible spectrum region of 360-700 nm and converted into tristimulus values that describe a specific point in the color space. With this measurement tool, two different color values were measured and used in the color assessment. The color strength was measured through the Kubelka-Munk equation (Eq. 1) using the reflectance of the dyed samples (R), the absorption coefficient (K) and the scattering coefficient (S), and expressed as K/S.
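Eq. (1) is not reproduced in this excerpt; the standard single-constant Kubelka-Munk form, presumably the relation referred to, reads:

\[ \frac{K}{S} = \frac{(1-R)^{2}}{2R} \]

where R is the reflectance of the dyed sample, so a lower reflectance (deeper shade) gives a higher K/S value.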
Besides the color strength, the color difference between samples was measured and expressed as the ΔE CMC value. To get a mean value of the K/S and ΔE CMC values, four different readings of reflectance at different positions on each sample were used. These measurements were done with three replicates of each sample. The samples were conditioned before measuring, and to get an accurate color measurement, the fabric was folded so that a double layer was measured. To assess the performance of the hydrophobic finish, the hydrophobicity of the samples in terms of water contact angle was measured using an optical tensiometer from Biolin Scientific (Attension Theta). The water contact angle (θH2O) was measured by taking the average contact angle after 2 s, once the water droplet (drop size 5 μL) had stabilized on the fabric surface. Three independent measurements were carried out on each sample and the mean value with coefficient of variation has been reported. The fastness of dyes and finishes toward washing was measured according to ISO 6330:2012 in a wascator, type A. The samples were washed according to program 4N (as discussed in the standard), with a maximum load of 2 kg, at 40 ± 3 °C, for 30 min and with 20 ± 1 g of a non-phosphate powder detergent without optical brightener and enzymes. To test the color fastness over time, washing was repeated for another three cycles, with a drying step in between at 70 °C for 25 min, and the samples were conditioned before further color testing. The fastness of dyes and finishes toward abrasion was measured using a Martindale SDL Atlas M235 at a speed of 47.5 revolutions per minute according to ISO 12947-1:1998, with a weight load of 9 kPa and a total of 10,000 runs. A standard woven wool fabric was used as the opposite rubbing cloth.
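The CMC color-difference formula behind ΔE CMC is likewise not written out above; its standard top-level form, with lightness/chroma/hue weighting functions S_L, S_C, S_H and commercial factors l and c as defined in the CMC(l:c) standard (commonly l = 2, c = 1 for acceptability judgments), is:

\[ \Delta E_{CMC} = \sqrt{\left(\frac{\Delta L^{*}}{l\,S_{L}}\right)^{2} + \left(\frac{\Delta C^{*}_{ab}}{c\,S_{C}}\right)^{2} + \left(\frac{\Delta H^{*}_{ab}}{S_{H}}\right)^{2}} \]

A value of about 1 is the conventional threshold for a difference detectable by the naked eye, which is how the ΔE CMC results are interpreted in Part 1 below.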
Statistical analysis. Statistical analysis was performed to determine whether there is significant difference in the gathered data by implementing them in Minitab statistical tool. To determine the significant difference between processes (conventional, two-steps and one-step HS) a one-way ANOVA test was carried out at 95% confidence interval with the null hypothesis stating that all means are equal and the alternative hypothesis stating that at least one mean is different. To test the significant difference between the measurements before and after washing test, a paired t-test was performed at 95% confidence interval with the null hypothesis stating that the mean of differences (μ d ) is equal to 0 and the alternative hypothesis stating that the mean of differences is not equal to 0.
Results
The results from this study are presented in two parts; Part 1 presents the results related to the dyeing performance of resultant wool fabrics prepared through two-steps or one-step HS method in relation to the conventional exhaust dyeing method. Part 2 presents the results related to the finishing performance of resultant wool fabrics prepared through HS methods (two-steps or one-step) in relation to the conventional pad-dry-cure method.
Part 1: Analysis of dyeing performance. A comparative study to assess the dyeing performance of the two-steps and one-step HS methods in relation to the conventional exhaust dyeing method was carried out through color strength (K/S) measurements, color difference (ΔE CMC ) measurements, and fastness to washing and abrasion based on color measurements of the dyed wool fabrics.
Color strength (K/S) measurement. All wool fabrics dyed with both acid and reactive dyes were evaluated through color strength measurements to identify the difference between the conventional and HS dyeing methods. In principle, the color strength value provides evidence related to the depth of the color on the dyed fabric surface 22 . Results presented in Fig. 2b and c show the plots of K/S values of the dyed wool fabric samples prepared with acid dye and reactive dye. Results from both acid and reactive dyed samples showed that there is a significant difference in color strength depending on the method of dyeing used, which is also visible to the naked eye (Fig. 2a). Conventionally dyed samples showed higher color strength values than HS dyed samples. This can be due to a possible diffusion limitation of dyes in the HS dyeing method compared to the conventional dyeing method, which restricted the dyes from being evenly distributed in the pores of the wool fabric 23 .
Higher diffusion in the conventional method may occur due to the use of electrolytes (which were not used in the HS method) that influence the solubility and adsorption of dyes into the fibers 24 . The wool fiber swells in liquid, and the acidic conditions charge the amino acids on the surface, making it possible for the dye to enter the fiber and form strong bonds with the fibers 25 . As dye fixation in the HS method is a rather dry process, where the fabric is subjected to only dry heat while being moist from the spray liquid, the wool fiber swells less, which affects the dye fixation process. The fabric surface keeps less moisture (a characteristic of wool) during the fixation, and the dyes migrate into a damper environment with lower pH. The level of unfixed dyes is not higher, so the fixation seems to take place deeper in the fabric. Nevertheless, the K/S value of samples dyed using the new HS methods showed significant color strength, as high as 14.0, which is suitable for commercial application.
A close look at the results shows that there is a noticeable difference in color strength between the one-step and two-steps HS dyed-finished samples. One-step HS dyed samples showed better color strength than two-steps spray dyed samples. The poorer color strength in the two-steps HS dyeing method can be due to interference with the color during the finishing step. Although the dyeing method for both cases was the same, in the one-step method dyes and finishes were mixed and sprayed over the wool fabric together, whereas in the two-steps method dyes and finishes were sprayed separately over the samples to form a layer-by-layer assembly of dyes and finishes. The extent of the difference in color strength between the one-step and two-steps dyed samples was found to be influenced by the type of finish used and the weight of the fabric (see Fig. 2). Finish 2 (Ruco-Dry DHE) was found to result in a larger color difference than Finish 1 (Ruco-Dry ECO DCF).

Color difference (ΔE CMC ) measurement. To further understand the dyeing performance of the conventional and HS methods, the color difference of both acid and reactive dyed wool fabric samples was evaluated. At first, the comparative color difference analysis of the HS dyed samples (one-step) was carried out with respect to the samples prepared through the conventional method (see Fig. 3a). After that, the color difference between one-step and two-steps HS dyed samples was studied as well (Fig. 3b). Results from the color difference between HS dyed and conventional exhaust dyed wool fabric samples show a significant color difference that can be detected by the naked eye, as all samples showed a ΔE CMC value of over 1.0. This indicates a possible color difference of the samples due to the dyeing conditions in different combinations of HS method (one-step or two-steps), wool fabric (W1 or W2), and finishing agent (F1 and F2), which can be subjected to optimization before bulk processing on an industrial scale. Nevertheless, the color difference for W1 fabric (469 GSM) was found to be strongest, with 6.6 for acid dyes and 6.7 for reactive dyes. For W2 fabric (264 GSM), the color differences are 4.5 for acid dyes and 4.1 for reactive dyes. This emphasizes the characteristics of two different textile processes to achieve altered product performances. A close look at the results reveals that acid dyes account for a higher color difference than reactive dyes. Reactive dyes are a better fit for a continuous dyeing process, as the dyeing mechanism is less dependent on the swelling of the wool fiber under high temperature and the presence of water 26,27 . On the other hand, the color difference in the one-step HS method is lower than in the two-steps method (see Fig. 3a). Further analysis of the color difference between the two-steps and one-step dyed samples (see Fig. 3b) shows that W1 fabric dyed with either acid or reactive dyes and finished with F1 has shown ΔE CMC values high enough to be detected by the naked human eye 28 . On the contrary, the W2 fabric dyed with acid dyes and finished with F1 has shown a ΔE CMC value of less than 1, which indicates a color difference beyond the detection limit of the human eye.
Fastness to washing based on color strength (K/S) measurement. Color fastness is an essential analysis to determine the performance of dyeing. Resultant wool fabric samples prepared through either the conventional or the HS dyeing method were subjected to fastness to washing analysis. The dyeing performance with respect to fastness to washing has been evaluated based on color strength measurements, which are plotted in Fig. 4. Results show that, regardless of the method used, the K/S values of most samples decreased after washing. The decrease in K/S after washing can be explained as the loss of loosely fixed dyes from the fabric during washing 29,30 . Some samples showed a surprising increase in color strength after four washing cycles compared to one cycle, which can be due to the dyes becoming more even on the fabric surface after possible patches of dye were removed. This novel study has opened several new discussions through its findings, of which the investigation of the reported behavior of color strength upon washing is one. Although this is outside the scope of this work, it certainly can be explored for a better understanding of the HS technology for dyeing and finishing.
Fastness to washing based on color difference (ΔE CMC ) measurement. Color fastness of the resultant wool fabric in terms of washing has been further evaluated based on the color difference of samples before and after washing. Results are presented in Table 2, which shows that the samples dyed with acid dyes are significantly different in color after washing (observed for both conventional and HS methods). Results show that W2 fabric holds its color better after washing than W1 fabric, as the ΔE CMC values have a lower significant difference. A comparison between acid and reactive dyes shows that reactive dyes have better washing fastness than acid dyes, as most ΔE CMC values stay close to or below 1, which can be found for both types of fabric dyed with reactive dyes. In general, the color difference increased with the number of washes for samples prepared with all three methods (one-step HS method, two-steps HS method, and conventional method). A close look at the results provides evidence of comparatively higher color difference in the HS dyed samples than in the conventionally dyed samples. The ΔE CMC of HS1-W1@AF1 after one wash was 3.73, which rose to 5.34 after four washes, whereas C-W1@AF2 had an initial ΔE CMC of 0.71 after one wash that rose to 1.50 after four washes. This can be due to the successive impact of the washing cycles on the interaction of the loosely attached/bonded dyes with the fabric surface, which causes the removal of dyes from the fabric 31 . Nevertheless, despite the loss of dyes, the strength of color is high enough to retain the characteristics of the dyed fabric as a colored material, as supported by the K/S analysis.
Fastness to abrasion based on color strength (K/S) measurement.
Color fastness of selected dyed wool fabrics with respect to abrasion was studied based on color strength K/S (before and after abrasion) according to the method described earlier (material characterizations section). Results show that samples prepared through the hydraulic spray atomizing system exhibited no significant difference in color strength after the abrasion test, regardless of the one-step or two-steps process and of the dyes used. This phenomenon is particularly important as the hydraulic spray atomizing system is a continuous coloration process that excludes several after-treatment processes compared to the conventional method. A detailed study can be carried out as further work to understand the mechanism of the superior color fastness of the selected dyed wool fabrics with respect to abrasion.
Part 2. Analysis of finishing performance.
To understand the effect of each preparation method and the performance of the hydrophobic finishes, all finished samples were comparatively studied through water contact angle measurement and their fastness with respect to washing and abrasion. A one-way ANOVA analysis was performed on the data to determine the significant differences between the samples.
Water contact angle (θH2O) measurement. To assess the hydrophobicity of the samples, contact angle measurements were performed as described earlier in the material characterizations section. Figure 5 shows that all samples finished with either of the two finishes showed a higher water contact angle when prepared with the HS method compared to samples prepared through the conventional padding method. This can be explained by the hydrophobic nature of wool, which repels liquid from entering the core of the fiber or fabric 26,32 . As the finishing liquid most likely does not fully penetrate into the fabric, the water-repellent chemicals will primarily react with the fibers on the fabric surface, which results in higher contact angles for the HS finished samples compared to the conventionally padded samples. For Ruco-Dry ECO DCF (F1), samples of W1 fabric prepared with the conventional padding method gave a θH2O of 125°, which increased by 9° when prepared through the two-steps HS method and by 14° when prepared through the one-step HS method. This indicates a better finishing performance of the HS method over the conventional padding method when a surface effect, in this case water repellency, is desired. Comparing the one-step and two-steps HS methods shows that the one-step HS method is more efficient than the two-steps method. The high finishing performance is related to the even and uniform distribution of the finishes on the surface of the wool fabric. On the other hand, a comparison between W1 fabric and W2 fabric shows that there is no significant difference in finishing performance, contrary to the difference in dyeing performance. Samples prepared with Ruco-Dry DHE (F2) through the HS method also show better finishing performance than conventionally padded samples. However, no significant difference was found between the one-step and the two-steps HS methods. In general, the finishing performance of the samples prepared with Ruco-Dry DHE was found to be higher than that of Ruco-Dry ECO DCF. Samples finished with Ruco-Dry DHE approach superhydrophobic properties, with contact angles between 140 and 150° for one-step HS samples 33,34 . It is also notable that there is no significant difference in average contact angles between the two different fabrics W1 and W2.
Fastness of hydrophobic finishes of wool fabrics to washing. The fastness properties of the applied finishes on the two wool fabrics have been studied with respect to washing, as presented in Table 3. Results show that, in general, almost all samples decrease in finishing performance after washing, which can be related to the removal of loosely attached or bonded finishes from the fabric surface. Comparing conventionally padded, one-step and two-steps HS finished samples, the loss of performance is most prominent in samples prepared with the one-step HS method, followed by the two-steps HS method and lastly the conventional padding method. The differences in θH2O for the samples prepared with HS methods can be related to the fact that the hydrophobic agents seem to form weaker bonds in a direct spraying process. As it is likely that there are more negatively than positively charged amino acids on the surface, the water-repellent agents form weaker bonds with the fiber surface. During washing, these bonds are easily broken, causing the fabric to lose some of its hydrophobicity 35 . Besides, there is no significant difference in average contact angles for all samples after washing; the samples of all three processes show similar contact angles. For W1 fabrics finished with Ruco-Dry Eco DCF these contact angles are 129° prepared by the conventional method and 132° and 130° in the two-steps and one-step HS method, respectively. This is similar for W2 fabrics with the same finish, where the contact angles vary from 131° (conventional padding) to 133° (two-steps HS) and 132° (one-step HS). After a washing cycle, the samples were tumble dried to restore the full effect of the water-repellent finish. The realignment of the hydrophobic agent on the fiber surface can cause the contact angle to increase after washing, as seen with the conventional samples. In general, Ruco-Dry DHE performed worse in the washing test than Ruco-Dry Eco DCF, whereas the initial contact angles of DHE were higher than those of DCF, as presented in Table 3.
Table 3. Fastness of hydrophobic finishes to washing based on the water contact angle of the finished wool fabrics.
To further understand the differences among the samples prepared through all three methods, the results were analyzed through a paired t-test. Table S1 of the supplementary information lists the P-values of the performed paired t-tests. If the P-value is below 0.05, the null hypothesis, i.e., that the mean of the differences is 0, should be rejected. This means that the means before and after washing are significantly different when the P-value is lower than 0.05. A few samples show an insignificant difference in their contact angle before and after washing, although the values are not consistent. Generally, the samples dyed and finished conventionally show a lower significance of difference.
Fastness of hydrophobic finishes of wool fabrics in respect to abrasion. Another factor that can affect the performance of finishes is abrasion. Therefore, the fastness of hydrophobic finishes on wool in respect to abrasion has also been studied based on the water contact angle measurement. Similar to fastness to washing, the performance of hydrophobic finishes was also affected by abrasion. In general, the loss of performance is more prominent in samples prepared with one-step HS followed by the two-steps HS and lastly by conventional padding as presented in Table 4. As was mentioned earlier, direct spraying in the HS method causes the hydrophobic finish to form less strong bonds because of the lack of positively charged amino acids on the fiber surface, which are thus more easily rubbed off.
To determine whether the difference between the means before and after abrasion is significant, a paired t-test was carried out. Table 4 presents the t- and P-values gathered from the experiment as well as the mean water contact angles before and after abrasion. If the P-value is below 0.05, the null hypothesis, i.e., that the mean of the differences is 0, should be rejected. In most cases, this means that the contact angles of the samples dyed and finished conventionally are not significantly different before and after abrasion. All wool fabric samples dyed and finished with the HS methods, except for HS2-W1@F1, C-W1@F2, HS2-W2@F1 and C-W2@F2, however, show that there is no significant difference in the contact angle measurements before and after abrasion, as the P-values are below 0.05.

Sustainable aspects of the HS atomizing process. The sustainable aspects of the HS atomizing process have been investigated in terms of the use of water, energy, and chemicals. The proposed HS methods are continuous dyeing-finishing processes; thus, processing time was not within the scope of the study. Nevertheless, the speed of the process is subject to real-time optimization during bulk production. Table 5 shows an overview of the resource consumption in the different HS methods compared to conventional methods. Results show that the HS methods offer promising resource efficiency, with savings of up to 88% of water and 100% of chemicals, depending on the fabric and the process replaced. However, over 200% more dyestuff is used in this process, as the HS dyebath is much more concentrated, which needs to be optimized before large-scale industrial application. Together with records from Imogo, it can be seen that the new HS method reduces the consumption of energy and water and produces less waste, in line with the sustainability trifecta of net-zero energy, water and waste. Calculations on a large scale are very dependent on many variables, but a comparison of the lab-scale methods can at least be made, which may allow predictions for industrial production. The contents of these baths can be translated into their contents per gram of fabric. When comparing the results between the one-step and two-steps HS methods, it can be seen that the one-step method uses 50% less water because it combines two baths into one. Because of the shift in pick-up percentage from 60 to 80% for the water-repellent chemicals, 25% less of these chemicals was added to the one-step bath. Because of the higher content of the more alkaline wetting agent in the bath, more acetic acid needed to be added to adjust the pH value. The reactive dyebath is less acidic and thus requires less acetic acid to balance the pH. The reduction in energy use comes from the dye fixation process. In an exhaust dyeing process, the bath liquid has to stay at a certain temperature throughout the process, which is energy consuming. The different method for dye fixation, in an autoclave as opposed to a heated and moving bath, causes the reduction in energy use. The wastewater generated in the HS method is also less than in a conventional process. The dye or finish liquid is used almost entirely, reducing the wastewater from the dyeing and finishing process.
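As an illustration of how a percentage water saving of the kind reported above can be derived, the sketch below compares liquor consumption per kg of fabric; the conventional figure follows from the 1:20 liquor ratio given in the methods, while the HS figure is a hypothetical placeholder chosen only to show the arithmetic.

```python
# Percentage water saving from liquor consumption per kg of fabric.
def water_saving_percent(conventional_L_per_kg: float, hs_L_per_kg: float) -> float:
    return 100.0 * (1.0 - hs_L_per_kg / conventional_L_per_kg)

conventional = 20.0  # L of dye liquor per kg fabric (liquor ratio 1:20)
hs_process = 2.4     # hypothetical total spray liquid per kg fabric (placeholder)
print(f"Water saving: {water_saving_percent(conventional, hs_process):.0f}%")  # -> 88%
```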
Conclusions
In summary, this work introduces a new method for the dyeing and finishing of wool fabric using a hydraulic spray atomising process. The new method was found to be successful in dyeing wool fabric with both reactive and acid dyes under ambient conditions. The finishing of a hydrophobic agent on dyed wool fabric was also achieved with good performance. The performance of dyeing and finishing, as observed through color strength (K/S), color difference (ΔE CMC), color fastness analysis and water contact angle (θH2O) analysis, established the feasibility of the new methods, as summarized below: (a) The resultant wool fabric showed significant color strength, as high as 14, which offers the possibility to dye different depths of color ranging from medium to dark shades. When comparing the one-step and two-step HS methods, the one-step HS method results in higher color strength and lower color difference while offering the fastest and most eco-friendly route for wool dyeing. (b) The hydrophobic finish on wool fabric applied through the HS method offered better performance than the conventional padding method. While the HS method achieved θH2O as high as 145°, which is close to super-hydrophobicity, the highest θH2O obtained through the padding method was 135°. (c) The HS method is indeed a water-, energy-, and chemical-efficient method, with an 88% reduction in the use of water compared to the conventional dyeing–finishing method. Fewer auxiliaries were used during the dyeing process because of the accuracy of the HS machine. The HS method positively affects the trifecta of sustainable development by improving the environmental, social and economic aspects. | 7,140 | 2022-12-17T00:00:00.000 | [
"Materials Science"
] |
The cAMP Pathway as Therapeutic Target in Autoimmune and Inflammatory Diseases
Nucleotide signaling molecules contribute to the regulation of cellular pathways. In the immune system, cyclic adenosine monophosphate (cAMP) is well established as a potent regulator of innate and adaptive immune cell functions. Therapeutic strategies to interrupt or enhance cAMP generation or effects have immunoregulatory potential in autoimmune and inflammatory disorders. Here, we provide an overview of the cyclic AMP axis and its role as a regulator of immune functions and discuss the clinical and translational relevance of interventions with these processes.
can all be phosphorylated by many other kinases, and the action of PKA is counterbalanced by specific protein phosphatases.
Basal cytosolic cAMP levels are in the low micromolar range (19). In the cytosol, cAMP is not evenly distributed but rather forms submembranous, spatially discrete pools generated in microdomains containing AC and PDE next to PKA localized by A-kinase-anchoring proteins (AKAPs) (20). Specificity in cAMP signaling, and fine and selective tuning of its different tasks, is ensured by the differential expression of distinct isoforms and splice variants of anabolic, catabolic, and signaling molecules in various tissues and cell types and by the differential composition of cAMP microdomains (21). Although various cAMP activities can have redundant, independent, or opposing effects within the same cell (22), some individual AC and PDE knockout and transgenic mice (23,24) show specific phenotypes. In particular, individual PDEs control select cyclic nucleotide-regulated events by being integrated into non-overlapping multi-molecular regulatory signaling complexes, suggesting cell- or tissue-specific interference points (25,26).
Finally, an important, often overlooked aspect of the pathway is the secretion of cAMP into the extracellular space and its transmission via gap junctions between cells (27). Whereas transmitted cAMP directly contributes to intracellular cAMP levels, excreted cAMP is converted into AMP and adenosine by cell-surface-bound PDE and the ectonucleotidases CD39 and CD73. By signaling through A2A and A2B adenosine receptors, extracellular adenosine stimulates AC and increases intracellular cAMP generation (28). Knockout mice with disrupted CD39 and CD73 have underscored the importance of the extracellular cAMP-adenosine feedback mechanism in physiological processes (29,30). In the immune system, extracellular cAMP may contribute to regulatory T cell (Treg) function (31,32) and has been shown to promote monocyte differentiation into dendritic cells (DCs) (33).
CYCLIC AMP IN IMMUNE HOMEOSTASIS AND PATHOPHYSIOLOGY
Due to its multiple roles in cell physiology, cAMP exerts broad modulatory effects on a variety of cells (see Figure 2). In the immune system, cyclic AMP regulates both innate and adaptive immune cell activities (34).
Monocytes and Granulocytes
The functional state of monocytes orchestrates inflammatory and reparative phases in inflammatory responses and appears to be accompanied by changes in their intracellular cAMP levels. In the mouse, two major types of monocytes, Ly6Chigh and Ly6Clow, circulate in blood. Ly6Chigh monocytes display pro-inflammatory activity, whereas Ly6Clow monocytes are patrolling cells that monitor tissue integrity and exert anti-inflammatory and tissue repair activities (35). The orphan nuclear receptor Nr4a1 (Nur77) regulates the expression of genes linked to inflammation. Inflammatory stimuli inhibit its expression and induce an inflammatory Ly6Chigh phenotype (36,37). In turn, Nur77 is upregulated and represses numerous inflammatory genes in the transition from an inflammatory Ly6Chigh to an anti-inflammatory Ly6Clow/neg state (38)(39)(40). Elevated cAMP levels induce Nur77 expression (41) and, thus, favor a reparatory monocyte phenotype (42). Through these effects on phagocytes, increased cAMP levels affect myeloid cell immunity against pathogens and parasites (43)(44)(45) and may also affect the differentiation of tumor-infiltrating myeloid-derived suppressor cells (MDSCs) by repression of TNF-α production. With regard to the latter, CREB activation has been shown to upregulate miR-9 expression, which promotes the differentiation of these so-called MDSCs with significantly increased immunosuppressive function (46).
In sum, increased cAMP levels appear to generally weaken monocyte inflammatory functions (47)(48)(49)(50). Interestingly, bacteria and fungi have taken advantage of this effect in the course of evolution. Pathogen capture and programed destruction are among the most important activities of innate immune cells to prevent tissue invasion and pathogen dissemination. Certain microbacteria and fungi have evolved to hijack the host cAMP axis by introducing microbial adenylyl and guanylyl cyclases (51) and by intoxicating the host cell with preformed cAMP or adenylate cyclase toxins (52)(53)(54). Bordetella pertussis, for example, suppresses neutrophil extracellular trap (NET) formation by overwhelming leukocytes with supraphysiologic intracellular cAMP levels (55). Likewise, bacterial-derived or -induced cAMP facilitates intracellular bacterial survival by multiple actions, including CREB-dependent anti-apoptotic signaling and repression of intracellular bacterial killing in invaded monocytes and macrophages.
NK Cells
Natural killer (NK) cells are capable of destroying tumor cells and virally infected cells (cytolysis) without prior sensitization. In NK cells, cAMP levels regulate target cell adherence and cytotoxic function. Both pharmacological repression and induction of cAMP inhibit perforin-mediated and CD95 ligand-mediated target cell lysis (56-60).
Dendritic Cells
As professional antigen-presenting cells of the immune system, DCs are equipped with a unique capability to induce and regulate adaptive immune responses. In DC, cyclic AMP suppresses the release of pro-inflammatory mediators (TNF-α, IL-17, IFN-γ) (61) and promotes the release of anti-inflammatory mediators, such as IL-10 (62). As a functional consequence, cAMP concentrations in DC regulate T cell immunity (63). Pharmacological inhibition of cyclic nucleotide PDE4, which is highly expressed in DC, for example, suppresses the DC Th1-polarizing capacity (64,65) and commands secretion of IL-6 and TGF-beta and subsequent induction of Th17 differentiation (66). It, thus, appears that cAMP levels differentially regulate cytokine production by DC as a response to changes in the microenvironment. Apart from spatio-temporal fine-tuning of DC activities, cAMP activities in DC depend on the stage of DC maturation: prostaglandin E2 (PGE2), a key inducer of cAMP, exerts a stimulatory function for immature DCs in peripheral tissues (67) but inhibitory function for mature DCs in lymph nodes (68).
B and T Cells
[Figure 3 caption: TCR signaling activates adenylate cyclases and raises cAMP levels in regulatory T cells (Treg); cAMP transferred via gap junctions into conventional T cells (Tcon) mediates the suppressive activity of Treg (A). ERK-activated PDE4 degrades cAMP in Treg and impairs their regulatory activity (B). IFN-α abolishes Treg suppression by reducing cAMP, restoring Tcon activation; inhibition of the ERK or PDE4 pathway restores the suppressive capacity of IFN-α-treated Treg (C).]
In addition to innate cell function, cAMP also controls numerous adaptive immune cell activities. In adaptive immune cells, cAMP is essentially required in the induction of antigen-stimulated activation (69-72) but subsequently limits activation by negatively
regulating signaling through B cell and T cell receptors (TCR). In B cells, it provides an essential signal in the induction of antigen-stimulated proliferation and antibody production (69,70,72). Elevation of intracellular cAMP enhances IgE production by promoting recombination of the Ig heavy chain loci and by favoring Th2 differentiation. In T cells, cAMP participates in the regulation of nearly all functional activities, ranging from peripheral maintenance of naïve T cells (73) to their activation via the TCR (74), acquisition of effector function (75,76), and memory (77). In cognate activation, cAMP acts as a temporary inhibitory feedback signal that limits T cell activation through the cAMP-PKA-Csk signaling pathway (74). Unlike temporary increases, continuously elevated cAMP levels induce an anergy-like state (78,79). Likewise, anergizing TCR signals result in increased intracellular cAMP concentrations that upregulate the cyclin-dependent kinase (CDK) inhibitor p27kip1, sequester cyclin D2/cdk4 and cyclin E/cdk2 complexes, and prevent progression through the G1 restriction point of the cell cycle (80). Furthermore, cAMP levels regulate the acquisition of effector function. Pharmacological upregulation of cAMP by inhibition of PDE activity, for example, prevents the development and function of cytotoxic T lymphocytes (CTL) (81). The significance of cAMP in the acquisition of effector functions in T cells is also reflected by the observation that CREB mutant mice have normal T cell numbers in the thymus but exhibit a marked defect in peripheral T cell proliferation and IL-2 production, resulting from G1 cell-cycle arrest and apoptotic cell death (82). Most prominently, cAMP forms an essential component of the suppressive mechanism in Treg (83)(84)(85)(86)(87)(88)(89)(90)(91)(92). Treg contain increased levels of cytosolic cAMP, further upregulate their cAMP level upon activation, and transfer cAMP to target cells via gap junctions (83,85). In the target cell, cAMP inhibits proliferation and the differentiation of effector functions, in part by interfering with gene expression via ICER (90). Repression of cAMP accumulation in Treg by either adenylyl cyclase inhibition, application of a cAMP-specific antagonist, or PDE overexpression abrogates murine and human Treg suppression (83,84,86,91,93). Inversely, blockade of cAMP degradation by PDE inhibition improves Treg-mediated suppression in a murine asthma model (85). In line, non-functional Treg in Foxp3-mutant scurfy mice harbor significantly reduced levels of cytosolic cAMP (94).
Increased cAMP formation in Treg is a prerequisite for their suppressive activity (95) (see Figures 3 and 4). Constitutively high cAMP levels in Treg appear to be caused by Foxp3-induced decreased PDE3B expression (96) and increased AC9 activity (87) driven by their constitutively active state (95). During Treg-mediated suppression, cAMP is transferred via gap junctions to conventional T cells (Tcon), where it represses IL-2 production and inhibits the proliferative response (83). Pharmacological inhibition of cAMP formation abrogates the suppressive function of Treg (see Figure 3) (91).
In this context, Bacher et al. showed that IFN-α, an antineoplastic agent with well-known autoimmune side effects, disturbs the immunosuppressive activity of human CD4 + CD25 + Foxp3 + Treg by disabling cAMP upregulation upon activation (92, 97) (see Figure 3 and 4). IFN-α-mediated inhibition of Treg suppression can be partially restored by pharmacological inhibitors blocking ERK and PDE/PDE4 activity through specific inhibitors (92, 97) (see Figures 3 and 4). These results are in line with the observation that human T cells predominantly express the short PDE4B and PDE4D isoforms, functionally regulated by the ERK2 MAP kinase (98,99). As PDE have an essential role in the IFN-α-mediated inhibition of Treg, PDE4 interference by specific inhibitors may represent a therapeutic option to restore immune regulation in autoimmune diseases, such as psoriasis or lupus erythematosus, accompanied by reduced Treg function (64,100).
Next to its role in the Treg-suppressive mechanism, cAMP is required for the generation and maintenance of Treg: the cAMP-responsive transcription factor CREB stabilizes FoxP3 expression and promotes and maintains the Treg phenotype (101,102). Treg essentially depend on IL-2 for their peripheral maintenance and suppressive activity (103,104), and their number and activity can be therapeutically manipulated by low-dose IL-2 and particular IL-2/anti-IL-2 complexes (105,106) to control autoimmune diseases and inflammation (107). Interestingly, IL-2 may contribute to increased cAMP production in Treg by increasing adenylate cyclase AC7 activity (88).
In conjunction with its role in the control of the Treg phenotype, the transmission of cAMP via gap junctions to and from Treg also appears to play a role in the Treg lifecycle, as evidenced by the observation that Treg numbers are significantly reduced in connexin 43 knockout mice (108). Some viruses prevent their rejection by the immune system by interfering with the cAMP pathway in T cells. The HIV-1 surface glycoprotein gp120 induces anergy in naive T lymphocytes (109,110) and increases cAMP levels and suppressive activity in Treg (86,111,112). In turn, cAMP repression restores antiviral T cell function in HIV patients (113).
Beyond their role in immune regulation, Treg take on homeostatic functions by regulating metabolic activity in visceral fat and participating in tissue repair. Functionally distinct Treg accumulate in injured skeletal muscle and contribute to repair processes. Muscle Treg distinctly express the growth factor amphiregulin, which improves muscle repair by directly acting on muscle satellite cells (114). In line with the outlined role of cAMP in Treg function, amphiregulin synthesis is inhibited by PKA inhibitors and enhanced by ligands that increase cAMP or directly activate PKA (115).
Together these findings classify cAMP as a key component of immune cell function and disclose cAMP-regulating enzymes as molecular targets for therapeutic intervention with immune activities in pathological processes like allergy and autoimmunity.
MODULATION OF cAMP IN AUTOIMMUNE AND INFLAMMATORY DISEASES
Cyclic AMP is a central player in the network of signaling pathways underlying pathogenesis of several diseases and several interference points are used therapeutically in a variety of conditions. Although the clinical impact of changes in cAMP remains incompletely defined, one fundamental conclusion can nevertheless be drawn: interventions that enhance cAMP generation or actions have immune dampening potential; conversely, repression of cAMP or cAMP signaling has immunostimulatory capability.
Formation of cAMP by AC and degradation by PDE identify AC and PDE as major targets for therapeutic intervention with cAMP levels. To date, AC activity has mostly been targeted pharmacologically through agonists or antagonists affecting upstream G-protein-coupled receptors (GPCR) (23,116). However, AC knockout and transgenic mice revealed individual and clearly distinct physiological functions for AC isoforms (23). The observation that individual isoforms play a dominant role in specific tissues has led to AC being considered as main drug targets (117). In order to achieve selective interference, isoform-selective compounds are required. Such compounds are currently being sought and tested. The underlying idea is that selective inhibitors intervene in a tissue-specific manner but remain ineffective in tissues that express various AC isoforms (118).
Some AC-specific compounds have already reached preclinical stages and others have been approved for particular diseases, such as colforsin daropate hydrochloride (NKH477), an AC5-selective forskolin (FSK) derivative, for the treatment of advanced congestive heart failure (119,120). Thus, even though AC isoform-targeted drugs are still in early stages of development, the finding that ACs have clearly separated physiological functions at least suggests AC as pharmacological targets in a broad spectrum of diseases ranging from neurodegenerative disorders to congestive heart failure and lung diseases such as asthma and chronic obstructive pulmonary disease (COPD).
Since their identification in 1958 (2), continuing efforts have been undertaken to advance the understanding of PDE biology and function, and PDE have been considered pharmacological targets in various diseases, such as pulmonary diseases like COPD and asthma, depression, schizophrenia, erectile dysfunction, and autoimmune disease like psoriasis/psoriasis arthritis and rheumatoid arthritis (8,100,(121)(122)(123)(124)(125). Although numerous PDE inhibitors have been developed, their introduction into the clinic has been hampered by their narrow therapeutic window and side effects, such as nausea and emesis, occurring even at sub-therapeutic levels.
In the immune system, PDE family 3, 4, and 7 members represent the predominant cAMP-degrading enzymes (126). PDE4 enzymes are encoded by four separate genes (PDE4A-D), and each PDE4 controls non-redundant cellular functions (127). In addition, more than 20 PDE4 variants arise from alternative mRNA splicing or the use of different transcriptional units (5). While PDE4A, PDE4B, and PDE4D are expressed in immune cells (T and B cells, neutrophils, eosinophils, DCs, monocytes, macrophages), PDE4C is minimally active or absent (128,129). PDE3 and PDE7 are detected in most inflammatory cells, including T and B cells, NK cells, and myeloid cells (6,59,127,130-132). However, PDE4s are the predominant cAMP-degrading isoenzymes (126,127). In addition, the expression levels of the PDE isoenzymes are differentially regulated by a variety of inflammatory stimuli (126,127). Apart from immune cells, PDE4 members are also expressed in chondrocytes, smooth muscle cells, epithelial cells, and vascular endothelium (127). By increasing levels of intracellular cAMP, PDE4 inhibitors show anti-inflammatory effects in almost all inflammatory and immune cells and are known to suppress a multitude of inflammatory responses, including proliferation, chemotaxis, phagocytosis, and the release of pro-inflammatory mediators, such as cytokines and chemokines, reactive oxygen species, lipid mediators, and hydrolytic enzymes (34,126,129).
Numerous selective PDE4 inhibitors have been patented and some of them have been evaluated in clinical trials, including diseases, such as asthma, COPD, atopic dermatitis, rheumatoid arthritis, and psoriasis/psoriasis arthritis. However, most of these compounds had to be discontinued because of narrow therapeutic windows. Doses needed for an efficient treatment could not be reached due to side effects, such as nausea, emesis, diarrhea, and abdominal pain being the most common. It has been hypothesized that adverse side effects of the PDE4 inhibitors are a result of their non-selectivity to all four PDE4 subtypes and PDE4 inhibition in non-target tissues at doses similar (or lower) than needed for therapeutic efficacy. It is postulated that blocking of PDE4D in non-target organs promotes emesis (133). In view of side effect profile of second-generation PDE4 inhibitors, new strategies for the design of active and non-emetic compounds have been employed to overcome the adverse effects and to improve therapeutic effects. In this context, despite highly conserved catalytic domains of PDE4 isoenzymes, PDE4 subtype-specific inhibitors have been generated. For example, potent PDE4B inhibitors with more than 100-fold selectivity over PDE4D have been synthesized (134,135). Compared with the non-selective PDE4 inhibitor cilomilast (134), selective PDE4B inhibitors demonstrated a potent anti-inflammatory activity and significantly less gastrointestinal side effects. In order to circumvent side effects observed upon oral administration, inhalation (136) and topical application (137) of PDE4 inhibitors have been explored in the treatment of airway inflammation and inflammatory cutaneous diseases. Two phase studies conducted with a PDE4 inhibitor (AN2728) in psoriasis and atopic dermatitis patients showed promising results (138,139). The interest for PDE4 anti-inflammatory activity arose from early studies with the prototypic PDE4 inhibitor, rolipram (140). However, although PDE4 inhibitors have been mostly developed to treat lung diseases, such as asthma or COPD, no compound has yet reached the market for asthma treatment. By contrast, the orally active PDE4 inhibitor roflumilast (Daliresp ® , Forest Pharmaceuticals) has been approved for COPD by the European Medicines Agency in 2010 and the U.S. Food and Drug Administration in 2011 based on four clinical trials. These studies have shown that roflumilast improves lung function and reduces the frequency of COPD exacerbations in patients with chronic bronchitis symptoms (141)(142)(143)(144). Although side effects were generally mild to moderate, nausea, diarrhea, weight loss, and headache were still reported (145). Despite these side effects, roflumilast received approval for COPD with severe air flow limitations, symptoms of chronic bronchitis, and a history of exacerbations in several countries (146,147).
Another currently marketed oral PDE4 inhibitor is apremilast (Otezla ® , Celgene Corporation) that has been approved by the EMA and FDA for psoriasis and psoriasis arthritis, two autoimmune diseases, characterized by chronic inflammation, tissue and organ involvement, and accelerated growth cycle of skin cells. Apremilast was developed based on the rolipram and roflumilast pharmacophore by coupling a series of phthalimide analogs in order to optimize its activity and to decrease side effects (148). The safety and efficacy of apremilast for the treatment of patients with plaque psoriasis and psoriasis arthritis were evaluated in numerous multicenter, randomized, double-blind, placebo-controlled clinical trials (ESTEEM-1 and -2 for psoriasis, PALACE-1, -2, and -3 for psoriasis arthritis) (149)(150)(151)(152). In the two ESTEEM trials, apremilast reduced the severity and extent of moderate-to-severe plaque psoriasis (including nail, scalp, and palmoplantar manifestations) versus placebo in adults. Similarly, in three PALACE trials (PALACE 1-3), apremilast improved the signs and symptoms of psoriasis arthritis relative to placebo in adults with active disease despite treatment with conventional synthetic and/or biologic disease-modifying anti-rheumatic drugs. According to the published clinical trials, apremilast was well tolerated in all study groups analyzed. Throughout phase II and III trials, the most frequently reported side effects consisted of headache, nausea, diarrhea, emesis, and nasopharyngitis and upper respiratory tract infection under continued treatment. However, the studies showed that the gastrointestinal adverse effects usually subside within a month of therapy.
It is an interesting result of the clinical studies that improved inhibitor specificity does not prevent side effects. This result suggests that the same or overlapping cell populations caused both wanted and unwanted effects. In view of recent research results regarding the expression and activities of anabolic and catabolic cAMP enzymes in immune cells, the question arises whether particular PDE4 inhibitor effects are caused by alteration of immune cell functions. This question is underlined by the similarity of side effects in PDE4 inhibitor studies and some immunotherapeutic approaches. Unfortunately, effects in individual immune cell populations have not been considered in clinical studies with PDE inhibitors so far. For a better understanding of the underlying causes of wanted and unwanted effects, such studies appear urgently needed. Alongside their specificity, effective interference with the cAMP pathway through inhibitors depends on their mechanism of action. Basically, inhibitors may act reversibly or irreversibly. Irreversible inhibitors bind to enzymes through covalent bonds. Covalent inhibitors have many desirable features, including increased biochemical efficiency of target disruption, reduced sensitivity toward pharmacokinetic parameters and increased duration of action that outlasts the pharmacokinetics of the compound. Only few inhibitors of this type, however, exist for anabolic and catabolic cAMP enzymes with the common ADCY inhibitor MDL-12,330A, a cyclo-alkyllactamide derivative supposedly representing an exception (153). Most inhibitors are reversible, bind to enzyme through non-covalent bonds, and typically address the ATP-binding site or the catalytic portion. With non-covalent inhibitors, cells can quickly become insensitive by recovering enzyme activity. To increase their activity, however, inhibitors can be coupled to proteins that regulate protein expression. A favorable example exists in proteolytic targeting, such as the ubiquitin proteasome system (UPS) (154). Proteolytic targeting chimeric molecules, or PROTACS comprise a UPS recognition motif coupled to an inhibitor via a linker. While a first generation of PROTACs suffered from limited cell-permeability, the second generation has been improved by using a HIF1α peptide fragment as an E3 ubiquitin ligase recognition motif to increase permeability (155). Thus, in addition to the development of more specific inhibitors to achieve selective interference, their inhibitory activity may be improved through proteolytic targeting, particularly by preventing target cell resistance.
CONCLUSION AND PERSPECTIVE
Because of its central importance as a universal regulator of metabolism and gene expression, systemic intervention in cAMP metabolism is associated with numerous, sometimes considerable, side effects. Additionally or alternatively to the development of isoform-specific AC and PDE inhibitors, new methods need to be found by which these inhibitors can be delivered to tissues and cells specifically. Novel strategies may encompass the development of highly specific agents, new routes of delivery (cutaneous, inhalation), or the use of nanoparticles for tissue- or even cell-specific drug delivery. Since cAMP signaling controls very different processes in different cells, a better understanding of cAMP-mediated activities in particular cell types could help pave the way to more specific interventions in cell function. Unlike anabolic and catabolic cAMP metabolism, cAMP signal transduction is so far targeted by very few drugs, and thus the potential use of such interventions remains unclear. Although known for over 60 years, cAMP signaling still reveals new functional details. Therapeutic intervention in its activities thus requires further elucidation of its role in individual cell types and its entanglements with other signaling and metabolic pathways.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. | 5,278 | 2016-03-31T00:00:00.000 | [
"Biology"
] |
Correlation and Reliability of Behavioral and Otoacoustic-Emission Estimates of Contralateral Medial Olivocochlear Reflex Strength in Humans
The roles of the medial olivocochlear reflex (MOCR) in human hearing have been widely investigated but remain controversial. We reason that this may be because the effects of MOCR activation on cochlear mechanical responses can be assessed only indirectly in healthy humans, and the different methods used to assess those effects possibly yield different and/or unreliable estimates. One aim of this study was to investigate the correlation between three methods often employed to assess the strength of MOCR activation by contralateral acoustic stimulation (CAS). We measured tone detection thresholds (N = 28), click-evoked otoacoustic emission (CEOAE) input/output (I/O) curves (N = 18), and distortion-product otoacoustic emission (DPOAE) I/O curves (N = 18) for various test frequencies in the presence and the absence of CAS (broadband noise of 60 dB SPL). As expected, CAS worsened tone detection thresholds, suppressed CEOAEs and DPOAEs, and horizontally shifted CEOAE and DPOAE I/O curves to higher levels. However, the CAS effect on tone detection thresholds was not correlated with the horizontal shift of CEOAE or DPOAE I/O curves, and the CAS-induced CEOAE suppression was not correlated with DPOAE suppression. Only the horizontal shifts of CEOAE and DPOAE I/O functions were correlated with each other at 1.5, 2, and 3 kHz. A second aim was to investigate which of the methods is more reliable. The test–retest variability of the CAS effect was high overall but smallest for tone detection thresholds and CEOAEs, suggesting that their use should be prioritized over the use of DPOAEs. Many factors not related with the MOCR, including the limited parametric space studied, the low resolution of the I/O curves, and the reduced numbers of observations due to data exclusion likely contributed to the weak correlations and the large test–retest variability noted. These findings can help us understand the inconsistencies among past studies and improve our understanding of the functional significance of the MOCR.
INTRODUCTION
The central nervous system can adjust the functioning of the inner ear via the olivocochlear efferent system. Some efferent fibers originate in the medial region of the superior olivary complex and terminate on the outer hair cells (OHCs) in the cochlea (Warr and Guinan, 1979). These fibers, termed medial olivocochlear (MOC) efferents, can be activated reflexively by sounds presented to the ipsilateral and/or the contralateral ear (Liberman and Brown, 1986;Brown et al., 2003). It has been suggested that the MOC reflex (MOCR) serves to protect the auditory system from acoustic overstimulation and to facilitate auditory perception in noise, among other. However, the evidence in support of these roles is mixed (reviewed by Fuente, 2015;Smith and Keil, 2015;Lopez-Poveda, 2018). Because the effects of the MOCR can be assessed only indirectly in healthy humans, the existing evidence is mostly based on correlations between a psychoacoustic measure of interest (e.g., noise-induced temporary threshold shifts or speech-in-noise recognition) and indirect estimates of the inhibition of basilar membrane (BM) responses by MOCR activation, often referred to as MOCR strength. Different studies have used different techniques to estimate MOCR strength. If the different methods yielded uncorrelated or unreliable estimates of MOCR strength, this could partly explain the discrepant findings regarding the roles of the MOCR in human hearing. The aim of the present study was to investigate the correlation and reliability of three different methods often employed to estimate MOCR strength in humans.
Activation of MOC efferents hyperpolarizes OHCs (Cooper and Guinan, 2003), turning down the gain of the cochlear amplifier at low-to-mid levels and linearizing BM input/output (I/O) curves (Murugasu and Russell, 1996;Dolan et al., 1997;Guinan, 2003, 2006;Guinan, 2006). For a tone in noise, MOC efferents inhibit the cochlear mechanical response to the noise and tone stimuli. As a result, auditory nerve fibers respond less to the background noise and show less 'compressed' rate-level functions (Winslow and Sachs, 1988;Kawase et al., 1993). Animal studies suggest that MOC efferents can protect the auditory system from acoustic trauma (Handrock and Zeisberg, 1982;Kujawa and Liberman, 1997;Maison and Liberman, 2000;Maison et al., 2013) and/or enhance the neural representation of transient stimuli in noisy backgrounds (Nieder and Nieder, 1970a,b). However, the results from human studies are not always consistent with these notions (Fuente, 2015;Lopez-Poveda, 2018).
In animals, the roles of MOC efferents have been studied by interrupting or sectioning the MOCR pathways (e.g., Handrock and Zeisberg, 1982;Warren and Liberman, 1989;Kujawa and Liberman, 1997;Maison et al., 2013). This approach is not always feasible in humans and vestibular neurectomy (the procedure employed to section olivocochlear efferents) is likely ineffective in cutting all olivocochlear efferents (Chays et al., 2003). For these reasons, many human studies have sought to establish a correlation between auditory perceptual tasks hypothesized to depend on the MOCR and an effect of MOCR activation on BM responses. Different methods have been used to assess MOCR effects. For example, many studies have estimated MOCR strength as the level change in click-evoked (CEOAEs) or distortion-product otoacoustic emissions (DPOAEs) induced by contralateral acoustic stimulation (CAS) (e.g., Giraud et al., 1997;Kumar and Vanaja, 2004;De Boer et al., 2012;Stuart and Butler, 2012;Abdala et al., 2014;Mishra and Lutman, 2014;Bidelman and Bhagat, 2015;Mertes et al., 2018Mertes et al., , 2019. Because a contralateral broadband noise (BBN) with sufficient level [≥30 dB sound pressure level (SPL); Moulin et al., 1993] activates the contralateral MOCR, and because otoacoustic emissions (OAEs) require OHC-mediated amplification (Shera and Abdala, 2012), the suppression of CEOAEs or DPOAEs by CAS is thought to be the result of the MOCR reducing cochlear gain. MOCR strength has been also estimated as the CAS-induced change in OAE I/O curves (Moulin et al., 1993;Veuillet et al., 1996;Abdala et al., 1999), in behaviorally inferred BM I/O curves (Yasin et al., 2014;Fletcher et al., 2016), and in tone detection thresholds (Kawase et al., 2003;Aguilar et al., 2015;Nogueira et al., 2019).
It is yet to be shown, however, that the different methods used to assess MOCR strength in humans yield reliable and correlated results. In fact, studies aimed at investigating the facilitating role of the MOCR in speech-in-noise recognition have shown discrepant findings when using different methods to assess MOCR strength. For instance, monaural speech reception thresholds (SRTs) for sentences in noise are correlated with CAS-induced CEOAE suppression (Bidelman and Bhagat, 2015) but not with DPOAE suppression (Mukari and Mamat, 2008). Strikingly, findings can be discrepant even when MOCR strength is assessed using the same method. For example, Bidelman and Bhagat (2015) found SRTs for sentences in noise to be correlated with CEOAE suppression, while Stuart and Butler (2012) did not, something remarkable considering that the two studies measured CEOAE suppression using identical stimuli [60 dB peak-equivalent SPL (pSPL) linear clicks at a rate of 50/s and contralateral BBN of 65 SPL]. Mertes et al. (2018) observed a correlation between CAS-induced CEOAE suppression and the slope of the psychometric function for words in noise, but Mertes et al. (2019) did not find such a correlation for the same speech material. Notably, Mertes et al. (2018) measured CEOAEs using 75 dB pSPL clicks while Mertes et al. (2019) used 65 dB pSPL clicks. It is possible that differences across studies in the speech tests or participants contribute to the discrepant findings, but it is also possible that the effects of CAS on CEOAEs and DPOAEs are not reliable or equivalent to assess MOCR strength.
Here, we investigate the correlation and reliability of three popular ways of assessing the strength of the contralateral MOCR in humans. We measured pure-tone detection thresholds at different frequencies as well as CEOAEs and DPOAEs for different test frequencies and levels (i.e., I/O curves). All measures were obtained with and without CAS to compare the "CAS effect" across measures. They were also obtained multiple times to assess the variability of the CAS effect for each measure. Low testretest variability together with a high correlation of the CAS effect between the different measures would support that the three measures are reliable and consistent, and thus serve equally to assess MOCR strength. By contrast, high test-retest variability and/or a lack of correlation between methods would indicate that different factors are probably involved in the CAS effects for each measure, which would help to understand the inconsistencies among studies and improve our understanding of the functional significance of the MOCR.
Participants
Twenty-eight subjects (21 women) with no self-reported history of hearing impairment participated in the study, although not all of them participated in every test (see below). Their mean age was 27.5 years (standard deviation, SD = 7.5 years; age range = 18-47 years). Air conduction audiometric thresholds were measured using a clinical audiometer (Interacoustics AD229e). All but three of the participants had air conduction audiometric thresholds ≤ 20 dB hearing level (HL) in both ears at frequencies between 125 Hz and 8 kHz (ANSI, 1996). The exceptions were two participants whose threshold was 25 dB HL at 8 kHz in the left and/or right ear, and another participant whose threshold was 60 dB HL at 8 kHz in the right ear. This latter participant was nevertheless admitted for testing because her thresholds were normal over the frequency range of interest for the present study (≤4 kHz). Twenty-six subjects had normal tympanograms (assessed using an Interacoustics AT235h clinical tympanometer and a test tone of 226 Hz at 85 dB SPL). Two listeners had slightly higher than typical values for ear-canal volume, compliance values, and/or tympanic peak pressure in one ear.
Participants were volunteers and not paid for their services.
Tone Detection Thresholds
Absolute detection thresholds in the presence and in the absence of CAS were measured for tones presented monaurally in the left ear of 15 participants and in the right ear of 13 participants (N = 28 participants in total). Pure tone frequencies were 0.5, 1.5, and 4 kHz. The duration of the tones was 300 ms, including 10-ms raised-cosine onset and offset ramps. The CAS was a BBN (0.01-10 kHz). This noise bandwidth was used because it produces the greatest MOCR activation Lilaonitkul and Guinan, 2009). The CAS level was 60 dB SPL. This level is capable of activating the MOCR with minimal or no activation of the middle-ear muscle reflex (Zhao and Dhar, 2010;Aguilar et al., 2013;Mishra and Lutman, 2013;Mertes and Leek, 2016;Feeney et al., 2017). The CAS had a duration of 850 ms, including 5-ms raised-cosine onset and offset ramps. A three-interval, three-alternative, forced-choice adaptive procedure was used to measure tone detection thresholds. Three intervals were presented to the listener accompanied by brief lights in a computer monitor, and the tone was presented in one of the intervals chosen at random. The lights were on for 850 ms, and the inter-interval time (the period between the offset and the onset of the lights) was 500 ms. In the conditions with CAS, the CAS was presented in the three intervals gated with the lights.
The tone started 500 ms after the light onset in the conditions with and without CAS. That is, the tone started 500 ms after the noise onset in the conditions with CAS. Because the MOCR is almost fully activated about 280 ms after the elicitor onset (Backus and Guinan, 2006), we assumed that the CAS-activated MOCR was fully active at the onset of the tone and remained active over the tone duration.
Participants were instructed to identify the interval containing the tone by pressing a key on the computer keyboard, and feedback was given on the correctness of their responses. The level of the tone decreased after two successive correct responses and increased after an incorrect response (two-down, one-up adaptive rule). The tone detection threshold was thus defined as the tone level giving 70.7% correct responses in the psychometric function (Levitt, 1971). The level of the tone changed by 6 dB until the second reversal in level occurred, and by 2 dB thereafter. The procedure continued until 12 level reversals occurred, and the detection threshold was defined as the mean of the tone levels at the last 10 reversals.
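The adaptive rule described above can be summarized in a short simulation. The sketch below is an illustrative Python implementation of the two-down, one-up staircase (not the authors' code); the simulated listener and its true threshold are hypothetical placeholders.

```python
# Minimal sketch of the two-down, one-up staircase: the level drops after two
# consecutive correct responses and rises after one incorrect response; steps are
# 6 dB until the second reversal and 2 dB thereafter; the run stops after 12
# reversals and the threshold is the mean level at the last 10 reversals (~70.7%).
import random

def two_down_one_up(respond, start_level=40.0, n_reversals=12):
    level, n_correct, direction = start_level, 0, 0
    reversal_levels = []
    for _ in range(1000):                      # safety cap on the number of trials
        step = 6.0 if len(reversal_levels) < 2 else 2.0
        if respond(level):
            n_correct += 1
            if n_correct < 2:
                continue                       # need two correct in a row to move down
            n_correct, move = 0, -1
        else:
            n_correct, move = 0, +1
        if direction != 0 and move != direction:
            reversal_levels.append(level)      # change of direction = reversal
            if len(reversal_levels) == n_reversals:
                break
        direction = move
        level += move * step
    last_ten = reversal_levels[-10:]
    return sum(last_ten) / max(1, len(last_ten))

# Example: simulated 3AFC listener (chance = 1/3) with a hypothetical 20 dB SPL threshold.
def listener(level, threshold=20.0):
    p_correct = 1/3 + (2/3) / (1 + 10 ** (-(level - threshold) / 4))
    return random.random() < p_correct

print(f"Estimated detection threshold: {two_down_one_up(listener):.1f} dB SPL")
```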
Tone thresholds with and without CAS were always measured in pairs without removing the earphones, to avoid measurement variance from the earphone fit, and the threshold without CAS was always measured first. A given pair of thresholds was discarded when the within-measure SD for one or both thresholds in the pair exceeded 4 dB. The exceptions were three participants for whom we accepted SD ≤ 6 dB at 0.5 kHz. Three threshold pairs (with and without CAS) were obtained for each tone frequency. When the across-measure SD of the three thresholds with or without CAS exceeded 4 dB, an additional pair of thresholds was measured. The three (or four) thresholds were averaged and the mean was taken as the tone detection threshold. Thresholds for the three test frequencies were measured in random order across participants.
Click-Evoked Otoacoustic Emissions (CEOAEs)
Click-evoked otoacoustic emissions for the same ear as tone detection thresholds were measured in the presence and in the absence of CAS. CEOAEs were measured using the linear method, in which the responses to four clicks of the same amplitude and polarity were averaged (Kemp et al., 1990). This method was used because, although the non-linear method is less sensitive to artifacts, it also cancels linear components of the OAEs and can remove a substantial part of the OAE from the recording (Shera and Abdala, 2012), including the linear part of the MOC effect (Guinan, 2006). For each CEOAE measurement, 1,024 clicks of 75 µs in duration were presented at a rate of 19 Hz. The use of click rates ≤25 Hz minimizes the probability of clicks activating the ipsilateral MOCR (Boothalingam and Purcell, 2015). A 19.5-ms response window was used to extract the CEOAE level from the average waveform. The window started 2.5 ms after the end of the click to minimize stimulus artifact. In addition to the overall CEOAE level, the spectrum of the recording was calculated to obtain CEOAE levels at five frequency bands centered at 1, 1.5, 2, 3, and 4 kHz.
Click-evoked otoacoustic emissions for click levels of 51, 54, 57, and 60 dB pSPL were measured in 18, 28, 18, and 20 participants, respectively (CEOAEs were measured only over this 9-dB range because 51 dB pSPL was the lowest level at which participants showed valid OAEs, i.e., very few participants showed valid responses at lower levels, and 60 dB pSPL was the highest level we could use without large artifacts in our system with the linear mode of stimulation). In other words, full CEOAE I/O functions (i.e., CEOAEs for the four click levels) were obtained in 18 participants. Eight CEOAE measures (of 1,024 clicks each) were obtained for each click level. Four measures were obtained without CAS and four measures were obtained with CAS. Measurements with and without CAS were interleaved. For any given click level and frequency band, the mean CEOAE level with or without CAS was calculated when at least three of the four pairs of measures (with and without CAS) were valid and when the across-measures SD was ≤3 dB both with and without CAS. A measure was regarded as valid when the signal-to-noise ratio (SNR) was ≥6 dB. The mean CEOAE level had to be at least 3 dB higher than the system's artifact level to be included in the analyses. If a measure did not meet these criteria, it was classified as "no response." The CAS had the same characteristics as described for tone detection thresholds, with the exception of its duration. Here, the CAS onset and offset were controlled manually by the experimenter. The CAS started one to two seconds before the presentation of the first click and was continuously on until one to two seconds after the presentation of the last click.
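As a minimal sketch (not the authors' code) of the per-condition inclusion rule described above, the following Python function applies the SNR, number-of-valid-measures, across-measure SD, and artifact-margin criteria to one set of four repeated measures; in the actual procedure the rule is applied to the paired measures with and without CAS. All input values are hypothetical.

```python
# Inclusion rule for one condition: a single measure is valid if SNR >= 6 dB; a mean
# level is computed only when at least three measures are valid, their SD is <= 3 dB,
# and the mean is at least 3 dB above the system's artifact level.
import statistics

def mean_oae(levels_db, snrs_db, artifact_level_db):
    valid = [lvl for lvl, snr in zip(levels_db, snrs_db) if snr >= 6.0]
    if len(valid) < 3:
        return None                 # "no response": too few valid measures
    if statistics.stdev(valid) > 3.0:
        return None                 # "no response": measures too variable
    mean_level = statistics.mean(valid)
    if mean_level < artifact_level_db + 3.0:
        return None                 # "no response": too close to the system artifact
    return mean_level

print(mean_oae([6.1, 5.8, 6.4, 6.0], [9.0, 8.5, 10.2, 7.8], artifact_level_db=-5.0))
```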
Distortion-Product Otoacoustic Emissions (DPOAEs)
For 18 participants, 2f1 − f2 DPOAEs were measured in the same ear as tone detection thresholds in the presence and the absence of CAS. The primary f2 frequencies were 1, 1.5, 2, 3, and 4 kHz, and the f2/f1 ratio was fixed at 1.2. The level of primary tone f2 (L2) ranged from 30 to 50 dB SPL in 5-dB steps, and the level of primary tone f1 was set equal to L1 = 0.4·L2 + 39, the rule proposed by Kummer et al. (1998) to obtain the largest DPOAEs for L2 ≤ 65 dB SPL. The duration of the primary tones was 225 ms, and the inter-tone duration was 42 ms. A DPOAE measure for a given f2 and L2 combination included 10 stimulus trials. Eight DPOAE measures (of 10 trials each) were obtained for each f2 and L2 combination, i.e., four measures were obtained with CAS and four measures were obtained without CAS in interleaved order. The criteria used to calculate the mean DPOAE level across measures were the same as for CEOAEs.
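A brief numerical illustration of the stimulus rules just described (a sketch, not the authors' code): the primary-level rule L1 = 0.4·L2 + 39 applied to the L2 range used here, and the primary and distortion-product frequencies implied by the fixed f2/f1 ratio of 1.2 for an example f2.

```python
# Primary levels from the Kummer et al. (1998) rule for the L2 values used in the study.
for L2 in range(30, 55, 5):                 # 30-50 dB SPL in 5-dB steps
    L1 = 0.4 * L2 + 39
    print(f"L2 = {L2} dB SPL -> L1 = {L1:.0f} dB SPL")

# Frequencies for an example f2 with the fixed f2/f1 ratio of 1.2.
f2 = 2000.0                                 # Hz (one of the f2 frequencies used)
f1 = f2 / 1.2                               # lower primary
f_dp = 2 * f1 - f2                          # 2f1 - f2 distortion-product frequency
print(f"f1 = {f1:.0f} Hz, 2f1 - f2 = {f_dp:.0f} Hz")
```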
The 2f1 − f2 DPOAE recorded in the ear canal is the vector sum of an OAE distortion component generated at the cochlear region tuned around the f2 primary tone and an OAE reflection component generated at the 2f1 − f2 cochlear region (Shera and Guinan, 1999; Talmadge et al., 1999; Kalluri and Shera, 2001; Shera and Abdala, 2012). CAS can affect the distortion and reflection components differently, and thus cause DPOAE levels to be sometimes greater in the CAS than in the control condition (Abdala et al., 2009; Deeter et al., 2009; Henin et al., 2011). For this reason, a suppressor tone near the 2f1 − f2 frequency was used in an attempt to suppress the reflection-source contribution to the DPOAE (Heitmann et al., 1998; Talmadge et al., 1999; Kalluri and Shera, 2001; Konrad-Martin et al., 2001; Johnson et al., 2006). The suppressor frequency was 64, 59, 54, 54, and 54 Hz below 2f1 − f2 when f2 was 1, 1.5, 2, 3, or 4 kHz, respectively. The levels of the suppressor (L3) were calculated according to Figure 8 of Johnson et al. (2006). However, because Johnson et al. (2006) observed variability of up to 15 dB in the optimal suppressor level across participants, we decided to use a suppressor level 10 dB below the level determined by the linear fit for their group data. We made that decision in an attempt not to affect the distortion-source component for subjects who needed a lower suppressor level than the mean to remove the reflection-source component contribution. We used the data centered at 2 kHz from Johnson et al. (2006) to calculate the suppressor levels for f2 = 1, 1.5, and 2 kHz, and the data centered at 4 kHz to calculate the suppressor levels for f2 = 3 and 4 kHz.
The CAS had the same characteristics as described for tone detection thresholds with the exception of the duration. Here, the CAS onset and offset was controlled manually by the experimenter, as for CEOAEs.
Apparatus
Pure tones and CAS were generated with custom-made Matlab software and played via an RME Fireface 400 soundcard at a sampling rate of 44.1 kHz, and with 24-bit resolution. Stimuli were presented to the participants using Etymotic ER-2 insert earphones. These earphones are designed to give a flat frequency response at the eardrum and have a nominal interaural attenuation of 70 dB that minimizes cross hearing. Stimuli were calibrated by coupling the earphones to a sound level meter (Brüel and Kjaer 2238) through a Zwislocki coupler (Knowles DB-100). Calibration was performed at 1 kHz and the measured sensitivity was applied to all frequencies.
CEOAE and DPOAE were measured using an Intelligent Hearing Systems Smart device (with SmartOAE software version 5.10) equipped with an Etymotic ER-10D probe. CEOAE stimuli were calibrated with a Zwislocki coupler (Knowles DB-100) by measuring peak intensity with a sound level meter (Brüel and Kjaer 2238). DPOAE stimuli were calibrated with the same Zwislocki coupler for each primary frequency (f1 and f2). In-the-ear pressure calibration was not performed. The system artifact was assessed by presenting clicks at different levels (CEOAEs) and different combinations of primary frequencies and levels (DPOAEs) to a microphone connected to the coupler.
Participants sat in a double-wall sound attenuating booth during all measurements. For tone detection thresholds, earphones were removed between each pair of measurements with and without CAS. Threshold pairs for a given probe frequency could be measured in the same or in different sessions, depending on the availability of the participant. The time lapse between sessions ranged from minutes to a few days (2.2 days on average). During OAE measurements, participants were asked to remain as steady as possible. The OAE probe remained in the participant's ear throughout the whole OAE measurement session to minimize measurement variance from altering the position of the probe in the ear canal. During OAE measurements, we did not control if participants were attending to the stimulus.
Quantification of CAS Effects
Contralateral acoustic stimulation was expected to activate the contralateral MOCR, and thus to linearize BM I/O curves by inhibiting the gain of the BM at low-to-moderate levels (Figure 1A). Assuming that the BM response at the tone detection threshold is the same with and without CAS, we expected tone detection thresholds to be higher (worse) with than without CAS (Figure 1A). Because DPOAEs and CEOAEs require OHC-mediated amplification and CAS reduces such amplification, we also expected CAS to suppress CEOAEs and DPOAEs (Figure 1B). We quantified the CAS effect as the difference (in dB) in tone threshold, CEOAE level, and DPOAE level in the CAS minus the control (no-CAS) condition, such that a positive threshold difference or a negative OAE difference would be consistent with BM inhibition/linearization. It would not be appropriate, however, to compare the increase in tone detection threshold with the suppression of OAE levels at any one click level or L2, because the former presumably quantifies the horizontal displacement of the BM I/O curve (also termed "effective attenuation"; Puria et al., 1996; Lichtenhan et al., 2016) (Figure 1A) while the latter probably quantifies the vertical displacement of the curve (Figure 1B). (Note that the horizontal and vertical displacements of the BM I/O curve are different when responses fall within the compressive region of the I/O curve.) For this reason, we also quantified the CAS effect as the horizontal displacement of the CEOAE and DPOAE I/O curves. To do so, we first fitted straight lines to the data without and with CAS (Figures 1C,D). The fitting was done only when CEOAEs were present for at least two of the four click levels and when DPOAEs were present for at least three of the five L2 levels. The correlation between the fit and the data was ≥0.90 for 86% of I/O curves with more than three data points, which shows that the choice of a linear fit was appropriate. An I/O curve was excluded from the analyses when the correlation of the fit was <0.75 in the condition with or without CAS (7% of the cases). The horizontal displacement of CEOAE I/O curves was then calculated by estimating the CEOAE level in the fitted line without CAS produced by a click of 54 dB pSPL, followed by the click level in the CAS-fitted function that produced that same CEOAE level. The horizontal displacement was the difference between this latter value and 54 dB pSPL (arrow in Figure 1C). The CEOAE level without CAS for 54 dB pSPL clicks was obtained by extrapolation when the subject had valid CEOAE responses for two higher click levels. The same procedure was applied to estimate the horizontal displacement of DPOAE I/O curves, except that the displacement was calculated relative to the DPOAE responses for L2 = 35 dB SPL (arrow in Figure 1D). We calculated the shifts relative to 54 dB pSPL clicks and L2 = 35 dB SPL because very few participants had OAEs at lower click and L2 levels (Table 1).
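The horizontal-shift calculation described above can be expressed compactly. The sketch below is an assumption of how it might be coded, not the authors' implementation: it fits straight lines to hypothetical I/O data with and without CAS and reads off the shift at the 54 dB pSPL reference click level.

```python
# Horizontal shift of an OAE I/O curve estimated from linear fits with and without CAS.
import numpy as np

def horizontal_shift(levels_no_cas, oae_no_cas, levels_cas, oae_cas, ref_level=54.0):
    # Fit straight lines (OAE level vs. stimulus level) for each condition.
    slope0, intercept0 = np.polyfit(levels_no_cas, oae_no_cas, 1)
    slope1, intercept1 = np.polyfit(levels_cas, oae_cas, 1)
    # OAE level produced by the reference stimulus level without CAS.
    ref_oae = slope0 * ref_level + intercept0
    # Stimulus level needed with CAS to produce that same OAE level.
    level_with_cas = (ref_oae - intercept1) / slope1
    return level_with_cas - ref_level   # positive = I/O curve shifted to higher levels

clicks = [51, 54, 57, 60]                # dB pSPL
ceoae_no_cas = [2.0, 4.5, 7.0, 9.5]       # hypothetical CEOAE levels (dB SPL), no CAS
ceoae_cas = [0.5, 3.0, 5.5, 8.0]          # hypothetical CEOAE levels (dB SPL), with CAS
print(f"Horizontal shift: {horizontal_shift(clicks, ceoae_no_cas, clicks, ceoae_cas):.1f} dB")
```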
Quantification of the Reliability of the CAS Effect
The test-retest variability of the different estimates of MOCR strength was assessed in two ways. First, we correlated the magnitude of the CAS effect for trials #1, #2, and #3 and fitted a straight line to the data. If the measures were reliable, i.e., if CAS effect were equal across the three repetitions, the slope would be equal to 1.
The second analysis involved calculating the standard deviation of the CAS effect across trials #1, #2, #3, and/or #4. The more reliable measure would produce the smallest SD across trials. For tone detection thresholds, the SD of the CAS effect was calculated for the three measures at a given frequency. (Fourth measures were not included in the analysis because they were obtained only for some participants and frequencies; see above). For CEOAEs and DPOAEs, the CAS effect was calculated for the first, second, third, and fourth measures, and then the SD of the CAS effect across these measures was calculated. The SD of the CAS effect was calculated when at least three of the four pair of measures (with and without CAS) were valid. In this case, we did not request the across-measures SD to be ≤3 dB because enforcing that criterion would have reduced the actual SD.
Statistical Analyses
[Figure 2 caption fragment: Mean and individual results for each condition are depicted in Figure 2; the number of data points per condition is given in Table 1, as some measurements did not meet the inclusion criteria (see section "Materials and Methods"), which precluded the use of RMANOVAs or Friedman's tests. Asterisks indicate statistically significant Bonferroni-corrected pairwise comparisons at *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001.]
Statistical analyses were performed using IBM SPSS v. 23. Normality was tested with the Shapiro-Wilk test, and parametric
or non-parametric tests were used as appropriate to evaluate the statistical significance of the CAS effect on tone detection thresholds, CEOAEs, and DPOAEs (Figure 2), as well as to evaluate the CAS effect for different probe frequencies (Figure 3) and levels (Figure 4). Pearson's coefficient of correlation was used to investigate if there was a correlation between the different estimates of contralateral MOCR strength (Figures 5, 6).
A score was regarded as an outlier when it lay more than 1.5 times the interquartile range below the first or above the third quartile. Outliers were not included in the correlations. Because OAE data were not available for all participants and conditions (Table 1), the statistical tests used in the study were chosen to make the best use of the available data. For example, for any given click level, multiple t-tests instead of a repeated-measures analysis of variance (RMANOVA) were used to analyze the effect of CAS at every test frequency (Figure 2B); the RMANOVA would have excluded participants with missing data in some conditions. Similarly, for CEOAEs and DPOAEs, correlations were performed separately for each probe frequency and level instead of averaging data across all stimulus frequencies and/or levels, something that would have been interesting. We applied two-tailed tests for all analyses. An effect was regarded as statistically significant when the null hypothesis could be rejected with 95% confidence (p ≤ 0.05). Unless otherwise stated, we applied Bonferroni corrections for multiple pairwise comparisons.
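As an illustrative sketch (not the authors' code) of the outlier rule and correlation analysis described above, assuming hypothetical paired CAS-effect values:

```python
# Tukey-fence outlier exclusion (1.5 x IQR beyond the quartiles) followed by a
# Pearson correlation on the remaining pairs. Values are hypothetical placeholders.
import numpy as np
from scipy import stats

x = np.array([1.2, 2.0, 1.8, 2.5, 0.9, 7.5, 1.6, 2.2])          # e.g., threshold shifts (dB)
y = np.array([-0.8, -1.5, -1.2, -2.0, -0.5, -0.6, -1.1, -1.7])  # e.g., OAE I/O shifts (dB)

def tukey_mask(v):
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    return (v >= q1 - 1.5 * iqr) & (v <= q3 + 1.5 * iqr)

keep = tukey_mask(x) & tukey_mask(y)     # drop pairs with an outlier in either variable
r, p = stats.pearsonr(x[keep], y[keep])
print(f"Pearson r = {r:.2f}, p = {p:.3f}, n = {keep.sum()}")
```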
CAS Effect on Tone Detection Thresholds, CEOAEs, and DPOAEs
The aims of this study are to investigate (1) the correlation between three different methods often used to assess MOCR strength in humans; and (2) which of the three methods is more reliable. Before addressing these aims, however, we explored whether the CAS had the expected effect of increasing tone thresholds and suppressing OAEs. Figure 2 shows tone detection thresholds (Figure 2A), CEOAE levels (Figure 2B), and DPOAE levels (Figure 2C) for all participants and test conditions. Note that there are fewer data points than participants tested because some data did not meet the inclusion criteria (see section "Materials and Methods"). Average CEOAEs and DPOAEs were 13.9 and 16.6 dB, respectively, above the average noise floor (mean across probe levels and frequencies). These values indicate good quality of the recorded OAEs. Analyses showed that CAS increased tone detection thresholds and tended to suppress CEOAEs and DPOAEs, as expected. This trend occurred for all conditions, but the number of statistically significant pairwise comparisons was relatively greater at higher than at lower probe levels [i.e., for 54 and 60 dB pSPL click levels (Figure 2B) or L2 = 45 or 50 dB SPL (Figure 2C)], probably because of the larger number of data points at higher levels (Table 1).
CAS Effect as a Function of Probe Frequency
We analyzed the CAS effect as a function of probe frequency to investigate to what extent our results are consistent across the three MOCR estimates as well as with previous studies. Figure 3A depicts the CAS effect on tone detection thresholds. The mean (±SD) magnitude of the CAS effect was 1.7 (±2.0), 2.3 (±2.2), and 2.0 (±1.2) dB for 0.5, 1.5, and 4 kHz, respectively. Friedman's test did not reveal significant differences in the magnitude of the CAS effect across frequencies [χ²(2) = 1.5, p = 0.472]. This result is consistent with Aguilar et al. (2015) and Nogueira et al. (2019), who did not find significant differences in the effect of CAS on detection thresholds for 0.5- and 4-kHz tones when the duration of the tones was ≥200 ms. By contrast, Kawase et al. (2003) found a greater CAS effect at 2 kHz than at lower or higher frequencies.
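The Friedman test reported here can be reproduced in outline with scipy; the values below are hypothetical and only illustrate the within-subject layout (participants × frequencies), not the study's data.

```python
# Friedman test comparing the CAS effect on tone thresholds across three probe
# frequencies measured in the same participants.
import numpy as np
from scipy.stats import friedmanchisquare

# rows = participants, columns = CAS effect (dB) at 0.5, 1.5, and 4 kHz
effect = np.array([
    [1.0, 2.5, 2.0],
    [0.5, 1.8, 2.2],
    [3.0, 4.1, 1.9],
    [1.2, 0.9, 2.4],
])
stat, p = friedmanchisquare(effect[:, 0], effect[:, 1], effect[:, 2])
print(f"chi2(2) = {stat:.2f}, p = {p:.3f}")   # a non-significant p would mirror the reported result
```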
On the other hand, the magnitude of the present CAS effect is comparable to that reported elsewhere (Aguilar et al., 2015; but see Kawase et al., 2003).
The magnitude of the CAS effect on the 1.5- and 3-kHz components of CEOAEs is depicted in Figure 3B. Results are shown only at the two frequencies with the largest number of participants (Table 1). Our results are consistent with previous studies that found a greater CAS effect on CEOAEs for frequency bands centered at or around 1.5 kHz than at 3 kHz (Francis and Guinan, 2010; Lisowska et al., 2014). In addition, the magnitude of CEOAE level suppression is in line with previous studies. For example, Francis and Guinan (2010) used 50 dB pSPL clicks and a contralateral BBN of 60 dB SPL and found CEOAE suppression of about −1.5 dB for frequency bands ≤2.75 kHz, and of about −0.5 dB for frequency bands between 3.25 and 5.25 kHz. Those values are close to the present estimates for 54 dB pSPL clicks, the closest level.
Altogether, the trend and magnitude of the present CAS effects are consistent with those reported in previous studies. We found that the CAS effect on CEOAEs and DPOAEs tended to be greater at lower frequencies, whereas it was fairly constant across frequencies for tone detection thresholds. This shows that the frequency dependence of the CAS effect differed between the behavioral and OAE measures.
CAS Effect as a Function of Probe Level
Most physiological studies have shown that MOC activation suppresses BM responses (Murugasu and Russell, 1996; Dolan et al., 1997; Cooper and Guinan, 2006) and the compound action potential (Puria et al., 1996) more at lower than at higher levels, that is, over the range of stimulus levels where the cochlear amplifier gain is greatest (Robles and Ruggero, 2001). The CAS-induced suppression of CEOAEs is also usually greater at lower than at higher click levels (Hood et al., 1996; Veuillet et al., 1996; De Boer and Thornton, 2007; De Boer et al., 2012; Mishra and Lutman, 2013). Figure 4A shows the CAS effect on CEOAE levels as a function of click level for the 1.5 and 3 kHz frequency bands. The amount of CEOAE suppression was not significantly different for 54, 57, and 60 dB pSPL clicks, either at 1.5 kHz [one-way RMANOVA: F(2,18) = 1.46, p = 0.258] or at 3 kHz [one-way RMANOVA: F(2,18) = 2.17, p = 0.143]. The absence of a level effect may be due to the narrow range of click levels studied. For example, Hood et al. (1996) found CEOAE suppression to be similar for 50, 55, and 60 dB pSPL clicks, and greater for those lower levels than for 65 or 70 dB pSPL clicks.
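The one-way RMANOVAs reported in this paragraph correspond, in outline, to the following sketch; the long-format table is hypothetical and balanced, and the authors used SPSS rather than this code.

```python
# One-way repeated-measures ANOVA testing whether CEOAE suppression differs across
# the three click levels (54, 57, 60 dB pSPL) within subjects.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical, balanced long-format table: one suppression value per subject and level.
data = pd.DataFrame({
    "subject":     [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "click_level": [54, 57, 60] * 3,
    "suppression": [1.1, 0.9, 0.7, 1.4, 1.2, 1.0, 0.8, 0.9, 0.6],  # dB
})
res = AnovaRM(data, depvar="suppression", subject="subject", within=["click_level"]).fit()
print(res.anova_table)   # F and p for the click-level effect, analogous to the F(2,18) values above
```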
Within-Subject Correlation of CAS Effect Across Methods
In a first analysis, we investigated the hypothesized within-subject correlation between the CAS-induced increase in tone detection threshold and the horizontal displacement of CEOAE or DPOAE I/O curves. Results are shown in Figure 5. We found the expected trend only for DPOAEs, although the correlations were far from statistically significant (Figure 5B). The pattern of trends suggests that increasing the sample size might bring the correlation between threshold shifts and the horizontal displacement of DPOAE I/O curves closer to statistical significance, but would be unlikely to reveal a correlation between threshold shifts and CEOAE I/O curve shifts. In other words, although the CAS-induced increase in tone detection thresholds and the horizontal displacement of CEOAE I/O curves are both expected to result from the MOCR linearizing BM responses (Figure 1), those measures are not equivalent in revealing MOCR effects, at least when using the limited range of click levels used here. Our result agrees with Fletcher et al. (2016), who did not find a correlation between CEOAE suppression and the reduction of cochlear mechanical gain inferred from temporal masking curves.
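A brief sketch of the within-subject correlation tested here is given below; the study used Pearson's correlation as stated in the Methods, but the values are hypothetical.

```python
# Each point pairs a participant's CAS-induced threshold shift with the horizontal
# displacement of their OAE I/O curve; the Pearson correlation tests the association.
import numpy as np
from scipy.stats import pearsonr

threshold_shift_db = np.array([1.5, 2.8, 0.9, 3.2, 2.1, 1.0])   # CAS effect on tone thresholds (dB)
io_curve_shift_db  = np.array([2.0, 1.1, 0.4, 2.9, 1.7, 0.8])   # horizontal I/O displacement (dB)
r, p = pearsonr(threshold_shift_db, io_curve_shift_db)
print(f"r = {r:.2f}, p = {p:.3f}")
```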
In a second analysis, we investigated the hypothesized within-subject correlation between the CAS-induced suppression of CEOAEs and DPOAEs obtained at fixed and roughly matched levels. BM responses to tones can be predicted from BM responses to clicks (Recio et al., 1998), but click and tone levels must be different to obtain the same BM response magnitude with the two stimuli. For example, in Recio et al. (1998), BM responses to 54-dB pSPL clicks accurately predicted the magnitudes of BM responses to 40-dB SPL tones in the chinchilla cochlea. Here, it is hard to know which click level and L2 produced the same BM response magnitude without CAS. For this reason, we opted to correlate the CAS effect for the conditions with the greater number of data points. Figure 6A shows the within-subject correlation of the CAS effect on CEOAEs for 60 dB pSPL clicks and on DPOAEs for L2 = 50 dB SPL. The correlation was not significant at any probe frequency. Moreover, the expected trend occurred only at 4 kHz. Complementary analyses (not shown) revealed no significant correlations when using 60 dB pSPL clicks and L2 of 45 dB SPL, or 57 dB pSPL clicks and L2 of 50 or 45 dB SPL. The lack of correlation suggests that it is not appropriate to assume that the CAS effects on CEOAEs for a single click level and on DPOAEs for a single L2 provide related information or can be used equivalently. It remains uncertain, however, whether associations would emerge across a broader parametric range (e.g., for other probe levels or when averaging across several probe levels).
In a third and last analysis, we investigated a potential within-subject correlation between the CAS-induced horizontal displacements of CEOAE and DPOAE I/O curves. Figure 6B shows that the expected trend occurred at intermediate frequencies (1.5, 2, and 3 kHz), and that the correlation was indeed statistically significant at 2 kHz. This suggests that the horizontal displacements of CEOAE and DPOAE I/O curves may be used somewhat 'equivalently,' at least at these frequencies.
It remains uncertain, however, to what extent these displacements reflect MOCR effects. For instance, Lichtenhan et al. (2016) measured the CAS effect on the horizontal displacement of DPOAE I/O functions and on compound action potential I/O functions in humans and found the average trends to be discrepant (their Figure 4), which suggests that factors different from the MOCR are involved in one or both measures.
FIGURE 8 | Across-measures standard deviation of the CAS effect for tone detection thresholds, CEOAEs, and DPOAEs. Results are for different probe frequencies, click levels (CEOAEs) and L2 (DPOAEs). Open symbols depict mean data and filled symbols depict individual results. The numbers above each set of circles indicate the number of participants included in the analysis. One outlier for L2 = 30 dB SPL and f2 = 3 kHz is not shown in the figure and was omitted from the mean.
In summary, the CAS effect on tone detection thresholds was not correlated with the horizontal displacement of CEOAE and DPOAE I/O curves measured in the same subject. Similarly, for fixed stimulus levels, CAS-induced CEOAE suppression was not correlated with CAS-induced DPOAE suppression. The results also showed, however, that the horizontal displacements of CEOAE and DPOAE I/O curves were correlated with each other, at least for mid-frequency probes. The overall lack of correlation can be due to many factors, including the limited parametric space studied (e.g., the click and primary levels used here may represent different points in the CEOAE and DPOAE amplitude growth functions), the limited resolution of the I/O curves (e.g., the 9 dB range of click levels for CEOAE I/O curves may be too narrow to properly define the amplitude growth function), and/or the reduced number of observations due to data exclusion. Low reliability of the measures could be a further cause (see below).
Reliability of CAS Effects
In the preceding sections, it has been shown that the three different methods used to estimate MOCR strength are not correlated with each other (Figures 5, 6). This can be partly due to the low reliability of the measures. Figure 7 illustrates across-trial correlations for tone detection thresholds (Figures 7A,B), CEOAEs (Figures 7C,D), and DPOAEs (Figures 7E,F) in the conditions without (left panels) and with CAS (mid panels), as well as for the CAS effect (right panels). In all panels, the dashed lines illustrate 1-to-1 test-retest correspondence, i.e., zero test-retest variability. For measures obtained with and without CAS, most symbols are located along the dashed line, indicating small test-retest variability. By contrast, for the CAS effect, symbols are away from the dashed line (right-most column in Figure 7), indicating high test-retest variability. This variability can be quantified by the slope of a linear fit to the data in each panel of Figure 7. For tone detection thresholds at 4 kHz (Figure 7B), the slope of the fitted function for measures #2 and #3 (red line) was 0.94 dB/dB in the condition without CAS, 0.88 dB/dB in the condition with CAS, and 0.10 dB/dB for the CAS effect. Because the slope was very different from 1 dB/dB in the latter case, we conclude that the CAS effect on 4-kHz tone detection thresholds is not reliable. Similar patterns are observed for CEOAEs (Figures 7C,D) and DPOAEs (Figures 7E,F), which is surprising considering that OAE measures with and without CAS were obtained without removing the OAE probe from the participant's ear. Altogether, the present results indicate that neither CAS-induced increases in tone threshold nor OAE suppression are reliable estimates of MOCR strength.
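The slope-based reliability check can be sketched as follows; the numbers are hypothetical, and a slope near 1 dB/dB indicates good test-retest correspondence whereas a slope near 0 indicates poor reliability.

```python
# Fit a line to test-retest values; the slope quantifies how well one trial predicts another.
import numpy as np

def test_retest_slope(trial_a, trial_b):
    """Least-squares slope of trial_b regressed on trial_a (dB/dB)."""
    slope, _intercept = np.polyfit(trial_a, trial_b, deg=1)
    return slope

# Hypothetical CAS effects (dB) measured twice in the same participants
trial2 = np.array([0.5, 2.0, 1.0, 3.0, 1.5])
trial3 = np.array([1.8, 1.1, 2.2, 0.9, 1.6])   # poorly related to trial2
print(f"slope = {test_retest_slope(trial2, trial3):.2f} dB/dB")  # near 0 -> low reliability
```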
If one of the three methods considered here must be chosen to estimate MOCR strength, however, it would be useful to know which one is the most reliable, so that its use can be prioritized over the other ones. Figure 8 shows the SD of the CAS effect across different trials for tone detection thresholds, CEOAEs, and DPOAEs. At all frequencies, the SD was greater for DPOAEs than for tone detection thresholds or CEOAEs, demonstrating that CEOAEs and tone detection thresholds provide more reliable estimates of CAS effects than do DPOAEs. However, as noted earlier, the reliability of the CAS effect on tone detection thresholds or CEOAEs can also be low at some frequencies (Figure 7).
The greater test-retest reliability of the CAS effect for CEOAEs than for DPOAEs is consistent with previous studies. Stuart and Cobb (2015) and Mertes and Goodman (2016) used Cronbach's alpha, where a value of 1 indicates perfect reliability, to assess the intra-session test-retest reliability of the CAS effect on CEOAEs. Stuart and Cobb (2015) reported a Cronbach's alpha greater than 0.8, and Mertes and Goodman (2016) reported a Cronbach's alpha greater than 0.95. Kumar et al. (2012) and Kalaiah et al. (2018), however, reported a mean (across frequency) intra-session Cronbach's alpha for DPOAEs of 0.5 and 0.3, respectively. It is unclear why the reliability of the CAS effect was lower for DPOAEs than for CEOAEs. Kumar et al. (2012) and Kalaiah et al. (2018) proposed that attentional status might have contributed to the low test-retest reliability of DPOAEs. We, however, measured CEOAEs and DPOAEs in the same participants, and there is no reason to think that attentional status was more variable when measuring DPOAEs than CEOAEs. On the other hand, one might argue that the large test-retest variability for DPOAEs is related to the dual-source (reflection and distortion) generation mechanism. DP-grams, however, are highly stable across measurement sessions (Gaskill and Brown, 1990; Zhang et al., 2007). That is, the amplitude and phase of the reflection and distortion components would not change (or not too much) from one trial to another. Hence, although possible, it is uncertain how the MOCR would differentially affect the reflection and/or distortion components from one trial to another.
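For reference, Cronbach's alpha, the reliability index cited above, can be computed as in this sketch; the data are hypothetical.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of per-trial variances / variance of sums),
# where a value near 1 indicates high intra-session test-retest reliability.
import numpy as np

def cronbach_alpha(measures):
    """measures: 2-D array, rows = participants, columns = repeated trials."""
    measures = np.asarray(measures, dtype=float)
    k = measures.shape[1]                                  # number of trials
    item_variances = measures.var(axis=0, ddof=1).sum()    # sum of per-trial variances
    total_variance = measures.sum(axis=1).var(ddof=1)      # variance of participants' sums
    return (k / (k - 1)) * (1 - item_variances / total_variance)

cas_effect_trials = np.array([[1.0, 1.1, 0.9],
                              [2.1, 2.3, 2.0],
                              [0.4, 0.6, 0.5],
                              [1.7, 1.5, 1.8]])
print(round(cronbach_alpha(cas_effect_trials), 2))         # close to 1 -> high reliability
```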
The present SDs of the CAS effect (Figure 8) are also in line with previous studies. Stuart and Cobb (2015) reported an intra-session SD of ∼0.4 dB when they used 60 dB pSPL clicks. We observed a mean (across frequency) SD of 0.7 dB for 60 dB pSPL clicks, the most similar condition. Kumar et al. (2012) quantified the variability of the CAS effect on DPOAEs as the standard error across measures. They reported a mean (across frequency) standard error of 0.8 dB for L2 = 55 dB SPL. Here, the across-frequency mean standard error was 1.0 dB for L2 = 50 dB SPL.
GENERAL DISCUSSION
We have shown that, on average, the use of a contralateral broadband noise increased tone detection thresholds, suppressed CEOAEs and DPOAEs, and horizontally shifted CEOAE and DPOAE I/O curves to higher levels, as expected. However, no correlations were found between the CAS effect on tone detection thresholds and the horizontal displacement of CEOAE or DPOAE I/O curves (Figure 5), or between the CAS-induced suppression of CEOAEs and DPOAEs for a given stimulus level (Figure 6A). The horizontal displacements of CEOAE and DPOAE I/O curves were, however, correlated with each other, at least for the conditions with the greatest number of subjects (Figure 6B). We also found that the CAS effect on tone detection thresholds and CEOAEs showed the lowest test-retest variability, suggesting that their use should be prioritized over the use of DPOAEs.
Possible Factors Responsible for the Lack of Correlation Across the Different Measures
The lack of correlation across the different MOCR strength estimates may be due to various factors. First, the restricted parameter space employed could be one reason. We correlated CAS effects on CEOAEs and DPOAEs for fixed probe levels that may represent different points in the CEOAE and DPOAE amplitude growth functions and thus result in weak or absent correlations. In addition, CEOAE and DPOAE I/O curves comprised only 2 or 3 points for some participants. This limited I/O curve resolution may have been insufficient to accurately define the amplitude growth. Further studies should test whether correlations emerge after exploring a broader parametric range.
Second, the CAS-induced increments in tone detection thresholds may reflect 'central masking' in addition to, or instead of, a linearization of BM responses by contralateral MOCR activation. That is, the CAS could have interacted with the test tone in the central auditory nervous system, making tone detection harder, a phenomenon referred to as 'central masking.' Evidence in favor of central masking on tone detection thresholds has been reported previously. Smith et al. (2000) demonstrated that, in macaques, the tone threshold increment with CAS remained to some extent when MOC efferents were sectioned. Marrufo-Pérez et al. (2018) showed that detection thresholds for short (50 ms) tones increased more when the tone and CAS onsets coincided (early condition) than when the tone onset was delayed 300 ms from the CAS onset (late condition). Because the time course of MOCR activation is around 300 ms (Backus and Guinan, 2006), one would expect greater threshold increments in the 'late' than in the 'early' condition if the MOCR were the only mechanism responsible for the increments, but this was not the case. In addition, several studies have demonstrated that bilateral cochlear implant users show an increase in the detection threshold of a probe signal in the presence of contralateral electric stimulation (Van Hoesel and Clark, 1997; James et al., 2001; Lin et al., 2013; Aronoff et al., 2015; Lee and Aronoff, 2018), even though the electrical stimulation delivered by cochlear implants bypasses OHCs and hence is independent of the MOCR. It is hard to differentiate the contribution of the MOCR from that of central masking. In addition, it is uncertain why some participants showed lower tone detection thresholds with than without CAS (Figure 3A), especially considering that both MOCR activation and central masking should have resulted in higher tone detection thresholds.
Third, the attentional state of the participants during the OAE measurements might have affected the results. Several studies have reported that auditory or visual selective attention can alter transient evoked OAEs (Froehlich et al., 1993a; De Boer and Thornton, 2007; Garinis et al., 2011; Namasivayam et al., 2015), DPOAEs (Srinivasan et al., 2012, 2014; Wittekindt et al., 2014), or the compound action potential (Delano et al., 2007), presumably by activation of the MOCR. In the present tone-detection experiment, participants had to attend to both visual (the lights displayed on the computer screen) and acoustic cues (the tones) during the measurements. CEOAEs and DPOAEs, by contrast, were recorded without controlling the attentional state of the participant. Therefore, it is uncertain if and to what extent participants were attending to the acoustic stimuli during OAE measurements. Moreover, some participants slept during OAE recordings, and sleeping can decrease efferent activity (Froehlich et al., 1993b). These factors could be partly responsible for the weak (or lack of) correlation between the CAS effects on tone detection thresholds, CEOAEs, and/or DPOAEs.
Fourth, the middle-ear muscle reflex (MEMR) could have confounded the results to some extent. We set the level of the contralateral BBN at 60 dB SPL because this level has often been used as an MOCR elicitor (e.g., Abdala et al., 1999, 2009, 2014; Wagner et al., 2008; Francis and Guinan, 2010; Wicher and Moore, 2014; Aguilar et al., 2015; Boothalingam and Purcell, 2015; Mertes and Leek, 2016). Using the same level of CAS for all participants, however, may not be ideal because some listeners can have an MEMR threshold as low as 50 dB SPL for BBN (Zhao and Dhar, 2010; Feeney et al., 2017). The contraction of the middle-ear muscles changes middle-ear transmission and hence OAEs. If our contralateral stimulation activated the MEMR in some participants but not in others, this would introduce uncertainty and variability in the measures of MOCR strength. In addition, if the probability of MEMR activation was different for DPOAE, CEOAE, or threshold measurements, this could have contributed to the poor correlation and reliability of the measures.
Fifth, the lack of correlation between CAS-induced CEOAE and DPOAE suppression for a given stimulus level (i.e., for a given click level and L2) may have occurred, to some uncertain extent, because the third tone used when measuring DPOAEs did not totally suppress the reflection component. CAS changes the phase of the reflection component but not (or not so much) the phase of the distortion component, thus resulting in an increase of the DPOAE level when the two components change from canceling each other in the condition without CAS to combining constructively in the condition with CAS (Deeter et al., 2009; Henin et al., 2011). As described in the section "Materials and Methods," the level of the suppressor tone was 10 dB below that suggested by Johnson et al. (2006). This level may have been insufficient to suppress the reflection-source contribution for some participants, which could explain why CAS sometimes enhanced rather than suppressed DPOAEs (e.g., Figures 3C, 4B), making the CAS effect on DPOAEs an unreliable MOCR estimate.
Sixth, we did not control for the effects of standing waves in the ear canal, which can be present above 2-3 kHz and lead to inaccurate measurement of stimulus levels. Standing waves occur when the stimulus presented to the ear canal (forward waveform) interacts with the stimulus reflected from the eardrum (backward waveform). These waveforms can enhance or cancel each other when they are in phase or out of phase, respectively, resulting in a difference in probe level between the microphone and the eardrum of up to 20 dB (Stinson et al., 1982; Siegel, 1994). If standing waves were present during OAE recordings, they probably introduced noise into the measurements and, consequently, into the MOCR strength gauged by OAEs.
Seventh, as described previously, the test-retest repeatability of the CAS effect for tone detection thresholds, CEOAEs, and DPOAEs was very low for some probe frequencies (Figure 7), even though the various OAE trials (with and without CAS) were measured in a single session without removing the OAE probe. This low reliability can also contribute to the low (or lack of) correlation across the three methods used to estimate MOCR strength. It is uncertain why the test-retest repeatability was low. One or more of the factors described in the preceding paragraphs (e.g., attentional status) could be responsible for it. Complementary Bland-Altman analyses (Bland and Altman, 1999) revealed that there was no systematic bias of the measures from trial #1 to #3, i.e., the difference of the CAS effect between trials 1 and 3 was zero on average for tone detection thresholds, CEOAEs, and DPOAEs (results not shown). This indicates that the factor(s) causing the low repeatability of the measures were independent of trial order.
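The Bland-Altman bias check mentioned here reduces, in outline, to the following sketch with hypothetical values.

```python
# Bland-Altman bias check: for each participant the difference in CAS effect between
# trials #1 and #3 is computed; a mean difference near zero indicates no systematic
# drift with trial order.
import numpy as np

cas_trial1 = np.array([1.2, 0.4, 2.1, 0.9, 1.6])   # CAS effect, trial #1 (dB)
cas_trial3 = np.array([0.8, 0.9, 1.7, 1.4, 1.5])   # CAS effect, trial #3 (dB)

diff = cas_trial3 - cas_trial1
bias = diff.mean()                                  # systematic bias between trials
loa = 1.96 * diff.std(ddof=1)                       # half-width of 95% limits of agreement
print(f"bias = {bias:.2f} dB, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] dB")
```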
Eighth, all participants were normal-hearing listeners with presumably normal efferent system reflexes. It is possible that the natural scatter or variation in MOCR strength was not large enough to reveal a correlation, if one exists.
Lastly, a potential problem is that CAS-induced changes in OAE level need not reflect the reduction in cochlear-amplifier gain, as is usually assumed. For example, Berezina-Greene and Guinan (2017) demonstrated that SFOAE amplitude increased at some frequencies and decreased at others when MOC efferents were activated by brainstem shocks in guinea pigs, and the increments occurred even though the animals showed a reduction in cochlear-amplifier gain. Similar results might occur for CEOAEs insofar as CEOAEs and SFOAEs are generated by the same mechanism (Kalluri and Shera, 2007; Francis and Guinan, 2010; Shera and Abdala, 2012). Indeed, we found that CAS sometimes increased CEOAEs in some conditions (e.g., Figure 3A).
In summary, because the correspondence between the two OAE indices was explored across a limited (and maybe not always matching) range of stimulus levels and frequencies, and because many factors were not, or could not be, controlled for (e.g., central masking), it is not surprising that correlations were not observed. Further research is needed to investigate which factors are mostly responsible for this lack of correlation and how their effects can be controlled for, as well as to design better measures of MOCR strength in humans.
Implications
Current evidence supporting the roles of the MOCR in human hearing is mixed (reviewed by Fuente, 2015; Smith and Keil, 2015; Lopez-Poveda, 2018). The discrepant results across studies can be due to some extent to the methodology used. As in our study, many previous studies did not control for the attentional state of the participants (e.g., Kumar and Vanaja, 2004; Kim et al., 2006; Bidelman and Bhagat, 2015), the presence of fine structure in DPOAEs (e.g., Kim et al., 2006; Mukari and Mamat, 2008), or the presence of standing waves (e.g., Giraud et al., 1997; Kumar and Vanaja, 2004; Kim et al., 2006; Mukari and Mamat, 2008; Bidelman and Bhagat, 2015). In addition, many studies correlated a single estimate of the contralateral MOCR strength (e.g., CAS-induced suppression of CEOAEs for a single click level or of DPOAEs for a single combination of primary levels L1 and L2) with performance scores in a psychoacoustical task of interest (e.g., Kumar and Vanaja, 2004; Kim et al., 2006; Mukari and Mamat, 2008; Stuart and Butler, 2012; Mishra and Lutman, 2014; Bidelman and Bhagat, 2015; Mertes et al., 2018, 2019). Here, we have shown that CEOAE suppression for a given click level is not correlated with DPOAE suppression for a given L2 (Figure 6A). Hence, it is not surprising that studies reached different conclusions about the roles of the MOCR in human hearing when the MOCR strength was estimated with two different methods and a single probe level [e.g., Mukari and Mamat (2008) and Bidelman and Bhagat (2015)]. On the other hand, some studies have measured DPOAE or CEOAE suppression by performing only one measure without CAS and another measure with CAS (e.g., Kumar and Vanaja, 2004; Stuart and Butler, 2012; Bidelman and Bhagat, 2015; Mertes et al., 2019). Here, we have shown that the suppression of CEOAEs and DPOAEs can be highly variable from trial to trial (Figures 7, 8). Therefore, it is not surprising that findings were also discrepant across studies that aimed at investigating the roles of the MOCR in human hearing using the same methodology but assessing MOCR strength with only one measure without and with CAS [e.g., Stuart and Butler (2012) and Bidelman and Bhagat (2015)].
Altogether, our study suggests that many confounding factors enter into MOCR measurement and that previous studies may have used a simplistic way of evaluating MOCR strength. How to optimize MOCR measurements must be addressed in further studies. In addition, other ways of analyzing OAEs could be explored. For example, Dragicevic et al. (2019) studied low-frequency (1-35 Hz) oscillatory amplitude changes in DPOAEs and electroencephalography to assess whether cortical oscillations modulate cochlear responses during selective attention. Their results were consistent with their hypothesis, and they proposed the auditory efferent system as the most probable neural pathway responsible for modulating cochlear responses. It is possible that such novel methods for OAE analysis will help to investigate the roles of the MOCR in human hearing.
CONCLUSION
(1) On average, contralateral acoustic stimulation (CAS) increased tone detection thresholds and decreased CEOAE and DPOAE levels in normal hearing listeners.
(2) The magnitude of the CAS effect tended to be greater for lower (1.5 kHz) than for higher (3-4 kHz) frequencies for CEOAEs and DPOAEs. The effect of CAS on tone detection thresholds, however, was similar in magnitude for 0.5, 1.5, and 4 kHz probe tones.
(3) The CAS effect on CEOAEs was not different for 54, 57, and 60 dB pSPL clicks. The CAS effect on DPOAEs was greater for L2 = 35 dB SPL than for L2 = 50 dB SPL at 1.5 but not at 4 kHz.
(4) The CAS-induced change in tone detection thresholds was not correlated with the CAS-induced horizontal displacement of CEOAE or DPOAE I/O curves.
(5) The CAS effect on CEOAEs for a given click level was not correlated with the CAS effect on DPOAEs for a given L2.
(6) The horizontal displacements of CEOAE and DPOAE I/O curves induced by CAS tended to be correlated with each other, at least for the conditions with the greater number of data points.
(7) The test-retest variability of the CAS effect was high overall but smaller for tone detection thresholds and CEOAEs than for DPOAEs.
(8) The weak correlations and poor reliability observed here could be related to inherent limitations of the study, such as the small range of click and L2 levels used, and/or to factors not related to the MOCR. Nonetheless, the present findings show that the different estimates of MOCR strength cannot be used interchangeably on the assumption that they provide similar results.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Comité de Bioética, Universidad de Salamanca. The participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
MM-P performed the research, analyzed the data, and wrote the first draft of the manuscript. MM-P and EL-P edited the manuscript and wrote the manuscript. PJ provided technical tools. EL-P designed the research. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by Ministerio de Ciencia e Innovación (grant PID2019-108985GB-I00) and the European Regional Development Fund.
| 13,129.2 | 2021-02-16T00:00:00.000 | [ "Physics" ] |
Distributed MEMS Sensors Using Plasmonic Antenna Array Embedded Sagnac Interferometer
A micro Sagnac interferometer formed by an integrated micro-electro-mechanical system (MEMS) resonator structure is proposed for electron cloud distributed sensors. The Sagnac interferometer consists of four microring probes integrated into a Sagnac loop. Each of the microring probes is embedded with silver bars to form the plasmonic wave oscillation. Polarized light of 1.50 µm wavelength is input into the interferometer and split into upstream and downstream directions. The polarization outputs can be controlled by the space–time input at the Sagnac port. Electrons are trapped and oscillated by the whispering gallery modes (WGMs), from which the plasmonic antennas are established and applied as wireless fidelity (WiFi) and light fidelity (LiFi) sensing probes, respectively. The four antenna gains are 2.59 dB, 0.93 dB, 1.75 dB, and 1.16 dB, respectively. In manipulation, the sensing probe electron densities are changed by varying the input source power. When the electron cloud is excited by the microscopic medium, the change in electron density is obtained and related to the required parameters. Such a system is a novel device that can be applied for brain–device interfacing with the dual-mode sensing probes. The obtained WGM sensor sensitivities are 1.35 µm−2, 0.90 µm−2, 0.97 µm−2, and 0.81 µm−2, respectively. The WGMs behave as a four-point probe for the electron cloud distributed sensors, where electron cloud sensitivities of 2.31 prad s−1 mm3 (electrons)−1, 2.27 prad s−1 mm3 (electrons)−1, 2.22 prad s−1 mm3 (electrons)−1, and 2.38 prad s−1 mm3 (electrons)−1 are obtained, respectively.
Introduction
Sagnac provided the earliest demonstration of the viability of an optical experiment capable of revealing the state of rotation of a frame of reference by making measurements within that frame. The fringe pattern recorded at the output of this interferometer is sensitive to any phase difference between the two counter-propagating beams. The Sagnac interferometer is one such interferometer type, which can be applied in many applications [1][2][3].
The principle is that the Sagnac interferometer setup usually involves a ring configuration in which the light beam is split into two beams. The two beams travel around the ring path in opposite directions and undergo interference on returning to the entry point of the Sagnac interferometer. Different types of Sagnac interferometers have been designed, developed, and employed for sensing applications [4][5][6][7][8]. The microscale technology known as micro-electro-mechanical system (MEMS) technology is used to develop resonators for various applications [9][10][11][12], where resonators such as the panda ring, the microring, and the MZI have been applied. Recently, microring resonators have been widely used in both theoretical and experimental works [13][14][15][16], where realistic applications have been confirmed. Various works on MEMS using microring resonators have also been reported [17][18][19][20]. A review of micromachined resonators is presented by Abdolvand et al. [17], where the basic model with the electric circuit, resonant modes, and fabrication process is explained. Another MEMS application, an accelerometer-based optical modulator using an MZI, was presented in [9]; it was fabricated on silicon-on-insulator, where one branch of the MZI is fixed and the other branch uses a floating waveguide. Waveguide-based MZIs and gyroscopes have already been designed and presented [21,22]. Other forms of MEMS and interferometers have also been proposed; Arumona et al. [23] proposed the use of a microring resonator embedded Fabry-Perot interferometer, which can be useful for sensing applications, especially in nano/microscale measurement regimes. The Fabry-Perot structure is different from the Sagnac one. Both are designed and used with microring structures and space–time control, where the Fabry-Perot one has applications in quantum spectroscopy, quantum sensing, and the microscopy regime. More details of interferometers and microring resonators can be found in the given references [24][25][26][27]. A microring embedded Sagnac interferometer is proposed here for an integrated MEMS resonator structure, which can be applied for nano/microscale sensors, especially for micromachine applications. The integrated MEMS resonator provides compact, lightweight, and low-cost devices. The use of the integrated MEMS resonator structure has been proposed in various applications [28][29][30][31]. Integrated photonic circuits have also been applied in quantum communication and molecular sensing [32].
In this work, the proposed planar waveguide Sagnac interferometer has dimensions in the range of integrated MEMS resonators. The main emphasis of the current work is to design an integrated Sagnac interferometer for electron cloud sensors. Four microrings are embedded with silver bars at the center of the microrings, from which the electro-optic sensing probes can be formed. An isolator is used at the input to control the reflected signal. Four whispering gallery modes (WGMs) are formed by the nonlinear effect coupled by the two side rings, from which the electron cloud is trapped and oscillated by the plasmonic waves. These WGMs act as four-probe sensors.
The polarized laser is fed into the Sagnac interferometer, from which the output is split by a beamsplitter. The OptiFDTD software is employed for the design and simulation, while a MATLAB program uses the parameters extracted from the simulation results to plot the graphs and obtain the other results. In the current work, the Sagnac interferometer waveguide structure for brain-interfacing device sensors is proposed. The polarized laser is applied to form the quantum distributed sensors. Quantum sensors and the investigation of human quantum consciousness can be pursued using the electron cloud distribution within the brain–device interfacing system.
Theoretical Background
The Sagnac interferometer circuit is shown in Fig. 1. The input signal is the polarized laser [23,33], which is given by Eq. (1), where k_z = 2π/λ is the wave number of the wave vector along the z-axis, λ is the wavelength, E_0 is the initial amplitude of the field, and z is the propagation distance along the z-axis.
The propagation of the light pulse within the nonlinear material is governed by the refractive index (n), given as [23,33] n = n_0 + n_2 I = n_0 + (n_2/A_eff)P, where n_0 and n_2 are the linear and nonlinear refractive indices, I and P are the optical intensity and optical power, and A_eff is the effective core area. The light pulse follows two propagation paths, clockwise and anticlockwise. After traveling through the two paths, a phase shift is introduced in the output pulse.
The electron behavior of the silver bars at the four center microrings is described by the Drude model [21], given as Eq. (3), where n_e is the electron density, m is the electron mass, ω is the angular frequency, ε_0 is the permittivity, and e is the electron charge. The plasma frequency (ω_p) at resonance is obtained from the angular frequency and is given by Eq. (4) [21], where A, ω, and t are the amplitude, angular frequency, and time. A space–time modulation signal is applied to achieve the polarized output; the ± sign indicates both axes of time.
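For orientation, the standard Drude relation between electron density and plasma frequency, ω_p = √(n_e e²/(ε_0 m)), can be evaluated numerically; the sketch below uses textbook constants and a typical free-electron density for silver, and is not taken from the paper's Eqs. (3)–(4).

```python
# Standard Drude plasma frequency from the electron density of the silver bars.
import numpy as np

e    = 1.602e-19      # electron charge (C)
m_e  = 9.109e-31      # electron mass (kg)
eps0 = 8.854e-12      # vacuum permittivity (F/m)

def plasma_frequency(n_e):
    """Angular plasma frequency (rad/s) for electron density n_e in m^-3."""
    return np.sqrt(n_e * e**2 / (eps0 * m_e))

n_silver = 5.86e28    # typical free-electron density of silver (m^-3)
print(f"omega_p = {plasma_frequency(n_silver):.3e} rad/s")   # ~1.4e16 rad/s
```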
The excited electrons are trapped inside the four microring resonators, where the four WGM formations take place. The outputs at the throughput (E_th) and drop (E_dr) ports are given by Eqs. (6) and (7) [21,34], where m_2, m_3, m_5, and m_6 are constants, E_in, E_th, E_dr, and E_add are the electric fields at the input, throughput, drop, and add ports, respectively, and κ is the coupling constant. The system outputs are the normalized intensities given by Eqs. (8) and (9), where I_th, I_in, and I_dr are the throughput, input, and drop port intensities, respectively. The phase shift is given in Eq. (10) [35], where κ_2 and κ_4 are coupling coefficients, γ is the insertion loss, α is the attenuation coefficient, β is the propagation constant, and L = 2πr, where r is the radius of the center microring.
Fig. 1 The fabricated/simulated structure of the proposed device: a the micro-electron cloud sensor network system, from which the polarized electron cloud components can be obtained (PBS polarizing beamsplitter, PD photodetector); b an equivalent sensing probe circuit. The microrings are embedded with silver (Ag) nano-bars. The optical isolator is applied to prevent feedback to the laser source.
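Because the explicit forms of Eqs. (6)–(9) are not reproduced here, the following sketch uses the generic textbook add-drop microring transfer functions (symmetric-coupler form) only to illustrate how normalized throughput and drop intensities depend on the round-trip phase; the coupling values and loss are hypothetical and may differ from the paper's constants m_2, m_3, m_5, m_6.

```python
# Generic add-drop microring response: normalized throughput and drop intensities
# versus round-trip phase phi, for power coupling coefficients kappa1/kappa2 and
# round-trip amplitude transmission a.
import numpy as np

def add_drop_response(phi, kappa1=0.1, kappa2=0.1, a=0.99):
    t1, t2 = np.sqrt(1 - kappa1), np.sqrt(1 - kappa2)
    denom = 1 - 2 * t1 * t2 * a * np.cos(phi) + (t1 * t2 * a) ** 2
    through = (t2**2 * a**2 - 2 * t1 * t2 * a * np.cos(phi) + t1**2) / denom
    drop = (1 - t1**2) * (1 - t2**2) * a / denom
    return through, drop

phi = np.linspace(0, 2 * np.pi, 5)
print(add_drop_response(phi))   # at resonance (phi = 0, 2*pi) the throughput dips and the drop peaks
```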
The integrated system for the microring distributed electron cloud sensor network is designed and simulated in OptiFDTD. The four microring circuits inside the Sagnac loop form the WGMs. The designed system is described by Eqs. (1)–(10). Figure 1 shows the designed structure of the microring distributed electron cloud sensor system. The input signal, a polarized laser, is applied at the input port. The materials of the Sagnac loop and the microrings are silica and Si, respectively. The light travels in the upper and lower branches of the Sagnac loop, which is embedded with four microring resonator circuits. The radius of the center microring (R_D) is bigger than the radii of the side rings (R_L, R_R). The design parameters of the proposed system are shown in Table 1. The microring circuit ports that are used depend on the light flow direction in the Sagnac loop, which supports two-way propagation, the closed Sagnac loop being the required configuration.
Methods
The system is designed and simulated in OptiFDTD for 20,000 time-steps. The data are extracted from OptiFDTD, and graphs are plotted using MATLAB, where Figs. 2, 3, 4, 5, 6 and 7 show the results of the designed system. The working of the designed system is as follows. First, the input signal of wavelength 1.50 µm, a polarized laser as given by Eq. (1), is applied at the input port. Half of the light energy propagates into the upper branch (upstream) and the other half into the lower branch (downstream). The signal flow in the upper branch is in the clockwise direction and that in the lower branch in the anticlockwise direction, as shown in Fig. 1, where four microrings are integrated with the Sagnac loop. Two rings are on the upper branch and the other two on the lower branch. The microrings are embedded with silver nano-bars. Light propagates into the input port of microring 1 (antenna 1), and the signal from the throughput port of antenna 1 then enters the input port of the next ring. Similarly, the throughput signal of antenna 4 enters the input port of antenna 3, and the throughput of antenna 2 feeds antenna 1. Both propagated light paths return at the 3 dB coupler. The optical isolator is applied at the input port to prevent reflection of the back-propagating light. The light goes into the outer microring at the output port. The propagation of the light pulse in the nonlinear material is given by Eq. (2). A real-time Sagnac interferometer experiment includes a beamsplitter; here, the proposed system is manipulated using space–time modulation at the add port of the microrings. The Sagnac output, which is polarized light, enters the microring. The light propagates inside the microring circuits, from which the phase shift between the polarization components is obtained by the space–time projection control. The Kerr effect is a nonlinear effect induced by the system, and with suitable parameters, as shown in Table 1, the WGMs are formed. A WGM is the trapping of light at the center microring. These WGMs form the antennas inside the Sagnac loop, which consist of the trapped electrons resulting from the illumination of the silver nano-bars by light at the center microring. Figure 2 shows the graphical results of the simulated structure: Fig. 2a shows the formation of the four WGMs at the center microrings, and Fig. 2b shows the plasmons that propagate through the system with an intense electromagnetic field. Figure 3a shows the frequency and input intensity plot of the four WGMs. Figure 4 shows the antenna gain plot. The antenna gain is plotted by varying the input power from 10 to 15 mW, and a linear trend of gain with input power is achieved. The gains of antennas 1 to 4 are 2.59 dB, 0.93 dB, 1.75 dB, and 1.16 dB, respectively. Figure 5 shows the antennas' directivities (simulation parameters as in Table 1, where z is the propagation axis). Equation (5) gives the space–time signal applied at the add port, and Eqs. (6) and (7) give the microring outputs at the throughput and drop ports. The outputs at the throughput and drop ports are normalized using Eqs. (8) and (9). Figure 6a, b plot the normalized electron density of antennas 1 to 4 in the frequency domain and the wavelength domain, respectively. The frequency domain is employed for the WiFi (wireless fidelity) band, while the wavelength domain is employed for the LiFi (light fidelity) band. Antennas 1 to 4 are recognized for both WiFi and LiFi transmission. Gain and directivity are calculated using the standard formulas [21,37].
Figure 7 shows the output intensity plot of the four WGMs. The input power is varied from 10 to 15 mW, and a linear trend of the output intensity of the four WGMs with input power is achieved. Sensitivities of 1.35 µm−2, 0.90 µm−2, 0.97 µm−2, and 0.81 µm−2 are obtained, respectively. The plasma frequency and electron density are related as given in Eq. (4); in the silver nano-bars, the electron density oscillations depend on the plasma frequency (Fig. 8). The motivation is to design and model the quantum distributed sensors using large-area MEMS sensors. The trapped electron clouds are plotted in terms of the electron component (spin) density and used for distributed sensors. The new findings are the MEMS sensors and the four-point probe technique, which can be applied to large-area sensors using the distributed sensing probes. It has the potential of a brain-interfacing device, where self-calibration among the four-point sensing probes can be applied. From Fig. 1, the trapped electron clouds are distributed within the circuit. The polarized control is applied at the sensing probe add port. The total electron cloud phase shift of each sensing probe within the system is given by Eq. (10), which is obtained from the polarized electron cloud phase shift of Eq. (5). The result of each probe can be identified by its different traveling time. A polarizing beamsplitter is arranged before the detector at the Sagnac interferometer output, from which the electron cloud spin-up or spin-down components can be obtained by the polarization orientation arrangement. In manipulation, when the input light power is varied, the sensor sensitivity related to the electron cloud density can be obtained.
Conclusion
A Sagnac interferometer and microring integrated circuit for electron cloud sensors is proposed. When light propagates inside the Sagnac loop, it travels in both clockwise and anticlockwise directions. The microrings are embedded with silver nano-bars. The electron cloud is trapped and oscillated inside the microrings, where the WGM formation takes place and the plasmonic wave is generated. Four plasmonic antennas are formed at the center microrings, where the frequency and wavelength spectra are employed for WiFi and LiFi sensing probes, respectively. The proposed MEMS sensors can be applied to sense physical parameters related to the electron excitation, where the changes can be detected and measured. The new findings are the MEMS sensors and the four-point probe technique, which can be applied to large-area sensors using the distributed sensing probes. It has the potential of a brain-interfacing device, where self-calibration among the four-point sensing probes can be applied. In application, deep-brain signal detection related to quantum consciousness for human deep-learning investigation can be realized.
| 3,516.8 | 2021-05-19T00:00:00.000 | [ "Physics" ] |
Perpendicular transmission of acoustic waves between two substrates connected by sub-wavelength pillars
We discuss theoretically the acoustic resonant transmission and zeros of transmission between two substrates connected by sub-wavelength pillars. The features of the transmission coefficient are explained in terms of the coupling of the incident waves with the Fabry–Perot oscillations inside the pillars and with the surface waves of both substrates. We discuss the dependence of the selective and zero transmission frequencies, in particular Fano resonances resulting from the proximity of a resonance to a zero of transmission, on the geometrical and physical parameters of the materials constituting the pillars and the substrate. These phenomena are studied in both one- and two-dimensional periodicities where the substrates are connected by a series of parallel plates or by a square lattice of cylindrical pillars, respectively. Finally, the calculation is extended to a periodic stacking of slabs and pillars that constitute a type of three-dimensional phononic crystal.
Introduction
Since the original work of Ebbesen was published in 1998 [1], the extraordinary optical transmission through a metallic film patterned with periodic cylindrical holes has attracted a great deal of interest ([2] and references therein). The physical origin of this effect has been widely discussed in terms of the coupling between the surface plasmon polariton excitation and the Fabry-Perot resonances in the apertures. Over the last few years, the study of this phenomenon has been extended to acoustic waves incident on one-dimensional (1D) acoustic gratings with sub-wavelength slits [3][4][5] or on 2D panels periodically perforated with holes [6][7][8]. In those systems, a significant enhancement of the amplitude of the acoustic waves through very narrow apertures is seen. The origin of this phenomenon, called extraordinary acoustic transmission [4,5] or acoustic resonant transmission [3,[6][7][8]], lies in the complex interplay of guided modes inside the apertures and structurally induced waves on the surface of the gratings. Besides acoustic resonant transmission, acoustic shielding can also be achieved, over a wide range of wavelengths [9][10][11].
Fundamental applications of such structures have been highlighted, such as a tunable phononic crystal consisting of double crystal slabs: varying the distance between the two plates permits tailoring the bandwidth of a broadband sound blockage [12][13][14]. Sound collimation was also achieved by engineering a single slit through a perfect slab with corrugated surfaces [15][16][17]. Zhu et al [18] demonstrated that a holey-structured metamaterial can act as a nearly perfect imaging device by reproducing the deep sub-wavelength information of an object.
It is worth noting that in all these previous works the embedding medium is a fluid and the acoustic resonant transmission or screening phenomena take place through holes or slits in a membrane. The aim of this paper is to consider the opposite situation where the transmission occurs between two solid substrates across a periodic array of sub-wavelength pillars. We discuss the existence of selective and rejective transmissions and the possibility of Fano resonances as a function of the geometrical and physical parameters of the constituting materials (a preliminary account of this work is presented in the conference papers [19]). In section 2, we present the model and the method of calculation, namely the finite difference time domain (FDTD) method, to obtain the transmission spectra. In section 3, we investigate the normal incidence acoustic transmission between two substrates connected by either 1D or 2D periodic arrays of pillars composed, respectively, of parallel plates or a square lattice of cylindrical pillars. Then we extend our calculation to the transmission across a 3D phononic crystal composed of a periodic stacking of slabs and pillars. The conclusions are presented in section 4. Figure 1 is a schematic view of two silicon substrates connected by a 1D acoustic grid of period a composed of rectangular silicon plates. The case of a 2D periodicity is sketched in figure 7. The silicon is taken as a cubic material with elastic constants C 11 = 165.7 GPa, C 12 = 63.9 GPa and C 44 = 79.62 GPa and mass density ρ = 2331 kg m −3 . For the purpose of the FDTD simulations, the hollow parts of the structure are filled with air of density ρ air = 1.4 kg m −3 and velocity 340 m s −1 . Indeed, in these calculations, the continuity between the solid and air media replaces the use of vanishing normal stresses when the solid material is in contact with vacuum. Figure 1(b) shows the elementary unit cell, which is composed of a rectangular plate of width d and height h sandwiched between two homogeneous media. The transmission spectra presented in the following sections are performed with the help of a homemade FDTD code. In our calculations, the unit cell is discretized in two or three directions of the space, depending on whether the periodicity is in one or two directions. We use a mesh interval equal to a/100, which corresponds to the necessary spatial interval for a good definition of the solid-vacuum boundaries and the convergence of the results. The equations of motion are solved taking into account all the components of the displacement field, i.e. U x , U y and U z in 3D structures with periodicity in two directions (see figure 7), or U x and U z in 2D simulations corresponding to a 1D periodicity (figure 1). The time integration step is defined by t = x/(4c l ), where c l is the longitudinal velocity of sound in silicon, and the number of time steps is equal to 2 21 , which is the necessary time for good convergence of the numerical calculations.
Model and method of calculation
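As a numerical illustration of the discretization quoted above (mesh a/100, Δt = Δx/(4c_l), 2^21 steps), the sketch below computes these quantities; the period a is a hypothetical value, since the paper reports results only in reduced units, and c_l is taken as √(C11/ρ) for propagation along [100].

```python
# FDTD discretization parameters used in the transmission calculations (illustrative values).
C11 = 165.7e9              # Pa (from the text)
rho = 2331.0               # kg/m^3 (from the text)
c_l = (C11 / rho) ** 0.5   # longitudinal velocity along [100], ~8.4e3 m/s

a = 1e-6                   # hypothetical period of the grid (m)
dx = a / 100               # mesh interval
dt = dx / (4 * c_l)        # time integration step
n_steps = 2 ** 21          # number of time steps
print(f"dx = {dx:.2e} m, dt = {dt:.3e} s, total simulated time = {n_steps * dt:.3e} s")
```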
As seen in figure 1, the z-axis corresponds to the direction of propagation, perpendicular to the grid, and the x-axis is chosen along the grid. Periodic conditions are applied on each side of the 2D box in the x-direction. As the box is finite along z, perfectly matched layers are applied on the top and bottom of the unit cell. A broadband wave packet is launched from the top, in the incident medium, in front of the periodic grid. This wave is a longitudinal pulse, with a polarization and Gaussian profile along the z-axis but uniform in the x-direction. The transmitted signal is recorded as a function of time in the outgoing medium, after the grid, and integrated along the x-axis over one unit cell for the component U_z of the displacement field. Finally, the signal is Fourier transformed and normalized by an equivalent signal propagating through a homogeneous silicon bulk to yield the transmission coefficient. In all the transmission curves presented in the paper, the frequencies are given in the dimensionless unit Ω = ωa/2πc_t, where c_t = 5844 m s−1 is the transverse velocity of sound in silicon.
One-dimensional (1D) periodicity
The regular oscillations appearing in the low-frequency part of the spectrum can be associated with the excitation of Fabry-Perot resonances along the height of the plates. Figure 3 represents the evolution of these oscillations as a function of the geometrical parameters of the plates, i.e. (h/a) and (d/a). When we increase the height, the oscillations shift to lower frequencies, whereas decreasing the width of the plate enhances the amplitude of the oscillations. The frequencies of the peaks and their separation are closely related to the height h and, as a consequence, to the nature of the plates, but almost independent of the width of the plates. For these two modes, mainly the longitudinal component (U_z) of the displacement field is different from zero. In both cases, the displacement field displays a strong enhancement inside the vertical plates, which results either in a full transmission (point A) or a significant rejection (point B). A second remarkable feature of the spectrum of figure 2 is the existence of periodic zeros of transmission occurring at the reduced frequencies Ω = 0.85, 1.7, 2.6 and 3.4. These frequencies are dependent upon the properties of the substrate but are not much affected by changing the properties of the vertical plates (provided they remain relatively thin as in figure 2). These zeros of transmission can be associated with the excitation of a surface mode at the boundary of the upper substrate, as shown in the displacement field for both the longitudinal (U_z) and transverse (U_x) components of point C (figure 4(c)). One can note an enhancement of the field in the vicinity of the surface, whereas the wave does not penetrate inside the plates. Such an excitation can be explained by the fact that, at sufficiently high frequency, the normal incident wave with a wave vector k_|| (parallel to the interfaces) equal to zero can be coupled to surface modes having a wave vector equal to a reciprocal lattice vector, namely p(2π/a), where p is an integer. This explains the periodic occurrence of the zeros of transmission. In addition, one can check that their frequencies shift by changing the period a of the structure. These frequencies are only slightly dependent upon the properties of the vertical plates as far as the latter remain thin.
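To relate the reduced frequencies to physical ones, Ω = ωa/(2πc_t) = fa/c_t can be inverted as f = Ωc_t/a; the short sketch below applies this to the transmission zeros, again with a hypothetical period a.

```python
# Conversion from the reduced frequency Omega used in the paper to a physical frequency.
c_t = 5844.0                      # transverse sound velocity in silicon (m/s), from the text
a = 1e-6                          # hypothetical lattice period (m)

def physical_frequency(Omega, a=a, c_t=c_t):
    """f in Hz for a given reduced frequency Omega = f*a/c_t."""
    return Omega * c_t / a

for Omega in (0.85, 1.7, 2.6, 3.4):          # reduced frequencies of the transmission zeros
    print(f"Omega = {Omega}: f = {physical_frequency(Omega) / 1e9:.2f} GHz")
```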
The third remarkable feature appears when a resonance of the structure becomes very close to a zero of transmission, yielding a very narrow and sharp Fano resonance corresponding to a highly selective transmission, such as point D in figure 2. The displacement field of the latter mode, presented in figure 4(d) for the longitudinal (U_z) and transverse (U_x) components, shows an enhancement of the acoustic wave inside the plates and in the vicinity of the surfaces of both substrates. Therefore, it corresponds to a coherent coupling between the diffracted waves excited on both surfaces and the Fabry-Perot resonant modes inside the junctions. The quality factor of the Fano resonance is very sensitive to the proximity of the resonance to the zero of transmission. This is illustrated in figure 5, where the frequency of the resonance is shifted by changing slightly the height of the pillars. By increasing h from 1.4a to 1.7a, the Fano resonance is shifted to lower frequencies and traverses the zero of transmission, which remains independent of h.
Finally, above some threshold frequency, the transmission spectrum displays fast, irregular oscillations. It is likely that this behavior happens when the plates can support transversely excited modes whose number increases when going to higher frequencies. It is also worth noting that in the case of full transmission, the ratio of transmission to unit area reaches 5 in the above calculations. Actually, this factor can be increased even further if we significantly decrease the area of the apertures. We will not discuss in detail the latter situations, which require a very fine mesh in the simulations and therefore increase the computation time too much. To highlight the above trends, we present in figure 6 the transmission coefficients when the materials of either the substrates or the plates are changed. Besides silicon, we use steel with the following elastic parameters: c_l = 5825 m s−1, c_t = 3227 m s−1 and ρ = 7780 kg m−3. Figure 6(a) recalls the results already presented in figure 2 in the frequency range up to Ω = 1. When the material of the plates is changed from silicon to steel (figure 6(b)), the zero of transmission is not shifted, since its frequency is essentially related to the substrate material, but the Fabry-Perot resonances inside the plates become closer to each other. This is understandable owing to the lower values of the acoustic velocities in steel than in silicon. This also produces a shift of the Fano resonance, which is now slightly more separated from the zero of transmission than in figure 6(a). In contrast, when the substrates are made of steel and the plates are made of silicon (figure 6(c)), one recovers the first Fabry-Perot oscillation as in figure 6(a), but the position of the zero of transmission is now shifted according to the acoustic velocities of the substrate (from Ω = 0.85 to Ω = 0.51). Finally, figure 6(d) illustrates the case of two different substrates. Now, the transmission spectrum displays two zeros, occurring respectively at Ω = 0.85 and Ω = 0.51, which supports the relationship between the zeros of transmission and localized waves at the surfaces of both substrates. In this example there are no Fano resonances resulting from the proximity of a resonance mode with one of the transmission zeros. Let us also note that in this case the maxima of transmission do not reach unity because part of the incident wave is always reflected. The general trends obtained from the above discussions are the following. The Fabry-Perot oscillations are strongly dependent upon the nature of the plates and their heights, but almost independent of the nature of the substrates. The zeros of transmission are linked to the nature of the substrates, and almost independent of the material constituting the plates as far as the plates remain thin compared to the period. Finally, the Fano resonance exists only when the two substrates are similar, and its frequency depends on both the nature of the substrates and of the plates.
2D periodicity
In this section, we discuss the characteristic features of the transmission between two substrates across a 2D array of cylindrical pillars in the square lattice geometry (figure 7(a)). The unit cell is shown in figure 7. As in the 1D case, a zero of transmission (figure 7(c)) can occur due to the coupling of the incident wave with the surface acoustic waves of the substrate. Since the incident wave has a wave vector k_|| = 0, the excited surface wave should correspond to a k_|| equal to one reciprocal lattice vector. Since we are dealing with a 2D square lattice geometry, the selected wave vectors k_|| will have magnitudes, in units of 2π/a, of 1, √2, 2, and so on. Let us also note that in the case of full transmission, the ratio of transmission to unit area reaches a factor of 8 in the above calculations, which can actually be increased further by decreasing the area of the pillars.
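The sequence of allowed in-plane wave vectors quoted above follows directly from the square-lattice reciprocal vectors G = (2π/a)(m, n). A short, illustrative enumeration of the first few distinct magnitudes (not part of the original paper):

```python
import math

# Reciprocal lattice vectors of a square lattice: G = (2*pi/a) * (m, n).
# Collect the first few distinct magnitudes, expressed in units of 2*pi/a.
mags = sorted({math.hypot(m, n) for m in range(-3, 4) for n in range(-3, 4)} - {0.0})
print([round(g, 3) for g in mags[:6]])
# -> [1.0, 1.414, 2.0, 2.236, 2.828, 3.0]
```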
3D phononic crystal composed of a stacking of slabs and pillars
The previous effects can be enhanced and/or modulated if the space between the substrates contains a 3D phononic crystal composed of an alternating repetition of slabs and pillars along the vertical direction (figure 8(a)). For the sake of simplicity, we assume that the pillars are in the 1D geometry presented in section 3.1 (figure 8(a)). The thickness of each silicon plate is e and the number of periods is denoted N. The corresponding transmission spectra are presented in figure 8. The case N = 1 is the one already reported in section 3.1 (figure 2), displaying low-frequency Fabry-Perot oscillations, a zero of transmission at 0.85 and the Fano resonance. By increasing the number of periods N, one can observe the formation of pass bands and band gaps, as is usually the case in a phononic crystal, with the appearance of additional oscillations in the transmission coefficient inside each band due to the periodicity of the phononic crystal in the vertical direction. However, one set of pass bands (indicated by green arrows) originates from the initial Fabry-Perot oscillations in the pillars and is already present in the case N = 1. The second set, indicated by purple arrows, starts to form for N > 1 and is due to Fabry-Perot oscillations inside the silicon slabs of thickness e.
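The emergence of pass bands and gaps as N grows can be illustrated with a much simpler 1D analogue: a periodic stack of two alternating layers between identical half-spaces, treated with the standard acoustic transfer-matrix method. This is only a qualitative sketch under assumed, illustrative material parameters and a strictly 1D geometry; it is not the pillar/slab structure simulated in the paper.

```python
import numpy as np

def layer_matrix(omega, c, rho, d):
    """Transfer matrix relating (pressure, velocity) across a layer of
    thickness d, sound speed c and density rho (1D plane waves)."""
    k, Z = omega / c, rho * c
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission(omega, layers, Z0):
    """Power transmission through a stack between identical half-spaces
    of impedance Z0.  layers = list of (c, rho, d) in propagation order."""
    M = np.eye(2, dtype=complex)
    for c, rho, d in layers:
        M = layer_matrix(omega, c, rho, d) @ M
    # Unknowns (r, t) from continuity of pressure and velocity:
    # (p, v)_out = M (p, v)_in, with p_in = 1 + r, v_in = (1 - r)/Z0,
    # p_out = t, v_out = t/Z0.
    A = np.array([[M[0, 0] - M[0, 1] / Z0, -1.0],
                  [M[1, 0] - M[1, 1] / Z0, -1.0 / Z0]])
    b = np.array([-(M[0, 0] + M[0, 1] / Z0),
                  -(M[1, 0] + M[1, 1] / Z0)])
    r, t = np.linalg.solve(A, b)
    return abs(t) ** 2

# Assumed illustrative parameters: alternating "slab" and "spacer" layers.
slab = (8430.0, 2330.0, 0.5e-6)     # c [m/s], rho [kg/m^3], thickness [m]
spacer = (5825.0, 7780.0, 1.0e-6)
Z0 = 2330.0 * 8430.0                # impedance of the identical half-spaces

for N in (1, 4):
    stack = [slab, spacer] * N
    freqs = np.linspace(0.05e9, 5e9, 400)  # Hz
    T = [transmission(2 * np.pi * f, stack, Z0) for f in freqs]
    gap = sum(t < 1e-2 for t in T) / len(T)
    print(f"N = {N}: fraction of sampled band with T < 1% ~ {gap:.2f}")
```

With a single period the spectrum shows broad interference fringes, while increasing N deepens and widens the stop bands, mirroring the band-gap formation described above.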
In figure 9, we show the displacement field associated with modes A and B in figure 8 (the case N = 4). For these two sets of peaks, only the longitudinal component is different from zero. It can be noted that mode A corresponds to an enhancement of the field inside the pillars, whereas mode B is associated with an enhanced vibration inside the silicon slabs. In addition, we have checked that the frequencies of the first set of bands are very sensitive to the physical parameters and heights of the pillars and almost independent of the thickness e of the slabs, whereas the second set of bands can easily be shifted by changing the thickness e. The two sets of bands can be tuned independently by varying the parameters of either the slabs or the pillars, leading to the possibility of a separation, an interaction or a modification of the shape of the transmission bands. For instance, the band at 0.4 remains well separated from its neighboring bands, whereas the band at 0.74 results from a superposition of two bands coming from each set.
Finally, one can note that the zero of transmission (occurring at 0.85) remains unchanged when N increases from 1 to 4. The same holds for the Fano resonance, which becomes sharper as N increases and gives rise to a very selective transmission peak. The longitudinal displacement field of the latter (mode C in figure 8) is displayed in figure 9(c) and shows an enhancement of the acoustic wave both inside the pillars and in the vicinity of the surfaces of all slabs, as in the monolayer example (figure 4).
Conclusion
In conclusion, we have analyzed several fundamental properties of normal transmission between two solid substrates across a periodic array of sub-wavelength pillars. Two different pillar geometries have been considered: rectangular plates and cylindrical pillars. In both cases, we have shown that it is possible to obtain low-frequency oscillations, a zero of transmission and a Fano resonance peak. The low-frequency oscillations correspond to Fabry-Perot resonances in the pillars; the frequency and amplitude of the oscillations can be tuned by varying the height and width of the pillars, respectively. The second remarkable effect is the zero of transmission, which has been explained as a coupling between the normally incident wave and a surface mode of the substrate. Finally, we found a highly selective transmission, identified as a Fano resonance peak. This sharp peak originates from a coherent coupling between the diffracted waves excited on both surfaces and the Fabry-Perot resonant modes inside the junctions. All these effects have been enhanced by considering several periodic arrays of sub-wavelength pillars separated by silicon plates. Depending on the frequency domain, prospective applications of these mechanisms can be anticipated, such as selective filters, sound blockers, sensing, or the management of thermal conductivity. | 4,268.6 | 2012-07-19T00:00:00.000 | [
"Physics"
] |
Common Principles and Specific Mechanisms of Mitophagy from Yeast to Humans
Mitochondria are double membrane-bound organelles in eukaryotic cells essential to a variety of cellular functions, including energy conversion and ATP production, iron-sulfur biogenesis, lipid and amino acid metabolism, and the regulation of apoptosis and stress responses. Mitochondrial dysfunction is mechanistically linked to several neurodegenerative diseases, cancer, and ageing. Excessive and dysfunctional/damaged mitochondria are degraded by selective autophagic pathways known as mitophagy. Both budding yeast and mammals use the well-conserved machinery of core autophagy-related genes (ATGs) to execute and regulate mitophagy. In mammalian cells, the PINK1-PARKIN pathway is a well-studied mitophagy pathway that senses dysfunctional mitochondria and marks them for degradation in the lysosome. PINK1-PARKIN-mediated mitophagy relies on ubiquitin-binding mitophagy adaptors that are non-ATG proteins. Loss-of-function mutations in PINK1 and PARKIN are linked to Parkinson's disease (PD) in humans, and defective mitophagy is proposed to be a main pathomechanism. Despite the common view that yeast cells lack PINK1 and PARKIN homologs and that mitophagy in yeast is solely receptor-mediated, some studies suggest that a ubiquitination-dependent mitophagy pathway also exists. Here, we discuss shared mechanisms between mammals and yeast, how mitophagy in the latter is regulated in ubiquitin-dependent and -independent manners, and why these pathways are essential for yeast cell survival and fitness under various physiological stress conditions.
Introduction
Mitochondria are highly dynamic, double-membrane-surrounded organelles that are essential for eukaryotic life. In mammalian cells, mitochondria generate most of the cellular ATP (∼90%) by oxidative phosphorylation (OXPHOS) [1]. Besides this important bioenergetic function, mitochondria regulate other essential cellular processes such as β-oxidation of fatty acids, heme and phospholipid biosynthesis, amino acid metabolism, redox homeostasis, stress responses, and cell fate decisions [2]. The spatial organization and function of mitochondria rely on both nuclear- and mitochondria-encoded proteins, which act together, very often as multimeric protein complexes, to fulfill a multitude of mitochondrial functions [3]. Mitochondria are very diverse in structure and highly dynamic at both the intracellular and the intramitochondrial level [4].
Errors in the spatial organization of mitochondria can result in mitochondrial dysfunction, which is deleterious for cells and organisms. For instance, defects in respiratory chain complexes promote the generation of reactive oxygen species (ROS) and the loss of the membrane potential (∆ψ) across the inner mitochondrial membrane. Mitochondria can also produce ROS as by-products of aerobic OXPHOS [2]. Mitochondrial ROS can damage the mitochondrial proteome and lipids and even cause mitochondrial DNA (mtDNA) mutations. The dissipation of the mitochondrial membrane potential also compromises mitochondrial function owing to defects in mitochondrial protein import. Cells have evolved several mitochondrial quality control mechanisms to restore and preserve the fitness of mitochondria in response to varying degrees of mitochondrial damage [5][6][7][8][9]. Low or transient mitochondrial damage can be repaired either by intraorganellar protein quality control machineries or by machineries that act at the organelle's surface in the cytosol. However, when damage is prolonged or severe, mitochondria are selectively recognized and degraded via a process known as mitophagy [9]. In this article, we discuss the emerging roles of mitophagy under different physiological or stress conditions. We highlight the multiple levels of mitophagy regulation in yeast and humans, focusing on common and general principles as well as on specific aspects.
Mitochondrial Quality Control at Multiple Levels
Mitochondrial function declines with age, and mitochondrial dysfunction is a pathological hallmark of several neurodegenerative disorders, diabetes, and cancer [10]. For instance, neuronal cells rely heavily on proper mitochondrial function owing to the high requirement for Ca2+ buffering and ATP production at the synapse [6]. One mechanism suggested to contribute to mitochondrial dysfunction is impaired mitochondrial quality control [11,12]; hence, it is not surprising that mitochondrial function is compromised during aging. Different surveillance mechanisms have evolved in response to cellular stress to maintain mitochondrial fitness at the molecular, organellar, and cellular levels [6,7]. Eukaryotic cells have four main mitochondrial quality control mechanisms that act via intraorganellar proteostasis (the mitochondrial protease-chaperone network), the cytosolic ubiquitin-proteasome system (UPS), mitochondrial-derived vesicles (MDVs)/lysosomes, and lastly mitophagy, in response to low-to-high levels of mitochondrial damage [6,7] (Figure 1).
The mitochondrial protease-chaperone network serves as the first line of defense at the molecular level, removing misfolded or damaged proteins in mitochondria when mitochondrial damage is low [6,7]. In addition, S. cerevisiae mitochondria can serve as proteolytic compartments that import and degrade cytosolic misfolded protein aggregates via the matrix-localized ATP-dependent Lon protease [13]. Human mitochondrial DNA (mtDNA) encodes 13 subunits (∼1% of the mitochondrial proteome) of four OXPHOS complexes (complexes I, III, IV, and V) [2]. Thus, 99% of mitochondrial proteins are encoded by the nuclear genome, synthesized in the cytosol as precursor proteins, and imported into mitochondria by different import pathways [2]. However, when the capacity of the mitochondrial protein import machineries is overwhelmed, for example under high energy demand or when mitochondria are energetically compromised, nuclear-encoded mitochondrial precursor proteins accumulate at the organelle's surface [14,15]. Cells have evolved several protein quality control pathways that remove stalled mitochondrial precursor proteins by SUMO/ubiquitin-mediated proteasomal degradation and prevent their misfolding/aggregation on the organelle's surface [14][15][16]. These surveillance pathways protect mitochondrial functions during import stress.
When these quality control mechanisms are insufficient, e.g., owing to severe mitochondrial damage or depolarization at the inner membrane, individual dysfunctional organelles are segregated from the healthy network and subsequently selectively degraded through mitophagy [8,18]. It is important to note that the selective spatial isolation of mitochondria by mitochondrial fission is initially ensured by a rapid inactivation of mitochondrial fusion through stress-induced proteolytic processing of the fusion factor OPA1 [19,20].
Figure 1. Mitochondrial quality control pathways. In response to moderate mitochondrial damage, misfolded or damaged proteins are initially degraded by the mitochondrial protease-chaperone network (1). The inner mitochondrial membrane-localized AAA+ metalloproteases (i-AAA and m-AAA) and the mitochondrial matrix-localized ATP-dependent Lon (Pim1 in yeast) protease are the main players removing misfolded/damaged proteins. Heat stress-induced, aggregation-prone cytosolic proteins are partly imported into mitochondria for degradation in a Hsp104-dependent disaggregation manner. Stalled mitochondrial precursor proteins are degraded via distinct pathways following ubiquitination and proteasomal degradation (2). These cytosolic ubiquitin-proteasome system (UPS) pathways prevent an accumulation of stalled precursors on the organelle's surface, safeguarding the translocation channel from being clogged, and thus restore full protein import capacity. Upon mild/local mitochondrial oxidative damage, mitochondrial-derived vesicles (MDVs) are generated that are degraded in the lysosome (3). In response to severe mitochondrial damage, or when other quality control pathways fail, the entire organelle is selectively removed via mitophagy at the cellular level (4). All levels of mitochondrial quality control presumably occur in parallel, yet the exact interplay and regulation is a matter of current research.
Mitophagy-An Overview
The presence of mitochondria within autophagosomes in mammalian cells was first reported in 1957 [21]. The term "mitophagy", for the selective turnover of damaged and/or superfluous mitochondria by the autophagy machinery, was coined by John Lemasters [22]. Mitophagy is crucial for maintaining mitochondrial quality control and limiting somatic mitochondrial DNA (mtDNA) mutations with aging. Indeed, mitophagy is a fascinating pathway as it is directly linked to cellular metabolism, differentiation, physiology, and a broad spectrum of pathologies. For instance, during red blood cell (RBC) maturation, mitochondria are removed by Nix-dependent mitophagy, where Nix (also called Bnip3L) acts as a mitophagy receptor [23,24]. Moreover, Nix-/- mice accumulate mitochondria and develop mild anemia with reduced numbers of mature RBCs derived from erythroid precursor cells [23,24]. PARKIN, an E3 ubiquitin ligase encoded by the PARK2 gene, is implicated in Parkinson's disease (PD); several loss-of-function mutations in the PARK2 gene have been detected in PD patients. PARKIN is recruited selectively to damaged mitochondria and promotes mitophagy by ubiquitinating its substrates at the outer mitochondrial membrane (OMM). Thus, mitophagy has an important role in development, health, and disease. Over the last decade, compelling evidence from yeast and mammalian cells has shown that the removal of damaged/superfluous mitochondria from cells is specific [9]. Several mitophagy regulatory factors are recruited to the outer surface of mitochondria and promote their recognition and sequestration into autophagosomes for clearance. Mitophagy can prevent the accumulation of dysfunctional/damaged mitochondria within the cytosol, and thus limits an increase in ROS levels or pro-apoptotic factors.
In response to stress and starvation, eukaryotic cells often elicit an evolutionarily conserved, non-selective autophagy to degrade and recycle cytosolic constituents [25]. The cargos for autophagy (organelles and macromolecules) are randomly sequestered into double-membrane autophagosomes, whose outer membrane subsequently fuses with lysosomes (or vacuoles in yeast) for substrate degradation [25]. While starvation-induced bulk autophagy is an early response (after ∼2 h of starvation), mitochondria are turned over selectively at later stages of starvation (e.g., ∼12-24 h), employing the core autophagy complex Atg1-Atg13-Atg17-Atg31-Atg29 [26][27][28]. However, it is not fully understood why mitochondrial degradation is delayed relative to bulk autophagy during prolonged starvation in yeast and mammals. One possibility is that substrate specificity is partly determined by the steric hindrance of the substrates, such that smaller substrates are degraded rapidly while larger substrates are turned over at later time points during amino acid starvation. Why and how autophagosomes sense, select, and sequester specific substrates in an ordered fashion upon starvation is still unclear.
Mitophagy in Yeast
In baker's yeast (Saccharomyces cerevisiae), mitophagy is induced by prolonged respiratory growth or by a shift from respiration to nitrogen starvation [26,29]. In addition, yeast mitophagy can be induced in respiring cells by treatment with rapamycin, an inhibitor of the target of rapamycin (TOR) kinase [30]. Respiration is believed to be a prerequisite for mitophagy under these conditions, but the reason for this is unclear. It could be linked to increased oxidative stress, which was reported to increase the steady-state levels of Atg32, a mitochondria-anchored receptor required for mitophagy. Interestingly, treatment with the antioxidant N-acetylcysteine (NAC) reduced Atg32 levels by ∼2-3-fold, significantly suppressing mitophagy [29]. During respiratory growth, Atg32 is maximally induced in the mid-log phase (30 h) and then reduced in the post-log phase (36-72 h) [29]. Thus, Atg32 is temporally upregulated during respiratory growth and subsequently degraded in an autophagy-dependent and -independent fashion [29]. This raises the question of whether other mitophagy regulators are expressed or activated to promote mitophagy under respiratory conditions when Atg32 is almost depleted in cells (e.g., after 72 h of growth) [29].
Yeast mitophagy depends on the adaptor protein Atg11 and the receptor protein Atg32 [26,29] (Figure 2). Atg32 is specifically involved in mitophagy and does not regulate other autophagy types, such as bulk autophagy or the cytoplasm-to-vacuole targeting (CVT) pathway, under mitophagy-inducing conditions. However, the atg32∆ mutant does not show any detectable growth phenotype under mitophagy-inducing conditions, raising the question of whether Atg32-dependent mitophagy is physiologically essential for yeast cell survival and stress resistance [26]. Newly translated Atg32 is translocated to mitochondria and proteolytically processed by the Yme1 i-AAA protease at its C-terminus. Subsequently, Atg32 is phosphorylated by casein kinase 2 (CK2) at two residues, Ser114 and Ser119, which allows interaction with the scaffold protein Atg11 and with Atg8 at the phagophore assembly site (PAS). Atg11 may also recruit Dnm1 and components of the ERMES complex for mitochondrial fission before mitochondria are sequestered into double-membrane autophagosomes. Atg32 phosphorylation is reversed by the phosphatase Ppg1 and the Far complex. Note that Yme1-dependent processing of Atg32 was observed when mitophagy was initiated by nitrogen starvation. Atg32 is an integral membrane protein that exposes its N-terminal domain towards the cytosol and its C-terminal domain in the mitochondrial intermembrane space (IMS) [29].
Atg32 has no apparent mammalian homolog based on amino acid sequence similarity. A functional Atg32 homolog was proposed to exist in mammals with the following molecular features: mitochondrial localization; WXXL/I/V motifs; LC3 interaction; clusters of acidic amino acids (D/E); and a single membrane-spanning topology [29]. Using this molecular profile of Atg32, Bcl2-like 13 (Bcl-2-L-13) was recently identified as a functional homolog of Atg32 by screening the UniProt database (http://www.uniprot.org/). Mouse Bcl-2-L-13 is a 434-amino-acid protein that contains a C-terminal single transmembrane domain (TMD) and one functional LC3-interacting region (LIR) at residues 273-276 (WQQI) in its N-terminal domain, which faces the cytosol [33] (Figure 2). Upon mitochondrial damage by carbonyl cyanide m-chlorophenylhydrazone (CCCP), Bcl-2-L-13 localizes to mitochondria and promotes mitophagy independently of the E3 ubiquitin ligase PARKIN. Thus, Bcl-2-L-13 can function as a mammalian mitophagy receptor and partially rescues the mitophagy defect when exogenously expressed in atg32∆ yeast [33]. Whether Bcl-2-L-13-mediated mitophagy has a major physiological significance in humans or mice is still unclear.
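The core LIR consensus used in such homolog searches ([W/Y/F]xx[L/I/V]) is simple enough to scan for computationally. Below is a minimal, illustrative sketch of how candidate LIR motifs could be enumerated in a protein sequence; the demo sequence is a made-up placeholder, not the real Bcl-2-L-13 sequence.

```python
import re

LIR_PATTERN = re.compile(r"[WYF]..[LIV]")

def find_lir_candidates(sequence):
    """Return (1-based start position, motif) for every match of the core
    LIR consensus [W/Y/F]xx[L/I/V]."""
    return [(m.start() + 1, m.group()) for m in LIR_PATTERN.finditer(sequence)]

# Placeholder sequence for illustration only; the functional LIR of mouse
# Bcl-2-L-13 is reported at residues 273-276 (WQQI).
demo = "MASTDEVKLWQQIAGRPLS"
print(find_lir_candidates(demo))   # -> [(10, 'WQQI')]
```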
Atg32 is modified at both its N- and C-termini after its recruitment to mitochondria. Upon mitophagy induction, mitochondria-localized Atg32 is phosphorylated by CK2 at its cytosolic N-terminus, which is essential for forming a complex with Atg8 and Atg11 at the PAS. In addition, Atg32 is proteolytically cleaved at its C-terminus by the inner mitochondrial membrane i-AAA (intermembrane space ATPase associated with diverse cellular activities) protease Yme1 [40]. This processing is compromised when Atg32 is tagged at its C-terminus in wild-type (WT) cells, or when Yme1 is deleted or mutated to a catalytically inactive form by a single point mutation (E541Q) [40]. This results in defective mitophagy upon nitrogen starvation. The exact mechanism of Yme1-mediated mitophagy is not clear; however, in yme1∆ cells the interaction between Atg32 and the Atg11 adaptor is reduced, leading to impaired recruitment of mitochondria to the PAS. Other studies indicate no mitophagy defect in cells lacking Yme1, suggesting that Yme1-dependent processing may be strain- and/or condition-specific [41,42]. Nevertheless, these two processes, CK2-mediated N-terminal phosphorylation and Yme1-dependent C-terminal processing of Atg32, control nitrogen starvation-induced mitophagy.
The function of Yme1 in post-log/stationary-phase mitophagy is different from that under nitrogen starvation. In yeast cells grown for 2-3 days (post-log/stationary phase) on non-fermentable carbon sources (respiratory media) such as lactate, glycerol, or ethanol, mitochondria tend to proliferate and produce reactive oxygen species (ROS). Therefore, cells promote selective mitochondrial degradation to limit the abundance of mitochondria (healthy or damaged). Cells lacking Yme1 show severe mitochondrial damage (defective morphology), revealed by transmission electron microscopy (TEM), with damaged mitochondria associated with the vacuolar rim [43]. This raises the question of whether damaged mitochondria in yme1∆ cells are degraded by microautophagy or still depend on Atg32-mediated macromitophagy. Notably, yme1∆ cells showed a ∼1.5-3.0-fold greater mitophagy rate than wild-type (WT) cells when grown on a non-fermentable carbon source for 2-3 days [41,43].
Mitochondrial Fission and Yeast Mitophagy
As intact mitochondria, present as interconnected tubules in many living cells, have larger dimensions than autophagosomes, sequestration of damaged mitochondria may be facilitated by prior mitochondrial fragmentation. It has been shown that the mitochondrial fission machinery separates damaged/dysfunctional mitochondria from the healthy mitochondrial network in mammalian cells [19,20,44,45]. Such spatial separation promotes mitophagy by selective sequestration of smaller damaged mitochondria within autophagosomes and subsequent lysosomal degradation [44,45]. Thus, blocking mitochondrial fission leads to defective mitophagy and accumulation of damaged mitochondria in mammalian systems [44]. Is the mitochondrial fission machinery a prerequisite for mitophagy in yeast? In S. cerevisiae, mitochondrial fission is mediated by the fission factors Dnm1 and Fis1 [46]. Cells lacking either DNM1 or FIS1 show significantly reduced nitrogen starvation-induced mitophagy [46]. This is consistent with a previous study in which deletion of DNM1 significantly suppressed mitophagy in mdm38∆ cells [47]. Still, rapamycin-induced mitophagy was shown not to depend on Dnm1 or Fis1 [48]. Atg11 was reported to recruit Dnm1 to "marked" mitochondria that are destined for degradation [46]. It was also suggested that the ER-mitochondria encounter structure (ERMES) complex might participate in mitochondrial fission during mitophagy [46,49]. However, the exact molecular mechanism by which the mitochondrial fission machinery, and as yet unidentified fission factors, regulates the early steps of mitophagy in yeast is still unclear. Thus, in an updated mitophagy model, Atg32 recruits Atg11 to "marked" mitochondria upon induction of mitophagy. Atg11 subsequently interacts with and recruits Dnm1 and other fission components to these "marked" mitochondria and facilitates their fragmentation [46]. These fragmented mitochondria are then transported to the PAS, where other core Atg proteins are recruited, initiating autophagosome formation. In mammalian cells, upon energy stress, MFF (mitochondrial fission factor) recruits Drp1 (dynamin-related protein 1), a GTPase lacking a hydrophobic transmembrane domain, from the cytosol to the mitochondrial outer membrane to catalyze mitochondrial fragmentation for efficient autophagosomal engulfment of mitochondria and mitophagy [50]. Notably, the Drp1-MFF interaction and mitochondrial fission are mediated by adenosine monophosphate (AMP)-activated protein kinase (AMPK)-mediated phosphorylation of MFF at Ser155 and Ser172 [50].
Transcriptional and Translational Regulation of Atg32 Activity
Upon mitophagy induction, Atg32 transcript and protein levels are controlled by transcriptional and co-translational regulation. It has recently been shown that Ume6-Sin3-Rpd3, a transcriptional repressor complex, directly binds to the promoters (URS1 consensus 5′-TCGGCGGCT-3′) of ATG32 and ATG8, with ∼2.5-fold higher binding than to the negative-control TFC1 promoter (Figure 2). Yeast cells lacking SIN3, RPD3, or UME6 show a ∼2.5-fold higher expression of ATG32 and ATG8 mRNAs. Thus, the Ume6-Sin3-Rpd3 complex negatively regulates autophagy and mitophagy. In addition, co-translational N-terminal protein acetylation (Nt-acetylation) also regulates mitophagy under respiratory growth conditions [51].
Co-translational Nt-acetylation is a widespread, irreversible modification of proteins in eukaryotes, affecting 50-70% of the yeast proteome and approximately 90% of the proteome in higher eukaryotes. Nt-acetylation occurs as soon as nascent polypeptide chains emerge from the ribosome exit tunnel during translation. It is accomplished by the major cytosolic N-terminal acetyltransferase A (NatA) complex, which binds to the large ribosomal subunit near the ribosome exit tunnel. The yeast NatA complex comprises a catalytic subunit, Ard1, and an adaptor subunit, Nat1. The NatA complex catalyzes acetylation of second amino acids Ala, Val, Ser, Thr, Gly, and Cys of the nascent polypeptide once the N-terminal methionine residue has been co-translationally cleaved by methionine aminopeptidase (MetAP). Nt-acetylation influences the degradation/stability, folding, protein-protein interactions, and subcellular translocation of nascent proteins.
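The substrate rules just described (iMet removal by MetAP followed by NatA acetylation of a small second residue) can be expressed as a simple predicate. The snippet below is an illustrative simplification of those rules, not a validated predictor; real Nt-acetylation depends on further sequence context.

```python
# Second residues listed in the text as NatA acetylation targets after
# the initiator methionine is cleaved by MetAP.
NATA_TARGETS = set("AVSTGC")

def is_candidate_nata_substrate(sequence):
    """Rough rule of thumb: Met-starting protein whose second residue is
    Ala/Val/Ser/Thr/Gly/Cys (single-letter codes)."""
    return len(sequence) >= 2 and sequence[0] == "M" and sequence[1] in NATA_TARGETS

# Atg32 begins Met-Val (Val at position 2), so it fits the rule, whereas the
# V2P variant discussed below does not.
print(is_candidate_nata_substrate("MV..."))   # True  (wild-type-like start)
print(is_candidate_nata_substrate("MP..."))   # False (V2P-like start)
```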
Loss of Nt-acetylation in natA∆ yeast cells, by deleting Ard1, Nat1, or both, causes a drastic reduction (more than 90%) in mitophagy without affecting bulk autophagy. These mutants mimic the defective mitophagy phenotype of the atg32∆ mutant. How does the NatA complex regulate mitophagy in yeast? What are the endogenous substrates of NatA that control mitophagy without affecting the core autophagy machinery? Interestingly, NatA is estimated to N-terminally acetylate ∼40% of the yeast and mammalian proteomes. Atg32 would seem to be a direct substrate for NatA-mediated Nt-acetylation because it contains Val at the second amino acid position. However, an Atg32 variant in which this valine is substituted by proline (V2P), preventing Nt-acetylation by NatA, was still as functional in promoting mitophagy as wild-type Atg32. This suggests that Atg32 is not a direct substrate of NatA during mitophagy, and an indirect mode of regulation was therefore proposed. NatA may acetylate the Rpd3 and Sin3 transcriptional repressors on the valine and serine at their second positions, respectively. Nt-acetylation of Rpd3 and Sin3 can serve as a specific degradation signal for polyubiquitination and proteasomal degradation via the Ac/N-end rule pathway. Thus, it is plausible that NatA reduces the protein half-lives of the Rpd3 and Sin3 repressors to enhance transcription of ATG32 and ATG8 during mitophagy. This work warrants further research to understand how Nt-acetylation regulates mitophagy by acetylating yet-undiscovered substrates among the ∼40% of the yeast proteome targeted by NatA. Future studies can dissect what fraction of mitochondrial, cytosolic, or nuclear proteins is Nt-acetylated by NatA. The Nt-acetylation field is only beginning to uncover the molecular mechanisms and physiological relevance of this modification during mitophagy and other environmental stress response pathways.
Furthermore, Atg33 was identified as a second candidate from a genome-wide screen that specifically regulates post-log-phase mitophagy [52]. Atg33 may promote degradation of aged mitochondria; however, its exact function in mitophagy is still unclear.
Yeast Model of Mitochondrial Damage and Mitophagy
In yeast, bona fide models that mimic mitochondrial damage and initiate mitophagy are still lacking. Yeast appears to be resistant to mitophagy induction by treatment with known mitochondrial toxins that impair OXPHOS [48]. However, some studies have shown that mitophagy can be induced by mitochondrial damage in this model organism through genetic approaches [47,53,54].
The sphingolipid metabolism pathway is conserved from yeast to humans; small amounts of the bioactive lipid ceramide are locally synthesized at the cytosolic side of the endoplasmic reticulum (ER) and of mitochondria [55]. Several enzymes of sphingolipid metabolism, including ceramide synthase and reverse ceramidase, have been shown to localize to ER-mitochondria contact sites [55]. Interestingly, yeast and mammalian cells synthesize ceramides from sphingolipids under environmental stress conditions such as heat shock [56] and serum starvation [57].
Ceramides regulate mitochondrial function and morphology/dynamics via direct interaction with the electron-transport chain (ETC) [58]. In addition, mitochondrially localized ceramides were shown to regulate mitochondrial translation, at least for a few subunits such as COX3 [58]. Saccharomyces cerevisiae Isc1 is an inositol phosphosphingolipid phospholipase C involved in the de novo biosynthesis of ceramides and phytoceramides from complex sphingolipids. Isc1 translocates to mitochondria when respiration is induced by a transition from fermentation to non-fermentable carbon sources (e.g., glycerol or lactate) [58]. Yeast cells lacking Isc1 show mitochondrial fragmentation and a reduced chronological life span (CLS) owing to failed mitochondrial translation and perturbation of sphingolipid metabolism [58,59].
Yeast isc1∆ cells show a ∼90% reduction in the synthesis of α-hydroxylated phytoceramides in mitochondria and are hypersensitive to oxidative stress (H2O2) and ethidium bromide (EtBr) [60]. The isc1∆ mutant consequently displays a respiratory-deficient, "petite" phenotype [60]. Recently, it was shown that yeast isc1∆ cells grown to the post-log phase (respiratory conditions) display compromised mitochondrial function and mitochondrial fragmentation mediated by the fission factor Dnm1 (Drp1 in mammals) [59]. Under these conditions, Dnm1 is also induced, suggesting that Dnm1 is essential to fragment the damaged mitochondria [59]. Mitochondrial fragmentation eventually hyperactivates Atg32-dependent mitophagy as an adaptive stress response. Furthermore, ATG32 deletion in the isc1∆ background (i.e., the isc1∆atg32∆ mutant) further exacerbates the growth defect and results in a ∼40% CLS reduction relative to isc1∆ cells [59]. Under identical conditions, the mitophagy-deficient atg32∆ mutant essentially behaves like the wild-type strain, without any detectable growth phenotype. These observations suggest that Atg32 is physiologically essential to promote the degradation of damaged mitochondria and to maintain cell growth and survival of the ceramide-deficient isc1∆ mutant (Figure 3). Thus, Atg32-dependent mitophagy prevents excessive cell death or premature ageing caused by severe organelle damage in the isc1∆ mutant [59]. In response to mitochondrial damage, yeast isc1∆ cells also activate the mitogen-activated protein kinase (MAPK) Hog1 (homologue of mammalian p38), which is required for mitophagy [39,59]. Phosphorylation of the mitophagy receptor Atg32 at Ser114/119 by CK2 is crucial for the Atg32-Atg11 interaction and mitophagy [38]. Hog1 is required for Atg32 phosphorylation but cannot directly phosphorylate Atg32, suggesting that Hog1 functions upstream of CK2 [39]. However, the molecular mechanism by which Hog1 controls CK2-mediated Atg32 phosphorylation is still unclear.
How do yeast isc1∆ cells mechanistically enhance mitophagy compared with wild-type (WT) cells? In HL-60 human cells, it was reported that ceramides directly activate the mitochondrial protein phosphatase 2A (PP2A), which can dephosphorylate its substrate Bcl2 [61]. Bcl2 is an anti-apoptotic protein whose function relies on phosphorylation at an evolutionarily conserved site, serine 70 [61]. Therefore, a high ceramide concentration can lead to apoptotic cell death owing to the dephosphorylation of Bcl2 by PP2A. In contrast, Kanki's group recently showed that Atg32 phosphorylation is regulated by the PP2A-like phosphatase Ppg1 [35]. Ppg1 functions as a negative regulator of Atg32 phosphorylation, as it dephosphorylates Atg32 and inhibits mitophagy without affecting other selective autophagy pathways [35]. Therefore, it is plausible to hypothesize that isc1∆ cells may inactivate Ppg1 because of reduced endogenous ceramide levels, leading to the hyperactivation of mitophagy. However, it is not entirely understood whether Ppg1 localizes near its substrate Atg32 or at mitochondria to control mitophagy.
Ubiquitin-Dependent Regulation of Mitophagy
Eukaryotic cells have several mitophagy mechanisms, regulated via multiple signaling cascades under different mitochondrial and cellular stress conditions. Mitophagy pathways are classified as ubiquitin-dependent and -independent. Most of the core autophagy-related proteins involved in mitophagy are conserved from yeast to humans. The well-characterized PINK1-PARKIN pathway regulates ubiquitin-dependent mitophagy and is conserved in metazoans such as Drosophila melanogaster, Caenorhabditis elegans, and mammals [37,62]. There are no apparent PINK1 or PARKIN homologs in bacteria, yeast, or plants.
Figure 3. Regulation of Atg32-dependent mitophagy in the wild-type (WT) and isc1∆ mutant strains. (A) S. cerevisiae WT cells, when grown over time on non-fermentable carbon sources, maintain a basal level of mitophagy. This apparently depends on the recruitment of Isc1 to mitochondria, where it catalyzes ceramide synthesis from sphingolipids. Mitochondrial ceramides, in turn, may activate the mitochondria-localized protein phosphatase Ppg1, thus counteracting CK2-mediated phosphorylation of Atg32 and limiting excessive mitophagy. The isc1∆ cells show mitochondrial damage and fragmentation linked to increased protein expression of Dnm1, and display enhanced Atg32-mediated mitophagy promoting cellular viability. (B) Isc1- and Atg32-dependent cellular viability. The mitophagy-deficient atg32∆ strain does not show any defect in cell growth and viability. The ceramide-deficient isc1∆ strain can still maintain cell viability (∼80% of the wild-type level) by enhancing mitophagy, which acts as a protective mechanism here. However, additional deletion of ATG32 in isc1∆ cells (i.e., the isc1∆atg32∆ double mutant) drastically reduces cell survival, as the enhanced mitophagy observed in isc1∆ cells cannot operate in the absence of Atg32.
The PINK1-PARKIN Pathway in Mammals
In many metazoan cell types, mitophagy is regulated by PTEN-induced putative kinase protein 1 (PINK1) and PARKIN, a RING-HECT hybrid E3 ubiquitin ligase; neither has a yeast homolog, and loss-of-function mutations in the two genes encoding these proteins are linked to Parkinson's disease (PD), the second most common neurodegenerative disease in humans. Together, these two proteins safeguard a protective mitophagy response to mitochondrial stress and limit the accumulation of damaged/toxic mitochondria. In cells bearing healthy mitochondria, PINK1 (a Ser/Thr protein kinase, also known as PARK6) is rapidly degraded via the N-end rule pathway after partial import into mitochondria, PARL-dependent processing, and retrotranslocation [63], while PARKIN (PARK2, also known as PRKN) remains in an autoinhibited state in the cytosol [5,[64][65][66][67]. Thus, PINK1 is barely detectable in cells with healthy mitochondria. However, upon mitochondrial damage (e.g., mitochondrial depolarization), active PINK1 is stabilized on the mitochondrial surface and recruits the E3 ubiquitin ligase PARKIN [68,69]. Subsequently, PINK1 phosphorylates its substrates: ubiquitin (Ub) at the conserved residue Ser65 (generating pSer65-Ub) and residue Ser65 in the N-terminal Ub-like (UBL) domain of PARKIN (generating pSer65-PARKIN) [70][71][72][73]. Ubiquitin phosphorylation is reversible, and pSer65-Ub is barely detectable under basal conditions but is rapidly induced by mitochondrial damage in cells and amplified by functional PARKIN [74].
The ubiquitin chains assembled by PARKIN are counteracted by deubiquitinases (DUBs) such as USP15 and USP30, which antagonize the PINK1-PARKIN pathway (see below). Interestingly, USP30 shows a preference for K6 Ub linkages once it is localized to mitochondria [82]. Knockdown of USP15 or USP30 rescues the mitophagy defect associated with pathogenic PARKIN mutations in PD patient fibroblasts and in Drosophila, improving mitochondrial integrity and organismal survival [79,81]. Thus, genetic and pharmacological inhibition of USP15 or USP30 may represent a therapeutic strategy for PD pathology caused by reduced PARKIN levels and defective mitophagy [85].
Does PINK1-dependent ubiquitin phosphorylation at Ser65 impact DUB activity and selectivity? Wauer and colleagues showed in in vitro reconstitution assays that ∼12 DUBs, including USP2, USP8, USP15, USP30, Ataxin-3, and USP21, hydrolyze phospho-Ub chains with significantly reduced activity [86]. In the case of USP30, structural and biochemical analysis shows that phosphorylation of the distal ubiquitin in a K6-linked ubiquitin dimer can preclude access to USP30 [77]. In addition, a single phosphorylation of the distal ubiquitin of a tetraubiquitin chain is sufficient to prevent DUB-mediated hydrolysis [77]. Thus, pSer65-Ub has an additional function beyond PARKIN activation: at mitochondria, pSer65-Ub can create a DUB-resistant mitophagy signal by phospho-capping of K6 Ub chains, preserving recruitment sites for ubiquitin-binding mitophagy receptors that link the mitochondria to autophagosomes. Interestingly, in addition to these DUBs, phosphatase and tensin homolog (PTEN)-long (PTEN-L) was recently identified as a novel negative regulator of the PINK1-PARKIN mitophagy pathway that dephosphorylates pSer65-Ub in vivo and in vitro via its protein phosphatase activity [87] (Figure 4).
Figure 4. (A) Mitochondrial localization of PARKIN and its ubiquitin ligase activity are enhanced drastically (∼4400-fold) when pSer65-Ub binds to pSer65-PARKIN. This can create a feedforward loop by providing additional ubiquitin molecules for PINK1 phosphorylation. Some PARKIN substrates (labeled as "Y") are degraded by the ubiquitin-proteasome system (UPS) during mitophagy. Mitochondria decorated with polyubiquitin/phospho-ubiquitin chains (shown on "X") are recognized by ubiquitin-binding mitophagy receptors and sequestered by autophagosomes for lysosomal degradation. (B) Ub-dependent mitophagy in yeast. Upon mitophagy induction in respiring cells treated with rapamycin, mitochondrial outer membrane (MOM) proteins can be ubiquitinated by unknown E3 ubiquitin ligase(s). The autophagy machinery recognizes ubiquitinated mitochondria for subsequent vacuolar degradation. The cytosolic Ubp3-Bre5 deubiquitinase complex can inhibit mitophagy when it is recruited to mitochondria, presumably by removing ubiquitin moieties from MOM proteins.
In mammalian cells, ubiquitin-binding mitophagy receptors contain ubiquitin-binding domains (UBDs) and a four-residue short hydrophobic sequence known as the LC3-interacting region (LIR) motif ([W/Y/F]xx[L/I/V]) [5]. These mitophagy receptors recognize and bind ubiquitylated MOM proteins via their UBDs on the one hand and the LC3-II-conjugated autophagosomal double membrane via their LIR motifs on the other [5]. In mammals, five ubiquitin-binding mitophagy receptors have been linked to mitophagy: p62 (SQSTM1), NBR1, NDP52 (CALCOCO2), optineurin (OPTN), and TAX1BP1 [5,88]. Accordingly, cells lacking all five receptors (termed Penta KO) fail to remove mitochondria after activation of the PINK1-PARKIN pathway [88]. p62 (SQSTM1), the best-characterized and first-identified autophagy cargo receptor, has both a ubiquitin-associated (UBA) domain to interact with ubiquitinated protein substrates and an LIR motif for binding to LC3/GABARAP-positive autophagosomes [89]. Several studies showed that p62 is dispensable for mitophagy initiation, as it does not promote autophagosome biogenesis around damaged mitochondria. Recently, it has been reported that p62 can promote PINK1-PARKIN-independent mitophagy [90] by executing juxtanuclear clustering of damaged mitochondria that resembles p62-mediated 'aggresomes' of ubiquitinated aggregated proteins [91]. However, various pathogenic PARKIN mutations interfered with p62-mediated clustering of damaged mitochondria, suggesting that p62 collaborates with PARKIN for mitochondrial clustering at perinuclear regions [91]. Thus, the role of p62 in mitophagy is controversial, and further study is required to understand the functional redundancy of p62 in mitochondrial elimination in a context-specific (e.g., tissue- and cell-type-dependent) manner. Nevertheless, upregulation of p62 may serve as an attractive therapeutic approach to counteract aging and Parkinson's disease (PD) by restoring/enhancing alternative mitophagy pathways in pathological conditions where the PINK1-PARKIN pathway is perturbed.
Interestingly, it has recently been reported that OPTN is sufficient to rescue mitophagy in Penta KO cells when it is stably re-expressed in these cells together with a mitochondria-targeted, non-cleavable, linear, lysine-less di-ubiquitin (mito-2Ub K0) [92]. Mito-2Ub K0 induces mitophagy without chemically induced mitochondrial depolarization, which is otherwise a prerequisite for PINK1-mediated ubiquitin phosphorylation [92]. This study suggests that mito-2Ub K0 can bypass the PINK1-PARKIN pathway and promote mitophagy by preferentially binding the ubiquitin-associated (UBA) domain of OPTN. Importantly, in addition to binding ubiquitin and LC3, OPTN also binds ATG9A vesicles, which supply lipids for de novo synthesis of autophagosomal membranes in close proximity to the ubiquitinated mitochondria [92]. The OPTN-ATG9A interaction is mediated by the leucine zipper domain (residues 143-164) of OPTN [92]. Thus, the ubiquitin-OPTN-ATG9A axis functions in concert with the known ubiquitin-OPTN-ATG8 axis in mitochondrial clearance, using specific interaction domains within OPTN [92]. In contrast, another study reported that the NDP52 receptor initiates mitophagy by recruiting FIP200, a core component of the autophagy initiation complex (i.e., a ubiquitin-NDP52-FIP200 axis) [93]. Thus, among the five mitophagy receptors, only OPTN and NDP52 are crucial for mitochondrial clearance, as both directly recruit core components of the autophagy machinery to initiate autophagosome biogenesis near ubiquitinated cargo [92,93].
Interestingly, it has been shown that p38 (MAPK14), the mammalian homolog of yeast Hog1, is involved in promoting starvation- or hypoxia-induced mitophagy in mammals, confirming the evolutionarily conserved function of upstream MAPK signaling in regulating mitophagy from yeast to mammals [94]. Surprisingly, both yeast Hog1 and mammalian p38 fail to translocate to mitochondria and instead remain in the cytosol under mitophagy-inducing conditions [39,94]. These results raise the possibility that p38 may have cytosolic substrates that directly or indirectly phosphorylate unknown mitophagy regulators (e.g., Bcl-2-L-13, the mammalian counterpart of the yeast mitophagy receptor Atg32), thereby allowing recruitment of the mitophagy machinery.
In addition to the conserved MAPK signaling pathway, the tumor suppressor p53, a multifunctional transcription factor, has been shown to regulate mitophagy. p53 is activated by a variety of cellular stresses, including genotoxic stress, oxidative stress, ribosomal stress, hypoxia, and starvation [95]. Nuclear p53 can positively and negatively regulate the expression of its target genes, eliciting distinct cellular responses to different stress signals. Interestingly, p53 functions as a negative regulator of the PINK1-PARKIN mitophagy pathway: nuclear p53 represses the transcription of PINK1 [96], whereas cytosolic p53 can sequester PARKIN and inhibit its translocation to mitochondria [97]. Thus, genetic and pharmacological inhibition of p53 can activate PARKIN-dependent mitophagy, preserving mitochondrial integrity and protecting against glucose intolerance, heart failure, and cardiac aging in mice [97,98].
In addition to p53, the redox-sensitive transcription factor Nrf2 (nuclear factor erythroid 2-related factor 2) is also involved in mitophagy regulation by binding to a putative antioxidant response element (ARE) of target genes [99]. Under basal conditions, Nrf2 is efficiently ubiquitinated by the Keap1 (Kelch-like ECH-associated protein 1)-Cul3 E3 ligase complex and constitutively degraded via the ubiquitin-proteasome pathway [99]. However, in response to stress (e.g., oxidative stress), Keap1 undergoes conformational changes through modification of its cysteine residues, leading to its inactivation. Consequently, Nrf2 is stabilized and translocates into the nucleus to initiate the transcription of several cytoprotective genes, including PINK1 and p62 [100,101]. Thus, activation of the Nrf2-ARE signaling pathway can directly regulate mitophagy. Indeed, PMI (p62/SQSTM1-mediated mitophagy inducer), a synthetic compound, promotes p62-mediated mitophagy via Nrf2 stabilization in mammalian cells [102].
Besides the well-studied PINK1-PARKIN system, PARKIN-independent mechanisms that promote mitophagy in mammals have also been identified; these rely on ubiquitin-independent mitophagy receptors. Such receptors include NIX/BNIP3L, FUNDC1, AMBRA1, Prohibitin 2 (PHB2), MCL-1, cardiolipin (CL), ceramide, FKBP8, ATAD3B, and Bcl2-L-13 (the mammalian homolog of Atg32), which directly bind LC3 family proteins via LIR motifs and mediate selective mitochondrial clearance by sequestering mitochondria into autophagosomes [23,24,33,[103][104][105][106][107][108][109][110]. For instance, Nix-dependent mitophagy regulates RBC maturation from erythroid precursor cells [23,24], and FUNDC1 mediates mitochondrial clearance upon hypoxia [103]. Note that these mitophagy receptors are often post-translationally modified for efficient interaction with LC3 during mitophagy. In addition, a recent report showed that two mitochondrial matrix proteins, NIPSNAP1 and NIPSNAP2, can serve as autophagy-related receptors by accumulating on the mitochondrial surface in response to mitochondrial depolarization [111]. NIPSNAP1 and NIPSNAP2 function as "eat-me" signals for mitophagy by interacting with LC3/GABARAP proteins, the autophagy receptors NDP52, p62, NBR1, and TAX1BP1, and the autophagy adaptor ALFY, as well as with PARKIN [111]. NIPSNAP1 and NIPSNAP2 are new players in mitophagy with a neuroprotective function in vivo. For example, NIPSNAP1-deficient larvae of the zebrafish model organism show neurodegenerative phenotypes, including loss of tyrosine hydroxylase (Th1)-positive dopaminergic neurons, increased oxidative stress, increased cell death (apoptosis), and reduced locomotor activity as a consequence of impaired mitophagy [111]. Such phenotypes were also observed in the zebrafish model upon loss of PINK1 function [112], highlighting the relevance of NIPSNAP1 activity for the PINK1-PARKIN mitophagy pathway in the brain. However, it is unclear how NIPSNAP1 and NIPSNAP2 are stabilized on the mitochondrial surface and whether NIPSNAP proteins can impact PARKIN-dependent ubiquitination of MOM substrates in different cell types under different mitochondrial stress conditions.
Ubiquitin-Dependent Mitophagy in Yeast
Ubiquitin-dependent mitophagy is well studied in mammals, as discussed above. We recently identified 96 mitophagy regulators (86 positive and 10 negative) in yeast from a combined genetic and biochemical high-throughput screen [30]. The cytosolic Ubp3-Bre5 deubiquitinase complex is recruited to mitochondria upon mitophagy induction of respiring cells by rapamycin (Figure 4). The Ubp3-Bre5 complex inhibits mitophagy while promoting other autophagy pathways, including bulk autophagy, the CVT pathway, and ribophagy [30,113]. Cells lacking Ubp3 or Bre5 show ∼1.5-2.0-fold higher rapamycin-induced mitophagy than wild-type cells. However, substrates of the Ubp3-Bre5 complex that could explain this reciprocal regulation of mitophagy and other selective autophagy pathways have not yet been identified. Future studies should therefore define the molecular mechanisms by which the ubiquitination and deubiquitination machineries regulate mitophagy and allow yeast cells to adapt to different growth conditions [114]. For instance, heterologously expressed PARKIN translocated to mitochondria upon oxidative stress or aging and extended the chronological life span (CLS) and oxidative stress resistance of respiring yeast cells via mitophagy initiation [114].
Mitophagy in Neurodegenerative Diseases
Loss-of-function mutations in the PINK1 or PARKIN genes lead to defective mitophagy and the accumulation of dysfunctional mitochondria, contributing to autosomal recessive Parkinson's disease (PD) [115,116]. Interestingly, these proteins also function in the same pathway to prevent mitochondrial damage in the Drosophila model, where defects in mitophagy result in reduced lifespan, apoptotic muscle and dopaminergic neuron degeneration, male sterility, fragmentation of mitochondrial cristae, and hypersensitivity to multiple stresses, including oxidative stress and endoplasmic reticulum (ER) stress [117][118][119].
Ser65-phosphorylated ubiquitin (pSer65-Ub)-positive mitophagy granules were found to accumulate in the human brain during aging and in PD patients with Lewy body disease [74]. Lewy bodies (LBs), α-synuclein-rich cytosolic inclusions, serve as a pathological hallmark of PD and many other neurodegenerative diseases [120]. LB-like inclusions often trap fragmented membranes, vesicular structures, and organelles (mitochondria, autophagosomes, and lysosomes), and impair pathways of protein degradation and mitochondrial homeostasis [120]. In addition, pSer65-Ub levels were increased in brain tissue of a mouse model lacking PARKIN and bearing mutations in POLG (the catalytic subunit of the mitochondrial DNA polymerase gamma), which lead to mitochondrial dysfunction as a result of mtDNA mutations, premature aging, and defective respiratory chain assembly [121]. Thus, in addition to α-synuclein, pSer65-Ub may serve as a promising biochemical and imaging biomarker of PD pathology. Even a low cellular abundance of pSer65-Ub in cerebrospinal fluid or blood might be detected by mass spectrometry-based phosphoproteomics to identify PD patients with defective mitophagy [122].
Mitophagy defects are not limited to the pathogenesis of Parkinson's disease (PD) but are also involved in the pathology of other neurodegenerative diseases, including Alzheimer's disease (AD), Huntington's disease (HD), amyotrophic lateral sclerosis (ALS), and multiple sclerosis (MS), which are characterized by progressive degeneration of neurons, resulting in cognitive impairment and memory loss [123]. The pathological hallmarks of AD are the accumulation of amyloid-β (Aβ) plaques/aggregates and of neurofibrillary tangles (NFTs) composed of hyperphosphorylated Tau (p-Tau) [123][124][125][126]. Both Aβ aggregates and NFTs lead to mitochondrial impairment through different mechanisms, such as perturbation of oxidative phosphorylation (OXPHOS), alteration of mitochondrial dynamics, and loss of mitochondrial proteostasis [123][124][125][126]. Thus, dysfunctional mitochondria progressively co-accumulate with Aβ peptides in neurons of AD patients [126]. A recent study showed that mitophagy induction rescues Aβ and tau pathology in transgenic C. elegans and mouse models of AD [127]. Thus, PINK1-PARKIN-mediated mitophagy plays a protective role in eliminating neurotoxic Aβ species and defective mitochondria at both the neuronal and the organismal level.
Mitochondrial dysfunction is also associated with HD pathogenesis, which is caused by aberrant expansion of a CAG repeat in the coding region of the huntingtin (HTT) gene, resulting in expanded polyglutamine (polyQ) aggregation and neuronal death [123]. Basal mitophagy was shown to be markedly reduced in mouse models of HD (i.e., mutant HTT-expressing mice), suggesting that defective mitophagy is one of the causes of HD progression [128]. Interestingly, HTT-induced neurodegeneration was partially rescued by PINK1 overexpression in fly and mouse HD models [129]. Therefore, there is increasing interest in stimulating mitophagy (PINK1-PARKIN-dependent and -independent) to treat HD and other neurodegenerative diseases.
Amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) share a common mechanism of mitochondrial toxicity and neurodegeneration caused by protein misfolding and aggregation of mutant superoxide dismutase 1 (SOD1), TAR DNA-binding protein 43 (TDP-43), and the RNA-binding protein FUS [123,130]. Relatively low PARKIN expression was observed in an ALS/FTD mouse model expressing mutant human TDP-43, suggesting that increased vulnerability to mitochondrial dysfunction and a defective PINK1-PARKIN pathway are linked to ALS/FTD pathogenesis [131]. In addition, through a poorly defined mechanism, mitophagy has been suggested to regulate the pathogenesis of multiple sclerosis (MS), the most common neurodegenerative disease in young adults [132].
Conclusions and Perspectives
Mitophagy is a multistep process that maintains cellular and organismal fitness during aging or intracellular/environmental stress conditions. Moreover, mitophagy coordinates with mitochondrial biogenesis and dynamics to maintain mitochondrial homeostasis [133]. For example, mitophagy impairment compromises stress resistance, longevity, and mitochondrial function in C. elegans animal model [133]. Indeed, mitophagy-deficient animals show phenotypes such as elevated mitochondrial reactive oxygen species (ROS), increased mitochondrial DNA mutations, decreased ATP levels, mitochondrial membrane depolarization, and elevated cytoplasmic Ca 2+ concentration [133]. Furthermore, neuronal mitophagy declines during human pathologies and ageing, leading to an accumulation of defective organelles [128,133]. Understanding the molecular mechanisms underlying mitophagy pathways in yeast and metazoans will have potential value to modulate mitophagy for therapeutic intervention in neurodegenerative diseases. Recently, new small-molecule compounds were identified that could amplify the catalytic activity of PINK1 and PARKIN (WT and PD-related mutant), and thus offer an effective therapeutic strategy to manipulate and rescue the efficient clearance of defective organelles via mitophagy in PD patients [134,135]. However, kinetin triphosphate (KTP), an ATP neo-substrate, can only pharmacologically boost or restore PINK1 activity upon mitochondrial depolarization by CCCP, which may limit its application under the pathophysiological setting [135]. Further studies are required to validate the drug efficacy of these compounds in rodent genetic PD models. Furthermore, PINK1 and PARKIN activity may be modulated by screening and identifying potential synthetic or natural small-molecule inhibitors against specific DUBs (USP15, USP30, and USP35) that antagonize the PINK1-PARKIN pathway. Alternatively, the pharmacological activation of USP8 by small-molecule-compounds may increase mitophagy in PD patients with decreased PINK1 or PARKIN activity.
Author Contributions: R.K. and A.S.R. designed the concept and wrote the article. R.K. generated all figures. All authors have read and agreed to the published version of the manuscript. | 10,227 | 2021-04-22T00:00:00.000 | ["Biology", "Chemistry"] |
Evaluation of fractionally distilled Picea abies TMP-turpentine on wood-decaying fungi: in vitro, microcosm and field experiments
Synthetic and heavy-metal antifungals are frequently used as wood preservatives. However, they are relatively resistant to biodegradation and toxic when leached, which makes their replacement with environmentally degradable yet functional alternatives a key target in the wood protection industry. In this context, distilled fractions of raw thermomechanical pulp turpentine (TMP-T) from Picea abies were assessed for their ability to protect wood against wood-decaying fungi. The antifungal bioactivity of the fractions and some of their combinations was screened on agar plates against the brown-rot fungus Coniophora puteana. Addition of TMP-T fractions significantly reduced fungal growth rates, while mixtures indicated the presence of synergistic and antagonistic effects. One fraction, obtained after distilling 1 L TMP-T at 111–177 °C and 0.5 mbar, showed complete growth inhibition of Antrodia sinuosa, Serpula lacrymans and Serpula himantioides and significant inhibition of Antrodia serialis, Antrodia xantha, Gloeophyllum sepiarium and Heterobasidion parviporum at a concentration of 1000 ppm. This fraction was further examined for long- and medium-term effects on wood decay in a microcosm soil-jar experiment and a field experiment, respectively. The known antifungal compounds benzisothiazolinone, 2-octyl-4-isothiazolin-3-one and 3-iodo-2-propynyl N-butylcarbamate and two commercial wood preservatives were used as reference treatments. The commercial preservatives conferred long-term efficacy against C. puteana wood decay in the soil-jar microcosm experiment, whereas no noticeable protection was found for the isolated antifungal compounds or the present turpentine-based treatments. However, in the field, a moderate effect of the TMP-T fraction selected from the in vitro assay was observed, and the TMP-turpentine distillation residue showed a fungal inhibition effect similar to that of the most potent commercial treatment after 29 months.
Introduction
Wood-decaying fungi are ecologically important for the release of sequestered carbon and nitrogen (Boddy et al. 2008), and although their ability to degrade wood is of critical importance in nature, they cause detrimental deterioration to wooden structures such as houses, logs and outdoor wood. To enhance the service life of wood products, synthetic preservatives (Råberg and Hafrén 2008), waxes (Brischke and Melcher 2015;Liu et al. 2018), linseed oil (Humar and Lesar 2013) and other essential oils (Hyvönen et al. 2006;Medeiros et al. 2016;Wang et al. 2005) have frequently been used as topical or impregnated protective measures against wood-decaying fungi (Humar and Lesar 2013). With increasingly pervasive public environmental concerns, products from nature have in the last decades experienced a renaissance as wood protective measures as well as providing companies with the necessary goodwill incentive to look for more eco-friendly replacements (Sen 2001). As an alternative to synthetic wood preservatives, products from trees carry a natural and latent potential as fungicide sources for inhibiting wood-decaying fungi (Chong et al. 2009;Schultz and Nicholas 2000;Valette et al. 2017).
Trees co-evolved during millions of years with their degraders, a process that honed and primed their natural defences (Stokland 2012). Apart from the defensive bark barrier, trees produce resin-type compounds, phytoalexins, as a response to fungal attack (Kuć and Rush 1985). Phytoalexins are a diverse group of natural compounds that include alkaloids, glycosteroids and terpenoids. An essential oil that contains many of the terpenoids present when trees protect themselves is the essential oil from spruce (Arnerup et al. 2011). In the thermomechanical pulping process, wood essential oil is obtained in large quantities as a by-product by hydrodistillation of wood chips. The resulting essential oil is hereafter referred to as thermomechanical pulp turpentine (TMP-T). Essential oils from various sources have previously shown inhibitory or biocidal effects on an array of different organisms: parasites (Izumi et al. 2012), bacteria (Miladinovic et al. 2012), moulds (Akgül and Kivanç 1989), insects (Boulogne et al. 2012) and filamentous fungi (Inouye et al. 1998). Additionally, they are generally considered safe for humans and have found long-term use in drug formulations, food preservation and as flavouring agents (Bakkali et al. 2008;Smith et al. 2005). In general, terpenes consist of isoprene units where a monoterpene involves a linear or cyclic combination of two isoprene units. The main constituents of the TMP-T used in this study have been characterised previously as the bicyclic monoterpene isomers α-and β-pinene and, to a lesser extent, terpenoids, sesquiterpenes, higher terpenes, phenols, fatty and resinous acids (Lindmark-Henriksson 2003).
Indigenous extractives from naturally durable wood species have previously been shown to elicit antifungal activity against some wood-decaying fungi and to act as wood preservatives (Kirker et al. 2013). Efficient and comprehensive use of any natural by-product such as TMP-T is vital for greener and circular economies. However, the antifungal effect of TMP-T fractions from Norway spruce has not been established either in vitro or in outdoor applications. The aim of the present study was to screen whether the antifungal activity of turpentine could be increased by separating its constituents through fractional vacuum distillation.
Chemicals
A range of different chemicals was tested for their antifungal activity. TMP-turpentine was obtained from the thermomechanical pulp and paper mill SCA Ortviken (Sundsvall, Sweden). The turpentine obtained from SCA Ortviken is exclusively extracted from Norway spruce. TMP-T (1 L) was previously divided by vacuum distillation into 23 fractions (Online Resource 1), and the fractional constituents are summarised in Table 1. An ether extract from SCA Ortviken debarking water containing the antioxidants piceatannol, resveratrol and isorhapontigenin was included as it has exhibited complete inhibition of A. xantha, G. separium and A. serialis at a concentration of 676 ppm (Hedenström et al. 2016). Benzisothiazolinone (BIT; SKU: 561487; CAS number: 2634-33-5), 2-octyl-4-isothiazolin-3-one (OIT; SKU: 46078; CAS number: 26530-20-1), tebuconazole (TA; SKU: 32013; CAS number: 107534-96-3) and 3-iodo-2-propynyl N-butylcarbamate (IPBC; SKU: 521949; CAS number: 55406-53-6) were purchased from Sigma-Aldrich. TA and OIT were of PESTANAL analytical standard grade. Beckers impregnation oil (Beckers Färg, Sweden) and Yunik Cronol were purchased from the local hardware store. Both Beckers impregnation oil and Yunik Cronol are sold as wood preservatives for untreated wood, and both contain IPBC. For a paint to be considered for ecolabelling according to the European Commission article 2014/312/EU, the maximum concentration of IPBC is 110 ppm. All other commercial wood preservative substances were used in their molar equivalents.
Fungal species and strains
Eight species of wood-decaying fungi associated with damage of wood structures, stored timber or living trees were selected: Antrodia serialis, Antrodia sinuosa, Antrodia xantha, Coniophora puteana, Gloeophyllum sepiarium, Heterobasidion parviporum, Serpula himantioides and Serpula lacrymans. The Antrodia species are brown-rot fungi that primarily attack construction timber and fresh wood close to the ground (Schmidt 2006;Schmidt and Moreth 1995). Coniophora puteana causes brown-rot both indoors and outdoors in stored wood and constructions. It is considered one of the most destructive fungi in temperate and boreal areas (Kleist and Schmitt 2001). Gloeophyllum sepiarium is a brown-rot fungus attacking stored timber and finished timber subjected to moisture, such as poles, fences, sleepers and window constructions (Schmidt 2006). Heterobasidion parviporum is a white-rot fungus causing root and butt rot in living Norway spruce and is one of the economically most damaging plant diseases in northern temperate regions (Asiegbu et al. 2005). It colonises living trees by infecting adjacent stumps and is typically a problem after pre-commercial thinning operations. Serpula himantioides is a brown-rot fungus attacking outdoor structural timber and is only occasionally found indoors, whereas S. lacrymans is considered as an extremely destructive fungus in sheltered indoor conditions in temperate and southern boreal regions (Schmidt 2006). To provide the most natural and non-laboratory domesticated fungal specimens, saproxylic fungi were isolated from their natural habitats. All species except for S. lacrymans were isolated from fruiting bodies collected from conifer logs in forest stands close to Sundsvall, central Sweden. The S. lacrymans strain originated from an infected house in southern Norway. All fungi samples were isolated in the same manner as follows: small pieces (~ 1-2 mm) of the basidiocarp were inoculated in a sterile bench onto selective Hagem agar supplemented with: benomyl (4 mg), thiabendazole (125 mg), penicillin (30 mg), streptomycin (30 mg) and tetracycline (30 mg) L −1 to inhibit growth of bacteria, ascomycetes, zygomycetes and other unwanted specimens. Growing mycelium from each species was retrieved from the selective agar-plates and transferred to another plate with selective medium. Thereafter, the fungal isolates were inoculated on standard Hagem agar. All strains were added to Mid Sweden University's saproxylic fungal collection. Species identities of isolated strains were assigned based on their fruiting body as well as studying the texture and colour of the aerial mycelium (Stalpers 1978). In some cases, samples were also taken for microscopic identification (Ryvarden and Melo 2014). In addition, all strains were sequenced using the internal transcribed spacer (ITS) region and their respective contigs were deposited into GenBank with the following acces-
Agar-plate test
Initially, a binary assay was applied to determine the most effective half of fractioned TMP-T. Raw TMP-T (5 mL) was distilled, and the first fraction was collected from the lowest boiling point to 200-240 °C at atmospheric pressure. Based on the table in Online Resource 2, the distilled fraction was chosen to represent fractions 1-15, while the remaining turpentine residue represents fractions 16-23 including the previously distilled residue. The wood-decaying fungus C. puteana was used as a model species because of its fast-growing properties, symmetrical growth on agar, worldwide distribution and frequent association with undesired brown-rot decay of buildings and stored wood.
Agar-plate tests were performed as previously described (Hedenström et al. 2016) with minor modifications. The present model specimen was grown on 9 cm Hagem agar-plates (Palmer 1971) supplemented with raw TMP-T and TMP-T fractions in triplicates before autoclaving the nutrient medium for 20 min at 120 °C. Even though turpentine containing samples were stirred prior to pouring the plates, a slight film formation, which may cause the concentration perceived by the fungus to be higher than anticipated, was observed at agar surfaces. Addition of an emulsifier might have homogenised samples further; however, fungi-emulsifier interactions may be species-dependent, used as a nutrient source and exhibit differential effects depending on the emulsifier used. For the interested reader, an in-depth review regarding mainly single-cell microorganisms was recently published ( Van de Vel et al. 2019). A 3 mm plug was put face up onto the supplemented plates' centre from an already established culture of C. puteana (7 days). The inoculum plug was taken from actively growing hyphae at the perimeter of the fungus. The diameter of the mycelia was measured in millimetres with a ruler after the third day following inoculation. Coniophora puteana grown on Hagem agar without supplement of TMP-T was used as a positive control.
After the binary assay, fractions 16-23 were selected for further inquiries into their fungal growth inhibitory potential. Note that when TMP-T is fractionated further, the individual fractions contain a higher concentration of their respective components compared to raw TMP-T. Hence, the initial concentration determined in the preliminary experiment above was reassessed and lowered. Concentrations of 1000 ppm and 5000 ppm were tested at two different times with separate control samples. Apart from testing fractions 16-23 individually, groups of fractions were combined to assess any synergistic and antagonistic effects. Scardavi (1966) defined interactions as additive (the effect is compounded when two or more constituents are added together), synergistic (more than the sum of their parts) or antagonistic (less than the sum of their parts). Synergistic effects are highly beneficial since they permit a lower dosage of each amendment than if they were used separately (Tallarida 2001). It should be noted that additive effects were not tested, as mixing fractions while keeping the same total concentration as the individual fractions is a zero-sum approach. The groups were mixed in identical proportions as follows: G1 = fractions 16, 17, 18, 19; G2 = fractions 20, 21, 22, 23; G3 = fractions 18, 19, 20, 21; and G4 = fractions 16, 17, 22, 23. Dose-response linearity was estimated by dividing the mean growth rate measurements at 1000 ppm by five and comparing them with the corresponding values at 5000 ppm; likewise, the expected linear growth of groups G1-G4 was estimated as a linear combination of the individual fraction responses, as sketched below.
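To make the linearity check concrete, the following is a minimal Python sketch rather than a re-implementation of the authors' calculation: the growth-rate values, the tolerance of 0.2 mm day−1 and the interpretation of the "linear combination" for blends as an equal-weight mean are all assumptions made only for illustration.

```python
import numpy as np

# Hypothetical mean growth rates (mm/day) per fraction at the two test concentrations
growth_1000 = {"f16": 4.1, "f17": 3.2, "f18": 2.0, "f19": 1.5}
growth_5000 = {"f16": 1.1, "f17": 0.6, "f18": 0.0, "f19": 0.2}

# Linearity heuristic from the text: growth at 1000 ppm divided by five,
# compared with the observed growth at 5000 ppm (a five-fold higher dose)
for frac, g1000 in growth_1000.items():
    expected = g1000 / 5.0
    observed = growth_5000[frac]
    verdict = "approximately linear" if abs(expected - observed) < 0.2 else "nonlinear"
    print(f"{frac}: expected {expected:.2f}, observed {observed:.2f} -> {verdict}")

# Expected response of an equal-proportion blend (e.g. G1 = fractions 16-19),
# interpreted here as the mean of the individual fraction responses
g1_expected = np.mean([growth_1000[f] for f in ("f16", "f17", "f18", "f19")])
print(f"G1 expected growth (linear combination): {g1_expected:.2f} mm/day")
```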
Wood treatments
To test the antifungal effects of these TMP-T extracts in comparison with other wood preservatives, decay tests were performed on wood pieces. The treatments included one TMP-T fraction, an ether extract from debarking water and four commonly used active ingredients of commercial wood preservatives, all dissolved in ethyl acetate, as well as two wood impregnation oils. As some TMP-T residue constituents (Lindmark-Henriksson 2003) are known to exhibit antifungal effects on Trametes versicolor and Irpex lacteus (Eberhardt et al. 1994; Kusumoto et al. 2014), the TMP-T distillation residue was also investigated. Due to its relatively high viscosity, the residue was diluted with ethyl acetate (3:1). Two types of controls were used: the first control group was untreated wood, and the second was painted with ethyl acetate without supplement to evaluate whether residual ethyl acetate affected fungal growth. Ethyl acetate was used because some of the fractions did not completely dissolve in hexane. The four active components of commercial wood-preserving paints (IPBC, BIT, OIT and TA) were dissolved in ethyl acetate to a concentration of 0.356 mM. TMP-T fraction 23 was used at 1000 ppm and 7500 ppm. To enable a rough comparison with TMP-T fraction 23, the ether extract was used at 1000 ppm. Beckers wood impregnation oil and Yunik Cronol were used as supplied in their formulated form.
A wooden board (Picea abies) was purchased at the local hardware store and cut into rectangular 75 × 30 × 21 mm blocks with a volume of 4.4 × 10 −5 m 3 . Mean density of the board was determined as 485 kg m −3 using the water displacement method after conditioning at 20 °C and 70% relative humidity until constant weight. Wood pieces were sanded with increasingly fine grain size until a fine polish was achieved. Ring widths were imaged with a scanner and imported into the image analysing software WINDENDRO (version 2014) for standard dendrochronological ring width assessment (Speer 2010). The average ring width over four samples and twenty rings was determined as 2.35 mm. To reduce evaporation of semi-highboiling fractions at the end of experiment, a gentler temperature than the standard temperature of 103 °C was chosen. Oven dry weight (ODW) was recorded to the nearest 0.001 g after drying at 65 °C for 3 days with an average dry weight of 18.8 grams. After drying, wood pieces were singly streaked on all sides using a paintbrush and each treatment was executed in triplicate. After application of treatments, wood pieces were left to dry overnight on a ventilation bench at room temperature. Negative weights after application and drying were considered as equal to zero. The weight taken after the application of paint was subsequently used to determine the weight change at the end of the experiment.
Furthermore, based on the weights after treatment (WAT), it was assumed that wood pieces treated with ethyl acetate containing the compounds of interest absorbed water vapour, while the film-forming commercial treatments and the distillation residue completely excluded water vapour absorption from the time of application and drying until the gravimetric measurements. Thus, the mean WAT of the ethyl acetate-treated wood samples was subtracted from the other treatments that used ethyl acetate as a solvent, and no corrections were made for the distillation residue and the commercial treatments, as illustrated below.
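As a sketch of this correction step, the snippet below applies the assumed vapour-uptake blank to the solvent-based treatments only; all weights and group names are invented for illustration and are not the study data.

```python
import numpy as np

# Hypothetical weights after treatment (WAT, in g) by treatment group
wat = {
    "ethyl_acetate": [0.12, 0.10, 0.11],   # solvent-only reference pieces
    "fraction23":    [0.15, 0.14, 0.16],   # solvent-based treatment -> corrected
    "residue":       [1.50, 1.55, 1.48],   # film-forming treatment  -> left uncorrected
}

blank = np.mean(wat["ethyl_acetate"])       # assumed water-vapour uptake of solvent-only pieces

corrected = {
    "fraction23": np.array(wat["fraction23"]) - blank,
    "residue":    np.array(wat["residue"]),
}
# Negative retentions after correction are truncated to zero, as stated in the text
corrected = {name: np.clip(vals, 0.0, None) for name, vals in corrected.items()}
print(corrected)
```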
Microcosm wood decay tests on soil
A laboratory soil contact microcosm system was devised to imitate on-site degradation over a long period of time. The substrate used was a mixture of mineral soil, sand and potting soil in a 3:2:1 ratio with the addition of water. This soil composition is used as plant nursery soil, and due to the rich amount of nutrients and airiness, it facilitates hyphal growth and soil penetration, respectively. Mineral soil from one of the field treatment sites: Tunadal, 62° 25′ 05.1″ N 17° 22′ 35.5″ E, was dried at 105 °C in the oven with medium ventilation. After sieving the soil with a 2 mm sieve, dried soil (3 L), sand (1.5 L), potting soil from Hasselfors Garden AB (Örebro, Sweden) (1 L) and distilled water (1.4 L) were mixed and homogenised. After equilibrating 1:1 soil mixture and distilled water (w/v) at room temperature, the pH of the mixture was measured with a 744 Metrohm pH-meter as 6.5. Glass jars (0.5 L) with soil mixture were sterilised by autoclaving at 120 °C for 20 min before inoculated with a 5 mm diameter agar plug with one strain of C. puteana and subsequently sealed with parafilm. To allow the fungus time to start foraging, wood pieces were added to the jars after 4 days. To speed up fungal colonisation, an additional 5 mm plug was added beside the wood piece. In addition, sterile water (2 mL) was added to improve the moisture conditions for the fungus. After 29 months in room temperature (approx. 20 °C), the experiment was ended, and wood pieces containing fungal biomass were oven dried and gravimetrically measured as previously described.
Field wood decay tests
Treatments and wood pieces were prepared as described for the microcosm paragraph. The experiment was conducted at three different sites to accommodate different wood-decaying microorganisms. The assay included thirteen treatments including controls and seven replicates each, for a total of 273 individual trials. Wood pieces were placed at random positions with a distance of 50 × 50 cm, within a total area of 600 × 300 cm at three different locations: Bergsåker, 62° 24′ 41″ N 17° 13′ 45″ E, which is a plane grassland area on sandy-loam soil with sparsely placed deciduous trees, Tunadal, 62° 25′ 05.1″ N 17° 22′ 35.5″ E, which is a sloping cultivated forest area facing east with Norway spruce, birch and rowan trees growing on mossy soil, and Nedansjö, 62° 22′ 41.2″ N 16° 49′ 18.5″ E, which is a sloping grassland area with surrounding bush wood facing east with a small stream close by. Areas suitable for growth of wood-decaying fungi in terms of moisture content and temperature were selected. After leaving the painted wood pieces in direct ground contact (use class 4 according to EN 335 2013) for 29 months, they were oven dried and gently brushed to remove mud, moss and surface growing mycelia. Painting the wood pieces rather than impregnating them according to, for example, EN 252 (2015), was performed to accelerate wood decay and decrease experimental time. The ratio between final oven dry weight and oven dry weight before field experiment was used as a measure of biodegradation by wood-decaying fungi or other organisms.
Statistical analysis
Analyses of variance (ANOVA) were conducted on growth rates and on the percentage weight loss data after arcsine square-root transformation in MATLAB 2018b (MathWorks). Treatment means for the microcosm and field experiments were compared post hoc using Tukey-Kramer's honest significant difference test at a significance level of p < 0.05.
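The statistics were run in MATLAB; the snippet below is only an analogous open-source sketch of the same workflow (arcsine square-root transformation, one-way ANOVA, Tukey-Kramer post hoc) using invented mass-loss values, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical mass-loss proportions (0-1) for three treatments, three replicates each
df = pd.DataFrame({
    "treatment": ["control"] * 3 + ["residue"] * 3 + ["fraction23"] * 3,
    "loss":      [0.70, 0.72, 0.71, 0.05, 0.08, 0.06, 0.55, 0.60, 0.58],
})

# Arcsine square-root transformation commonly applied to proportion data
df["loss_t"] = np.arcsin(np.sqrt(df["loss"]))

# One-way ANOVA on the transformed response
model = ols("loss_t ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))

# Tukey-Kramer HSD post hoc comparison of treatment means at alpha = 0.05
print(pairwise_tukeyhsd(df["loss_t"], df["treatment"], alpha=0.05))
```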
Corrected effect sizes (Hedges' g) were calculated as g = J × (x̄_c − x̄_i)/s with the small-sample correction J = 1 − 3/(4·df − 1), together with their corresponding confidence intervals, where df = degrees of freedom (n_i + n_c − 2); x̄_c = mean of control; x̄_i = mean of treatment; s = standard deviation; n = number of replicates.
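A minimal sketch of this effect-size calculation is shown below; it assumes the standard Hedges-Olkin small-sample correction and a normal-approximation confidence interval, and the input means, pooled standard deviation and sign convention (positive g = less mass loss than the control) are illustrative assumptions rather than the study's exact procedure or numbers.

```python
import math

def hedges_g(mean_treat, mean_ctrl, sd_pooled, n_treat, n_ctrl, z=1.96):
    """Small-sample-corrected standardized mean difference (Hedges' g)
    with an approximate normal-theory confidence interval."""
    df = n_treat + n_ctrl - 2
    d = (mean_ctrl - mean_treat) / sd_pooled        # positive g = less decay than control
    j = 1.0 - 3.0 / (4.0 * df - 1.0)                # small-sample correction factor
    g = j * d
    # Approximate standard error of g (Hedges & Olkin)
    se = math.sqrt((n_treat + n_ctrl) / (n_treat * n_ctrl)
                   + g ** 2 / (2.0 * (n_treat + n_ctrl)))
    return g, (g - z * se, g + z * se)

# Example with invented field mass-loss means (%) for a treatment vs. untreated control
g, ci = hedges_g(mean_treat=5.0, mean_ctrl=15.7, sd_pooled=7.3, n_treat=7, n_ctrl=7)
print(f"g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```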
Results and discussion
The turpentine made by distilling Picea abies wood chips from thermomechanical pulp production is a complex by-product that consists of 100+ compounds present at different concentrations and with varying effect on fungal growth (Ljunggren et al. 2020). Distilled fractions of turpentine were evaluated for their efficacy against wood-decaying fungi, and differential fungal growth response reveals antifungal relevance according to boiling point and applicability.
Efficacy of fractioned turpentine on agar-plates
A one-way ANOVA between compounded fractions 1-15, 16-23 including residue, raw turpentine and control was performed as a rapid approach to assess differential treatment effect (Fig. 1a). A significant treatment effect at the p < 0.05 level [F 3,8 = 234.74, p ≪ 0.05] followed by Tukey's post hoc analysis showed that growth of C. puteana was reduced by both turpentine fractions at 10,000 ppm. The highest reduction was noted for the compounded fraction 16-23. The relatively low response of the compounded fraction 1-15 is in contrast to the commonly reported notion that biological activity of essential oils is primarily caused by their major components (Bakkali et al. 2008;Burt 2004;Lopez-Reyes et al. 2013;Shukla et al. 2012).
The main constituents of fractions 1-15, and similarly of raw turpentine, are (±)-α-pinene (62/38) and (±)-β-pinene (3/97) (Groth 1958), limonene and other volatile monoterpenes (Lindmark-Henriksson 2003). As a major component of Pinus rigida essential oil, α-pinene has been proposed as a potentially active component against mould fungi (Salem et al. 2016). Moreover, Rivas da Silva et al. (2012) showed that (+)-α-pinene and (+)-β-pinene have a microbicidal effect against the yeasts Candida albicans and Cryptococcus neoformans, the mould fungus Rhizopus oryzae and the methicillin-resistant Staphylococcus aureus bacterium, with activities in the range of 117-4150 ppm. Their corresponding (−)-enantiomers did not show any activity up to 20,000 ppm. Given the lower effect observed for these fractions, the major TMP-T components were less active than the minor components when targeting the wood-decaying fungus C. puteana. A likely explanation is that wood-decaying fungi are more adapted to (+)-α-pinene degradation or disposal than yeasts and moulds. However, the fact that antagonistic effects between the monoterpenes limonene and α-pinene have been suggested (Maree et al. 2014) complicates any decisive conclusion regarding individual antifungal efficacy.
Moreover, monoterpenes are effective against Candida albicans (Martínez et al. 2014) and Saccharomyces cerevisiae (Belletti et al. 2004), but also against wood decayers such as Trametes hirsuta, Schizophyllum commune and Pycnoporus sanguineus (Zhang et al. 2016). However, their use as natural bioactive agents is primarily restricted by their volatile nature, whereby the effect may be lost shortly after treatment. The larger cousins of monoterpenes, i.e. sesquiterpenes, diterpenes and their terpenoid derivatives, are less volatile, and their relatively lower vapour pressures are better suited for long-term usage. The present findings indicate that these substances are more capable of inhibiting growth of wood-decaying fungi. This may imply that Norway spruce produces compounds against saproxylic fungi that are supposed to protect the tree for an extended period. An added benefit of less volatile compounds is that they are not as likely to cause adverse effects in human respiratory airways during application. Applying TMP-T fractions with lower volatility will thus increase the protective effect while lowering volatile toxicity. It has been suggested that fractionation of complex natural product extracts, to isolate individual active compounds, is often challenged by a loss of activity due to loss of holistic synergism (Inui et al. 2012). This could explain the phenomenon that fractions 1-15 were less effective when compared to raw TMP-T. However, it was shown that the latter fractions increased the fungal growth inhibiting capability in contrast with their raw turpentine source at the same concentration. A probable explanation is that crude fractionation by distillation ensures an increased concentration of components compared to raw TMP-T and does not entirely isolate individual compounds. Thus, some measure of potentially beneficial synergistic effects is still retained.
[Fig. 1 caption, fragment: panels to the left and right correspond to the experiments performed at concentrations of 1000 ppm and 5000 ppm, respectively. (c) Fractions 18 and 23 were further tested against seven saproxylic fungi at a concentration of 1000 ppm. *Significant differences at p < 0.05; **differences between fractions. Column heights show the average fungal growth rate in mm day−1 from triplicate experiments; error bars represent 95% confidence intervals, and missing error bars are due to zero differences between replicates.]
Based on the preceding results, fractions 16-23, fraction blends, raw turpentine and standard control (no addition of turpentine) were tested at concentrations of 1000 and 5000 ppm (Fig. 1b). A two-way interaction ANOVA [Fractions: F 13,56 = 50.09, p ≪ 0.05; Concentration: F 1,56 = 782.53, p ≪ 0.05; Fraction × Concentration: F 13,56 = 10.37, p ≪ 0.05] followed by Tukey-Kramer HSD showed that all treatments were effective except for fraction 16. At a concentration of 1000 ppm per fraction, a trend of increasing inhibition from fraction 16 to fraction 19 was interrupted by a decrease in effectiveness by fraction 20 (Fig. 1b). Even at the higher concentration of 5000 ppm, fraction 20 deviated from the trend of increased inhibition as fractions shifted towards less volatile components. This may be attributed to the observation that fungi produce hydrocarbon sesquiterpenes and they should be naturally habituated towards them, but only a few fungal oxygenated sesquiterpenes have been reported (Kramer and Abraham 2012). This is also congruent with the steady state of hydrocarbon sesquiterpenes after Norway spruce defence induction by methyl jasmonate (Martin et al. 2002). Based on the knowledge of the constituents of the fractions (Ljunggren et al. 2020), the current results also support the notion that hydroxylated sesquiterpenes from Norway spruce exhibit more potent antifungal capabilities than their hydrocarbon counterparts.
An additional possibility for the lower activity of fraction 20 could be that the inhibition is curbed by the increased presence of substances with growth-inducing effects. Apart from its difference with other fractions, fraction 20 also exhibited nonlinearity at both low and high concentrations. In addition, nonlinear responses to the applied concentration were observed for TMP-T fractions 16, 18, 20 and 22. Figure 1b reveals potentially synergistic effects for groups G1 and G4. Additionally, Fig. 1b shows that fraction 23 had the highest effect on C. puteana growth rate, i.e. the lowest mean at 1000 ppm, while fraction 18 showed complete C. puteana growth inhibition at 5000 ppm. The appearance of nonlinear effects at higher dosages may be caused by an increase in antagonistic or synergistic compounds, and thus, higher-order interactions may have played a more prominent role here. These results are in line with the findings of Kamo and Yokomizo (2015), who showed that nonlinear effects increase as the chemical concentration goes up.
Building on the interaction ANOVA results, the growth-reducing potentials of fractions 18 and 23 were examined when faced with a panel of different wood-decaying fungi (Fig. 1c). Fraction 23 caused complete growth inhibition of A. sinuosa, S. himantioides and S. lacrymans and substantial inhibition of the other species [Fractions: F 13,56 = 50.09, p ≪ 0.05; Concentration: F 1,56 = 782.53, p ≪ 0.05; Fraction × Concentration: F 13,56 = 10.37, p ≪ 0.05]. The two treatments were also significantly different from each other, except for the Serpula species. The two fungi from the genus Serpula showed a higher susceptibility to both TMP-T fractions than the other tested fungi, potentially signifying their inability to infest standing trees with an active defence mechanism.
Overall, all fractions and raw TMP-T were found to have a negative influence on fungal growth, and the fraction corresponding to the highest boiling point was found to have the highest potential as a fungal growth suppressor and fungicidal medium at a concentration of 1000 ppm. This concentration is less than the lowest concentration with no growth of Alternaria alternata, Fusarium subglutinans, Chaetomium globosum, Aspergillus niger and Trichoderma viride when treated with raw essential oil from Pinus rigida at concentration levels 2500-5000 ppm (Salem et al. 2016). In comparison, a decrease in growth rate after fractionation and complete growth inhibition of three wood-decaying fungi at a concentration of 1000 ppm were observed. Moreover, the treatment at 1000 ppm was likely biocidal as no-growth specimens (Fig. 1c) were unable to start growing when transferred to fresh media after two weeks from experimental onset. These findings support the use of fractional distillation or other separation techniques as a way to increase antifungal efficacy of turpentine.
Microcosm
Based on results from the present growth activity assay, fraction 23 was selected as the best candidate for general fungal inhibition. A microcosm soil experiment was set up to examine its long-term treatment effects. It was hypothesised that treating P. abies wood pieces with waste products from the paper industry, an ether extract of debarking water and TMP-T fractions, would return some of the wood's natural preservatives and increase its resistance against fungal decay. Given that C. puteana was supplied with a limited amount of nutrients and water, it was likewise expected that any change in the degree of decay would be counterbalanced by an initial efficacy of the antifungal treatments and that the experiment would naturally conclude when water levels approached zero due to natural evaporation. Furthermore, loss of efficacy would not be influenced by rain cycles and thus avoid leaching and loss of effective substances, apart from biodegradation by C. puteana.
Results of the microcosm in vitro treatment tests against the wood-decaying fungus C. puteana are listed in Table 2. Mean percentage retention mass increase (see WAT in Table 2) showed additive mass changes for the distillation residue, Beckers impregnation oil and Yunik Cronol after treatment application. A few weeks after inoculation, no fungal growth was seen in jars with Beckers impregnation oil and it was determined that the inoculum did not survive. Supposedly, the compounds IPBC and propiconazole (not included in this study) supplied in Beckers impregnation oil are released and effective C. puteana fungicides at a distance. This result is consistent with data from EPA 738-R-97-003 that suggest IPBC to be very mobile to mobile (K < 2.64 mL g−1) in mineral soils. Propiconazole on the other hand is considered a substance stable to aerobic soil metabolism and moderate to relatively mobile in soil according to EPA 738R-06-027. As this latter compound is not present in Yunik Cronol, it is highly likely that the observed antimycotic effect is caused by an increased concentration of the propiconazole in Beckers impregnation oil or by IPBC-propiconazole synergism.
[Table 2 caption: Weight of wood pieces after oven drying, application of paint, paint retention and mass loss after fungal growth. Values following ± are standard errors for microcosm data (n = 3) and 95% confidence intervals for the field experiment. Bold values show significant changes compared with the control sample after post hoc Tukey-Kramer's HSD.]
A slight growth at the end grain of wood pieces treated with Yunik Cronol was observed at the end of the experiment (Fig. 2). No other treatment effects were observed. The mean coefficient of variance (CV) across all samples for mass loss data was determined as 4 ± 2%. Except for treatments that significantly inhibited wood degradation, wood mass loss at the end of the experiment reached its maximum at approximately 71% with typical brown-rot characteristics. Any treatment time lag was thus unnoticeable. The measured mass loss should be at the maximum degradation capacity that C. puteana can exhibit overall or perhaps with the time and resources supplied. Norway spruce stem wood contains 42.0 ± 1.2% cellulose, 27.3 ± 1.6% hemicelluloses, 27.4 ± 0.7% lignin and 2.0 ± 0.6% extractives. Furthermore, it is known that brown-rot fungi leave a chemically modified lignin (Goodell 2003). Reasonably, the remaining wood mass consists of this modified lignin with substantial degradation of the other biopolymers.
Additionally, the commercial products left a decidedly higher mass after treatment of the wood pieces, likely implying that a water excluding and protective layer remained. However, additive mass increase was most noticeable for the distillation residue with an average retention of 33.25 kg m−3 (8.1% relative increase). This is probably due to its low volatility and high molecular weight resin (palmitic, pimaric, abietic, dehydroabietic, behenic, isopimaric) acid content (Lindmark-Henriksson 2003). Residue retention levels were, in relatively comparable surface/volume ratios, lower than those for linseed, rustikal and tung oils (Humar and Lesar 2013). This is to be expected as the present treatments were applied topically, while vacuum/pressure impregnation was used in the previous study, improving penetration depth. In comparison, industrial chromated copper arsenate preservative retention is effective at approximately 2.0 kg m−3 (use class 3) and 10.2 kg m−3 (use class 4). However, given enough time and ample resources, the brown-rot fungus C. puteana was able to survive the applied resinous acids from the distillation residue. This finding suggests that shorter application intervals or complementary formulation may be necessary to maintain a long-term and adequate defence.
[Fig. 2 caption: Microcosm fungal growth after 29 months. Microcosm images from left to right: wood pieces treated with Beckers impregnation oil, Yunik Cronol and distillation residue.]
Field study
Prior to placing the wood pieces in the field, the residuals of weight after painting and room-temperature equilibration were examined. Three observations deviated from the rest, with z-scores of 2.42, 2.08 and 2.66 for the treatments ether fraction 3 + fraction 23, ethyl acetate and the control, respectively. Wood density, the presence of knots and other nonconformities may have caused these deviations; since the dry weight after painting and drying should follow a normal distribution, and since the aim was to have a uniform group before outdoor placement, these outliers in the WAT residuals were removed before further data processing, as outlined below. In addition, ten samples were missing in the field at the end of the experiment, which resulted in a total of 260 wood pieces examined across thirteen treatments (Table 2).
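The outlier screening can be illustrated with a short z-score routine; the residual values and the |z| > 2 threshold below are assumptions chosen so that one observation is flagged, and they are not the actual data.

```python
import numpy as np

def zscore_flags(values, threshold=2.0):
    """Return z-scores and a boolean mask marking observations with |z| > threshold."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std(ddof=1)
    return z, np.abs(z) > threshold

# Hypothetical weight-after-treatment residuals (g); one point deviates strongly
residuals = np.array([0.02, -0.01, 0.00, 0.03, -0.02, 0.01, 0.02, -0.03, 0.00, 0.28])
z, flagged = zscore_flags(residuals)
print("z-scores:", np.round(z, 2))
print("retained residuals:", residuals[~flagged])
```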
Wood pieces were tested on-ground corresponding to EN use class 4, and as it is generally known that the naturally existing fungal community has a substantial effect on the decay, field trials were carried out at three different locations. Treatment considerably affected wood mass loss after 29 months [F 12,214 = 21.73, p ≪ 0.05]. Surprisingly, location had no impact [F 2,214 = 0.1, p = 0.82] and no evidence that treatment effect would depend on location was found [Treatment × Location: F 24,214 = 1.07, p = 0.45]. This could indicate a homogeneous fungal community across locations or broad-spectrum fungal decay resistance of successful treatments. Thus, the categorical variable location was excluded, and treatment types were pooled prior to further calculations.
As expected, CV-values were markedly increased in the field experiment, with the minimum value recorded for Beckers impregnation oil (12.2%) and the highest for treatment with ether fraction 3 + fraction 23 (65.8%). CV-value for the control samples was determined as 46.6% and a mean mass loss of 15.7% (Table 2). Even though these values were considerably higher than microcosm experiments, Beckers impregnation oil, distillation residue and tebuconazole still showed large and significant effect sizes (Cohen 1995) close to or above one (Fig. 3). Fraction 23 was determined as marginally significant as its confidence interval did not include 0 at a 90% confidence level. This result is encouraging when considering the low retention applied. On a per molar basis, tebuconazole at a concentration of approximately 110 ppm is suggested as the most effective treatment, a result that is congruent with previous findings (Schultz and Nicholas 2002;Volkmer and Schwarze 2008). Furthermore, the amount of tebuconazole applied to each substrate could not be measured down to the nearest milligram, which demonstrates its high efficacy at low retention levels. A reason for the higher efficacy of tebuconazole than the carbamate IPBC is probably due to its higher affinity for wood (Kjellow et al. 2010). Its use, however, is limited by harmful effects on animals. In fact, most chemicals classified as wood preservatives are toxic to the environment and exhibit relatively inert biodegradation properties (Salminen et al. 2014). Replacing these hazardous compounds with environmentally degradable yet functional alternatives is a key target in the wood impregnation industry.
Herein, the residue from TMP-T distillation exhibited wood protection qualities that could not be statistically distinguished from a formulated commercial alternative supplemented with the carbamate IPBC and the triazole fungicide propiconazole. Efficacies of some constituents of the residue have shown that abietic acid exhibited an effect on white-rot fungi Trametes versicolor, while both abietic and dehydroabietic acid affected growth of Irpex lacteus (Eberhardt et al. 1994). Furthermore, Kusumoto et al. (2014) showed that abietic and dehydroabietic acids displayed the highest antifungal activities compared with some monoterpenes and monoterpenoids. TMP-T residue may therefore be an interesting alternative to triazole derivatives and monoterpenes.
The current study included an evaluation of potential growth inhibitory effects from different TMP-T fractions and with strong evidence that such effects are present when tested against a wide range of fungal species grown on agar. In lieu of crude oils, an increase in antifungal efficacy by fractional distillation was observed. This strategy may well work to increase antifungal efficacy on other types of pulp and paper by-products, such as tall oil (Koski 2008). Experiments on wood substrate show that additional studies are needed on how to formulate and apply these compounds to ensure long-term effects. Interestingly, the most promising fraction from antifungal assays on agar-plates, TMP-T fraction 23, showed moderately effective wood protective qualities at the low retention applied (< 0.02 kg m −3 ), even at severe wetting conditions comparable to use class 4. Furthermore, TMP-T residue outperformed, on average, a commercial product. Provided that a suitable formulation for their application can be found, the identified fractions may offer environmentally friendly wood protection based on a renewable feedstock.
Future perspectives
Natural compounds continue to draw attention as non-toxic and eco-friendly alternatives to more toxic treatments using synthetic and heavy metal wood preservatives. Nevertheless, due to their presence in a natural cycle, phytochemicals frequently present in woody material are likely guaranteed to have an antagonistic organism capable of their biodegradation. Natural product treatment with TMP-T distillation residue may require shorter treatment intervals for up-keeping a high biodeterioration resistance in outdoor wood. This potentially work-intensive drawback may be ameliorated with impregnation techniques and optimal formulations. As the formulation of active wood preservative components, as well as their application to wood pieces, has a significant impact on their performance (Freeman 2008), a potential remedy may be to add, for example, a drying film that successfully retains active compounds for extended periods and further reduces water uptake (Humar and Lesar 2013). Inspired by the utilisation of highly active antiretroviral therapy in HIV treatments and previously suggested by Schultz and Nicholas (2002), a multi-target strategy could additionally be employed to increase the effectiveness of treatments while keeping a low individual concentration of each treatment. For example, combine: lignans, flavonoids and diterpenes to stave off reactive oxygen species and modulate fungal membrane fluidity; metal chelators such as troponoids to inhibit the Fenton reaction and metal-requiring enzymes laccases, tyrosinases and lipoxygenases; poacic acid to hamper β-1,3-glucan synthesis in the outer hyphal membrane (Piotrowski et al. 2015); alcohol terpenoids from TMP-T to partition in the outer cellular membrane and inhibit H(+)-ATPase that ultimately lead to intracellular acidification and cell death (Ahmad et al. 2010;Valette et al. 2017). Such natural protection schemes could potentially achieve wood decay resistance on a broad scale and may therefore be an interesting alternative as a versatile and multifunctional wood preservative. Hence, formulation with natural compounds may replace synthetic and heavy metal wood preservatives and should attract the attention of the wood preservation community. Herein, it is suggested that the readily available by-product thermomechanical pulp turpentine may be applicable as a component in analogous formulations, but further studies with standard industrial impregnation methods are required to accurately establish toxicity levels, moisture performance (Meyer-Veltrup et al. 2017), evaluate long-term efficacy, for example EN 252 (2015), and ultimately assess the practical use of high-boiling TMP-turpentine fractions at the industrial scale.
Conclusion
In conclusion, TMP-turpentine fractions with higher-boiling constituents revealed significantly enhanced antifungal performance in vitro and in vivo. Taken together, the present results show that fractional distillation of the industrially abundant by-product TMP-turpentine can be used to inhibit wood biodeterioration and increase the service life of wood-based products. | 9,123.8 | 2020-06-03T00:00:00.000 | ["Materials Science"] |
Deviation of Enhancing Stereotypes through Lexicalization and Songs in Mulan
This paper investigates the deviation toward enhancing stereotypes that takes place in the Disney film Mulan. It attempts to reveal the stereotypes that arise from the film in terms of lexicalization and songs, and it examines the implications of watching this type of movie given that it is classified in the family genre. The analysis is based on S. Jäger and F. Maier's (2009) Foucauldian approach to the discourse analysis of film, which connects linguistic discursive practice, non-linguistic discursive practice, and materialization (objects). Because of space limitations in presenting the data, this paper focuses only on linguistic discursive practice in terms of lexicalization and the four songs featured in the film. The results show that, in terms of lexicalization, the use of the word 'girl' as opposed to 'man' carries implied stereotypes. Likewise, the symbolization of the 'girl' as a doll and the 'man' as a sword in the film also contributes to the stereotypes attached to the female and male characters. In terms of the songs, the four songs sung in the film likewise convey stereotypes that can be read from the film.
Introduction
Mulan was one of the Disney movies that was well received at the time of its release. The film draws on a tale from another country, that of a legendary female figure from China. The story tells of the struggle of Mulan, a Chinese woman who faces many restrictions on her freedom. Mulan is depicted as a rebellious daughter who constantly runs into problems within her family. As a result, she is neither able nor permitted to do the things she considers right and proper.
As the film is categorized as a family movie, most people will tend to assume that it is suitable for all ages (general audiences). Likewise, the fact that it is an animated film makes parents less aware of its contents; they presume that animated films are intended for children or teenagers, like animated films in general. Moreover, Disney, known as the creator and producer of many children's cartoons and animations, has a very large global reach for its films. Films such as Cinderella, Sleeping Beauty, Snow White, and Aladdin are popular among people of all generations and ages around the world. The problem becomes more pressing when we pay close attention to the story itself: the implications that Disney conveys through the story and its suitability for young viewers. A story derived from another culture and another country can, in fact, lead to different meanings and perceptions, and its impact may be misguided or misunderstood by viewers, especially the young. Thus, it is important to look more deeply into the implied consequences, from the point of view of language, non-discursive practice, and objectification, advocated by this movie: what the movie implies in terms of enhancing stereotypes, and how far this kind of film can be tolerated as viewing material for young audiences.
Stereotypes
Stereotypes can be defined as a sanctioned device that enables the maintenance of discrimination over time and across different segments of experience and social life (Cook and Cusack, 2011: 37). Their occurrence can also differ according to culture and according to how people, individually or collectively, perceive and receive them.
Most stereotypes are argued to arise first from visual or physical appearance. Ethnicity, race, group, individual, men, women, the elderly, and the young are all factors from which stereotypes may emerge (Zebrowitz, 1996: 79). These may then be followed by differences in sound (language, including accent, vocabulary, etc.), behaviors, and habits, as well as matters relating to religion and political interest.
In this paper, at least two kinds of stereotypes are observed: cultural and gender stereotypes. Cultural stereotypes deal with specific physical or facial features, judgments about (personal) beliefs, norms and customs, and low or high prejudice toward a certain culture (Moskowitz, 2005: 506). Gender stereotypes, meanwhile, concern the social and cultural construction of men and women, that is, judgments about the extent of their distinctive physical, biological, sexual and social functions (Cook and Cusack, 2011: 20). The two basically affect and reinforce each other in the construction of social life.
The worrisome aspect of these stereotypes is that they can enter children's psyches, the so-called 'children's trajectory', through a process of internalization that continues to affect them as they grow older (Schneider, 2005: 353). Since children watch such movies, any exposure may tend to lead them into a misguided understanding of roles. They should therefore be continuously monitored with regard to the input and contact they encounter in daily experience. Here, the power of the media (in this case, film) is argued to be one of the biggest 'ills' in promoting stereotypes among people, children included.
Lexicalization in the Film
In this film, there is one marked lexicalization used by the characters: the lexeme 'girl' throughout the story. It can be observed that the conversations in the movie use this word consistently from beginning to end. This can indicate an estimation of a girl as a powerless human being, carrying a negative connotation. Significantly, the word 'girl' is always used to address or name Mulan. There is only one notable moment when she is called a 'woman', namely when her disguise is revealed to the Royal Guard. Even this naming does not carry a positive sense, since it is used connotatively and followed by the metaphor 'treacherous snake'.
This lexeme can be compared directly to the word used to represent the male characters in the film. The word 'man' is used to represent all the male figures in the story. If we compare these two words, 'a girl' and 'a man', they clearly convey different senses to the viewer. The word 'girl' carries the features of being young, immature, and perhaps also powerless, reckless and innocent, whereas the word 'man' carries the features of being adult, mature, powerful and fully grown. Compared with each other within the film, these two lexicalizations offer an obvious instance of gender stereotyping.
Moreover, looking through the film, two objects are used frequently and can be regarded as representative symbols of the female and male characters. To this extent, the objects carry the imagery of the story, as symbols are used to underline its ideas. The two objects that are presented and compared in the story are 'the doll' versus 'the sword': the doll signifies Mulan, while the sword represents men.
A doll is usually an image of something fragile, young, innocent and playful, essentially associated with a girl or a young woman. Likewise, one of the songs states that a woman is like a porcelain doll, representing beauty and fragility. A sword, meanwhile, is a symbol of power, dignity, greatness and skill. In this film, the presence of the doll is set against the sword, characteristically reflecting two disparate things that stand for the significance of women and men in the story.
Songs as the Implied Message of the Stereotypes
Further insight can be gained from the songs featured in the film Mulan. The lyrics depict the characters' minds and can be regarded as a significant part of the story, encapsulating its idea as a whole. Four songs are sung by various characters in the film. The first, Honor to Us All (min 00.06), occurs when Mulan is being prepared to meet the matchmaker. From this song it can be inferred that women should be pretty, beautiful, innocent, calm, obedient, and the like. The extract of the song can be seen below. It describes precisely how to be a woman in Chinese culture, representing the 'dos' and 'don'ts' of being a Chinese woman as well as society's demands to be the perfect girl. The criteria for a perfect girl include a great hairdo, good taste, being calm and obedient, working at a fast pace, good breeding, and a tiny waist. It is also mentioned that women should serve the Emperor by 'bearing sons', whereas men do so by 'bearing arms'. This points to the rather different cultural expectations placed on men and women. The last lines of the lyric also show how hopeless Mulan is and how frightened she is of failing; she is not confident that she can prove herself as her family wishes.
………
The second song, Reflection (min 12.15), comes after Mulan fails to impress the matchmaker and shames her family. In this song, Mulan voices her thoughts in a distressed way; she seems at a loss and feels terrible for herself and her family. She states first that if she reveals her true self to others, it will only break her family's heart and dignity. She does not want to let her family down, yet that is exactly what she has done. She bares her feelings through this song. Here are some lines of the song.
The third song is presented when Mulan and the soldiers are being trained for the army (I'll Make a Man Out of You, min 38.04). This song shows exactly how hard Mulan tries to get through the training and to act as a man. It is noted from the lyric that to 'be a man' is tied to strength, power, endurance, and toughness. The song also shows how the soldiers must follow every order dictated by the captain; the expectation of men's obedience within the hierarchical system is conveyed through the following extract.
[Shang] ……… Did they send me daughters / When I asked for sons? ……. Mister, I'll - / Be a man [the soldiers] - / With all the strength of a raging fire [Shang]
In relation to stereotypes of women, the song A Girl Worth Fighting For (min 47.37) describes overtly how the men perceive women and how they want women to look and behave. When the soldiers discuss girls, Mulan offers a different view of the kind of woman she thinks is worth fighting for; however, none of the men agrees with her. The criteria for a 'worthy' girl seem to discard everything related to cleverness, power, voice, and the like. The significant lines of the song are presented below. [Ling]
The Impact of the Film Related to Stereotypes
It is proposed that Mulan's ideological messages are freedom, rites of passage, intolerance, choice, greed, and the brutalities of male chauvinism. It is also argued that Mulan, as the lead character of the film, challenges these stereotypes, especially those concerning women (Giroux, 1999: 111 & 117). However, the findings presented here show that the issue of stereotypes is not substantially disproved by the film.
Out of a running time of 87 minutes, around 70 minutes of the film present the kinds of action and language behavior that reinforce stereotypes about women and Chinese culture. Only in the last 17 minutes of the film is there a phase in which Mulan's role as a heroine is recognized; again, this is only a small portion of the film compared to the whole narrative. Similarly, digging deeper into the essence of the story, there are several other female figures, such as Mulan's mother, grandmother, and the maids. Yet until the end of the story, the way they are viewed and treated remains the same as at the beginning. This can be seen clearly at the end, when Mulan's grandmother meets Captain Shang in the yard and says, "Sign me up for the next war." It implies how much women are still dazzled by men and regard being pursued as something to welcome.
Likewise, the story clearly shows that Mulan is in fact neglected and even underestimated by Shang after her disguise is uncovered. She is treated badly, and her attempt to explain why she did it is not heard at all. Even when she is cast out by the royal guard, Shang does nothing. Her effort to tell Shang that the Huns are still alive is also in vain, until she makes her own effort to save the Emperor. Yet at the end of the story, Mulan still hopes for him as her lover. She seems thoughtless and does not consider what he has done to her before. This shows implicitly how women remain weak in the story, and how Mulan, in the end, is still too naive to realize what is happening. She cannot prove herself or gain recognition either.
Observed further, this is a serious matter for viewers, particularly the young people who usually watch Disney films. Films that are popular, or even the children's favorites, do not in reality offer them proper guidance. Children can generalize about what is good or bad based on a film, and such generalizations can be misleading. Nonetheless, parents typically do not pay much attention to this, since they assume that Disney films are safe and indeed intended for children. This presumption makes a Disney film, in this case Mulan, a potential hazard for its impressionable viewers.
Conclusion
In conclusion, Mulan contains several stereotypes related to gender and culture. These indications can be confirmed through the film analysis based on Jäger and Maier (2009). From the findings, there are at least a few important points to be stressed about the Disney film Mulan.
The lexicalization of saying a girl rather than a woman is also significant in implying how females are seen and treated in the film. Subsequently, the four songs sung by the characters in the film also show the general perception of women and of men. This is considerably important in showing the film's tendency to stereotype.
Entailing all of this with the impact of the film, it can be stated that Mulan in some measure leads its viewers to subconsciously accept stereotypes related to women and culture. This becomes more disturbing given that Disney films are mostly favorites of young viewers. Parents also typically neglect this issue, as they do not realize how much this tendency contributes to the troublesome consequences the children will experience in dealing with real life. | 3,430 | 2015-10-01T00:00:00.000 | [
"Linguistics"
] |
IMPROVED HERMITE HADAMARD TYPE INEQUALITIES FOR HARMONICALLY CONVEX FUNCTIONS VIA KATUGAMPOLA FRACTIONAL INTEGRALS
In this paper, we prove three new Katugampola fractional Hermite-Hadamard type inequalities for harmonically convex functions by using the left and the right fractional integrals independently. One of our Katugampola fractional Hermite-Hadamard type inequalities is better than the one given in [17]. Also, we give two new Katugampola fractional identities for differentiable functions. By using these identities, we obtain some new trapezoidal type inequalities for harmonically convex functions. Our results generalize many results from earlier papers.
Introduction
Let f : I ⊆ ℝ → ℝ be a convex function defined on an interval I of real numbers and a, b ∈ I with a < b. The inequality (1) is well known in the literature as the Hermite-Hadamard inequality. There are many generalizations and extensions of inequality (1) for various classes of functions. One of these classes is that of harmonically convex functions, defined by İşcan.
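The displayed inequality appears to have been dropped during extraction. Assuming (1) refers to the classical Hermite-Hadamard inequality, its standard statement for a convex f on [a, b] is

$$ f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}. $$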
In [7], İşcan gave the definition of harmonically convex functions as follows. Definition 1. [7] Let I ⊆ ℝ \ {0} be a real interval. A function f : I → ℝ is said to be harmonically convex if the inequality (2) holds for all x, y ∈ I and t ∈ [0, 1]. If the inequality in (2) is reversed, then f is said to be harmonically concave.
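The defining inequality (2) is missing from the extracted text. İşcan's standard definition of harmonic convexity, which (2) is assumed to state, reads

$$ f\!\left(\frac{xy}{tx+(1-t)y}\right) \;\le\; t\,f(y) + (1-t)\,f(x), \qquad x, y \in I,\ t \in [0,1]. $$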
In [9], İşcan and Wu presented Hermite-Hadamard type inequalities for harmonically convex functions in fractional integral form as follows, with α > 0 and h(x) = 1/x.
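The displayed inequalities are absent here. In its usual form (stated with the Riemann-Liouville operators of Definition 3 and h(x) = 1/x, and given here as a hedged reconstruction rather than a verbatim quotation of the paper), the İşcan-Wu result reads

$$ f\!\left(\frac{2ab}{a+b}\right) \;\le\; \frac{\Gamma(\alpha+1)}{2}\left(\frac{ab}{b-a}\right)^{\alpha}\left[ J^{\alpha}_{(1/b)^{+}}(f\circ h)\!\left(\tfrac{1}{a}\right) + J^{\alpha}_{(1/a)^{-}}(f\circ h)\!\left(\tfrac{1}{b}\right)\right] \;\le\; \frac{f(a)+f(b)}{2}. $$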
In [20], Şanlı et al. proved the following three Riemann-Liouville fractional Hermite-Hadamard type inequalities for harmonically convex functions by using the left and the right fractional integrals separately, where h(x) = 1/x and α > 0. Theorem 6. Let f : I ⊆ (0, ∞) → ℝ be a harmonically convex function and a, b ∈ I with a < b. If f ∈ L[a, b], then the following inequality for the right Riemann-Liouville fractional integral holds, where h(x) = 1/x and α > 0. Theorem 7. Let f : I ⊆ (0, ∞) → ℝ be a harmonically convex function and a, b ∈ I with a < b. If f ∈ L[a, b], then the following inequality for the Riemann-Liouville fractional integral holds, where h(x) = 1/x and α > 0. The following definitions of the Katugampola fractional integrals can be found in [4,11].
Definition 8. Let [a, b] ⊂ ℝ be a finite interval. Then the left- and right-side Katugampola fractional integrals of order α > 0 of f ∈ X^p_c(a, b) are defined by the integrals given below, with a < x < b and ρ > 0, respectively.
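The integral formulas themselves did not survive extraction; the standard Katugampola operators that Definition 8 refers to are usually written as

$$ {}^{\rho}I^{\alpha}_{a+}f(x) = \frac{\rho^{1-\alpha}}{\Gamma(\alpha)}\int_a^x \frac{t^{\rho-1}}{(x^{\rho}-t^{\rho})^{1-\alpha}}\,f(t)\,dt, \qquad {}^{\rho}I^{\alpha}_{b-}f(x) = \frac{\rho^{1-\alpha}}{\Gamma(\alpha)}\int_x^b \frac{t^{\rho-1}}{(t^{\rho}-x^{\rho})^{1-\alpha}}\,f(t)\,dt, $$

with a < x < b, α > 0 and ρ > 0; taking ρ → 1 recovers the Riemann-Liouville integrals of Definition 3.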
(See [10] for the definition of the set X^p_c(a, b).) It is easily seen that if one takes ρ → 1 in Definition 8, one obtains Definition 3.
For Theorem 10 of [17], the correct inequality should be expressed as in (8). In (8), if one takes ρ → 1, one obtains the inequality (4) in Theorem 4.
Definition 11. [19, page 12]
A function f defined on I has a support at x₀ ∈ I if there exists an affine function A(x) = f(x₀) + m(x − x₀) such that A(x) ≤ f(x) for all x ∈ I. The graph of the support function A is called a line of support for f at x₀.
Theorem 12. [19, page 12] f : (a, b) → ℝ is a convex function if and only if there is at least one line of support for f at each x₀ ∈ (a, b).
In the literature, there are many studies of Hermite-Hadamard type inequalities using the left and right fractional integrals (such as the Riemann-Liouville, Hadamard, and Katugampola fractional integrals). In all of them, the left and right fractional integrals are used together. As far as we know, the studies in [20] are the first two works using only the right fractional integrals or only the left fractional integrals.
In this paper, our aim is to obtain new Katugampola fractional Hermite-Hadamard type inequalities by using only the right or the left fractional integrals separately for harmonically convex functions.
Katugampola Fractional Hermite-Hadamard Type Inequalities for Harmonically Convex Functions
Theorem 14. Let f : I ⊆ (0, ∞) → ℝ be a function such that f ∈ X^p_c(a^ρ, b^ρ), where a^ρ, b^ρ ∈ I with a^ρ < b^ρ. If f is a harmonically convex function on [a^ρ, b^ρ], then the inequality (9) for the left Katugampola fractional integral holds, where α > 0, ρ > 0 and h(x) = 1/x. Proof. Let ρ > 0. Since f is harmonically convex on [a^ρ, b^ρ], then by using Remark 13 the function g(x) = f(1/x) is convex on [1/b^ρ, 1/a^ρ]. Hence, using Theorem 12, there is at least one line of support for g. From (10) and the harmonic convexity of f, we have (11) for all t ∈ [0, 1]. Multiplying all sides of (11) by t^(αρ−1) and integrating over [0, 1] with respect to t, we obtain (9). This completes the proof.
(2) If one takes ρ → 1, and after that takes α = 1, one obtains the inequality (3).
where α > 0, ρ > 0 and h(x) = 1/x. Proof. Let ρ > 0. Since f is harmonically convex on [a^ρ, b^ρ], then by using Remark 13 the function g(x) = f(1/x) is convex on [1/b^ρ, 1/a^ρ]. Hence, using Theorem 12, there is at least one line of support for g at each x ∈ (1/b^ρ, 1/a^ρ). From (10) and the harmonic convexity of f, we have (14) for all t ∈ [0, 1]. Multiplying all sides of (14) by t^(αρ−1) and integrating over [0, 1] with respect to t, we obtain (12). This completes the proof.
(2) If one takes ρ → 1, and after that takes α = 1, one obtains the inequality (3).
Theorem 18. Let f : I ⊆ (0, ∞) → ℝ be a function such that f ∈ X^p_c(a^ρ, b^ρ), where a^ρ, b^ρ ∈ I with a^ρ < b^ρ. If f is a harmonically convex function on [a^ρ, b^ρ], then the inequalities (15) for the Katugampola fractional integrals hold, where α > 0, ρ > 0 and h(x) = 1/x. Proof. Adding the inequalities (9) and (12) side by side, then multiplying the resulting inequalities by 1/2, we obtain the inequalities (15).
(2) If one takes ρ → 1, and after that takes α = 1, one has the inequality (3). Corollary 20. The left-hand side of (15) is better than the left-hand side of (8).
Proof. Since f is harmonically convex on [a^ρ, b^ρ], it is clear from
Lemmas
In this section we prove two new identities that are used in the subsequent results.
Lemma 21. Let f : I ⊆ ℝ → ℝ be a differentiable function on I°, with a^ρ, b^ρ ∈ I and a^ρ < b^ρ. If the fractional integrals exist and f′ ∈ L[a^ρ, b^ρ], then the equality (16) for the left Katugampola fractional integral holds, where α > 0 and ρ > 0.
Proof. It can be proved directly by applying integration by parts to the right-hand side of equation (16), as follows. This completes the proof.
Proof. It can be proved directly by applying integration by parts to the right-hand side of equation (17), as follows. This completes the proof.
Some new conformable fractional trapezoid type inequalities for harmonically convex functions
In this section, we prove some new conformable fractional trapezoid type inequalities for harmonically convex functions by using Lemma 21 and Lemma 23. If |f′|^q is harmonically convex on [a^ρ, b^ρ] for q ≥ 1, then the inequality (18) for the left Katugampola fractional integral holds. Proof. By using Lemma 21, the power mean inequality and the harmonic convexity of |f′|^q, we have (19). Calculating the integrals appearing in (19), we obtain the result. If |f′|^q is harmonically convex on [a^ρ, b^ρ] for q > 1 and 1/q + 1/p = 1, then the following inequality for the left Katugampola fractional integral holds: |f′(a^ρ)|^q Z_5(a, b; α, ρ) + |f′(b^ρ)|^q Z_6(α, ρ, p), with α > 0 and ρ > 0.
Proof. By using Lemma 21, the Hölder inequality and the harmonic convexity of |f′|^q, we have (20). Calculating the integrals appearing in (20), we have |f′(a^ρ)|^q Z_8(a, b; α, ρ) + |f′(b^ρ)|^q Z_9(a, b; α, ρ). Proof. Similarly to the proof of Theorem 25, by using Lemma 23, the power mean inequality and the harmonic convexity of |f′|^q, we have (28).
Proof. Similarly to the proof of Theorem 27, by using Lemma 23, the Hölder inequality and the harmonic convexity of |f′|^q, we have (29).
Theorem 2. [7]
Let f : I ⊆ ℝ \ {0} → ℝ be a harmonically convex function and a, b ∈ I with a < b. If f ∈ L[a, b], then the following inequalities hold. The definitions of the left- and right-side Riemann-Liouville fractional integrals are well known in the literature. Definition 3. Let a, b ∈ ℝ with a < b and f ∈ L[a, b]. The left and right Riemann-Liouville fractional integrals J^α_(a+) f and J^α_(b−) f of order α > 0 are defined by
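Since the defining formulas themselves were lost in extraction, the standard Riemann-Liouville operators meant in Definition 3 are

$$ J^{\alpha}_{a+}f(x) = \frac{1}{\Gamma(\alpha)}\int_a^x (x-t)^{\alpha-1} f(t)\,dt \quad (x>a), \qquad J^{\alpha}_{b-}f(x) = \frac{1}{\Gamma(\alpha)}\int_x^b (t-x)^{\alpha-1} f(t)\,dt \quad (x<b). $$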
Theorem 4.
Let f : I ⊆ (0, ∞) → ℝ be a function such that f ∈ L[a, b], where a, b ∈ I with a < b. If f is a harmonically convex function on [a, b], then the following inequalities for fractional integrals hold:
Theorem 5.
Let f : I ⊆ (0, ∞) → ℝ be a harmonically convex function and a, b ∈ I with a < b. If f ∈ L[a, b], then the following inequality for the left Riemann-Liouville fractional integral holds. In [17, Theorem 2.1], Mumcu et al. presented Hermite-Hadamard type inequalities for harmonically convex functions in Katugampola fractional integral form as follows: Theorem 10. Let α > 0 and ρ > 0. Let f : I ⊆ (0, ∞) → ℝ be a function such that f ∈ X^p_c(a^ρ, b^ρ), where a^ρ, b^ρ ∈ I with a^ρ < b^ρ. If f is a harmonically convex function on [a, b], then the following inequalities hold:
Theorem 16.
Let f : I ⊆ (0, ∞) → ℝ be a function such that f ∈ X^p_c(a^ρ, b^ρ), where a^ρ, b^ρ ∈ I with a^ρ < b^ρ. If f is a harmonically convex function on [a^ρ, b^ρ], then the following inequality for the right Katugampola fractional integral holds:
(1)
If one takes ρ → 1, one has the inequality [20, Lemma 3]. (2) If one takes ρ → 1, and after that takes α = 1, one has the inequality [7, Lemma 2.5]. Lemma 23. Let f : I ⊆ ℝ → ℝ be a differentiable function on I°, with a^ρ, b^ρ ∈ I and a^ρ < b^ρ. If the fractional integrals exist and f′ ∈ L[a^ρ, b^ρ], then the following equality for the right Katugampola fractional integral holds:
Theorem 25.
Let f : I ⊆ ℝ → ℝ be a differentiable function on I°, with a^ρ, b^ρ ∈ I and a^ρ < b^ρ. If f′ ∈ L[a^ρ, b^ρ] and |f′|
Theorem 29.
Let f : I ⊆ ℝ → ℝ be a differentiable function on I°, with a^ρ, b^ρ ∈ I and a^ρ < b^ρ. If f′ ∈ L[a^ρ, b^ρ] and |f′|^q is harmonically convex on [a^ρ, b^ρ] for q ≥ 1, then the following inequality for the right Katugampola fractional integral holds: f(a^ρ) + f(b^ρ)
"Mathematics"
] |
Dark Matter, Dark Photon and Superfluid He-4 from Effective Field Theory
We consider a model of sub-GeV dark matter whose interaction with the Standard Model is mediated by a new vector boson (the dark photon) which couples kinetically to the photon. We describe the possibility of constraining such a model using a superfluid He-4 detector, by means of an effective theory for the description of the superfluid phonon. We find that such a detector could provide bounds that are competitive with other direct detection experiments only for ultralight vector mediator, in agreement with previous studies. As a byproduct we also present, for the first time, the low-energy effective field theory for the interaction between photons and phonons.
I. INTRODUCTION
Understanding the origin and nature of dark matter has been a central topic in both theoretical and experimental physics for a long time. In particular, if it is a new kind of particle, the presence of dark matter would constitute one of the strongest pieces of evidence for physics beyond the Standard Model. A large share of the effort so far has been devoted to the study of so-called Weakly Interacting Massive Particles, i.e. dark matter particles with masses of order 100 GeV and interaction strengths comparable to the weak interactions. These searches have not led to any positive result yet.
Among these, the concept of employing a detector based on superfluid He-4 was first presented in [37][38][39], and then further developed in [40][41][42][43]. In particular, the interaction of the dark matter with the bulk of the detector can produce collective excitations, which could then be detected [44,45], allowing sensitivity to dark matter as light as a keV. If the dark matter interacts with the Standard Model via a scalar mediator, such a detector could provide very promising bounds. In [42,43] the problem has been formulated in terms of a relativistic effective field theory (EFT) for superfluids [46][47][48], which allows one to describe the interactions of the He-4 phonon with itself and with the dark matter in a simple way, starting from a standard action principle. Such an approach has already proved successful in a number of phenomenological applications - see e.g. [49][50][51][52][53][54].
In this paper we continue this program by studying the case of a sub-GeV dark matter charged under some new U d (1) group and interacting with the Standard Model via a new vector mediator (the dark photon) which mixes kinetically with the photon [55,56].
To this end, we write down the most general relativistic low-energy EFT for the interaction between the photon and the bulk of He-4 which, to the best of our knowledge, appears here for the first time. With this at hand, we study the process of emission of a single phonon by the passing dark matter and discuss the result in the context of the present direct, cosmological and astrophysical constraints for the dark photon mass and coupling. In agreement with [41], we find that a He-4 detector could be competitive with the current bounds for ultra-light dark photons.
Conventions: Throughout this paper we work in natural units, ħ = c = ε₀ = µ₀ = 1, and adopt a "mostly plus" metric signature. Moreover, we use Greek indices to span the full spacetime coordinates and Latin indices to span the spatial ones only.
II. RELATIVISTIC EFT FOR SUPERFLUIDS
Let us now briefly review the EFT for superfluids, which we will then use to build the most general interaction between the phonon of He-4 and the photon. For an extensive treatment we refer the reader to, for example, [42,47,48].
From an EFT viewpoint a superfluid is a system characterized by a U(1) internal symmetry associated to a conserved number of particles (e.g. atoms), whose charge Q is at finite density. On top of that, the superfluid spontaneously breaks a number of spacetime and internal symmetries, namely boosts, time translations (generated by H) and the internal U(1). However, it preserves the combination H̄ = H − µQ, with µ being the relativistic chemical potential. Since H is broken, the states of the system cannot be classified according to its eigenvalues anymore; one rather needs to use H̄. The Goldstone boson associated with the above symmetry breaking pattern corresponds to the low-energy collective excitation of the superfluid, i.e. the phonon. The easiest way to implement such a pattern is arguably via a single real scalar field, ψ(x), which shifts under the internal U(1), ψ → ψ + a, and acquires a vacuum expectation value proportional to time, ψ(x) = µt. The phonon corresponds to the fluctuation of the field around its equilibrium configuration, ψ(x) = µt + c_s √(µ/n̄) π(x), where c_s is the superfluid sound speed and n̄ its equilibrium number density. The prefactor has been chosen in order for the field to be canonically normalized.
Given that the breaking is spontaneous, the most general low-energy action for the phonon will have to be invariant under all the above symmetries, and feature the lowest possible number of derivatives. The only possible invariant is X = −∂ µ ψ∂ µ ψ, which corresponds to the local chemical potential (i.e. in presence of fluctuations). The most general action is [46,47] Here P (X) is the pressure of the superfluid [42]. For a strongly coupled system like He-4, the analytic form of P (X) is hard to obtain from first principles. Nonetheless, it can be extracted from data [59], which is the approach adopted here. In the second line of Eq. (1) we have expanded in small fluctuations around the background. Higher order terms would give all possible selfinteractions of the phonon at low energies [42,43], which will not be necessary for the current study. Indeed, we will focus on the emission of a single phonon, which is the simplest observable and does not involve any further interaction of the phonon with itself. From Eq. (1) we see that the dispersion relation for an on-shell phonon is ω(q) = c s q. We stress that all the effective couplings are completely fixed by the superfluid equation of state -e.g. c s ≡ c s (P ) -which are extracted directly from data [59].
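Equation (1) is not visible in the extracted text. The following is a minimal sketch of the action being described, written in the paper's notation; the quadratic expansion is the standard one for a canonically normalized phonon and is given here as an assumption rather than a quotation of the paper's own Eq. (1):

$$ S = \int d^4x\; P(X), \qquad X = -\partial_\mu\psi\,\partial^\mu\psi, \qquad S \supset \frac{1}{2}\int d^4x\,\big[\dot\pi^2 - c_s^2\,(\nabla\pi)^2\big], $$

from which the dispersion relation ω(q) = c_s q quoted above follows directly.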
The EFT described above is only valid at small momenta, namely when the momenta involved are smaller than a UV cutoff, Λ ∼ 1 keV. 2 In particular, this means that it cannot incorporate higher momentum excitations like maxons or rotons. In the rest of this paper we assume to work in this regime.
Although to have a complete description of all possible excitations one would need to perform a numerical study, we stress that in [43] it has been shown that the results obtained by means of the EFT match with those obtained with more traditional techniques [40,41]. The latter have been tuned on neutron scattering data, and include maxons and rotons as well. It follows that, for the observables of interest, most of the contribution comes from final state phonons, for which the EFT gives an accurate description.
III. EFT FOR THE INTERACTION BETWEEN THE DARK SECTOR AND THE HE-4
For the sake of clarity we focus on the case of a fermionic dark matter, χ(x), charged under some dark U d (1) group. 3 As already anticipated, we assume for this particle to interact with the Standard Model via a dark photon, V µ (x), which couples to the photon via kinetic mixing, and acquires a mass from some mechanism happening at energy scales much higher than the ones under consideration.
If we assume that the kinetic mixing is the only coupling between the dark sector and the Standard Model this implies that the interaction of the dark matter with He-4 must happen via a dark photon, which then converts into a photon. The low-energy action for the interaction between the photon and the superfluid will have to be invariant under the full Poincaré group, under the global U (1) of the superfluid, as well as under the gauge electromagnetic U em (1). Moreover, since the He-4 is electrically neutral, it is not possible to build any non-derivative coupling with the photon field, A µ (x); the interaction must happen via higher multipoles [41].
Following these rules, the most general low-energy EFT for the case of interest is described by the action (2), where F_µν and V_µν are the field strengths for the photon and the dark photon respectively, and the gauge covariant derivative of the dark sector is D_µ = ∂_µ + igV_µ. Moreover, we assume the dark sector to be perturbative, i.e. g ≪ 4π. Finally, the last line of Eq. (2) describes the most general coupling between the photon and any number of superfluid phonons at low energies. The functions a and b are a priori completely generic, i.e. they cannot be fixed solely on symmetry grounds. However, as we now show, they can be determined in terms of the static properties of the superfluid, namely its electric and magnetic polarizabilities. Consider the system at equilibrium, ψ(x) = µt. In this case the last line of the action (2) reduces to an expression in which E and B are the electric and magnetic fields, and a ≡ a(µ) and b ≡ b(µ) are now evaluated on the background. One recognizes this to be the action for an electromagnetic field in a medium [60], and therefore the functions a and b can be related to the electric and magnetic polarizabilities, α_E and α_B respectively. Since typically α_M ≪ α_E [61,62], the effective couplings are given by Eq. (5). The action in Eq.
(2) contains any number of phonons interacting with two photon fields. One can then, in principle, enhance the coupling by introducing an external electric field, F̄_0i = E_i, which allows for an interaction term that converts a photon into a phonon, analogous to the Primakoff effect [63]. Indeed, the electric field will induce a polarization of the medium, hence favoring the interaction with the photon. In particular, expanding the last line of Eq. (2) to linear order in the phonon field, in the presence of the external field, one gets the corresponding single-phonon coupling. Everything so far has been general for any electrically neutral s-wave superfluid. He-4 is a nonrelativistic system for which µ ≃ m_He, c_s ≃ 248 m/s and n̄ ≃ 8.5 × 10²² cm⁻³ [59], while the electric polarizability is α_E ≃ 2 × 10⁻²⁵ cm³ [62].
Using Eq. (5), together with the thermodynamic identities dP = n̄ dµ and m_He c_s² = dP/dn̄, one finds µ² db/dµ ≃ α_E n̄ m_He c_s². Considering that c_s ≪ 1 and that, for an on-shell phonon, π̇ ∼ c_s ∇π, the photon-phonon interaction in the presence of the external electric field can be well approximated by Eq. (7). (The introduction of external fields could present experimental difficulties; in this respect, our analysis should be considered an optimistic one. One could also introduce a magnetic field, but this interaction is suppressed in the nonrelativistic limit, i.e. for c_s ≪ 1.) Starting from the actions (2) and (7) one deduces the Feynman rules for the dark matter-dark photon interaction, for the dark photon-photon conversion and for the photon-phonon conversion induced by an external E-field, where the crossed circle represents the external electric field. It should be noted that, given the action (2), the in-medium photon propagator is modified with respect to the vacuum one. However, the changes are of order n̄ α_E ∼ 10⁻². Being a subleading contribution to the matrix element, we neglect them here. (For the interested reader, the in-medium photon propagator in Landau gauge can be written down explicitly, where Eq. (5) has already been used. Note that, since the medium does not break rotations, the inclusion of the above correction in the matrix element does not lead to any new tensor structures and/or anisotropies, as happens instead for Dirac materials [21][22][23].)
IV. PHONON EMISSION
In this work we focus on the simplest possible process, namely the emission of a single phonon after the interaction of the dark matter with the bulk of He-4. The amplitude of interest is given by the Feynman diagram in Figure 1. Averaging over the initial dark matter polarizations and summing over its final ones, one obtains the squared matrix element, where we have used the nonrelativistic limit for the dark matter, k^(′) ≃ (m_χ, 0), and for the He-4, c_s ≪ 1. The corresponding rate is (10), where the angle between the incoming dark matter and the outgoing phonon is fixed by kinematics to be on the Cherenkov cone, cos θ = c_s/v_χ + q/(2 m_χ v_χ), with v_χ the dark matter velocity [42,43]. Moreover, cos θ_E = cos θ cos θ_χ − cos(φ − φ_χ) sin θ sin θ_χ is the angle between the electric field and the outgoing phonon, with (θ_χ, φ_χ) the angle between the incoming dark matter and the electric field.
The rate of events per unit target mass is obtained as in Eq. (11). The maximum energy that the dark matter can transfer to a phonon is either fixed by kinematics (namely by requiring cos θ < 1) or by the cutoff of the EFT, and it is ω_max = min{2 m_χ c_s (v_χ − c_s), c_s Λ}. On the other hand, the outgoing phonon must have an energy larger than a certain value in order to be detected. When only a single phonon is involved, it cannot release enough energy to the system to induce an appreciable change in temperature in the detector [44]. It can, however, be observed via so-called "quantum evaporation", which sets the minimum energy to the binding energy of a He-4 atom to the rest of the bulk, ω_min = 0.62 meV [45]. In particular, given the value of ω_max, this implies that the final state phonon will be detectable only for a dark matter heavier than roughly 0.1 MeV. Importantly, the rate in Eq. (10) depends on the relative angle between the direction of the incoming dark matter and the electric field, as can also be seen in Figure 2. This induces an appreciable modulation in the number of events. By suitably rotating the external field with time, one could employ this to discriminate signal from background [41]. Note that consistency with the regime of applicability of the EFT does not limit the dark matter mass, but rather only the exchanged momentum: a heavy dark matter particle can still scatter softly off the He-4 detector so as to excite a phonon degree of freedom. [Figure 3 caption (partially recovered): comparison with other direct detection limits [64], XENON10, XENON100 [32] and DarkSide-50 [65]; 95% C.L. for a year of exposure and a kg of material, assuming zero background; current BBN bounds [66] and a mass-cross-section combination that would explain the dark matter relic abundance via a freeze-in mechanism [9,67,68] are also reported.]
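The kinematics described above (Cherenkov-cone emission angle, maximum transferable energy, and the quantum-evaporation threshold) can be checked numerically. The short Python sketch below is an editorial illustration, not code from the paper: the sound speed, cutoff and threshold are the values quoted in the text, while the dark matter velocity and the list of masses are assumptions chosen for illustration (the exact mass at which single-phonon detection becomes possible depends on the assumed velocity).

```python
import numpy as np

# Values quoted in the text (natural units; energies in eV)
C_S = 248.0 / 2.998e8    # He-4 sound speed in units of c
LAMBDA = 1.0e3           # EFT cutoff, ~1 keV
OMEGA_MIN = 0.62e-3      # quantum-evaporation threshold, 0.62 meV

def omega_max(m_chi, v_chi):
    """Maximum phonon energy: min(2 m_chi c_s (v_chi - c_s), c_s * Lambda)."""
    return min(2.0 * m_chi * C_S * (v_chi - C_S), C_S * LAMBDA)

def cherenkov_cos_theta(q, m_chi, v_chi):
    """Emission angle fixed by kinematics: cos(theta) = c_s/v_chi + q/(2 m_chi v_chi)."""
    return C_S / v_chi + q / (2.0 * m_chi * v_chi)

if __name__ == "__main__":
    v_chi = 1.0e-3  # assumed typical halo velocity (~300 km/s), for illustration only
    for m_chi in np.array([0.1, 0.3, 1.0, 10.0]) * 1.0e6:  # dark matter masses in eV
        w_max = omega_max(m_chi, v_chi)
        q_half = 0.5 * w_max / C_S  # a representative exchanged momentum
        print(f"m_chi = {m_chi/1e6:5.2f} MeV | omega_max = {w_max*1e3:6.3f} meV | "
              f"above threshold: {w_max > OMEGA_MIN} | "
              f"cos(theta) at q = omega_max/(2 c_s): "
              f"{cherenkov_cos_theta(q_half, m_chi, v_chi):.3f}")
```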
V. PROJECTIONS AND COMPARISON WITH EXISTING BOUNDS
Starting from Eq. (11) we can compute the projected excluded region. In particular, we consider an external electric field E = 100 kV/cm, which has been shown to be realistically achievable in the lab [69]. There are two distinct scenarios here: the heavy dark photon case (m_V ≫ |q|) and the light dark photon one (m_V ≪ |q|). In the former we find that the best sensitivity one can achieve is given by Eq. (12), which is already largely excluded by the existing stellar and accelerator constraints - see e.g. [70]. For an ultra-light dark photon, instead, most of the cosmological and astrophysical bounds can be evaded when m_V ≲ 10⁻¹⁴ eV. In this case, we can use Eq. (11) to compute the expected sensitivity for the dark matter-electron cross section; in the corresponding expression, α_em is the fine-structure constant and m_e the electron mass. In Figure 3 we show our results as compared to other direct detection experiments [32,64]. As one can see, a He-4 detector could be competitive in the sub-MeV region. Note that the masses excluded by He-4 would fall in the region already excluded by Big Bang Nucleosynthesis (BBN) constraints [66,71]. The same region would also be covered by the SN1987A supernova bound [66,72], on which, however, some doubts have recently been cast [73]. We also show the curve for the dark matter relic abundance via the freeze-in scenario [9,67,68,74], i.e. a scenario in which the interaction is very weak and slowly builds up the dark matter relic abundance non-thermally. In the case of a light mediator, m_V ≪ m_χ, the dark matter production cross section from fermions f of the Standard Model thermal bath, f f̄ → χ χ̄, can be written in terms of α_χ ≡ g²/4π and the corresponding α_f. For each value of the mass m_χ, the value of the dark matter relic abundance uniquely fixes the combination α_χ α_f ∝ ε² g², with ε the kinetic mixing parameter.
VI. CONCLUSION
In this work we have studied the response of a He-4 detector to the interaction of a sub-GeV dark matter particle which interacts with the Standard Model via a dark photon, kinetically mixed with the photon. In order to do that, we have employed a relativistic EFT to describe the low-energy interactions of the superfluid phonon with the dark matter. On top of that, we also presented the most general coupling between the photon and the bulk of He-4.
We considered the simplest possible process, i.e. the emission of a single phonon by the passing dark matter, whose rate can be enhanced by introducing an external electric field. For a dark matter lighter than the MeV such an observable could be competitive with the existing direct detection experiments, although that region should already be excluded by BBN bounds.
The case of a two-phonon final state has already been discussed in [41]. As already commented, we have checked that using our EFT we recover the same results.
In conclusion, in addition to the strong bounds that a He-4 detector could put on the parameters of a dark matter interacting with the Standard Model via a scalar mediator, it can also provide some important information in the sub-MeV region for a dark matter mediated by a dark photon.
Moreover, as already shown in [42,43], the superfluid EFT approach proves to be particularly clean and clear to tackle a particle physics problem like the one at hand. | 4,327.4 | 2019-11-11T00:00:00.000 | [
"Physics"
] |
Biotransformation of ferulic acid to 4-vinyl guaiacol by Lactobacillus farciminis
Continuously growing demand for natural flavors has led to a tremendous increase in biotransformation processes employing microorganisms of different genera, using ferulic acid (FA) as the precursor. In this study, the potential of Lactobacillus farciminis (ATCC 29644) for biotransformation of FA to 4-vinyl guaiacol (4VG) was investigated. 4-vinyl guaiacol is a volatile phenol, reported to have 40-fold higher economic value than FA, and is biotransformable to acetovanillone, ethylguaiacol and vanillin. The biotransformation process started after 5 h incubation of L. farciminis with FA in de Man, Rogosa and Sharpe (MRS) broth at 37°C under 5% CO2. The production rate was at its maximum after 48 h. The 4VG formed was identified by GC-MS (QQQ) and quantification was done by HPLC UV-Vis. The impact of the initial concentrations of FA and bacteria on the production of 4VG was studied. The results indicate that the production of 4VG is significantly affected by the initial concentration of FA; empirically, 1, 15 and 50 mg/l of FA yielded 0, 3.34 and 10.26 mg/l of 4VG, respectively. The findings are a milestone towards a safe, high-yielding means of biotransforming some common agro-industrial wastes into a value-added product.
INTRODUCTION
The biological impact of ferulic acid steryl esters, extracted from rice bran oil, brought ferulic acid (FA) into focus during the 1970s; it was later found to be a potential anti-atherosclerotic agent (Zhao and Moghadasian, 2008). Other biological activities of FA encompass anticarcinogenic and antimutagenic effects as well as antimicrobial activity and chemoprevention of coronary heart disease (Min et al., 2006; Max et al., 2009). Ferulic acid is a phenolic acid (Ghosh et al., 2006), which may be present either in free or bound form in plants (Zhao and Moghadasian, 2008). It is present in wheat, maize and rice brans (Walton et al., 2000; Mariod et al., 2010). Ferulic acid can be freed by enzymatic and physical processing (Walton et al., 2000; Min et al., 2006). The utilization of FA as the primary source of carbon by bacteria of assorted genera has led to the production of catabolic intermediates such as 4VG (Couto et al., 2006), protocatechuic acid, vanillic acid and vanillin (Torres et al., 2009). There has been a continuous rise in demand for natural dietary materials because of the potential hazards associated with synthetic ones (Okeke and Venturi, 1999). Biotransformation has gained momentum during recent years as a vital means of renewing natural resources by converting them into commercially valuable products. Many fragrances and flavors have been prepared employing biotransformation technology (Tripathi et al., 2002) using microbial means (Brunati et al., 2004). 4-vinylguaiacol is a volatile phenol (Couto et al., 2006) reported to have 40-fold higher economic value than FA, and it can be biotransformed to acetovanillone, ethylguaiacol and vanillin (Landete et al., 2010). It is most extensively used in food and alcoholic beverages for flavoring, and in the ophthalmic field too (Baqueiro-Pena et al., 2010). It is present in pods of Hibiscus esculentus (okra), cooked apples, grapefruit juice, wine, raw beans, celery, coffee, strawberry, roasted peanuts and white sesame seeds (IHBT, 2005). According to Bohlin (1993), 4VG isolated from Ipomoea pes-caprae (beach morning glory) has been reported to inhibit prostaglandin synthesis. Lactobacillus species play a major role in industrial processes due to their ability to bioconvert substrates, coupled with their generally regarded as safe (GRAS) status and hence their application as probiotics (Bhathena et al., 2007).
Abbreviations: FA, ferulic acid; 4VG, 4-vinyl guaiacol.
No report describing the potential of Lactobacillus farciminis ATCC 29644 for 4VG production has been presented so far. There have been reports on the production of 4VG from FA, but with poor degradation rates and low yields of metabolites (Karmarkar et al., 2000). Thus, this study for the first time investigates the ability of L. farciminis (ATCC 29644) to biotransform FA to 4VG. Coupled with this, the conditions, i.e. the initial FA and bacterial concentrations, have been optimized to improve the yield of the product. The work is of high worth from an industrial point of view, with wide economic potential.
Chemicals
Ferulic and vanillic acids were purchased from Sigma-Aldrich (Germany), vanillin from MP Biomedicals (USA) and vanillyl alcohol from Merck (Germany) and liquid nitrogen from Malaysian Oxygen Berhad, Petaling Jaya, Selangor, Malaysia.Methanol, ethanol, acetic acid and acetonitrile were of HPLC grade and were procured from Fisher scientific (UK).
Inoculum preparation and biotransformation
The colony count technique was used to determine the total viable cell count. L. farciminis was observed to have a cell density of 1 × 10⁸ cells/ml. L. farciminis was cultured following the method reported by Sabu et al. (2006). About 5 ml of an 18 h culture was inoculated into 45 ml of MRS broth contained in a 250 ml conical flask and incubated at 37°C under 5% CO2 for 20 h. The inoculum was then transferred into MRS broth supplemented with filter-sterilized FA, which had been dissolved in 1 M NaOH solution and adjusted to pH 8.5 using 6 M HCl, in a 100 ml total culture volume. This was then incubated under the same conditions. About 3 ml of the sample was withdrawn at intervals to determine the concentration of FA degraded and that of 4VG formed. Ferulic acid conversion was expressed as: FA conversion (%) = (FAi − FAf) × 100 / FAi, where FAi is the initial ferulic acid concentration and FAf is the final ferulic acid concentration.
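As a quick illustration of the conversion formula just defined, the Python sketch below computes the FA conversion percentage and, additionally, the molar yield of 4VG; it is an editorial example, not part of the original methods. The 50 mg/l and 10.26 mg/l figures are taken from the Results, the residual-FA value is an assumed placeholder, and the molecular weights are the standard values for ferulic acid and 4-vinyl guaiacol.

```python
def fa_conversion_percent(fa_initial, fa_final):
    """FA conversion (%) = (FAi - FAf) * 100 / FAi, as defined in the text (mg/l)."""
    return (fa_initial - fa_final) * 100.0 / fa_initial

def vg_molar_yield_percent(fa_initial, vg_formed):
    """Mole fraction of the supplied FA recovered as 4VG (decarboxylation is 1:1)."""
    MW_FA, MW_4VG = 194.18, 150.17  # g/mol, standard molecular weights
    return (vg_formed / MW_4VG) / (fa_initial / MW_FA) * 100.0

if __name__ == "__main__":
    fa0, vg = 50.0, 10.26   # mg/l, values reported in the Results section
    fa_left = 10.0          # mg/l, assumed residual FA (placeholder for illustration)
    print(f"FA conversion: {fa_conversion_percent(fa0, fa_left):.1f} %")
    print(f"4VG molar yield: {vg_molar_yield_percent(fa0, vg):.1f} %")
```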
Analysis of spent media
Identification of 4VG by gas chromatography-mass spectrometry (GC-MS). Samples were analyzed following the method described by Couto et al. (2006) with slight modifications. A 1 ml mixture of ether and hexane (1:1 v/v) was used to extract the volatile phenol by vortexing 3 ml of sample with the ether-hexane mixture for 5 min, and the organic layer obtained was concentrated under nitrogen to about one third of the initial volume. It was then injected into a gas chromatograph-mass spectrometer (Thermo Scientific TSQ Quantum, USA) fitted with a Thermo TR-5MS column (30 m × 0.25 mm ID × 0.25 µm) (USA) and analyzed using Xcalibur software. Helium was used as carrier gas. The injection temperature was set at 250°C; the temperature gradient was 80°C for 2 min, 120°C for 4 min, 155°C for 4 min, followed by heating at 250°C for 3 min; the injection volume was 1 µl and the flow rate was kept at 1 ml/min.
Quantification of 4VG by HPLC
Analysis of the filtered spent media for quantification of FA and 4VG was done using an HPLC system (Agilent 1200 series, Germany) with a C18 reversed-phase column (Zorbax) maintained at 22°C and a UV-Vis detector set at 280 nm. A linear gradient of two solvents was chosen for the run: solvent A (4% acetic acid in distilled water, v/v) and solvent B (acetic acid:acetonitrile:methanol 1:5:94 v/v), from 0 to 52% of solvent B over 30 min at a flow rate of 1 ml/min. Identification was then carried out against the respective standards, while peak area was used for quantification.
Experimental design
The impacts of the initial concentrations of bacteria and FA on the production of 4VG were investigated, as biotechnological processes are significantly influenced by the initial concentrations of substrate and microbes (Bloem et al., 2006; Faveri et al., 2007). Microorganisms at different inoculum volumes (1 to 5 ml) were grown in MRS media with various concentrations (1, 5, 15, 25, 35 and 50 mg/l) of FA.
Statistical analysis
All the results in this study were expressed as mean ± standard deviation (SD) of 3 replicate measurements. The significant differences (p < 0.05) among the means were determined by one-way analysis of variance (ANOVA) using Minitab statistical software (Version 15.1.1.0, Minitab Inc, USA).
L. farciminis (ATCC 29644) has earlier been reported, in in vitro analysis, to have feruloyl esterase activity, which has been identified as the enzyme responsible for microbial conversion of FA to vanillin (Bhathena et al., 2007), as it releases FA from plant cell walls, making it available as a substrate for phenolic acid decarboxylase, which transforms FA to 4VG (Landete et al., 2010). Numerous organisms such as Aspergillus, Bacillus, Candida, Corynespora, Fusarium and Pseudomonas are able to transform FA to a wide range of aromatic compounds. From our results, we propose that the L. farciminis strain used in this study utilizes the non-oxidative decarboxylation pathway for the production of 4VG from FA (Figure 1). To the best of our knowledge, the presence of phenolic acid decarboxylase in L. farciminis (ATCC 29644) has not been reported yet. Even though 4VG, as a breakdown product of FA, was present in the culture medium, vanillin could not be detected. This may be because vanillin is usually found at low concentration and is speedily metabolized, while the production of 4VG via decarboxylation of FA may be a detoxification process to lower the concentration of inhibitory compounds (Baqueiro-Pena et al., 2010). The decarboxylation of FA due to one-carbon cleavage has been documented for many lactic acid bacteria (Couto et al., 2006; Bloem et al., 2006). In this study, we report for the first time the production of high yields of 4VG from FA by non-oxidative decarboxylation using L. farciminis. The availability of agro-industrial wastes containing FA has been greatly highlighted in this work. Literature reports describe biotransformation of FA by different means, including fungi, bacteria or genetically engineered microorganisms, to other bioactives (Gosh et al., 2004; Li et al., 2008). L. farciminis in this study was able to biotransform FA to yield 4VG as the major degradation product, as detected by HPLC. The ability of lactic acid bacteria to degrade FA is in agreement with the findings of Bloem et al. (2006), whereby wine-associated lactobacilli, namely Oenococcus oeni, L. hilgardii, L. brevis, L. plantarum and L. damnosus, were observed to degrade FA with the production of vanillin and traces of 4VG. Also, in a study conducted by Couto et al. (2006), 4VG was produced from FA by thirty-two of the thirty-five LAB strains tested. In this study, the influence of the initial concentration of FA on the production of 4VG was also studied.
Identification was done using GC-MS (QQQ) (Figure 2) and quantification with HPLC from plotted standard curves. Experiments were performed in triplicate. HPLC analysis of the culture supernatant showed FA with a retention time of 14.1 min (Figure 3) and 4VG at 24.1 min (Figure 4). The results reveal a proportionate increase in the production of 4VG with an increase in the initial FA concentration. An initial FA concentration of 50 mg/l yielded about 10 mg/l of 4VG, while an initial FA of 1 mg/l yielded no 4VG. This result is in line with the findings of Couto et al. (2006), whereby the higher the hydroxycinnamic acid content, the higher the concentration of volatile phenols produced. Substrate inhibition could not be determined, as precipitation occurred when the FA concentration exceeded 50 mg/l. It has been reported that the growth of lactic acid bacteria is inhibited by hydroxycinnamic acids at 500 mg/l (Couto et al., 2006). The initial concentration of FA also influenced the time for production of 4VG, as 4VG production was observed at about 5 h (Figure 5) after incubation in cultures containing 5, 15, 25, 35 and 50 mg/l of initial FA, while none was observed for 1 mg/l initial FA. The production rate of 4VG was maximal after 48 h of incubation (Figure 6) irrespective of the initial amount of FA used, after which the rate of 4VG formation started to decline but was still detectable at day 10 of incubation. The bioconversion rate of FA at 48 h of incubation ranged over 41 to 87% for initial FA concentrations of 5, 15, 25, 35 and 50 mg/l, with the lowest being 24% at an initial concentration of 5 mg/l, which is also in agreement with Couto et al. (2006). The initial FA concentration significantly influenced the production of 4VG, as compared to the initial bacterial concentration (Figure 7). On the basis of these findings, 50 mg/l was taken as the optimum initial FA concentration.
Figure 2. GC-MS chromatogram with retention time of 5.95 min and spectra of 4VG from culture supernatant after 48 h incubation.
Figure 7. FA biodegradation and 4VG production by Lactobacillus farciminis after 48 h of incubation with varying initial bacterial and FA concentrations. FA: ferulic acid; 4VG: 4-vinyl guaiacol. The values are mean ± standard deviation.
"Biology",
"Engineering"
] |
Weak edge triangle free detour number of a graph
For any two vertices u and v in a connected graph G = (V, E), a u−v path P is called a u−v triangle free path if no three vertices of P induce a triangle. The triangle free detour distance D△f(u, v) is the length of a longest u−v triangle free path in G. A u−v path of length D△f(u, v) is called a u−v triangle free detour. A set S ⊆ V is called a weak edge triangle free detour set of G if every edge of G has both ends in S or it lies on a triangle free detour joining a pair of vertices of S. The weak edge triangle free detour number wdn△f(G) of G is the minimum order of its weak edge triangle free detour sets, and any weak edge triangle free detour set of order wdn△f(G) is a weak edge triangle free detour basis of G. Certain properties of these concepts are studied. The weak edge triangle free detour numbers of certain classes of graphs are determined. The relationship with the triangle free detour diameter is discussed, and it is proved that for any three positive integers a, b and n with 3 ≤ b ≤ n − a + 1 and a ≥ 4, there exists a connected graph G of order n with triangle free detour diameter D△f = a and wdn△f(G) = b. It is also proved that for any three positive integers a, b and c with 3 ≤ a ≤ b and c ≥ b + 2, there exists a connected graph G such that R△f = a, D△f = b and wdn△f(G) = c.
Introduction
By a graph G = (V, E), we mean a finite undirected connected simple graph. For basic definitions and terminologies, we refer to Chartrand et al. [6]. The neighbourhood of a vertex v is the set N (v) consisting of all vertices u which are adjacent with v. A vertex v is an extreme vertex if the subgraph ⟨N (v)⟩ induced by its neighbourhood N (v) is complete.
The concept of geodetic number was introduced by Harary et al. [4,5,9]. For vertices u and v in a connected graph G, the distance d(u, v) is the length of a shortest u − v path in G. A u − v path of length d(u, v) is called a u − v geodesic. A set S ⊆ V is called geodetic set of G if every vertex of G lies on a geodesic joining a pair of vertices of S. The geodetic number g(G) of G is the minimum order of its geodetic sets and any geodetic set of order g(G) is called a geodetic basis of G.
The concept of detour number was introduced by Chartrand et al. [3]. The detour distance D(u, v) is the length of a longest u − v path in G. A u − v path of length D(u, v) is called a u − v detour. A set S ⊆ V is called detour set of G if every vertex of G lies on a detour joining a pair of vertices of S. The detour number dn(G) of G is the minimum order of its detour sets and any detour set of order dn(G) is called a detour basis of G.
The concept of edge detour number was introduced by Santhakumaran an Athisayanathan [11,12]. A set S ⊆ V is called an edge detour set of G if every edge of G lies on a detour joining a pair of vertices of S. The edge detour number dn 1 (G) of G is the minimum order of its edge detour sets and any edge detour set of order dn 1 (G) is called an edge detour basis of G. A graph G is called an edge detour graph if it has an edge detour set.
The concept of weak edge detour number was introduced by Santhakumaran and Athisayanathan [13]. A set S ⊆ V is called a weak edge detour set of G if every edge of G has both ends in S or it lies on a detour joining a pair of vertices of S. The weak edge detour number dn w (G) of G is the minimum order of its weak edge detour sets and any weak edge detour set of order dn w (G) is a weak edge detour basis of G.
The concept of triangle free detour distance was introduced by Keerthi Asir and Athisayanathan [10]. The triangle free detour eccentricity e △f (v) of a vertex v in G is the maximum triangle free detour distance from v to a vertex of G. The triangle free detour radius, R △f of G is the minimum triangle free detour eccentricity among the vertices of G, while the triangle free detour diameter, D △f of G is the maximum triangle free detour eccentricity among the vertices of G.
The concept of triangle free detour number was introduced by Sethu Ramalingam and Athisayanathan [14]. A set S ⊆ V is called a triangle free detour set of G if every vertex of G lies on a triangle free detour joining a pair of vertices of S. The triangle free detour number dn△f(G) of G is the minimum order of its triangle free detour sets and any triangle free detour set of order dn△f(G) is called a triangle free detour basis of G.
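Because the triangle free detour distance is defined via longest triangle free paths, it can be computed by brute force on small graphs. The Python sketch below is an editorial illustration (it is not part of the paper and assumes the networkx library): it enumerates all simple u−v paths and keeps those in which no three path vertices induce a triangle in G, exactly as in the definitions recalled above.

```python
from itertools import combinations
import networkx as nx

def is_triangle_free_path(G, path):
    """True if no three vertices of the path induce a triangle in G."""
    return not any(G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c)
                   for a, b, c in combinations(path, 3))

def triangle_free_detour_distance(G, u, v):
    """Length of a longest u-v triangle free path in G (0 if none exists)."""
    lengths = [len(p) - 1 for p in nx.all_simple_paths(G, u, v)
               if is_triangle_free_path(G, p)]
    return max(lengths) if lengths else 0

def triangle_free_detour_diameter(G):
    """Maximum triangle free detour eccentricity over all vertices."""
    return max(triangle_free_detour_distance(G, u, v)
               for u in G for v in G if u != v)

if __name__ == "__main__":
    C5 = nx.cycle_graph(5)                                 # the odd cycle C_5
    print(triangle_free_detour_distance(C5, 0, 1))         # adjacent vertices: n - 1 = 4
    print(triangle_free_detour_diameter(C5))               # 4
```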
In general, there are graphs G for which there exist edges which do not lie on a triangle free detour joining any pair of vertices of V. For the graph G given in Figure 1, the edge u 1 u 2 does not lie on a triangle free detour joining any pair of vertices of V. This motivates us to introduce the concept of weak edge triangle free detour set of a graph.
The following theorems will be used in the sequel.
Theorem 1.1. [14] Each extreme vertex of a graph G belongs to every triangle free detour set of G.
Theorem 1.2. [14]
If G is a connected graph of order n and triangle free detour diameter D △f , then dn △f (G) ≤ n − D △f + 1.
Throughout this paper G denotes a connected graph with at least two vertices.
2. Weak edge triangle free detour number of a graph Definition 2.1. Let G be a connected graph. A set S ⊆ V is called a weak edge triangle free detour set of G if every edge of G has both ends in S or it lies on a triangle free detour joining a pair of vertices of S. The weak edge triangle free detour number wdn △f (G) of G is the minimum order of its weak edge triangle free detour sets and any weak edge triangle free detour set of order wdn △f (G) is a weak edge triangle free detour basis of G.
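Definition 2.1 can likewise be checked mechanically on small examples. The sketch below is again an editorial illustration in Python with networkx (not part of the paper); it tests whether a candidate vertex set S is a weak edge triangle free detour set, i.e. whether every edge either has both ends in S or lies on some triangle free detour between two vertices of S. The example at the bottom reproduces the path case noted later in Remark 2.1.

```python
from itertools import combinations
import networkx as nx

def triangle_free_paths(G, u, v):
    """All simple u-v paths in which no three path vertices induce a triangle."""
    for p in nx.all_simple_paths(G, u, v):
        if not any(G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c)
                   for a, b, c in combinations(p, 3)):
            yield p

def triangle_free_detours(G, u, v):
    """The u-v triangle free paths of maximum length (the triangle free detours)."""
    paths = list(triangle_free_paths(G, u, v))
    if not paths:
        return []
    best = max(len(p) for p in paths)
    return [p for p in paths if len(p) == best]

def is_weak_edge_tf_detour_set(G, S):
    """Check Definition 2.1 for a candidate vertex set S."""
    S = set(S)
    covered = set()
    for u, v in combinations(S, 2):
        for p in triangle_free_detours(G, u, v):
            covered.update(frozenset(e) for e in zip(p, p[1:]))
    return all(frozenset((x, y)) in covered or (x in S and y in S)
               for x, y in G.edges())

if __name__ == "__main__":
    P4 = nx.path_graph(4)                           # a path on 4 vertices
    print(is_weak_edge_tf_detour_set(P4, {0, 3}))   # end-vertices suffice: True
    print(is_weak_edge_tf_detour_set(P4, {0, 2}))   # edge (2,3) uncovered: False
```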
Example 2.1. For the graph G given in Figure 2, it is clear that no two-element subset of V is a weak edge triangle free detour set of G. It is easily seen that the set S₁ = {u, v, x} is a weak edge triangle free detour basis of G, so that wdn△f(G) = 3. Also, the set S₂ = {u, v, z} is another weak edge triangle free detour basis of G. Thus there can be more than one weak edge triangle free detour basis for a graph G. For the graph G given in Figure 3, S = {u, v} is a weak edge detour basis of G, so that dn_w(G) = 2, and S = {u, v, x, y, z} is a weak edge triangle free detour basis, so that wdn△f(G) = 5. Hence the weak edge detour number and the weak edge triangle free detour number of a graph G are different. Theorem 2.1. For every connected graph G of order n, 2 ≤ dn_w(G) ≤ wdn△f(G) ≤ n. Proof. A weak edge detour set needs at least two vertices, so dn_w(G) ≥ 2. Since every weak edge triangle free detour set is also a weak edge detour set, dn_w(G) ≤ wdn△f(G). Also, the set of all vertices of G is a weak edge triangle free detour set of G, so that wdn△f(G) ≤ n. Thus 2 ≤ dn_w(G) ≤ wdn△f(G) ≤ n.
Remark 2.1. The bounds in Theorem 2.1 are sharp. The set of two end-vertices of a path P n is its unique weak edge triangle free detour set so that wdn △f (G) = 2. For the complete graph K n , wdn △f (K n ) = n. Thus the path P n has the smallest weak edge triangle free detour number 2 and the complete graph K n has the largest possible weak edge triangle free detour number n.
Definition 2.2. A vertex v in a graph G is a weak edge triangle free detour vertex if v belongs to every weak edge triangle free detour basis of G. If G has a unique weak edge triangle free detour basis S, then every vertex of S is a weak edge triangle free detour vertex of G. Remark 2.2. A cut-vertex may or may not belong to a weak edge triangle free detour basis of a graph G. For the graph G given in Figure 4, x, y} are the only weak edge triangle free detour bases of G. The cut-vertex w belongs to every weak edge triangle free detour basis so that the cut-vertex w is the unique weak edge triangle free detour vertex of G.
For the graph G in Figure 5, S = {u, v, x, y} is a unique weak edge triangle free detour basis and the cut-vertex w is not a weak edge triangle free detour vertex of G.
In the following theorem we show that there are certain vertices in a connected graph G that are weak edge triangle free detour vertices of G.
Theorem 2.2. Every extreme vertex of a connected graph G belongs to every weak edge triangle free detour set of G. Also, if the set S of all extreme vertices of G is a weak edge triangle free detour set, then S is the unique weak edge triangle free detour basis for G.
Proof. Let u be an extreme vertex of G and let S be a weak edge triangle free detour set of G.
Clearly u is an end-vertex, and u does not lie on any triangle free detour joining a pair of vertices x, y ∈ S, so that S is not a triangle free detour set, which is a contradiction.
Case 2. N[u] = K_n (n ≥ 3). Since u ∉ S, u is an internal vertex of an x − y triangle free detour, say P, for some x, y ∈ S. Let v and w be the neighbours of u on P. Then v and w are not adjacent, and so u is not an extreme vertex, which is a contradiction. If S is the set of all extreme vertices of G, then by the first part of this theorem, wdn△f(G) ≥ |S|. If S is a weak edge triangle free detour set of G, then wdn△f(G) ≤ |S|. Hence wdn△f(G) = |S| and S is the unique weak edge triangle free detour basis for G.
Proof. This follows from Theorem 2.2.
In the following theorems we give the weak edge triangle free detour bases of certain graphs, beginning with even cycles: for an even cycle C_n, a set S ⊆ V is a weak edge triangle free detour basis of G if and only if S consists of two adjacent vertices or two antipodal vertices of G. Proof. Let S = {u, v} be any set of two vertices of G. If u and v are adjacent, then D△f(u, v) = n − 1 and every edge e ≠ uv of G lies on the u − v triangle free detour, while both ends of the edge uv belong to S. If u and v are antipodal, then D△f(u, v) = n/2 and every edge e of G lies on a u − v triangle free detour in G. Thus S is a weak edge triangle free detour set of G. Since |S| = 2, S is a weak edge triangle free detour basis of G.
Conversely, assume that S is a weak edge triangle free detour basis of G. Let S′ be any set of two adjacent vertices or two antipodal vertices of G. Then, as in the first part of this theorem, S′ is a weak edge triangle free detour basis of G. Hence |S| = |S′| = 2. Let S = {u, v} ⊆ V. If u and v are not adjacent and not antipodal, then the edges of the u − v geodesic do not lie on the u − v triangle free detour in G, so that S is not a weak edge triangle free detour set of G, which is a contradiction. The corresponding result holds for odd cycles: for an odd cycle C_n (n ≥ 5), a set S ⊆ V is a weak edge triangle free detour basis of G if and only if S consists of any two adjacent vertices of G. Proof. Let G be an odd cycle C_n (n ≥ 5). If {u, v} is any set of two adjacent vertices of G, it is clear that D△f(u, v) = n − 1. Then every edge e ≠ uv of G lies on the u − v triangle free detour and both ends of the edge uv belong to S, so that S is a weak edge triangle free detour set of G.
Since |S| = 2, S is a weak edge triangle free detour basis of G.
Conversely, assume that S is a weak edge triangle free detour basis of G. Let S ′ be any set of two adjacent vertices of G. Then as in the first part of this theorem S ′ is a weak edge triangle free detour basis of G. Hence |S| = |S ′ | = 2. Let S = {u, v} ⊆ V. If u and v are not adjacent, then the edges of u − v geodesic do not lie on the u − v triangle free detour in G so that S is not a weak edge triangle free detour set of G, which is a contradiction. Thus S consists of any two adjacent vertices of G.
Theorem 2.6. Let G be a complete bipartite graph K n,m (2 ≤ n ≤ m). Then a set S ⊆ V is a weak edge triangle free detour basis of G if and only if S consists of any two vertices of G.
Proof. Let G be the complete bipartite graph K n,m . Let X and Y be bipartite sets of G with |X| = n and |Y | = m. Let S = {u, v} be any set of two vertices of G.
Case 1. Let u, v ∈ X. It is clear that D△f(u, v) = 2n − 2. Let xy ∈ E such that x ∈ X and y ∈ Y. If x ≠ u, then the edge xy lies on the u − v triangle free detour P : u, y, x, . . . , v of length 2n − 2. If x = u, then the edge xy lies on the u − v triangle free detour P : u = x, y, . . . , v of length 2n − 2. Hence S is a weak edge triangle free detour set of G.
Case 2. Let u, v ∈ Y. It is clear that D △f (u, v) = 2n. Let xy ∈ E such that x ∈ X and y ∈ Y. If y ̸ = v, then the edge xy lies on the u − v triangle free detour P : u, x, y, . . . , v of length 2n. If y = v, then the edge xy lies on the v − u triangle free detour P : v = y, x, . . . , u of length 2n. Hence S is a weak edge triangle free detour set of G. In all cases, since |S| = 2, S is a weak edge triangle free detour basis of G.
Case 3. u ∈ X and v ∈ Y. It is clear that D △f (u, v) = 2n − 1. Let xy ∈ E. If xy = uv, then both of its ends are in S. Let xy ̸ = uv be such that x ∈ X and y ∈ Y. If x ̸ = u and y ̸ = v, then the edge xy lies on the u − v triangle free detour P : u, y, x, . . . , v of length 2n − 1. If x = u and y ̸ = v, then the edge xy lies on the u − v triangle free detour P : u = x, y, . . . , v of length 2n − 1. Hence S is a weak edge triangle free detour set of G.
Conversely, let S be a weak edge triangle free detour basis of G. Let S ′ be any set consisting of two vertices of G. Then as in the first part of this theorem, S ′ is a weak edge triangle free detour basis of G. Hence |S| = |S ′ | = 2 and it follows that S consists of any two vertices of G.
Theorem 2.7. Let G be the wheel W n = K 1 + C n−1 (n ≥ 6). Then a set S ⊆ V is a weak edge triangle free detour basis of G if and only if S consists of every vertex of G.
Proof. Let K 1 = {w} and C n−1 : v 1 , v 2 , . . . , v n−1 , v 1 be the cycle of length n − 1. Let W n = K 1 + C n−1 (n ≥ 6) be a wheel and let S consist of every vertex of G. Then all the edges of C n−1 lie on the triangle free detour joining some pair of adjacent vertices of C n−1 . Also, since D △f (w, v i ) = 1 (1 ≤ i ≤ n − 1), each edge wv i lies on the w − v i triangle free detour, so every edge of W n lies on a triangle free detour joining a pair of vertices of S. Hence S is a weak edge triangle free detour set of G. Since |S| = n, S is a weak edge triangle free detour basis of G.
Conversely, let S be a weak edge triangle free detour basis of G. Let S 1 be any set consisting of every vertex of G. Then as in the first part of this Theorem, S 1 is a weak edge triangle free detour basis of G. Hence |S| = |S 1 | = n and it follows that S consists of every vertex of G. The following theorems give realization results.
Theorem 2.8. For any two positive integers k and n with 2 ≤ k ≤ n, there exists a connected graph G of order n with wdn △f (G) = k.
Proof. Case 1. 2 ≤ k = n. Any complete graph G has the desired property. Case 2. 2 ≤ k < n. Let P be a path of order n − k + 2. Then the graph G obtained from P by adding k − 2 new vertices to P and joining them to any cut-vertex of P is a tree of order n, and so by Corollary 2.1, wdn △f (G) = k. Theorem 2.9. For each positive integer k ≥ 3, there exists a connected graph G with a vertex v of degree k in G such that v does not belong to any weak edge triangle free detour basis of G and wdn △f (G) = k.
Proof. For k ≥ 2, let G be the graph obtained from the complete graph is a weak edge triangle free detour set of G. However, S ∪ {v 2 , v 3 } is a weak edge triangle free detour set of G and hence, by Theorem 2.2, S ∪ {v 2 , v 3 } is a weak edge triangle free detour basis of G, so that wdn △f (G) = k.
Case 2. a = b − 1. Consider the graph G given in Figure 6. Let S = {u 1 , u 2 , . . . , u a−1 , u a } be the set of all end-vertices of G. By Theorems 1.1 and 2.2, S is contained in every triangle free detour set and every weak edge triangle free detour set of G. It is easily seen that S is a triangle free detour set of G and so dn △f (G) = a, but S is not a weak edge triangle free detour set of G. Let T = S ∪ {v 3 }. Then T is a weak edge triangle free detour set of G and so wdn △f (G) = b = a + 1.
Proof. Case 1. For 4 ≤ a = b, any tree with a end-vertices has the desired properties, by Corollary 2.1.
Case 2. For 4 ≤ a < b. Let P 4 : x, u 1 , u 2 , u 3 be a path of order 4. Let G 1 be the graph obtained from the path P 4 by adding a − 3 new vertices w 1 , w 2 , . . . , w a−3 and joining each vertex w i (1 ≤ i ≤ a − 3) to a vertex of P 4 . Let G 2 be the graph obtained from the graph G 1 by adding b − a + 2 new vertices v 1 , v 2 , . . . , v b−a+2 and joining each vertex v j (1 ≤ j ≤ b − a + 2) to u 1 and u 2 in P 4 . Let G = G 2 be the required graph of order b + 3, as shown in Figure 8. Since S = {x, w 1 , w 2 , . . . , w a−3 } is the set of all end-vertices of G, every weak edge detour set of G contains S; however, S is not a weak edge detour set of G. Let S 1 = S ∪ {u 1 , u 2 }. It is easily verified that S 1 is a weak edge detour set of G, so that dn w (G) = |S 1 | = a. Next, we show that wdn △f (G) = b. By Theorem 2.2, every weak edge triangle free detour set of G contains S. Clearly, S is not a weak edge triangle free detour set of G. It is easily verified that each v i (1 ≤ i ≤ b − a + 2) must belong to every weak edge triangle free detour set of G. Thus T = S ∪ {v 1 , v 2 , . . . , v b−a+2 } is a weak edge triangle free detour set of G; it follows from Theorem 2.2 that T is a weak edge triangle free detour basis of G, and so wdn △f (G) = b.
Weak edge triangle free detour number and triangle free detour diameter of a graph
We have seen that, by Theorem 1.2, dn △f (G) ≤ n − D △f + 1. However, in the case of the weak edge triangle free detour number of a graph, this bound need not hold.
Remark 3.1. In the case of the weak edge triangle free detour number wdn △f (G) of a graph G, there are graphs for which wdn △f (G) = n − D △f + 1, wdn △f (G) > n − D △f + 1 and wdn △f (G) < n − D △f + 1. For any cycle C n of order n ≥ 4, D △f = n − 1 and wdn △f (C n ) = 2, so that wdn △f (C n ) = n − D △f + 1. For any wheel W n of order n ≥ 6, D △f = n − 2 and wdn △f (W n ) = n, so that wdn △f (W n ) > n − D △f + 1. For the graph G in Figure 9, n = 6, D △f = 4 and wdn △f (G) = 2, so that wdn △f (G) < n − D △f + 1.
T is a weak edge triangle free detour set of G, and hence it follows from Theorem 2.2 that T is a weak edge triangle free detour basis of G, so that wdn △f (G) = c. Subcase 1.2. Let a be an even integer. Construct the graph H as in Subcase 1.1 above (Figure 10). Then G is obtained from H by adding c − 3 new vertices w 1 , w 2 , w 3 , . . . , w c−3 to H and joining each w i (1 ≤ i ≤ c − 3) to the vertex u b−a−1 , as shown in Figure 11. It is easily verified that a ≤ e △f (x) ≤ b for any vertex x in G, e △f (v 0 ) = a and e △f (v 1 ) = b. Thus R △f = a and D △f = b. Now, we show that wdn △f (G) = c. Let S = {w 1 , w 2 , . . . , w c−2 , u b−a } be the set of all end-vertices of G. As in Case 1, S is not a weak edge triangle free detour set of G. It is easy to see that S 1 is not a weak edge triangle free detour set of G. Clearly the set T = S ∪ {v 1 , v a } is a weak edge triangle free detour set of G. Hence it follows from Theorem 2.2 that T is a weak edge triangle free detour basis of G and wdn △f (G) = c.
. . . , v 2a be a cycle of order 2a and W b+2 = K 1 + C b+1 be the wheel with V (C b+1 ) = {u 1 , u 2 , . . . , u b+1 }, K 1 = {u 0 }. Let G be the graph obtained from W b+2 and P 2a by identifying u 0 of W b+2 with v 1 of P 2a , as shown in Figure 12. It is easily verified that a ≤ e △f (x) ≤ b for any vertex x in G, e △f (v a ) = a and e △f (u 1 ) = b. Thus R △f = a and D △f = b. Then S = {u 1 , u 2 , . . . , u b+1 , v 2a } is a weak edge triangle free detour set of G, so that wdn △f (G) = b + 2 = c.
Proof. Case 1. When a is even, let G be the graph obtained from the cycle C a : u 1 , u 2 , . . . , u a , u 1 of order a by adding b − 1 new vertices v 1 , v 2 , . . . , v b−1 and joining each vertex v i (1 ≤ i ≤ b − 1) to u 1 , and adding n − a − b + 1 new vertices w 1 , w 2 , . . . , w n−a−b+1 and joining each vertex w i (1 ≤ i ≤ n − a − b + 1) to both u 1 and u 3 , as shown in Figure 13. It is easily verified that the order of the graph G is n and the triangle free detour diameter D △f = a. Now, we show that wdn △f (G) = b. Let S = {v 1 , v 2 , . . . , v b−1 } be the set of all end-vertices of G. Since no edge of G other than the edges u 1 v i (1 ≤ i ≤ b−1) lies on a triangle free detour joining a pair of vertices of S, S is not a weak edge triangle free detour set of G. Let T = S ∪ {v}, where v is the antipodal vertex of u 1 in C a . Then every edge of G lies on a triangle free detour joining a vertex v i (1 ≤ i ≤ b − 1) and v, so that T is a weak edge triangle free detour set of G. Now, it follows from Theorem 2.2 that T is a weak edge triangle free detour basis of G and so wdn △f (G) = b. Case 2. When a is odd, let G be the graph obtained from the cycle C a : u 1 , u 2 , . . . , u a , u 1 of order a by adding b − 2 new vertices v 1 , v 2 , . . . , v b−2 and joining each vertex v i (1 ≤ i ≤ b − 2) to u 1 , and adding n − a − b + 2 new vertices w 1 , w 2 , . . . , w n−a−b+2 and joining each vertex w i (1 ≤ i ≤ n − a − b + 2) to both u 1 and u 3 . It is easily verified that the order of the graph G is n and the triangle free detour diameter D △f = a, as shown in Figure 14. Now, we show that wdn △f (G) = b. Let S = {v 1 , v 2 , . . . , v b−2 } be the set of all end-vertices of G. As in Case 1, S is not a weak edge triangle free detour set of G. Let S 1 = S ∪ {v}, where v is any vertex of G such that v ̸ = v i (1 ≤ i ≤ b − 2). It is easy to see that S 1 is not a weak edge triangle free detour set of G. Now, the set T = S ∪ {u 2 , u a } is a weak edge triangle free detour set of G. Hence it follows from Theorem 2.2 that T is a weak edge triangle free detour basis of G and so wdn △f (G) = b. | 7,292.6 | 2022-10-29T00:00:00.000 | [
"Mathematics"
] |
Identification and functional characterization of multiple inositol polyphosphate phosphatase1 (Minpp1) isoform-2 in exosomes with potential to modulate tumor microenvironment
Inositol polyphosphates (InsPs) play key signaling roles in diverse cellular functions, including calcium homeostasis, cell survival and death. Multiple inositol polyphosphate phosphatase 1 (Minpp1) affects the cellular levels of InsPs and cell functions. The Minpp1 is an endoplasmic reticulum (ER) resident but localizes away from its cytosolic InsPs substrates. The current study examines the heterogeneity of Minpp1 and the potential physiologic impact of Minpp1 isoforms, distinct motifs, subcellular distribution, and enzymatic potential. The NCBI database was used to analyze the proteome diversity of Minpp1 using bioinformatics tools. The analysis revealed that translation of three different Minpp1 variants resulted in three isoforms of Minpp1 of varying molecular weights. A link between the minpp1 variant-2 gene and ER-stress, using real-time PCR, suggests a functional similarity between minpp1 variant-1 and variant-2. A detailed study on motifs revealed Minpp1 isoform-2 is the only other isoform, besides isoform-1, that carries a phosphatase motif for InsPs hydrolysis but no ER-retention signal. The confocal microscopy revealed that the Minpp1 isoform-1 predominantly localized near the nucleus with a GRP-78 ER marker, while Minpp1 isoform-2 was scattered more towards the cell periphery where it co-localizes with the plasma membrane-destined multivesicular bodies biomarker CD63. MCF-7 cells were used to establish that Minpp1 isoform-2 is secreted into exosomes. Brefeldin A treatment resulted in overexpression of the exosome-associated Minpp1 isoform-2, suggesting its secretion via an unconventional route involving endocytic-generated vesicles and a link to ER stress. Results further demonstrated that the exosome-associated Minpp1 isoform-2 was enzymatically active. Overall, the data support the possibility that an extracellular form of enzymatically active Minpp1 isoform-2 mitigates any anti-proliferative actions of extracellular InsPs, thereby also impacting the makeup of the tumor microenvironment.
ER is an essential organelle in protein folding and their secretion to various cellular destinations. In a conventional protein secretory pathway, inherent signal sequences, such as KDEL, KKXX, etc., sort proteins into destined locations. Nonetheless, not all proteins are secreted through the conventional secretory pathway. Leaderless or signal-peptide-containing proteins follow an unconventional protein secretory (UPS) pathway involving endocytic vesicles, secretory lysosomes and multivesicular bodies/endosomes (MVB/MVEs) to the plasma membrane (PM) and extracellular space [37]. Small cytosolic gaps between associating membranes such as ER-endosome contact sites [38] provide a non-vesicular exchange of lipid-bound proteins [39], metabolites, and calcium ions [40] to heterogeneous intraluminal vesicles (ILVs) of MVBs [41][42][43][44]. However, the exact mechanism of protein and lipid sorting into ILVs destined for extracellular space, enclosed in extracellular vesicles (EVs), is still unknown. These EVs comprise a heterogeneous mixture of exosomes and ectosomes [46][47][48], ranging from 40-1000nm in diameter in their most current classification. Of EVs, exosomes ranging from 40nm to 200nm in diameter are the only vesicles of endocytic-origin [45] enclosing a diverse mixture of RNAs, miRNA, proteins, lipids, and glycans [46] implicated in various immune modulations [47] and disease states [48][49][50].
Since Minpp1 is an ER-restricted enzyme due to the presence of an ER retention signal (KDEL/SDEL) at its C-terminus, its presence in extra-ER compartments seems improbable. However, a previous study found Minpp1 enzymatic activity associated with lysosomes and extracellular cell culture media [51], suggesting a possible heterogeneity in Minpp1 structure and function and the presence of a Minpp1 isoform in an extra-ER location. Furthermore, if enzymatically active, this isoform would have physiological consequences in neutralizing the anti-proliferative actions of extracellular InsPs, thus modulating the tumor microenvironment [51].
This study explored Minpp1 heterogeneity and identified a Minpp1 isoform-2 in extracellular vesicles (exosomes) isolated from a human breast cancer cell line, MCF-7. Initially, relative expression profiling of minpp1 variants was performed during cellular stress conditions and then correlated with Minpp1 isoform-2 secretion in exosomes to study its prospective secretory route. To further correlate the role of Minpp1 isoform-2 in dephosphorylation of extracellular InsP 6 , the enzymatic potential of the extracellularly secreted Minpp1 isoform-2 in exosomes was analyzed. The results suggest a potential role for exosome-associated Minpp1 isoform-2 in promoting tumor cell growth by preventing anti-proliferative actions of InsP 6 .
Minpp1 sequence collection and analysis
The cDNA sequence of hminpp1 variant-1 was retrieved from the public domain of the NCBI database. To identify the spliced variants of hminpp1, the full-length sequence of hminpp1 variant-1 (NCBI RefSeq: NM_004897.5, 1464 bp) was submitted to an online analytical tool, the Basic Local Alignment Search Tool (BLAST) (https://blast.ncbi.nlm.nih.gov/Blast.cgi), to search across the NCBI curated RefSeq records for related sequences with a percent identity close to 100%. Multiple sequence alignment of hminpp1 variants was performed using the CLUSTAL O (1.2.4) online tool.
A discrepancy between the two databases (UniProt and NCBI) was observed regarding the number of hMinpp1 isoforms and their sequences. UniProt's database reports four hMinpp1 isoforms, as opposed to three isoforms in the NCBI database. Therefore, as a foundation for future studies, this study is limited to the NCBI database. Multiple amino acid (AAs) sequence alignment of hMinpp1 isoforms was performed using the CLUSTAL O (1.2.4) analytical tool.
Phylogenetic tree analysis
The Minpp1 isoform-2 AA sequence was extracted, and a BLASTP search was done through the NCBI curated refseq_protein records for related sequences across species. A distance tree of BLASTP results was constructed for hMinpp1 isoform-2. The "fast minimum evolution" method was used with a maximum sequence difference of 0.85 to construct the phylogenetic tree based on the Grishin (protein) distance model. The percent identity of sequences ranged from 79% to 100%.
Plasmid transfection and confocal immunofluorescence microscopy
MCF-7 cells were plated at a density of 5x10 4 cells on glass coverslips in a 24-well plate. After 24h of incubation, cells were transfected with human minpp1 variant-1&-2 gene expression plasmids using TransIT-BrCa Transfection reagent, Mirius Bio (Madison, WI). Briefly, 50μL OptiMem media, Thermo Fisher Scientific (Waltham, MA), 0.5μL (1μg/μL) plasmid DNA, and 1.5μL TransIT-2020 reagent were sequentially mixed and incubated for 30min at room temperature (RT). This mixture was added dropwise to different areas to cover the well uniformly. Cells were incubated for 48-72h at 37˚C and 5% CO 2 . Minpp1 expression was observed under an Olympus IX71 fluorescence microscope for positively transfected cells. MCF-7 breast cancer cells were fixed and permeabilized before immunostaining as described by Scheffler and colleagues [52] with modifications. Briefly, positively transfected cells were washed and fixed in 3.7% cold formaldehyde solution for 10min at RT. Next, permeabilization and blocking solution (0.1% Saponin, 1% BSA, and 0.3M Glycine) was added to the cells and kept for 30min at RT. Cells were later probed with primary antibodies (mouse anti-CD63 mAb (H5C6-DSHB) (Iowa City, IA), rabbit anti-Minpp1 polyclonal antibody, Fabgennix Inc. (Frisco, Texas), mouse anti-LAMP2 mAb (H4B4-DSHB) (Iowa City, IA), mouse anti-GRP-78 mAb (E-4), Santa Cruz Biotechnology (Dallas, Texas)) diluted at 1:200 in blocking buffer for 1h at RT. Next, cells were washed thrice with PBS and labeled with goat anti-mouse Alexa Fluor-555, goat anti-mouse Alexa Fluor 647, goat anti-mouse Alexa Fluor 488, and goat anti-rabbit Alexa Fluor 555, each at 1:400 dilution, Santa Cruz Biotechnology (Dallas, Texas). Finally, cells were mounted using mounting media containing DAPI (300nM) for nuclear staining. Double immunofluorescence images were observed under a Zeiss LSM 880 laser confocal microscope, and the Zeiss Zen 2.3 software was used to analyze them.
To quantify the co-localization of proteins in the cell, Mander's Overlap Coefficient (MOC) was employed to analyze the data mathematically [53]. First, MOC, tM1 and tM2 values were gathered and averaged (n>5) for each positively transfected and immunostained cell to determine co-localization. Then, the co-localized puncta were selected and analyzed using the Coloc2 algorithm in Fiji-ImageJ software with Costes' threshold regression. Only co-occurrence of fluorescence in pixels above the threshold was reported, to avoid background noise [53]. MOC ranges between 0 and 1; the greater the value, the stronger the evidence of co-localization.
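For readers who prefer to see the arithmetic behind the reported coefficients, the following NumPy sketch computes Mander's overlap coefficient and the thresholded Manders coefficients tM1/tM2 from two aligned single-channel images. It is only an illustration: the Coloc2 plugin used in the study additionally estimates the thresholds by Costes' regression, whereas here the thresholds are simply passed in, and all function names are ours.

```python
import numpy as np

def manders_overlap(ch1, ch2):
    """Mander's overlap coefficient between two channels (values in [0, 1])."""
    ch1 = ch1.astype(float).ravel()
    ch2 = ch2.astype(float).ravel()
    return np.sum(ch1 * ch2) / np.sqrt(np.sum(ch1 ** 2) * np.sum(ch2 ** 2))

def thresholded_manders(ch1, ch2, thr1, thr2):
    """Thresholded Manders coefficients; thresholds supplied by the caller."""
    ch1 = ch1.astype(float).ravel()
    ch2 = ch2.astype(float).ravel()
    tm1 = ch1[ch2 > thr2].sum() / ch1.sum()   # fraction of ch1 signal overlapping ch2
    tm2 = ch2[ch1 > thr1].sum() / ch2.sum()   # fraction of ch2 signal overlapping ch1
    return tm1, tm2
```

As in the text, values near 1 indicate substantial co-occurrence of the two signals, while values near 0 indicate little overlap.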
Isolation of extracellular vesicles-exosomes
Exosomes were isolated from MCF-7 cells by differential centrifugation as described by Théry et al. 2006 [54], with modifications. Briefly, conditioned media from treated or untreated cells were pooled and centrifuged at 3,000g for 20min, followed by centrifugation at 10,000g for 35min at 4˚C to remove any cell debris. The clear supernatant was then ultra-centrifuged at 100,000g for 4h at 4˚C using fixed-angle rotors (F50L-8 x 39). Finally, the pellet containing exosomes was washed with PBS by ultra-centrifugation again for 2h at 4˚C to remove any contaminating proteins. The pellet (exosomes) formed was re-dispersed in PBS for electron microscopy and western blot (WB) analysis. For enzymatic analysis, an enzyme assay buffer was used to resuspend the pellet (Fig 1).
Brefeldin A treatment
To study the effect of BFA on exosomes and exosomes-derived Minpp1 isoform-2, we isolated exosomes from BFA-treated MCF-7 breast cancer cells as described by McCready et al. [55]. Briefly, cells were treated overnight with 10 μg/mL BFA, Sigma Aldrich (St. Louis, MO) or vehicle as control. Conditioned media was collected and pooled for sequential centrifugation as described above for the isolation of exosomes. Finally, isolated exosomes were resuspended in PBS for further analysis.
Western blotting
Samples for western blot analysis were prepared as described by Kilaparty [56]. Briefly, trypsinized cells were PBS washed and suspended in 1x RIPA buffer, EMD Millipore (Burlington, MA) containing Halt protease inhibitor cocktail, Thermo Fisher Scientific (Waltham, MA) and lysed for 30 min on ice. Then, cell lysates were centrifuged at 11,000g for 30min, 4˚C. Finally, the pellets were discarded, and the supernatants were used immediately or stored frozen at -80˚C till further use. Protein quantification was done using Bradford colorimetric assay [57] using BSA, Sigma Aldrich (St. Louis, MO), as a standard.
SDS-PAGE and western blotting (WB) were performed as described by Agarwal et al. [58]. Briefly, aliquots of 35-40μg of cell lysate protein or 2-5μg of exosome protein were boiled for 5min with 1x Laemmli buffer, Bio-Rad Laboratories (Hercules, CA) containing 10% beta-mercaptoethanol. Proteins were then separated by 12% SDS-PAGE. Resolved proteins were electrophoretically transferred onto a nitrocellulose membrane, Bio-Rad Laboratories (Hercules, CA). The membranes were blocked for 45 min at RT with 5% non-fat dry milk prepared in Tris-Buffered Saline (20mM Tris-HCl and 150mM NaCl, pH 7.4) containing 0.01% Tween-20 (TBST). The membranes were probed with primary antibodies (mouse anti-CD63 mAb (H5C6-DSHB) (Iowa City, IA), rabbit anti-minpp1 polyclonal antibody, Fabgennix Inc. (Frisco, Texas), mouse anti-GFP tagged mAb, Protein Tech Group (Rosemont, IL), and mouse anti-β-Actin antibody, Santa Cruz Biotechnology (Dallas, TX)), each diluted at 1:1000 in blocking buffer overnight at 4˚C. The membranes were washed three times with TBST. The binding of primary antibodies was detected by incubation with appropriately diluted horseradish peroxidase (HRP)-conjugated secondary antibodies for 1h at RT in blocking solution. The blots were washed three times with TBST and developed with SuperSignal West Pico PLUS chemiluminescent reagent, Thermo Fisher Scientific (Waltham, MA), followed by autoradiography.
Perturbation of vesicle integrity
Microsomes and exosomes are known to have a sturdier lipid bilayer than their parent cell's lipid bilayer. Therefore, the perturbation of vesicles was performed as described by Ali et al. [19]. CHAPS (3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate), a non-denaturing mild detergent, can exclude any interference without much compromising the functional activity or integrity of enzymes. For exosomes, CHAPS has a minimal effect on their morphology or biomarker distribution [32]. In this study, we used Minpp1 enzymatic assay buffer (50mM Bis-Tris, pH 6.1, 100mM KCl, 1mM EDTA, 0.5mM EGTA, 0.05% (w/v) BSA and 3mM CHAPS) as described by Ali et al. [19] to open up microsomal vesicles and fully express luminal Minpp1 enzymatic activity. The Minpp1 enzymatic buffer used to perturb exosomes was similar to that for microsomal vesicles, except that CHAPS was increased to 16mM [32].
Minpp1 enzyme assay
By enzyme-linked immunosorbent assay. The Minpp1 enzyme utilizes several inositol phosphates, including Ins(1,3,4,5)P 4 and Ins(1,2,3,4,5,6)P 6 , as substrates and removes the 3-phosphate group from the inositol ring. Samples for Minpp1 assays were prepared as described previously [19] with certain modifications. Briefly, exosomes and microsomes were resuspended in Minpp1 enzymatic assay buffer and incubated with the substrate Ins(1,3,4,5)P 4 . Samples were then subjected to Ins(1,4,5)P 3 detection using an ELISA kit, My BioSource (San Diego, CA), following the manufacturer's instructions. Briefly, enzyme preparations (exosome and microsome samples) were pre-incubated with 5μM Ins(1,3,4,5)P 4 for 24h at 37˚C in 50μL enzyme assay buffer. Standards and samples were then subjected to an ELISA microtiter plate already coated with goat anti-rabbit antibody at RT. Antibody specific for Ins(1,4,5)P 3 and HRP-conjugated Ins(1,4,5)P 3 were incubated along with samples for 1h at 37˚C. After washing properly with wash buffer, an equal ratio of substrate A and substrate B was added for 1h at 37˚C in the dark. The reaction was quenched later using the stop solution provided with the kit. Optical density was measured at 450nm within 10min of the reaction using a microplate reader. The amount of Ins(1,4,5)P 3 as a product was determined using a standard curve constructed with known amounts of Ins(1,4,5)P 3 and run simultaneously.
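As an illustration of the standard-curve step described above, the sketch below fits a log-log line to hypothetical OD450 readings of Ins(1,4,5)P3 standards and inverts it to estimate sample concentrations. The standard concentrations and OD values are invented for the example and are not data from the study.

```python
import numpy as np

# Illustrative (made-up) standards; in a competitive ELISA the OD falls as InsP3 rises.
std_conc = np.array([0.5, 1, 2, 5, 10, 20])               # assumed units, e.g. ng/mL
std_od   = np.array([1.9, 1.6, 1.25, 0.85, 0.55, 0.35])   # OD450 readings

# Log-log linear fit (as in the logarithmic standard curve described in S1 Fig).
slope, intercept = np.polyfit(np.log10(std_conc), np.log10(std_od), 1)

def od_to_conc(od):
    """Invert the fitted log-log curve to estimate InsP3 concentration."""
    return 10 ** ((np.log10(od) - intercept) / slope)

print(od_to_conc(np.array([1.1, 0.7])))   # estimated InsP3 in two hypothetical samples
```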
By polyacrylamide gel electrophoresis of InsP 6 . Minpp1's enzymatic activity in exosomes was further assessed using another substrate -Ins(1,2,3,4,5,6)P 6 (InsP 6 )-as described by Wilson and colleagues [59]. Briefly, exosomes re-dispersed in Minpp1 enzymatic assay buffer were incubated with 4.0 nmol InsP 6 for 24h at 37˚C, followed by extraction and analysis of the products using Poly-Acrylamide Gel Electrophoresis (PAGE). Briefly, 33% polyacrylamide/TBE gels were used to resolve inositol polyphosphates present in the samples. The stacking gel had the composition of 0.2mL of 80% acrylamide/bisacrylamide (19:1), 0.2mL of TBE (10x), 10μL of 10% APS, 3μL of TEMED and dH 2 O to a total volume of 2mL. The mounted gel was pre-run at 100V/10mA for 20min. About 30μL of InsP 6 standards (1-16nmol of InsP 6 (dipotassium salt from Sigma Aldrich (St. Louis, MO)) and exosome samples were mixed with TriTrack DNA Loading Dye (6x), Thermo Fisher Scientific (Waltham, MA) before loading onto the mounted gel. The gel was run at 100V/10mA until the orange dye had migrated two-thirds of the gel length. The resolved gel was stained at RT with toluidine blue staining solution (20% methanol, 2% glycerol, and 0.05% toluidine blue) until the bands appeared. The gel image was later captured, and band density was determined using the ImageJ software to semi-quantitate the bands.
Nanoparticle tracking analysis
Exosomes' particle size was determined by Nanoparticle Tracking Analysis (NTA) using Zeta-View PMX120. Briefly, the purified exosomes were diluted at 1:50-1:5000 in PBS and subjected to NTA (ZetaView PMX120). Then, the particle numbers and size were graphed for the average size of the isolated exosomes.
Transmission electron microscopy
Transmission Electron Microscopy (TEM) was performed to observe the size and heterogeneity of exosomes. Briefly, 5μL aliquots of treated and untreated exosomes were applied to a 300 mesh Formvar/carbon-coated grid, Electron Microscopy Sciences (Hatfield, PA) and left for 5min at RT. After that, the excess solution was removed by filter paper, and the samples were left to dry. The samples were later negatively stained with 1% uranyl acetate (UA) and observed under the FEI Tecnai F20 electron microscope. Two independent experiments were performed, and several photographs were taken for each experimental condition.
Statistical analysis
The qPCR data were statistically analyzed using One-Way ANOVA (Analysis of Variance) with post-hoc Tukey HSD (Honestly significant difference) using SAS software. All experiments were repeated at least three times independently, and each sample was run in triplicate.
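The analysis itself was run in SAS; the following Python sketch (SciPy/statsmodels) reproduces the same workflow of one-way ANOVA followed by Tukey's HSD on made-up relative-expression values, purely to illustrate the procedure.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented relative-expression values for three illustrative groups (triplicates).
control  = np.array([1.00, 0.95, 1.08])
stress_a = np.array([1.90, 2.10, 2.05])
stress_b = np.array([1.45, 1.60, 1.52])

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(control, stress_a, stress_b)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc Tukey HSD for all pairwise comparisons.
values = np.concatenate([control, stress_a, stress_b])
groups = ["control"] * 3 + ["stress_a"] * 3 + ["stress_b"] * 3
print(pairwise_tukeyhsd(values, groups))
```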
Bioinformatics' alignment of hMinpp1 variants and isoforms
Sequence analysis. To study heterogeneity in Minpp1 protein function and relate it to physiological significance, nucleotide sequences available in databases were first analyzed to identify the spliced hminpp1 variants and their transcriptional products. To find related sequences across taxonomic species, BLASTN was initially performed on the full-length parent sequence: hminpp1 variant-1, NCBI RefSeq: NM_004897.5, 1464 bp. In total, two more unique variants were found in humans with a percent identity close to 100% (Fig 2).
This is presumably due to alternative splicing of hminpp1 variant-1: a) truncation of two exons yields variant-2 (NCBI RefSeq: NM_001178117.2), which has 104 unique base pairs out of a total of 939 bp, and b) truncation of an additional exon yields the 861 bp hminpp1 variant-3 (NCBI RefSeq: NM_001178118.2), the shortest variant amongst them, with an unaligned 34 bp 5' region.
A discrepancy has been observed between the databases -UniProt and NCBI-regarding the number of hMinpp1 isoforms reported. Previously, this lab reported four hMinpp1 isoforms based on the NCBI and UniProt databases. However, with renewed interest in analyzing current hMinpp1 sequences using the updated NCBI database, the presence of only three hMinpp1 isoforms could be affirmed. As opposed to the NCBI, UniProt still reports one extra Minpp1 variant/protein. Therefore, as a foundation for imminent research, this investigation is limited to the NCBI database.
This lab's previously published work reported various motifs in the parent protein, hMinpp1 isoform-1 (NP_004888.2) [22]. Therefore, upon aligning hMinpp1 sequences of all three isoforms, this study analyzed whether the motifs are shared in all three hMinpp1 isoforms ( Fig 3A).
Interestingly, we found only one hMinpp1 isoform-1 signature motif common to all three isoforms -the N-glycosylation motif (NATA, AAs 242-245). However, the bioactivity of the N-glycosylation motif has not yet been analyzed. Therefore, since the primary focus of this study is on hMinpp1 isoform-2, all the prospective motifs/domains found in hMinpp1 isoform-2 (NP_001171588.1) are summarized in Fig 3C.
hMinpp1 isoforms alignment
In Minpp1 isoform-2, the phosphoglyceromutase acid phosphatase (PGAM) domain spans AAs 74-207. Within the PGAM domain, the N-Myristoylation and Acid Phosphatase A (AP-A) motifs were shared between hMinpp1 isoform-1 and -2 but not isoform-3. The AP-A motif (AAs 88-94; RHGTRYP) is known to be highly conserved across species and is well known for dephosphorylating inositol polyphosphates (InsPs) [25]. The presence of the myristoyl group increases protein-lipid interaction, altering subcellular localization [41][42][43][44][61]. Therefore, the absence of an ER-retention signal (KDEL) and the presence of the N-Myristoylation motif in hMinpp1 isoform-2 imply its extra-ER presence. As an alternative approach, InterProScan, an analytical tool that predicts protein domains and motifs by scanning multiple databases [60], was used to examine the hMinpp1 isoform-2 amino acid sequence. This annotated a non-cytoplasmic domain (Fig 3B) with different motifs.
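A simple way to reproduce this kind of motif scan is a pattern search over the protein sequence, as sketched below for the conserved AP-A heptapeptide quoted above (RHGTRYP) and a PROSITE-style N-glycosylation pattern (N-{P}-[ST]-{P}). The short sequence in the example is a placeholder; an actual scan would use the NP_001171588.1 (hMinpp1 isoform-2) record from NCBI.

```python
import re

AP_A_MOTIF = "RHGTRYP"                      # conserved acid phosphatase A heptapeptide
N_GLYC = re.compile(r"N[^P][ST][^P]")        # PROSITE-style N-glycosylation consensus

def scan_motifs(seq):
    """Return 1-based start positions of the AP-A motif and N-glycosylation sites."""
    return {
        "AP-A": [m.start() + 1 for m in re.finditer(AP_A_MOTIF, seq)],
        "N-glycosylation": [m.start() + 1 for m in N_GLYC.finditer(seq)],
    }

example = "MGLQRHGTRYPAAANATAKL"   # placeholder sequence, not the real Minpp1 sequence
print(scan_motifs(example))        # {'AP-A': [5], 'N-glycosylation': [15]}
```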
A rooted phylogenetic tree was constructed to study the distribution of hMinpp1 isoform-2 related proteins across taxonomic species and establish its evolutionary relatedness and divergence. The cladogram (Fig 4A) reflects hMinpp1 isoform-2 related proteins across taxa. An evolutionary pattern reveals the conservation of hMinpp1 isoform-2 protein across species such as bats, whales, dolphins, even/odd-toed ungulates, rabbits and hares, primates, rodents, carnivores, placental, etc., with sequence percent identity between 25.79% to 100%. The NCBI refseq_protein records were used to extract the sequences and construct a distance-based dendrogram, hence the reliability and reproducibility of the tree. Phylogenetic analysis suggests a close relation between hMinpp1 isoform-1 and isoform-2 (Fig 4).
ER stress affects the relative expression of hminpp1 variants
Earlier work from this lab suggested that the Minpp1 protein is an ER stress responder, the expression of which was enhanced under stress conditions [56]. However, whether the stress-induced expression is limited to hMinpp1 isoform-1 or also holds for isoform-2 is unknown. Due to the unavailability of hMinpp1 isoform-specific antibodies, this study examined the relative transcript abundance of hminpp1 variants-1 and -2 in MCF-7 breast cancer cells and non-cancer MCF-10A cells under different stress conditions using real-time PCR with gene-specific primers. It was found that the relative transcript abundance of hminpp1 variant-1 was considerably higher than that of hminpp1 variant-2 in both cell lines (Fig 5A-5C). Inducing cellular stress increased the relative expression of each variant to a similar extent, suggesting both hminpp1 variant-1 and -2 genes respond to cell stress in a similar proportion (Fig 5D-5F). These results suggest that not only hMinpp1 isoform-1 but also isoform-2 plays a role in cellular stress responses.
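The excerpt does not spell out the quantification model behind the relative transcript abundances; the sketch below assumes the widely used 2^-ΔΔCt approach with a housekeeping reference gene, and the Ct values are invented solely to show the calculation.

```python
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of a target transcript versus an untreated calibrator sample (2^-ddCt)."""
    d_ct_sample = ct_target - ct_reference            # normalize to the reference gene
    d_ct_calibrator = ct_target_cal - ct_reference_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values for a variant under a stress treatment vs. untreated control.
print(relative_expression(ct_target=27.1, ct_reference=17.0,
                          ct_target_cal=29.3, ct_reference_cal=17.2))   # ~4-fold induction
```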
Minpp1 isoform-2 is localized in extra-ER vesicles
Since Minpp1 isoform-2 lacks an ER retention signal and is expressed at a relatively low level, GFP-tagged Minpp1 isoform-2 was transiently overexpressed using the expression plasmid to examine its subcellular localization.
(Fig 3 caption, continued: B) The InterPro protein viewer for Minpp1 isoform-2 (NP_001171588.1); the underlying tool, InterProScan, annotates protein domains and motifs using predictive models provided by multiple databases [60]. C) Summary of the prospective motifs in hMinpp1 isoform-2 based on its sequence alignment with hMinpp1 isoform-1.)
Minpp1 isoform-2 is localized in exosomes
Sorting into MVBs could deliver proteins to both lysosomes and the extracellular space. Next, the prospect of extracellular secretion of Minpp1 isoform-2 was examined by identifying its presence in exosomes. Exosomes were isolated from the serum-free conditioned media of MCF-7 cells maintained for 24h and characterized using western blotting, Transmission Electron Microscopy (TEM), and Nanoparticle Tracking Analysis (NTA). TEM images of isolated exosomes exhibited a cup-shaped double membrane morphology (Fig 8A) with an average size of 120nm (Fig 8C), typical for exosomes.
The western blot analysis exhibited enrichment of CD63, a known biomarker for exosomes, in the isolated fractions (Fig 9B), further confirming their identity. Note that there is a greater degree of fluctuation in CD63 band densities in total cell lysates and exosomes. This is perhaps due to the difference in the amounts of proteins loaded in cell lysates (40μg) and exosomes (7 μg). Immunostaining of the blot with anti-Minpp1 polyclonal antibody confirmed the presence of Minpp1 isoform-2 in exosomes ( Fig 9B). Note that this antibody shares the common epitopes in Minpp1 isoform-1 (55kDa) and isoform-2 (34kDa) and thus stains both.
A cell produces/releases multiple types/sizes of vesicles as EVs [62]. Therefore, it was investigated whether an interruption of the vesicular trafficking pathway would affect exosome secretion and incorporation of Minpp1 isoform-2 in it. A known pharmacological protein trafficking inhibitor, brefeldin A (BFA) [63], was used to analyze the extra-ER secretion of Minpp1 isoform-2 into exosomes. Initially, the difference in size and morphology in the exosomes isolated after BFA treatment were examined (Fig 8). Then the packaging of Minpp1 isoform-2 in BFA-induced exosomes was assessed.
This study found an uninterrupted secretion of exosomes in the presence of BFA. However, a noticeable reduction in the size of exosomes was observed (Fig 8D). Briefly, exosomes derived from untreated cells displayed an average size of 120nm compared to 95nm for exosomes isolated from BFA-treated cells (Fig 8C & 8D). In contrast, no significant morphological difference was recorded between exosomes isolated from treated or untreated cells (Fig 8A & 8B). Both sources of exosomes exhibited a typical double-membrane cup-shaped morphology.
(Fig 8 caption, continued: C) Nanoparticle tracking analysis of isolated exosomes; the distribution peaks around ~120 nm, following exosome enrichment during preparation. D) Uninterrupted secretion of smaller exosomes, with a distribution peak around ~95 nm, in the presence of the BFA inhibitor (10μg/mL, 24h). Samples were negatively stained with 1% uranyl acetate for 15sec at RT. https://doi.org/10.1371/journal.pone.0264451.g008)
ER-stress increases the secretion of Minpp1 isoform-2 in exosomes
Next, it was determined whether the presence of Minpp1 isoform-2 and exosome's primary biomarker CD63 are altered under BFA-induced cellular stress. BFA is known to inhibit intracellular vesicular trafficking and cause cellular stress (Fig 9A).
The data revealed an irrepressible extracellular secretion of Minpp1 ( Fig 9B) and CD63 in the isolated exosomal fraction. It can be speculated that cancer cells envelop and release more of the Minpp1 isoform-2 enzyme into extracellular space during cellular stress conditions. A semi-quantification of the western blot bands accounted for more than a two-fold increase in the expression of Minpp1 isoform-2 in the BFA-treated exosomes compared to control exosomes (Fig 9C). Note that the densities of actin band in total cell lysates and exosome samples vary drastically due to differences in the amounts of proteins loaded in cell lysate (40 ug) and exosomes (7ug). However, it is comparable between control and BFA-treated exosomes.
Minpp1 isoform-2 in exosomes is enzymatically active
Next, it was investigated whether the Minpp1 isoform-2 present in exosomes is enzymatically active; Minpp1 isoform-2, like isoform-1, is proposed to carry an AP-A motif (see Fig 3C).
(Fig 9 caption, continued: A) ... -BFA. B) Western blot analysis of exosomes (Exo) isolated from the conditioned media of BFA-treated (10μg/mL, 24h) and untreated MCF-7 cells, pre-enriched by sequential centrifugation. An equal amount of protein (7μg) from both control and BFA-treated Exo was loaded. Probing was done with an anti-Minpp1 antibody (Fabgennix Inc.) that also binds Minpp1 isoform-2, an antibody against the Exo biomarker CD63 (Santa Cruz Biotech Inc.), and a β-Actin antibody (Santa Cruz Biotech Inc.), each at 1:1000 dilution in blocking buffer; CL, cell lysate (40μg protein). The blots shown are representative of three independent experiments (n = 3). C) A comparative analysis of Minpp1's expression (percentage) between BFA-treated and untreated Exo samples. A more than 3-fold increase in Minpp1 isoform-2 secretion in EVs from BFA-treated cells was observed.)
In a competitive inhibition ELISA assay, Ins(1,3,4,5)P 4 (InsP 4 ) was used as a substrate which is known to be dephosphorylated to Ins(1,4,5)P 3 (InsP 3 ) by Minpp1. The product InsP 3 was then detected by ELISA kit colorimetrically employing specific antibodies to InsP 3 . We found about a 60% reduction in the ratio of InsP 4 /InsP 3 concentration in exosomal fraction and microsomes (resuspended in Minpp1 enzymatic buffer) compared to control (spiked with InsP 4 ) (Fig 10A). Microsomes were used as a positive control known to carry the Minpp1 isoform-1 enzyme.
The enzymatic activity of Minpp1 isoform-2 in exosomes was also determined qualitatively by hydrolysis of Ins(1,2,3,4,5,6)P 6 as a substrate because Minpp1 is known to hydrolyze multiple InsPs including InsP 6 . Following incubation of InsP 6 (4nmol) with exosomes, the metabolic products were separated on PAGE to analyze InsP 6 dephosphorylation. No InsP 6 band was found in the presence of exosomes, while toluidine staining qualitatively detected InsP 6 standards ran in parallel with other controls (S1 Fig). In conclusion, InsP 4 and InsP 6 dephosphorylation by exosomes confirm the bioactivity of exosomes-enveloped Minpp1 isoform-2 enzyme.
Discussion
This study examined the heterogeneity in Minpp1 and demonstrated an extra-ER Minpp1 isoform-2 secreted in exosomes. Computational studies previously published from this lab predicted four different isoforms of Minpp1 [22] in humans. However, this study re-evaluates existing databases and consolidates to only three variants of the hminpp1 gene (NM_004897.5) translated into three different isoforms (Fig 3). We found that the difference was due to an obsolete NCBI RefSeq: XM_017016966.1 (hminpp1 variant-4), which no longer retains its "validated" status in the database. Irrespective of the number of isoforms, the exact mechanism behind isoform multiplicity, however, remains unclear. It could be due to some unknown evolutionary mechanism that increases the Minpp1 (NP_004888.2) proteome diversity, linking different motifs or domains into the isoforms (Fig 3C), thus altering their cellular localization and function.
Fig 10. Enzymatic analysis of exosome-based Minpp1 isoform-2. A). Competitive inhibition enzyme immunoassay against exosomes pre-enriched by ultra-centrifugation. The ELISA plate was pre-coated with goat anti-rabbit antibody. Samples were subjected to the plate along with antibodies specific to InsP 3 and Horseradish Peroxidase (HRP)-conjugated InsP 3 . The competitive inhibition reaction takes place between HRP-labeled InsP 3 and unlabeled InsP 3 with the antibody. On adding HRP substrate solution, color develops reciprocally to the amount of InsP 3 in the sample. Samples were spiked with 5μM InsP 4 for 24h at RT. Exosomes (~1.0μg) were resuspended in Minpp1 assay buffer with 16mM CHAPS. Microsomes (~1.0μg) were resuspended in Minpp1 assay buffer with 3mM CHAPS. The bar graph shows the percentage of InsP 4 dephosphorylation as a qualitative measure of Minpp1 isoform-2 enzyme activity compared to control (InsP 4 + Minpp1 assay buffer). B). Hydrolysis of InsP 6 (4 nmol) with exosomes as an indication of Minpp1 isoform-2 enzyme activity. Freshly collected serum-free conditioned media was differentially ultra-centrifuged to isolate exosomes. The isolated exosomes were resuspended in Minpp1 enzymatic assay buffer and later incubated overnight with InsP 6 (4 nmol) at room temperature. After incubation, metabolized InsPs were resolved by PAGE and visualized with toluidine blue staining.
https://doi.org/10.1371/journal.pone.0264451.g010
Among the noted motifs and domains, hMinpp1 isoform-2 was found to be the only other member of the Minpp1 family retaining the evolutionarily conserved acid phosphatase (AP-A) motif known to hydrolyze InsPs, while lacking an ER-retention signal (KDEL). Minpp1 isoform-2 was also found to have N-Myristoylation and N-Glycosylation sites in its linear amino acid sequence (Fig 3C). However, the functionality of any motif/domain of the Minpp1 isoform-2 protein has not yet been biochemically determined. Bearing that in mind, it is predicted that Minpp1 isoform-2 could escape the ER by anchoring to the lipid membrane via its N-Myristoylation site, possibly into an extra-ER compartment. The non-cytoplasmic domain found in the "InterProScan" analysis of the Minpp1 isoform-2 sequence additionally supports this prediction (Fig 3B).
The BLASTP phylogenetic tree constructed from the hMinpp1 isoform-2 amino acid sequence showed that the related proteins are widely spread from eudicots (flowering plants), bony fishes, rodents and avians to humans (Fig 4A). Such a protein sequence identity-based phylogenetic tree could identify functional similarities [64]. However, considering how nature exploits a protein's structure based on its environment, examining the BLASTP phylogenetic tree alone to judge the functional transferability of enzymes [65] will not be sufficient. An advanced computational approach targeting functional residues or a protein's structural/surface properties could uncover more about Minpp1 isoform multiplicity, functional constraints, and protein divergence across taxa.
The bioinformatics approach is limited in answering whether these isoforms are functional and whether, when expressed, they contribute a new function to the hMinpp1 proteome or play a regulatory role. Therefore, an experimental approach is imperative. This study now presents data indicating that hminpp1 variant-2 is expressed in MCF-7 cancer cells (as detected by real-time PCR), albeit at expression levels that are insignificant compared to hminpp1 variant-1. However, under cellular stress conditions, a significant increase in the relative expression of the normally almost dormant hminpp1 variant-2 was observed. The low-level expression might represent a defense mechanism whereby a cell responds to stress by inducing specific genes essential for stabilizing the stress environment. The hminpp1 variant-2 could thus follow the hminpp1 variant-1 convention as a potential cell-stress responder [56].
Lately, several researchers have reported the presence of the Minpp1 enzyme in extra-ER compartments [32- 36,51]. However, it is unclear which isoform of Minpp1 protein it is. Minpp1's presence outside ER is beyond the paradigm of conventional protein secretion due to the ER-retention signal in isoform-1. Therefore, the proteome diversity of Minpp1 could help understand its extra-ER presence and segregation away from its cytosolic physiological substrate, InsPs [19]. This study established that the two isoforms of Minpp1 partially reside together in the same vicinity (Fig 7D). Minpp1 isoform-1 was found relatively more localized around the nucleus, with ER biomarker (GRP-78) than Minpp1 isoform-2 ( Fig 7C, 7D & 7F). Minpp1 isoform-2 was more on the cell periphery, the vicinity of PM-destined MVBs (Fig 7C, 7D & 7F), and was found colocalized with MVB biomarker CD63 (Fig 7A). In the UPS pathway, MVBs can release their cargo into an intermediate endo-lysosomal compartment (Fig 11) for degradation or fuse with PM to release the contained ILVs into the extracellular space [66]. With the presence of Minpp1 isoform-2 in the lysosome (Fig 7B), one could argue that its secretion is due to the unconventional secretory (UPS) pathway [37,44,67,68]. It is further interesting to speculate that the enzyme reported in the lysosome and extracellular cell culture media in Windhorst's study [51] could well be Minpp1 isoform-2. However, the significance of Minpp1 in the hostile environment of lysosomes is still unclear.
The ER stress-related apoptotic pathways have been extensively targeted in various diseases, including neurodegenerative disorders [72], cancer, type-2 diabetes [73], intestinal inflammation [74], amyotrophic lateral sclerosis (ALS) [75], and many more. Minpp1 has been implicated in several of these disorders and processes, such as neurodegenerative disease [76,77], single nucleotide polymorphism (SNP) mutations that affect milk fatty acid (FA) traits in Chinese Holstein [78], a glycolytic bypass in Hepatitis-B virus (HBV)-positive hepatocellular carcinoma (HCC) [79], and differentiation and apoptosis [51,80]. Previous work from this group has reported a link between Minpp1 and ER stress [56]. In this study, an equivalent increment in the relative transcript abundance of the minpp1 variant-2 gene further suggests a similar functional causality with ER stress (Fig 5). Severe ER stress releases exosomes carrying damage-associated molecular patterns (DAMPs) [81]. Exosomes from tumor cells have been reported to remodel the ECM by regulating the pre-metastatic niche [82]. These DAMP-associated exosomes carry a heterogeneous group of molecules, including ATP, uric acid, smaller and larger proteins [81], and factors for self-renewal and protection [83]. There was an irrepressible secretion of relatively smaller exosomes (~95nm) during ER stress induced by BFA (Fig 8C & 8D), accompanied by a concomitant increase in the expression of the exosome-associated Minpp1 isoform-2 enzyme (Fig 9B & 9C). However, only a limited number of unconventionally secreted proteins are resilient enough to surpass the inhibitory effect of BFA, i.e., to be secreted independently of the ER/Golgi apparatus (Fig 11) [37,84]. Collectively, these findings support the hypothesis that an unconventional protein secretion pathway ushers Minpp1 isoform-2 into the extracellular space. Also, the overexpression of Minpp1 isoform-2 in ER stress-exosomes could perhaps act as a cell-stress alarmin. Moreover, since exosomes carry a parental imprint [46], the presence of Minpp1 isoform-2 in exosome-associated DAMPs could be viewed as a promising biofluid-based non-invasive early breast cancer biomarker.
The data presented in this study further show that the proposed AP-A motif is enzymatically active in the exosome-associated Minpp1 isoform-2 (Fig 10). Thus, packaging enzymatically active Minpp1 isoform-2 into exosomes could facilitate the exogenous transfer of Minpp1 isoform-2 from one cell to another or assist in ECM remodeling by protecting tumor cells against the anti-proliferative actions of any extracellular InsPs [85,86]. Moreover, Minpp1's ability to remove the 3-phosphate overlaps with that of PTEN [20]. Therefore, and as suggested for PTEN's expression in the tumor microenvironment [87], the expression of the Minpp1 isoform-2 enzyme in exosomes could imply an essential role in the evolution of the ECM and tumor cells during metastasis. It is further interesting to speculate that the Minpp1 isoform-2 enzyme in the ECM could activate a cassette of proteins in its proximity that collectively function in metastasis and evading cell death. Thus, inhibition of the Minpp1 isoform-2 enzyme could inhibit the ECM's protective potential, making Minpp1 isoform-2 an attractive target for drug therapy to restrict tumor invasion.
S1 Fig. A). InsP 3 -ELISA logarithmic standard curve. The best-fit curve was plotted with the log of the InsP 3 concentration on the x-axis vs. the log of the OD on the y-axis. Regression analysis was used to analyze the graph. B). Varying concentrations of BSA spiked with 4 nmol of InsP 6 were resolved by PAGE to examine any effect of added protein on the detection of InsP 6 . (TIF)
"Biology"
] |
Are Firms in Corporate Groups More Resilient during an Economic Crisis? Evidence from the Manufacturing Sector in Poland
Corporate groups are specific types of business networks that generate particular advantages for firms. They allow corporates to reduce costs, develop the pool of resources and increase the flexibility of operations and responses to external shocks among others. The above mentioned benefits are of even greater importance during times of economic turbulence. Their involvement in a corporate group should theoretically allow firms to perform better. The aim of this study is to verify whether corporate group membership truly translated into a firm’s higher input competitiveness and a firm’s better performance during the recent economic crisis. First, we try to investigate if the input competitiveness is higher in the case of firms being members of corporate groups. Second, we test whether the involvement in a corporate group matters for the performance of the firms. Using critical in-depth literature studies and conducting the primary empirical research using the CATI (computer-assisted telephone interviewing) method we strive to verify the following hypothesis – the higher a company’s input competitiveness during the economic crisis, the better a competitive position the company achieves. The empirical research encompasses more than 700 corporates from the manufacturing sector in Poland during the global economic crisis and shortly afterwards. To investigate the issue we use the following methods of statistical analysis – cluster analysis, non-parametric tests and correlation coefficients. The results of the study show that firms involved in both Polish and international corporate groups were more resilient during the economic crisis than those which were not.
INTRODUCTION
An economic crisis in the simplest terms is a sharp drop in economic activity that manifests itself through decreasing GDP, increasing unemployment, decreasing investment activity, turbulent financial markets and increasing factor costs. Gourinchas and Kose (2011) pointed to the fact that the financial crisis that started in 2008 led to the deepest and most synchronized global recession of the past 70 years. According to the World Development Indicators, GDP growth in Poland reached 2.63% in 2009, compared to -4.39% for the European Union (World Development Indicators, 2015). This ability to cope with the economic crisis gave Poland the name "Green Island". In the following years GDP growth in Poland reached 5.01% in 2011, before slowing to 1.3% in 2013. The comparatively strong results of the Polish economy reflect the relatively high immunity of Polish firms to economic crisis turbulence. Still, the manufacturing companies that accounted for around 10%
A corporate group "(…) is composed of corporates that are independent in legal terms, but rely on each other economically due to the control and/or ownership links between them. Within a group, some links between a dominant entity (parent) and its subsidiaries are distinguished" (CSOP, 2015, p. 18). The Central Statistical Office in Poland (CSOP) uses the term enterprise group instead of corporate group, but in the literature they are used interchangeably. According to the survey conducted annually by the CSOP, there are more than 2000 corporate groups registered in Poland. Most of them operate in manufacturing industries and in trade and repair of motor vehicles. Throughout 2009-2013 these two areas of economic activity accounted for about 46% of people employed in corporate groups. To be more specific, corporate groups can be further divided into: "(1) all-resident corporate group composed only of corporates (both group head and subsidiaries) that are all resident in the same country; (2) multinational corporate group composed of at least two corporates located in different countries; (3) truncated corporate group as a part of a multinational group, located in the same country" (CSOP, 2015).
All-resident groups constituted about 50% of all registered groups in 2009, in 2010 the share of all-resident groups in the number of all registered groups dropped to 41%, in 2011 to 36%, in 2012 and 2013 to 31%. Overall, the statistics on the corporate groups have not changed much through the analyzed period 2009-2013. They account for 0.6% of all the non-financial firms registered in Poland but employ about 28% of all employees and generate more than 50% of the sales income. The corporate groups that operate internationally (foreign controlled truncated corporate groups) mostly had their headquarters located in EU Member States. Throughout 2009 Germany was the number one location, followed by the Netherlands (2009-2011) and most recently Cyprus (2014). Outside the EU zone the United States was the main global group headquarters (CSOP, 2010, 2011, 2012, 2013, 2014, 2015). Most of the corporate groups (in the manufacturing sector) employ between 50 and 249 people (153 groups) or more than 1000 people (122 groups). In total that accounts for almost 500,000 employees. Their total assets come to PLN 202.85 billion with 58% in fixed assets and the remaining 42% in current assets.
Bearing in mind the significant position of corporate groups in Poland during the economic crisis and shortly afterwards, our aim is to verify whether members of corporate groups in Poland performed better than companies outside such groups. Using the CATI method, we conducted a survey of 695 manufacturing companies to gather information on their performance during the period 2009-2013. Afterwards, the information was supplemented by relevant financial data extracted from the Amadeus database. Detailed information on the indicators used is provided in later sections.
We start our paper by outlining the conceptual background behind corporate groups, how they are perceived as a specific type of business network, and how they use their resources and capabilities as sources of competitive advantage. We then use existing literature to formulate hypotheses related to the interdependencies between corporate group affiliation and sources of competitive advantage and separately between corporate group affiliation and firm performance. Subsequently, we present the methodology and the findings of the analysis with the use of descriptive statistics, non-parametric analysis of variance and correlation coefficients. In the final part of the paper, we discuss the findings and highlight the implications and limitations of our research.
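As a concrete illustration of the non-parametric tests and correlation coefficients referred to above, the following Python sketch compares a performance indicator across affiliation categories (Kruskal-Wallis and Mann-Whitney tests) and computes a Spearman correlation between input competitiveness and competitive position. The variable names and values are invented for the illustration; the actual survey data and the software used in the study are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical self-assessed competitive position by group affiliation (survey-style scores).
standalone   = np.array([3.1, 2.8, 3.4, 2.9, 3.0])
domestic_grp = np.array([3.6, 3.3, 3.8, 3.5, 3.4])
intl_grp     = np.array([3.9, 4.1, 3.7, 4.0, 3.8])

# Non-parametric comparison across the three categories, plus one pairwise test.
h, p_kw = stats.kruskal(standalone, domestic_grp, intl_grp)
u, p_mw = stats.mannwhitneyu(standalone, intl_grp, alternative="two-sided")
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")
print(f"Mann-Whitney (standalone vs. international group): p = {p_mw:.4f}")

# Spearman correlation between input competitiveness and competitive position (made-up data).
input_comp = np.array([2.9, 3.2, 3.8, 3.5, 4.0, 3.1, 3.7])
position   = np.array([3.0, 3.1, 3.9, 3.6, 4.2, 3.2, 3.5])
rho, p_rho = stats.spearmanr(input_comp, position)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```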
The emergence of corporate groups has played a significant role in the globalization process, and currently these groups play a vital role in the transformation of both Eastern European and Asian countries. A corporate group, known also as an enterprise group (CSOP, 2015) or a business group (Carney, 2011), consists of independent economic entities that are bound together through capital, transactional and personal ties (Romanowska, 2011). The concept of corporate groups differs across nations. In Poland, as mentioned before, the definition was formulated by the CSOP (2015). Though a significant growth in the number of corporate groups can be mainly observed in the developing countries, they are also of great significance in the developed ones. A quick look at the trade flows reveals that 75% of US trade is directly linked to corporate groups and similarly 65% of French international trade is carried out by domestic or foreign-owned corporate groups (Altomonte & Rungi, 2013). Thus, the importance of such groups should not be underestimated.
Corporate groups come into existence through mergers and acquisitions, capital outsourcing, direct investments and consolidation (Trocki, 2004). They can be viewed as a neo-institution that emerges from the network of firms filling the institutional voids (e.g., Li & Kozikode, 2008).
Corporate groups are said to be set between markets and hierarchies (Williamson, 1975, 1985). These co-dependent entities create structures that vary both in terms of organization and management. Corporate groups created through capital outsourcing (that is, capital extracted from an already existing firm that belongs to the group) tend to be homogenous throughout. Control does not have to be exerted in excess, as strong ties exist between the old and new firms in the group. Much more control is needed in the case of groups that come into existence by mergers and acquisitions (Trocki, 2004). As firms vary in cultural, organizational, social and sometimes even ethical ways, a common ground (i.e. common rules of conduct) needs to be established to create a networking platform for the companies within the group.
The degree to which control and management centralization are exerted depends upon the goals of the corporate group in question. Most of them seem intent on displaying excessive coordination of both managerial and operational activities (Romanowska, 2011). This seems to be due to the processes affecting the groups worldwide -the pursuit of internationalization and diversification.
Although it is hard to find agreement on a universal definition of a corporate group, the one that emphasizes the aspect of legally independent firms with common management prevails (e.g., Colpan & Hikino, 2010). In other words, a corporate group is a group of inter-related jointly controlled firms, consisting of a parent firm and a number of subsidiaries that can be linked to sub-subsidiaries and other equity associate firms. They are independent entities but they are characterized by coordinated activities through different ties. These ties arise from interactions that are a feature of business networks which, according to Todeva (2006), encompass not only actors and activities but different resources as well. Thus, a firm fully embedded in the network of a corporate group may hold an advantage over a stand-alone company functioning in the market. This advantage may result from capital availability, know-how and experience sharing, synergy effects, etc. At the same time, it can be argued that control costs, shareholders' individual goals and structural complexity may diminish this predominance.
The concept of a business network is based on the concept of a network in general terms -it is a structure that is formed by nodes tied to each other by particular threads. The nodes in a network are e.g. firms or other organizations and the threads are particular relationships between the actors. Ford, Gadde, Hakansson and Snehota (2011, p.182) stated that the nodes and threads are equipped with tangible and intangible resources. According to Todeva (2006, p. 15) a business network (an industrial network) is a set "of repetitive transactions based on structural and relational formations with dynamic boundaries comprising interconnected elements (actors, resources and activities). Networks accommodate the contradictory and complementary aims pursued by each member, and facilitate joint activities and repetitive exchanges that have specific directionality and flow of information, commodities, heterogeneous resources, individual affection, commitment and trust between the network members". Networks developed by a group of firms often help to promote the operations of each group member and these members can stay financially independent, while at the same time enjoying access to the resources of other members thanks to the inter-firm relationships (Gulati, 1995).
The definition of a network presented by Ford et al. (2011, p. 182), and in particular the fact that the nodes and threads are equipped with tangible and intangible resources, is useful when explaining the possible interdependencies between being a group affiliated firm and being relatively better equipped with resources and capabilities. According to the resource-based view (RBV), corporate groups as specific business networks are a type of business organizations that are bundles of idiosyncratic resources and resource conversion activities (Rumelt, 1984). Wernerfelt (1984, p. 172) described the resources of a company as anything that can be perceived as a strong or weak side of the organization, and classified them as material resources and assets, which include among others the brand, technological know-how, capabilities, commercial contracts, machines, processes, capital, etc. Corporate group affiliated companies join their strengths and weaknesses within their resources and capabilities. It has been argued by many researchers that interfirm ties contribute to the development and exploitation of competitive resources (e.g., Almeida & Kogut, 1999; Dyer & Singh, 1998; Eisenhardt & Schoonhoven, 1996; Foss & Eriksen, 1995; Gulati, 1999; Gulati, Nohria & Zaheer, 2000; Lavie, 2006; McEvily & Marcus, 2005; Shan & Kogut, 1994; Sorensen & Reve, 1998; Uzzi, 1997).
The tangible and intangible resources embedded in the nodes and threads of a network, to some extent, arise from the resources and capabilities of networked firms and simultaneously can increase the sources of competitive advantage of single firms. The interfirm ties provide the corporate group affiliated firms with access to the information, knowledge, resources and markets, and lead to a faster diffusion of knowledge. The pooling of top managerial resources within a corporate group promotes innovation and positively influences the entrepreneurial capacity required per unit of innovative decision-making (Leff, 1978;Belenzon & Berkovitz, 2010). The set of group-specific assets that can increase the resources and capabilities of group affiliated firms is the corporate group reputation among others (Duysters, Jacob, Lemmens & Jintianal, 2009). Balcet and Bruschieri (2008) point to the intra-group technology transfer and information flow and the group financial strength that contributes to the group affiliated firm's competitive advantage. Hence, we argue, that: H1: A corporate group affiliated firm has better resources and capabilities than a non-group affiliated firm.
Corporate group affiliated firms' performance
New institutional economics constitutes the basic conceptual framework on which research on corporate group performance is based. Particular theories include agency theory and transaction cost approach as well as RBV and institutional theory. However, the different theoretical approaches do not always correspond with one another in regards to corporate group affiliation and firm performance. Based mostly on transaction cost approach and institutional theory researchers found that corporate group affiliation enhances the financial performance since it allows for internalization and hence, transaction costs minimization. It is claimed that, especially in the context of developing economies, corporate groups fill in the void of poor-quality legal and regulatory institutions, limited property rights and corruption (Granovetter, 2005) in order to substitute the inefficient market with an efficient internal structure (Estrin, Poukliakova & Shapiro, 2009). On the other hand, agency theory highlights the multi-layered coordination issues that corporate groups undeniably suffer (Morck, Wolfenzon & Yeung, 2005) and that in the end may significantly impair the groups' as well as affiliates' effectiveness.
Findings of empirical research are inconclusive with regard to affiliate's performance. Khanna and Rivkin (1999) sampled 13 developing economies to see whether group affiliation has an effect on a firm's financial performance. The results found both evidence for and against the effect. Khanna and Rivkin (1999) applied econometric analysis for financial data of: Argentina, Brazil, Chile, India, Indonesia, Israel, Mexico, Peru, the Philippines, South Korea, Taiwan, Thailand and Turkey. The hypothesis proved to be right for the developed countries and for most of the developing ones (except for Mexico and Peru). Keister (1998) posed a similar question when looking at the transformation process in China. The research focused on the 1980s and proved that corporate group affiliation boosted the member's financial performance. Some scholars (e.g. Bertrand, Mehta & Mullainathan, 2002;Khanna & Yafeh, 2005) claim that the positive effect exists although it can happen at the expense of others. In case any affiliated firms face troubles in terms of their performance, other affiliated firms operate under strong pressure to bail them out. The nature of corporate groups manifests itself through sizeable flows of goods among affiliated firms and it can happen that the firms are made to purchase them from other group affiliated firms, irrespective of the quality of the goods. Performance of group affiliated firms may also be affected by different institutional context and overall economic conditions.
In Poland, manufacturing corporate groups generate around PLN 234.3 billion revenue and reach net revenue of around PLN 8.68 billion (CSOP, 2015). Their return on assets is 4.3% which is 6th place in terms of sectoral breakdown after: mining and quarrying, education, arts, entertainment and recreation, other service and electricity, gas, steam and air conditioning supply. In 2013, the return on investment was around 7.7% and return on sales 12.3%. Although the overall number of affiliated firms is relatively small, they generate 44% of the manufacturing gross profit and almost 70% of the operations revenue. Comparing the gross profit rate for entities affiliated in the corporate groups with the overall manufacturing sector, we can notice that in 2009 and 2010 they achieved about 0.2% higher rate. Similarly, they performed better in the period 2011-2013 and on average achieved about 0.1% higher rate. Thus, bearing in mind the data of the Central Statistical Office in Poland on the financial performance of manufacturing corporate groups, we will attempt to verify the second hypothesis: H2: A corporate group affiliated firm has better performance throughout a period of economic crisis and shortly after than the non-group affiliated firm.
A similar hypothesis has been posed in the pre-crisis research of George and Kabir (2012). However, the researchers do not focus on the perception of the resources and capabilities but on the portfolio diversification. They assume that portfolio diversification has a positive effect on a company's resources and capabilities and afterwards prove that group-affiliated firms performed better. Our aim is to verify such dependency in a different geographical and institutional context and to see how the overall economic situation affects the results. By doing so, we seek to observe links among resource and capabilities' perception, group affiliation and performance in Poland.
Sample and timeframe
Our empirical research aims to address the question of performance of group affiliated firms against non-group affiliated firms taking into account two restrictions. Firstly, we limit the analysis to one country only, in order to eliminate the distortion caused by institutional differences in each country. Secondly, we restrict the study to manufacturing industries only. We address the question of which firms are more immune to the economic crisis -group affiliates or the non-group affiliates. In doing so, we raise the question of whether the performance-affiliation effect exists and if it depends on the economic situation.
The study is partially based on data from the AMADEUS database and primary data from interviews with top managers of 695 manufacturing firms located in Poland and operating in 7 industries defined according to NACE Rev. 2 at the level of divisions (see Table 1). The sample was determined by a prior analysis with the use of linear ordering of objects and the results of this analysis are broadly presented in another paper (e.g., Dzikowska, Gorynia & Jankowska, 2015). It is useful to underline that the aim of the delimitation was to identify industries in which firms did relatively well during the economic crisis (division 10, 17, 25, 32) in Poland and those that had difficulties with returning to pre-crisis performance (division 14, 15, 24). Subsequently, a ranking of industries was developed. The industries included in our study encompass 44% of firms registered in Poland and operating in the manufacturing sector.
First, the authors used the data presented in the Amadeus database. Only firms with complete contact and financial records were taken into consideration. It turned out that in this proprietary electronic database there are 2533 firms with complete records representing the 7 selected industries. Of these, 750 firms were randomly contacted in July and August 2015, resulting in an effective response rate of 93%. In the study we wanted to investigate the implications of corporate group affiliation, during the crisis period and shortly after, for the sources of competitive advantage and performance of group affiliated firms. Thus we took into consideration only those entities that, within the whole period of time, were members of the same type of corporate group. In our study we distinguished between a Polish and an international corporate group. We contrasted the data for these entities with the data for firms that, within the whole period of time, stayed out of any corporate group. Since few companies migrated between the two distinguished types of corporate groups, we eventually had 695 entities included in the research sample.
Among those 695 companies, there are 43 micro, 220 small, 284 medium entities and 148 large entities. To characterize the size of the firms we used the number of employees in the crisis year 2009. The majority of corporate group affiliated firms represent division 10 (319 entities) and division 25 (226 entities) which are industries that coped relatively well with the crisis. 317 firms were not affiliated within any corporate group, 202 firms had affiliation within a Polish corporate group and 176 operated within an international corporate group.
The timeframe for the study embraces the period 2009-2013. The timeframe of five years was intentionally assumed. The first symptoms of the global economic crisis in Poland were visible in the second half of 2008; hence the year 2009 was defined as the period of the crisis. The growth of GDP in 2009 was 2.3%, down from 5.13% in 2008. In 2010, GDP growth recovered to the level of 3.88% (World Development Indicators, 2015). We assume that after 2009 we have the so called post-crisis period when the positive and negative consequences of the economic crisis emerged.
Methods, variables and operationalization
Firstly, we divided the whole sample into three groups of distinct entities: (1) firms not operating in any corporate group (FNG), (2) firms participating in Polish corporate groups (FPG) and (3) entities performing within international corporate groups (FIG). The division was necessary to verify the interdependencies between group affiliation and sources of competitive advantage and to identify the potential implications for the firm's performance at the time of the economic crisis and in the post-crisis period. For the purpose of our study we defined the Polish corporate group as a group where the parent company was located in Poland and an international group as a group where the parent company is of foreign origin. The variables used in the study are described in Table 2. We looked for possible differences in the firm's perception of sources of competitive advantage. To characterize the construct - the sources of competitive advantage within the three defined groups of entities - we used the variables explained in Table 2. Using Cronbach's alpha we checked if the broad set of variables measures the construct of the source of competitive advantage in a reliable way. Last, but not least, we conducted the Kruskal-Wallis non-parametric analysis of variance for the variables of resources and capabilities. To evaluate the firm's performance we used two types of variables - objective and subjective ones. To check if the set of variables was internally consistent and reliable and all variables measure the same construct, we calculated Cronbach's alpha (see Table 2). Then, we tried to verify if there are any statistically significant differences among the firms operating outside corporate groups (FNG) and those operating within corporate groups, with the distinction between Polish (FPG) and international corporate groups (FIG). For that purpose we used the Kruskal-Wallis non-parametric analysis of variance since, to evaluate the variables, we used an ordinal scale and there were more than two different groups of entities (FNG, FPG and FIG). A minimal computational sketch of these two steps is given after Table 2 below.
Table 2. Variables and their operationalisation (internal consistency assessed with Cronbach's alpha):
- Sources of competitive advantage in the crisis period (2009): 13 indicators on a 7-point Likert scale, where "-3" stands for "much worse than direct competitors" and "3" stands for "much better than direct competitors".
- Performance in the crisis (2009) and post-crisis period (2011 - the time of prosperity, and 2013):
  - Subjective measures: 5 variables (profitability, sales growth, market share, overall financial condition, customer satisfaction) evaluated on a 7-point Likert scale, where "-3" stands for "much worse than direct competitors" and "3" stands for "much better than direct competitors".
  - Objective measures: profit margin (EBIT/revenues), sales growth (based on company revenues, year to year), return on equity.
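To make the reliability and group-comparison steps concrete, the following minimal Python sketch illustrates how Cronbach's alpha and the Kruskal-Wallis test could be computed. The data shapes, the group labels (FNG, FPG, FIG) and the randomly generated values are purely illustrative assumptions and do not reproduce the study's dataset.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert-scale answers."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 13 competitive-advantage indicators on a -3..3 scale.
rng = np.random.default_rng(0)
answers = rng.integers(-3, 4, size=(695, 13)).astype(float)
print("Cronbach's alpha:", round(cronbach_alpha(answers), 3))

# Kruskal-Wallis test of one indicator across the three affiliation groups.
groups = rng.choice(["FNG", "FPG", "FIG"], size=695)
indicator = answers[:, 0]
h, p = stats.kruskal(*(indicator[groups == g] for g in ("FNG", "FPG", "FIG")))
print(f"H = {h:.2f}, p = {p:.4f}")
```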
Common method bias is possible as our data are to a large extent based on perceptual measures from single respondents of each firm. To decrease the risk of common method bias that could artificially inflate the observed relationships between variables (Campbell and Fiske, 1959), respondents were not aware of the hypothesised relationships shown in the study. Additionally, we included data, such as profit margin (EBIT/revenues), sales growth (based on company revenues year to year) and return on equity, based on secondary information from AMADEUS.
Sources of competitive advantage in the period of the economic crisis -corporate group affiliates against the non-group firms
We tried to investigate whether the enhanced resources and capabilities of firms operating within the corporate groups, as detailed in the literature, could explain their better performance in the crisis and post-crisis periods. At first we analyzed the sources of competitive advantage of the firms representing the three distinct groups in 2009. Then we verified whether some additional company characteristics (e.g. company size) affected the study. As no evidence was found, we have included these characteristics in the research. A closer look at the resources and capabilities of FNG, FPG and FIG allows us to state that the highest mean values were reported among FIG (Table 4). Firms affiliated within international corporate groups perceived their resources and capabilities in the crisis year 2009 better than the affiliates of Polish corporate groups and the rest of the entities. In order to verify whether the differences in the resources and capabilities of FNG, FPG and FIG are statistically significant, we used the non-parametric analysis of variance. The results presented in Table 3 include critical values and significance levels in relation to the sources of competitive advantage, where clear differences were observed in the distribution of the answers related to the evaluation of its particular elements. The obtained significance levels (p-values) for the differences in resources and capabilities of firms from the three defined types of corporate groups were lower than 0.05, thus they are statistically significant. It justifies the hypothesis that firms operating within corporate groups are better equipped to cope with unfavourable external circumstances.
Performance in the period of the economic crisis and shortly after - corporate group affiliates against the non-group affiliates
Performance of the firms was evaluated with the use of two types of variables; objective measures based on financial data retrieved from the electronic Amadeus database -profit margin (EBIT/revenues), sales growth (based on company revenues -year to year), return on equity) and; subjective measures which present the perception of profitability, sales growth, market share, overall financial condition, and customer satisfaction [as perceived by the managers who represented these companies] (see Table 2). Analysing the descriptive statistics for the objective measure we notice that the highest mean values in the crisis year 2009 are characteristic of FPG with one exception -return on equity is highest for FIG and FNG (Table 3). In 2011, a period associated with economic prosperity in Poland, the top position belonged to FIG, despite sales growth being better in the case of FPG. Two years later in 2013, the sales growth of firms was the highest in the case of FIG, but profit margin and return on equity was better for FPG. The evaluation of performance with the use of these measures didn't provide a conclusive picture. The subjective measures bring a more clear and unambiguous picture. FIGs perceived their performance as better than FNG and FPG in 2009 and in the post-crisis time in Poland (Table 5). FNG evaluated the subjective measures of performance as better in the crisis year than FPG with one exception -client satisfaction. The post-crisis time brought the relatively best position of FIG and then FPG. However, bearing in mind the scale used to evaluate the performance (see Table 2) we have to state that the evaluation is quite low since the mean values for resources and capabilities in 2011 and 2013 oscillate between 0.45 and 1.42. However, the worst results are a characteristic of the time of the economic crisis, and reveal an awareness within the firms that they did not operate very well in this period but that they were better than their competitors. Looking at the performance measures of FIG and FPG against the FNG we can notice that the first two types of entities reported better results which can be associated with their resistance to the unfavourable external conditions. In order to check whether the differences in the performance of FNG, FPG and FIG, evaluated with the subjective measures, are statistically significant we used the Kruskal-Wallis test. The results are presented in Table 6. The obtained significance levels (p-values) for the differences in the performance of firms not involved in any corporate group and firms involved in Polish or international corporate groups are below 0.05 and thus are statistically significant. It justifies the hypothesis that firms operating within corporate groups were able to cope better with unfavourable external circumstances. In hypothesis 2 we indicated that the corporate group affiliated firms enjoyed better performance than the non-affiliated firms during the economic crisis and shortly after. The hypothesized explanation for that could be resources and capabilities that form the sources of competitive advantage. To check the potential interdependencies between the sources of competitive advantage and the performance of firms, in the crisis time and shortly after, we calculated the Spearman's rank correlation coefficient for the indicators of the sources of competitive advantage and performance indicators. The results are presented in table 7, 8 and 9. 
The highest correlation coefficients for sources of competitive advantage and all performance indicators considered for FPGs, FNGs and FIGs were in 2009. The value dropped in the years 2011 and 2013 but was still significant at the level of above 0.4. The strongest correlation between the sources of competitive advantage and performance measures in the crisis year 2009 was characteristic for FIGs, the second position belonged to FNGs. RS1 - material resources, RS2 - human resources, RS3 - intangible resources (knowledge, brand, patents, etc.); RS4 - financial resources; RS5 - logistics (performance and efficiency), RS6 - production (performance and efficiency), RS7 - marketing and sales (effectiveness and efficiency), RS8 - service (effectiveness and efficiency), RS9 - supplies (performance and efficiency), RS10 - technology (advancement and efficiency), RS11 - management of human resources (efficiency and performance), RS12 - firm management systems (efficiency and effectiveness), RS13 - quality control (efficiency). P1 - profitability, P2 - sales growth, P3 - market share, P4 - overall financial condition, P5 - customer satisfaction.
Correlation coefficients in 2009 were relatively the lowest for FPG, which could indicate that in the crisis time the impact of internal factors on the performance of the Polish group affiliates may have been weakened because of unfavourable external conditions. FPGs are groups where the parent company is headquartered in Poland which means that the transmission of negative changes in the Polish economy in 2009 happened via the interactions among the siblings affiliated in the group and located in Poland and interactions between the parent company headquartered in Poland and other group affiliated firms. As far as FIGs are concerned the interactions affected by the situation in the Polish market took place just among the group affiliates operating in the Polish market. The transmission of external shocks, a characteristic of the Polish market in 2009, was possible via the interactions among group affiliates located in this market and not via the relations with the parent company operating in a different national market. The interactions with the international parent company in that year could have been the remedy against the negative impact of interactions among Polish-based affiliates. The correlation coefficients presented in tables 7, 8, 9 are statistically significant with p < 0.05.
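The correlation step described above can be illustrated with a short Python sketch that computes Spearman's rank correlation between the competitive-advantage indicators (RS1-RS13) and the subjective performance indicators (P1-P5) and masks coefficients that are not significant at p < 0.05. The sample size and the random values are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
rs = rng.integers(-3, 4, size=(176, 13))    # RS1..RS13 for one hypothetical subsample
perf = rng.integers(-3, 4, size=(176, 5))   # P1..P5 for the same firms

corr = np.zeros((13, 5))
pval = np.zeros((13, 5))
for i in range(13):
    for j in range(5):
        corr[i, j], pval[i, j] = spearmanr(rs[:, i], perf[:, j])

# Keep only coefficients significant at p < 0.05, as reported in Tables 7-9.
significant = np.where(pval < 0.05, np.round(corr, 2), np.nan)
print(significant)
```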
CONCLUSION, LIMITATIONS AND IMPLICATIONS FOR FURTHER STUDIES
Our research corroborates the findings of previous studies on the impact of group affiliation on the sources of competitive advantage and performance of firms. Based on the methodology presented before, first we show that the group affiliated entities enjoy better resources and capabilities against the non-group affiliated firms and second, thanks to being better equipped they can achieve better performance even in the crisis time and shortly after. The results of the analysis, with the use of descriptive statistics, clearly demonstrate that corporate groups can be regarded as bundles of particular resources and firms within these groups can take advantage of the resources and capabilities embedded in the inter-firm ties. The statistically significant differences within the sources of competitive advantage among the firms affiliated in the Polish groups, international corporate groups, and stand-alone firms prove that the affiliation matters for the sources of competitive advantage not only in times of prosperity but during a period of economic crisis.
Our descriptive findings reflect the assumption that the crisis time affects the firms since the assessment of resources and capabilities was rather low when we take into account the scale used in the survey. But looking for the highest ranks we can easily notice that the best "scores" go to the firms affiliated within international corporate groups. This result can be on one hand surprising and on the other hand not. Bearing in mind the fact that the economic crisis was first noticeable in 2008, not in Poland but in other countries that, in many cases, were the location of headquarters of parent firms in international groups, the result is surprising. However, contradictory reasoning can be that the pool of resources and capabilities in the case of international group affiliated firms was greater shortly before the crisis, which is why even in 2009 their sources of competitive advantage were better evaluated.
Better resources and capabilities, which are characteristic of Polish and especially international group affiliated entities against stand-alone firms, translate into better assessment of performance measures of firms involved in both types of corporate groups. In this context we can state that both hypotheses were confirmed. And in particular, FIG's greater resilience to the crisis, which is proved by their relatively better performance, can be explained by their access to external and intra-group resources and capabilities. It may have given these firms the support needed for their relatively higher profitability, sales growth, market share, overall financial situation and perceived client satisfaction. All in all, our findings suggest that Polish firms affiliated in international corporate groups were more resilient to the economic crisis despite the commonly accepted thesis that international corporate groups may have acted as synchronization factors among crisis phenomena across different national markets.
The study provides some practical implications related to the justification for the existence of corporate groups in general and during the period of economic crisis in particular. The international group affiliated firms excel in all dimensions of their performance in the crisis and in the post-crisis period. The Polish group affiliated firms faced the worst performance (subjective measures) in the crisis period 2009 compared to the non-group affiliated firms with just one exception -client satisfaction. It is not surprising since the unfavourable external settings put stronger pressure on firms headquartered in Poland and operating under the supervision of a parent entity headquartered in Poland. External shocks usually increase the coordination cost that emerges to some extent among networked organizations. The growth of coordination costs can be linked to the fact that the negative external circumstances influence firms directly and indirectly. The direct impact emerges thanks to the firm and external environment entities interactions. The additional indirect impact is related to the networked firms that can absorb and experience the crisis through their relationships with other firms. In this context the relationships can work as a kind of pipeline of external shocks. The impact of negative changes in the external environment can be strengthened by the networked realm in which corporate group affiliates operate.
The findings are in line with the need to develop and broaden the research on embedded competencies. In the discussion focused on standalone firms the concept of core competencies is used, and scholars underline that even firms with similar resources and capabilities can differ in terms of their competitive advantage. We can explain it by referring to the concept of competencies. Competencies are related to the coordination and exploitation of resources and capabilities. Firms differ in terms of their coordination abilities. According to Eriksen and Mikkelsen (1996), we can define competencies as the "organizational capital" that supports a firm's integration of resources into "idiosyncratic value propositions". The coordination directly linked to competencies and the organizational capital is of even greater importance in the case of group affiliated firms that operate within different social and economic ties. The affiliated entities get the chance to increase their competitive advantage thanks to the competencies embedded in the network of a corporate group. The access to the embedded competencies is determined by the relations of particular firms and the structure of the whole network of relations. Thanks to the relations within corporate groups the interactions among group affiliates are not anonymous and the firms can create trust-based relationships (Uzzi, 1999) which further facilitate information transfer and even collaborative attitudes of group affiliates. That can all contribute first to the sources of competitive advantage and second to the performance of the group affiliated firm. Hence we argue that the methodological contribution of this paper is the manifestation of the significance of the concept of embedded competencies. This approach to the upgrading of competitive advantage calls for more conceptual and empirical studies.
Our research is subject to several limitations. Firstly, the theoretical background provides a rather blurry hypothesis on the possible better performance of the firm that is a corporate group affiliate in comparison to the non-affiliated firms. Therefore, we made a simple division in group affiliates (FPGs and FIGs) and non-group affiliates (FNGs). But perhaps a more detailed distinction (e.g. the one suggested by the CSOP and indicated in the introduction to the paper) will help to obtain more transparent and even more unequivocal findings. Secondly, although the study does refer to the manufacturing sector, which is the biggest in terms of the number of companies, at the same time it does neglect all the other industries. It is possible to broaden the scope of our analysis and verify how performance is related to corporate group affiliation in other industries. Additionally it would be useful to conduct the analysis within particular industries to take into account their idiosyncrasies. This would allow for a more detailed insight into the matter.
The limitations of the study suggest that there is a possibility to conduct a more in-depth analysis that would, however, require a much broader scope of information. It is also possible to enrich the studies in a cross-country analysis. It would be worth observing, how the relation between the group affiliation and performance evolved in other countries. | 8,810.2 | 2016-01-01T00:00:00.000 | [
"Economics",
"Business"
] |
Nonperturbative definition of the pole mass and short distance expansion of the heavy quark potential in QCD
We show that the O(Lambda) ambiguity in the pole mass can be fixed in a natural way by introducing a modified nonperturbative V-scheme momentum space coupling tilde-alphaV(q) where the confining contributions have been subtracted out. The method used is in the spirit of the infrared finite coupling approach to power corrections, and gives a nonperturbative definition of the 'potential subtracted' mass. The short distance expansion of the static potential is derived, taking into account a hypothetical short distance linear term. The magnitude of the standard OPE contributions is estimated in quenched QCD, based on results of Lüscher and Weisz. It is observed that the expansion is not yet reliable at the shortest distances presently measured on the lattice.
Introduction
Historically, the pole mass M and the heavy quark potential V(r) were among the first quantities where renormalons [1] have been discussed in a physical context in QCD. Later, the connection of the O(Λ) ambiguity in the pole mass [2,3] with a corresponding ambiguity in the coordinate space potential [4] was pointed out. It was observed [5,6] that the leading renormalon contribution cancels in the total static energy E_static = 2M + V(r), a physical quantity which should be free of ambiguities. This cancellation is a non-trivial finding. Indeed, one might have expected that the pole mass and the static potential should be separately well defined: for instance, in the Schrödinger equation, the quark mass normalizes the kinetic energy. Furthermore, although the potential appears to be nonperturbatively defined only up to an arbitrary constant (in particular only the force is the quantity free of ambiguity in lattice calculations), it is difficult to maintain the view that the arbitrary normalization of V(r) implies an arbitrary normalization of M, which nevertheless would follow from the non-ambiguity of the static energy if there were no independent way to fix the normalization of either the mass or the potential. In this paper I suggest that there is in fact a natural way to define unambiguously the pole mass at the nonperturbative level (at least as far as the leading renormalon ambiguity is concerned) even in a confining theory like QCD, by properly subtracting out the confining contributions to the self-energy, hence to fix also the 'constant term' in the potential. In Sec. 2, the definition of the O(Λ) term in the pole mass is given, in terms of a properly defined nonperturbative momentum space V-scheme coupling α̃_V(q). The method used is in the spirit of the infrared (IR) finite coupling approach to power corrections [7]. In Sec. 3, theoretical constraints on α̃_V(q) are reviewed. In Sec. 4 the short distance expansion of V(r) is derived, including the effect of a hypothetical linear short distance term, and the standard IR power corrections are estimated on theoretical grounds. It is shown that present lattice data are not available at distances short enough for a reliable short distance analysis to be performed yet.
The nonperturbative pole mass
To define the pole mass, one has to fix its well-known renormalon ambiguity [2,3]. I start from the result [5,6] that the leading IR contribution δM_PT|_IR to the perturbative pole mass M_PT (when expressed in terms of a short distance mass like m ≡ m_MS), is related (presumably to all orders of perturbation theory [5]) to the leading long distance contribution δV_PT|_IR to the perturbative coordinate space potential V_PT by the relation of eq.(2.1), where Ṽ_PT(q) is the momentum space perturbative potential, related to V_PT(r) by Fourier transformation, and μ_f is an IR factorization scale. Defining to all orders of perturbation theory a momentum space potential effective coupling α_V|PT(q) by eq.(2.4), eq.(2.1) can be rewritten as eq.(2.5). The right hand side of eq.(2.5) is presumably ill-defined, since it involves an integration over the IR Landau singularity thought to be present in α_V|PT(q), and represents (taking μ_f ∼ Λ) the O(Λ) ambiguity in the pole mass. To solve this problem, one would be tempted, in analogy with the IR finite coupling approach to power corrections [7], to replace the perturbative effective coupling α_V|PT(q) inside the integral in eq.(2.5) by the corresponding nonperturbative coupling α_V(q), defined analogously, where this time Ṽ(q) is the Fourier transform of the full nonperturbative potential V(r) (eq.(2.7)). However, in a confining theory, Ṽ(q) either does not exist (e.g. if V(r) ∼ B log r + C for r → ∞), or is anyway too singular at small q (reflecting the singular large distance behavior of V(r)), making the integral in eq.(2.5) (with the nonperturbative α_V(q)) divergent at q = 0. For instance, in the case of a linearly rising potential V(r) = O(r) for r → ∞, one gets α_V(q) = O(1/q²) for q → 0. This observation suggests one should first subtract out the confining long-distance part of the potential to define a suitable nonperturbative coupling. To this end, the following procedure appears the most natural one: expand the potential around r = ∞, and subtract from V(r) the first few leading terms in this expansion (including an eventual constant term) which do not vanish for r → ∞. There is by construction only a finite number of such terms. Let us call their sum V_conf(r). Then we have V(r) = V_conf(r) + δV(r), which, assuming the large r expansion can actually be performed, uniquely defines δV(r), such that δV(r) → 0 both for r → 0 (from asymptotic freedom) and for r → ∞. It is clear that δV(r) now admits a standard Fourier representation δṼ(q) (eq.(2.9)), and one can define the new nonperturbative coupling α̃_V(q) by eq.(2.10). One should note that the perturbative parts of these quantities are preserved, namely δV_PT(r) ≡ V_PT(r) and δṼ_PT(q) ≡ Ṽ_PT(q), since δV differs from V by the V_conf(r) term, which, viewed from short distances, appears as a finite sum of nonperturbative power-like corrections, invisible order by order in perturbation theory. Indeed, the terms occurring in perturbation theory should scale as 1/r, hence vanish for r → ∞, which excludes them from V_conf(r). Thus α̃_V|PT(q) = α_V|PT(q) is the same as in eq.(2.4), i.e. α̃_V and α_V have identical perturbative expansions.
As an example, consider the potential in quenched QCD (this is actually the only case where the analytic form of the r → ∞ expansion is known in low orders). Theoretical expectations give the long distance expansion V(r) ≃ Kr + C − (π/12)(1/r) + ... for r → ∞ (eq.(2.11)). Although the O(1/r) term is not a rigorous result of QCD, since it has been derived within an effective bosonic string theory [8], it has been numerically confirmed [9] in high precision lattice simulations. We shall therefore assume that eq.(2.11) gives the correct large distance behavior of the static potential. It follows that V_conf(r) = Kr + C, and one defines V(r) ≡ Kr + C + δV(r). (2.13) In this case, the couplings α_V(q) (if it can be defined nonperturbatively, i.e. if C = 0 as previously noted) and α̃_V(q) just differ by a 1/q² term, arising from the Fourier transform of the Kr piece.
The prescription for the nonperturbative definition of the pole mass now reads as follows. Introduce the 'potential subtracted' mass m_PS(μ_f) [5] (eq.(2.14)), and define the nonperturbative IR contribution δM_IR(μ_f) to the pole mass, in complete analogy with eq.(2.1), (2.2) and (2.5), by the corresponding integral over α̃_V(q) up to μ_f (eq.(2.17)). Then the pole mass is given by eq.(2.18), M = m_PS(μ_f) + δM_IR(μ_f) + ..., where the dots represent non-leading O(1/m) IR contributions from higher order renormalons, and the μ_f dependence approximately cancels between the two terms on the right hand side. The interpretation of the prescription eq.(2.18) is transparent: it says one should remove from M_PT its ambiguous IR part δM_PT|_IR(μ_f), as suggested in [5], and substitute for it the corresponding nonperturbative (and non-ambiguous) IR contribution δM_IR(μ_f). One should note the similarity between eq.(2.18) and the corresponding expressions in the IR finite coupling approach to power corrections [7]. In the present context, however, the nonperturbative coupling is unambiguously identified. With the pole mass well-defined, the constant term C in the large distance expansion of the potential (eq.(2.11)) is in turn fixed, since the corresponding constant term in the large distance expansion of E_static(r), which should be unambiguous and calculable, is 2M + C.
Constraints on the nonperturbative α̃_V(q)
Eq.(2.11) and (2.13) yield δV(r) ∼ −(π/12)(1/r) for r → ∞, hence δṼ(q) ∼ −(π²/3)(1/q²) for q → 0, which yields α̃_V(q = 0) ≃ 0.196, a rather small IR fixed point value. Substituting this value as a rough estimate of α̃_V(q) in the integrand of eq.(2.17) gives a correction of about 100 MeV for the range of μ_f quoted in [5] for b-quarks. A more refined estimate is obtained by inputting the information about the O(1/r²) term in eq.(2.11), which was obtained in [9] from a fit to high precision large r lattice data; the corresponding small-q behavior of α̃_V(q) is given by eq.(3.5). Note that, since b > 0, α̃_V(q) increases from its IR value as q increases, hence must be non-monotonous in the IR region, since asymptotic freedom implies it should ultimately decrease to 0 at large q. At μ_f = 1.2 GeV, the second term in the parenthesis in eq.(3.5) represents a correction of about 40% to the IR value. Substituting eq.(3.5) in the integrand of eq.(2.17) then yields δM_IR(μ_f) ≃ 120 MeV for μ_f = 1.2 GeV.
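For orientation, the quoted numbers can be checked under the standard V-scheme normalization Ṽ(q) = −4π C_F α̃_V(q)/q² with C_F = 4/3; this normalization and the form of eq.(2.17) used below are assumptions on our part, since those equations are not reproduced in the text above, but they reproduce both the IR fixed point value and the ≈100 MeV estimate:

```latex
% Consistency check (assumes \tilde V(q) = -4\pi C_F \tilde{\alpha}_V(q)/q^2
% with C_F = 4/3 and a simple integral form of eq.(2.17); both are assumptions).
\begin{align}
  \delta\tilde V(q) \sim -\frac{\pi^2}{3}\,\frac{1}{q^2}
  \;\Rightarrow\;
  \tilde\alpha_V(0) &= \frac{\pi}{12\,C_F} = \frac{\pi}{16} \simeq 0.196,\\
  \delta M_{IR}(\mu_f) \simeq \frac{C_F}{\pi}\int_0^{\mu_f}\! dq\,\tilde\alpha_V(q)
  \approx \frac{C_F}{\pi}\,\tilde\alpha_V(0)\,\mu_f
  &\simeq 0.10~\mathrm{GeV}
  \qquad (\mu_f = 1.2~\mathrm{GeV}).
\end{align}
```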
Short distance expansion of the heavy quark potential
In this section I show that, barring constant terms, the short distance expansion of the heavy quark potential can be obtained directly from eq.(2.7), despite the singular behavior of Ṽ(q) at small q. Introducing again the factorization scale μ_f, eq.(2.7) can be split into a low momentum (q < μ_f) and a high momentum (q > μ_f) integral (eq.(4.1)). At short distances, we can expand the sin(qr) factor in the low momentum integral, which gives the IR power corrections. Making the further assumption that α_V(q) has no large power corrections at large q and may be well approximated by its perturbative part (this assumption will be modified below, eq.(4.7)), one ends up with the r → 0 expansion of eq.(4.3), where V_PT(r, μ_f) (eq.(4.4)) is the IR subtracted perturbative potential [5]. The normalization of the standard O(r⁰) and O(r²) renormalon-related power corrections in eq.(4.3) is thus given by low-energy moments of α_V(q). Note that the O(r⁰) term is actually infinite, as expected from the divergent IR behavior of α_V(q); in particular, in quenched QCD eq.(3.5) implies a specific behavior for q² → 0 (eq.(4.5)). But since the O(r⁰) term contributes only an overall normalization constant to the potential, which in this section is left arbitrary, one can drop it out. On the other hand, the O(r²) and higher order r-dependent contributions are finite. In particular, using eq.(4.5) as a rough approximation to α_V(q) in the range 0 < q < μ_f, one obtains in quenched QCD the r → 0 expansion of eq.(4.6) (ignoring any constant term). Let us now modify the previously mentioned assumption, in order to deal with the possibility that a O(1/q²) power correction is actually present in α_V(q). Such a correction has been first suggested in [11] as a consequence of new physics related to confinement, leading to a O(r) linear correction to the potential at short distances, of the same size (and sign) as the standard long distance correction related to the string tension. It should be noted however that a short distance linear piece may have a more conventional (although still nonperturbative) infrared origin, as indicated by the position of the leading IR renormalon present in Ṽ(q), which also suggests [5] the presence of a O(1/q²) correction. Let us thus assume such a O(1/q²) correction for q² → ∞ (eq.(4.7)), with K_0 ≠ K in general. To deal with this correction, one can use the general method of [12], or more conveniently, introduce a new coupling α̃_V(q) (different in general from the one in section 2, see below), related to the original α_V(q) by eq.(4.8), such that the redefined coupling α̃_V(q) is essentially given by its perturbative part (which coincides with that of α_V(q)) at large q², with no substantial power corrections. Thus from eq.(4.8) one obtains the corresponding momentum space relation and, upon taking the Fourier transform, V(r) = K_0 r + δV(r), (4.11) where δV(r) is given by eq.(2.9) and (2.10), but with α̃_V(q) now defined by eq.(4.8). Note that for K_0 = K this definition coincides with that of section 2 (assuming C = 0, see the comment after eq.(2.13)). Thus, introducing a factorization scale μ_f as in eq.(4.1), we obtain eq.(4.12). Since α̃_V(q) has no large power corrections, it can be approximated by its perturbative part α_V|PT(q) above some scale μ_f, and one deduces the short distance expansion of eq.(4.13). From eq.(4.5) and (4.8) we get the small-q² behavior of α̃_V(q) (eq.(4.14)). Thus, dropping again the (infinite) O(r⁰) term, and using eq.(4.14) for q < μ_f, we obtain the corresponding expansion, hence from eq.(4.11) the short distance form of V(r), which of course agrees with eq.(4.6) for K_0 = 0. The correlation between the coefficient of the O(r) correction (which is μ_f independent) and that of the standard OPE O(r²) correction should be noted.
For K_0 ≠ 0, we get a neat derivation of the well-known statement [11] that the appearance of a linear short distance term in V(r) is equivalent to the presence of a O(1/q²) correction in the standard α_V(q) coupling. Moreover, for K_0 = K, one obtains the straightforward, but interesting, result that the appearance of the linear short distance term is equivalent to the statement that the modified coupling α̃_V(q) of section 2 (rather than α_V(q)) has no O(1/q²) corrections. One might attempt an analysis of the lattice short distance data of [13] based on eq.(4.11) and (4.13). V_PT(r, μ_f) could be evaluated from eq.(4.4) by solving the known [14] 3-loop renormalization group equation for α_V|PT(q) and performing the integral, similar to the single dressed gluon 'renormalon integral' (with IR cut-off) in [12,15], while the power corrections should be fitted. Unfortunately, one finds that the perturbative expansion of the V-scheme coupling beta function is not reliable at values of μ_f small enough that the low momentum integral in eq.(4.12) can be meaningfully expanded and parametrized in terms of a few power correction terms, even at the shortest values of r presently measured on the lattice. Thus no reliable fit of the power corrections can be performed yet. It should be noted that in the present approach standard IR power corrections appear from an OPE-like separation of long and short distances in the Fourier transform of the momentum space potential, and their presence is mandatory. This is to be contrasted with the result of [13], where no power corrections were needed if the potential is predicted in terms of the renormalization group equation of the position space effective charge α_F associated [17] to the force F(r) = dV/dr. However, the implicit definition of the power corrections in the latter case is different, and does not make use of a momentum space IR cutoff to separate long from short distances.
Conclusion
We have shown that it is possible to fix in a natural way the O(Λ) renormalon ambiguity in the pole mass, thus giving a nonperturbative definition of the pole mass in QCD at this level of accuracy, which represents a natural nonperturbative extension of the 'potential subtracted' mass, in the spirit of the IR finite coupling approach to power corrections. This definition is an optimal one, in the sense that the prescription is to remove from the heavy quark potential contribution to the self-energy those terms, and only those ones (the confining ones contained in V_conf(r)), which would give a meaningless (infinite) result for the pole mass. For instance, one should not remove from δV(r) the O(1/r) 'Lüscher term' to include it in V_conf(r) (see eq.(2.11)), which, moreover, would make the modified IR finite V-scheme coupling α̃_V(q) non-asymptotically free! The applications of the proposed mass definition are similar to those of the 'potential subtracted' mass, to which it provides the leading power correction, allowing an accurate relation to the standard MS mass, but it can be used consistently with non-perturbative extensions of the Coulomb static potential (such as implied by phenomenological potential models or the potential determined on the lattice). The remaining challenge is to fix the O(Λ²/m) ambiguities in the pole mass arising from higher order renormalons.
We have also discussed the OPE-like analysis of the short distance potential. The magnitude of the standard OPE contributions has been estimated from eq.(4.5). However, the resulting short distance expansion is unreliable at the lowest values of r measured so far on the lattice, due to the poor convergence of perturbation theory for the momentum space V-scheme coupling beta function. | 4,145.4 | 2003-07-15T00:00:00.000 | [
"Physics"
] |
A Graph-Based Reinforcement Learning Method with Converged State Exploration and Exploitation
In any classical value-based reinforcement learning method, an agent, despite its continuous interactions with the environment, is unable to quickly generate a complete and independent description of the entire environment, leaving the learning method to struggle with a difficult dilemma of choosing between two tasks, namely exploration and exploitation. This problem becomes more pronounced when the agent has to deal with a dynamic environment, of which the configuration and/or parameters are constantly changing. In this paper, this problem is approached by first mapping a reinforcement learning scheme to a directed graph, and the set that contains all the states already explored shall continue to be exploited in the context of such a graph. We have proved that the two tasks of exploration and exploitation eventually converge in the decision-making process, and thus, there is no need to face the exploration vs. exploitation tradeoff as all the existing reinforcement learning methods do. Rather, this observation indicates that a reinforcement learning scheme is essentially the same as searching for the shortest path in a dynamic environment, which is readily tackled by a modified Floyd-Warshall algorithm as proposed in the paper. The experimental results have confirmed that the proposed graph-based reinforcement learning algorithm has significantly higher performance than both the standard Q-learning algorithm and an improved Q-learning algorithm in solving mazes, rendering it an algorithm of choice in applications involving dynamic environments.
Introduction
Reinforcement Learning (RL) has found great use in many practical applications, ranging from problems in mobile robots [Mataric (1997); Smart and Kaelbling (2002); Huang, Cao and Guo (2005)], adaptive control [Sutton, Barto and Williams (1992); Lewis, Varbie and Vamvoudakis (2012); Lewis and Varbie (2009)], and AI-backed chess playing [Silver, Hubert, Schrittwieser et al. (2017); Silver, Schrittwieser, Simonyan et al. (2017); Silver, Huang, Maddison et al. (2016)], among many others. The idea behind reinforcement learning, as illustrated in Fig. 1, is that an agent learns from the environment by interacting with it and receives positive or negative rewards for performing calculated actions, and the cycle is repeated. The key issue of the whole process is to learn a way of controlling the system so as to maximize the total reward. When the agent begins to sense and learn a completely or partially unknown environment, it engages in two distinct tasks: exploration, which attempts to collect as much information about the environment as possible, and exploitation, which attempts to receive positive rewards as quickly as possible.
Figure 1: In reinforcement learning, the agent observes the environment, takes an action to interact with the environment, updates its own state and receives a reward.

There is a dilemma of choosing between the two tasks of exploration and exploitation, though. Too much exploration will adversely influence the efficiency and convergence of the learning algorithm, while putting too much emphasis on exploitation will increase the possibility of falling into a locally optimal solution. The existing RL algorithms all attempt to balance out these two tasks in their learning cycles (Fig. 1), but there is no guarantee that the best result can always be obtained. Besides the exploration and exploitation dilemma, the RL algorithms have to employ value distributions that implicitly assume that the environment is static (i.e., no change), or that it changes very slowly and/or insignificantly. However, in many real applications, the environment rarely stays unchanged. More than likely, the environment that can be described in terms of states (Fig. 1) changes over the course of exploration. In this case, the value distribution has nothing to do with the problem at hand, and all the information obtained from the previous exploration efforts becomes less, or totally, irrelevant. To effectively solve the aforementioned problems in reinforcement learning, we herein present a new algorithm based on the partitioning of the state set and the search for the shortest path in a directed graph that represents a RL method. We have formally proved and experimentally verified that both exploration and exploitation in reinforcement learning actually converge at the end of the decision-making process, and thus, the learning process does not need to face the exploration/exploitation dilemma as other existing reinforcement learning methods would do. This observation indicates that a reinforcement learning scheme is essentially the same as searching for the shortest path in a dynamic environment, which is readily tackled by a modified Floyd-Warshall algorithm as proposed in the paper. The experiment that applies the proposed algorithm to solve mazes confirms the better performance of the new algorithm, particularly its effectiveness in addressing issues pertaining to a dynamic environment.
Preliminaries and background
In this section, we will first survey the basic structure of reinforcement learning (RL) algorithms, particularly the value-oriented methods of RL, and formally define the exploration vs. exploitation tradeoff in RL. In the literature, RL is shown to be mapped to various graph representations, and these methods are briefly described in this section as well. With graph representations, RL can benefit from rich results in graph algorithms, and we thereby finish this section by reviewing algorithms that search for the shortest path in a graph, as they are related to this paper.
Value-oriented method for the exploration-exploitation tradeoff in RL
Most RL problems can be formalized using Markov Decision Processes (MDPs), and there are a few key elements in RL as defined below.
1. Agent: An agent takes actions.
2. Environment: The physical world through which the agent operates.
3. State: A state is a concrete and immediate situation in which the agent finds itself. In this paper, we denote s_i as the state of the agent at time instance i, and set S contains all the states that the agent can operate on. That is, s_i ∈ S.
4. Action: agents choose among a list of possible actions. Denote a_i as the action that the agent might perform at time instance i. A is defined as the set of all possible moves the agent can make, i.e., a_i ∈ A.
5. Reward: A reward is the feedback that is used to measure the success or failure of an agent's action. Here a reward at time instance i is defined as r_i. Actions may affect both the immediate reward and, through the next situation, all the subsequent rewards [Sutton and Barto (2017)].
6. Exploitation: a task that makes the best decision given all the current information.
7. Exploration: a task that gathers more information to be used for making the best decision in the future.
8. An episode: the behavior process cycle of the agent from the beginning of the exploration to the beginning of the next exploration. The interaction between the agent and the environment breaks naturally into subsequences, which are referred to as episodes. Each episode ends in a special state called the terminal state, followed by a reset to a standard starting state or to a sample from a standard distribution of starting states [Sutton and Barto (2017)].
In RL, the exploration-exploitation tradeoff refers to a decision making process that chooses between exploration and exploitation. Value-oriented RL methods have to deal with such an exploration-exploitation tradeoff through the value distribution as defined by the value function, or through a probability that decides the chain of actions that lead to the target state all the way from the start state through a series of rewards. A decision chain refers to a series of decision-making steps taken by an agent. In order to strike a balance between exploration and exploitation, there are two main decision methods that can be followed, ϵ-greedy and softmax. In the ϵ-greedy method, the action is selected as

a = a* with probability 1 − ϵ, or a uniformly random action from A with probability ϵ,   (1)

where a* is the action for which the value function assumes the highest value:

a* = argmax_a Q(s, a),   (2)

where Q(s, a) is the action-value function which evaluates each possible action a while in the current state s. One drawback of ϵ-greedy action selection is that when it explores, all the possible actions are given equal opportunity, as indicated in Eq. (1). In simple terms, this method is as likely to choose the worst-appearing action as it is to choose the next-to-best action. This gives rise to the so-called softmax method that can vary the action probabilities through a graded function:

π(a|s) = exp(Q(s, a)/τ) / Σ_{b∈A} exp(Q(s, b)/τ),

where π(a|s) is the probability policy to choose action a from the specific state s, τ is a "computational" temperature, and Q(s, a) is the action-value function that evaluates each possible action in the current state. The problem of the value-oriented method is its weak ability to eliminate exploration blindness resulting from a large number of repeatedly explored states introduced by the value distribution structure. The stochastic factors that are added to help the search process jump out of loops and balance exploration and exploitation actually come at the expense of more blindness of exploration.
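As an illustration of the two selection rules described above, the following minimal Python sketch implements ϵ-greedy and softmax action selection over a vector of Q-values. The Q-values, the random seed and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def epsilon_greedy(q_values: np.ndarray, epsilon: float, rng) -> int:
    """Pick the greedy action with prob. 1-epsilon, otherwise a uniformly random action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))   # explore: every action equally likely
    return int(np.argmax(q_values))                # exploit: greedy action a*

def softmax_policy(q_values: np.ndarray, tau: float, rng) -> int:
    """Sample an action from the graded Boltzmann distribution with temperature tau."""
    prefs = (q_values - q_values.max()) / tau      # shift for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(q_values), p=probs))

rng = np.random.default_rng(0)
q = np.array([0.1, 0.5, 0.2, 0.4])                 # Q(s, a) for 4 actions in some state s
print(epsilon_greedy(q, 0.1, rng), softmax_policy(q, 0.5, rng))
```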
RL over graphs
An RL problem can be represented as a directed graph, G<V, E>, where a vertex, v_i ∈ V(G), corresponds to a state in reinforcement learning and an edge, e_ij ∈ E(G), connects two vertices (two states) via a decision action in reinforcement learning. In this graph, a path can be regarded as a decision sequence in reinforcement learning. In the literature, many RL methods are related to their graph representations. In the PartiGame algorithm [Moore (1994)], the environment of RL is divided into cells modeled by a kd-tree, and in each cell the available actions consist of aiming at the neighbor cells [Kaelbling, Littman, Moore (1996)]. In Dayan et al. [Dayan and Hinton (1993)], speedup of reinforcement learning is achieved by creating a Q-learning managerial hierarchy in which high-level managers learn how to set tasks for their lower-level managers. The hierarchical Q-learning algorithm in Dietterich [Dietterich (1998)] proves its convergence and shows experimentally that it can learn much faster than ordinary "flat" Q-learning. None of these methods, however, can solve the root problem concerning the dilemma of exploration and exploitation.
Floyd-Warshall algorithm
Denote SSA(G, v_i, v_j) as a shortest path search algorithm that is applied to the graph G, representing an RL problem, from vertex v_i to vertex v_j. The classical shortest path algorithms such as Dijkstra [Dijkstra (1959)] and A* [Hart, Nilsson, Raphael (1968)] are single-starting-point path-finding algorithms. The Floyd-Warshall algorithm [Floyd (1962)] (FW), which is used in this study, provides the shortest path between any two vertices in a specified graph and is found to adapt well to changes of the graph. In the standard Floyd-Warshall algorithm, two matrices (DIST and NEXT) are used to express the information of all the shortest paths in the graph. The matrix DIST records the shortest path length between two vertices. The matrix NEXT contains the name of an intermediate vertex through which the two vertices are connected along the shortest path. Because of the optimal substructure property of the shortest path, no matter how many intermediate vertices the shortest path passes through, recording just one of the intermediate vertices is sufficient to express the entire shortest path.
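The following minimal Python sketch shows the standard Floyd-Warshall bookkeeping with DIST and NEXT. It assumes a dictionary-of-edge-lengths representation and stores the next hop in NEXT (rather than an arbitrary intermediate vertex), which is an equivalent and common way of encoding the shortest paths.

```python
import math

def floyd_warshall(vertices, edge_len):
    """Standard Floyd-Warshall. edge_len[(u, v)] is the length of edge u->v.
    Returns DIST (shortest path lengths) and NEXT (first hop of the shortest path)."""
    DIST = {u: {v: (0 if u == v else edge_len.get((u, v), math.inf)) for v in vertices}
            for u in vertices}
    NEXT = {u: {v: (v if (u, v) in edge_len or u == v else None) for v in vertices}
            for u in vertices}
    for k in vertices:                       # allow k as an intermediate vertex
        for i in vertices:
            for j in vertices:
                if DIST[i][k] + DIST[k][j] < DIST[i][j]:
                    DIST[i][j] = DIST[i][k] + DIST[k][j]
                    NEXT[i][j] = NEXT[i][k]  # recording one relay per pair is enough
    return DIST, NEXT

def reconstruct_path(NEXT, u, v):
    """Walk the NEXT matrix to recover the full shortest path from u to v."""
    if NEXT[u][v] is None:
        return []
    path = [u]
    while u != v:
        u = NEXT[u][v]
        path.append(u)
    return path
```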
Convergence of exploration and exploitation
In this section, we first define the completely explored graph, which serves as the foundation for a graph-based iterative framework for reinforcement learning. Under this framework, the knowledge acquired from the RL exploration task is recorded by the graph, and a shortest path search can then be conducted to determine the next decision chain. This new approach is able to track the graph changes that are caused by exploration and, sometimes, by a changing environment. In this section, we shall prove that with this framework involving the shortest path search, exploration actually converges to exploitation. In simple terms, exploration will find the shortest path that reaches the same reward as exploitation does.
Completely explored graph
Definition 1. A Completely Explored State is a state of which all possible successor states have already been explored. If one of a state's successor states has been explored but at least one of its successor states has not yet been explored, the state is called a Partially Explored State. Definition 2. If the vertex set V in the connected graph G<V, E> includes the start states of episodes and all these states have been completely explored, and the edge set E represents all the actions that need to be taken to connect all the different states, graph G is called a Completely Explored Graph (CEG). Fig. 2 shows an example of a CEG in which each state (stt) is linked with up to 4 possible actions: act_0, act_1, act_2, act_3. Some explored edges are omitted for simplicity because they point to nonexistent state transitions. If we denote an environment feedback function by Env, then for a given action act_i, the next state stt_{i+1} can be determined as stt_{i+1} = Env(stt_i, act_i).
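One possible reading of Definition 1, assuming the agent keeps a table of the transitions it has observed during exploration, is sketched below; `transitions` and `explored` are hypothetical bookkeeping structures rather than names from the paper.

```python
def is_completely_explored(stt, actions, transitions, explored):
    """Definition 1 (sketch): stt is completely explored if every action has been tried
    from it and every resulting successor state has itself been visited (explored).
    transitions[(stt, act)] holds the observed next state, or None when the action
    produced no valid state transition (e.g. bumping into an obstacle)."""
    for act in actions:
        if (stt, act) not in transitions:      # an action that has not yet been tried
            return False
        nxt = transitions[(stt, act)]
        if nxt is not None and nxt not in explored:
            return False
    return True
```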
Exploration converges to exploitation
From the CEG, we can prove that exploration converges to exploitation. Here stt_rwd denotes a reward state. Lemma 1. Suppose that exploration of each episode starts with state stt_0 and ends in the reward state stt_rwd after passing some intermediate states, through a series of episodes. Over all the possible episodes, one can see stt_rwd ∉ SE. Proof: Once the agent has landed in state stt_rwd, the episode ends, so there will be no further exploration originating from stt_rwd. That is, stt_rwd can never become a completely explored state according to Definitions 1 and 2. Thus stt_rwd cannot be a member of the SE set. (End of proof) Define the envelope set of the set SE as SEE = {stt : (stt ∈ SE) ∧ (Successor(stt) ⊄ SE)}. Corollary 1. SEE consists of members of SE; for any stt_i ∈ SEE, there exists at least one successor state of stt_i that does not belong to SE.
Proof: It comes directly from the definition of SEE. (End of proof) Corollary 2. If stt_i ∈ (Predecessor(stt_rwd) ∩ SE) in an exploring episode, then irrespective of the exploration strategies adopted in the future, stt_i will always be part of SEE. Proof: From Lemma 1, one can see that stt_i ∈ SE has the successor state stt_rwd, and it is impossible for stt_rwd to be a member of SE. By the definition of SEE it follows that stt_i is always a member of SEE. (End of proof) Definition 3. If SSA(CEG) plans a decision chain that eventually reaches stt_rwd, and it does not produce any change in either V(CEG) or E(CEG), this condition is referred to as Exploration Convergence. Corollary 3. Once exploration converges, the planned decision chain from SSA(CEG) remains the same in the next exploration episode. Proof: At the beginning of each episode, a CEG is constructed as needed to explore the new states. If the CEG remains unchanged at the end of the current episode, the next episode will produce exactly the same path. At this point, the algorithm converges, as it satisfies the condition set by Definition 3. (End of proof) Theorem 1. Assume there are a finite number of states and that SSA(CEG) is able to find the shortest path in the CEG; then exploration becomes finding a path from the start state stt_0 to stt_rwd. In other words, exploration converges to exploitation. Proof: i.
During exploration, state stt_0 can reach the SEE through SSA(CEG). That is, one needs to find the shortest path, path_k, among all the paths, such that its end state stt_e satisfies
Len(stt_0, stt_e) = min { Len(stt_0, stt_j) : stt_j ∈ SEE },   (4)
where Len(stt_i, stt_j) is the length of the shortest path between state stt_i and state stt_j.
If stt_e ∈ Predecessor(stt_rwd), then the path (stt_0, ..., stt_e, ..., stt_rwd) from the SSA(CEG) algorithm marks the shortest path from stt_0 to stt_rwd. In this case, the conditions concerning exploration convergence (defined in Definition 3) are met, and the exploration converges to the exploitation. ii. If stt_e ∉ Predecessor(stt_rwd), the CEG continues to evolve as exploration progresses. iii. As exploration continues, new members are added into SEE and replace the old ones, extending the shortest path, and according to Corollary 2, any new member stt ∈ (Predecessor(stt_rwd) ∩ SE) will always be part of SEE. iv. When exploration ends, the stt_e that satisfies Eq. (4) will eventually meet the condition stt_e ∈ Predecessor(stt_rwd). v. The agent is bound to pass the state associated with the shortest path in Predecessor(stt_rwd). If not, there would be a different state stt_x ∈ Predecessor(stt_rwd), different from stt_e, that makes Len(stt_0, stt_x) < Len(stt_0, stt_e). If stt_x ∈ SE, it is impossible for SSA(CEG) to have chosen stt_e as a state in the shortest path. If stt_x ∉ SE, there must be a state stt_y ∈ SE in stt_x's predecessor chain that makes Len(stt_0, stt_y) < Len(stt_0, stt_e). The algorithm then does not converge during this episode. vi. Putting all things together, one can see that exploration by SSA(CEG) must converge to the shortest path from the start state stt_0 to stt_rwd. As indicated in Corollary 3, once the algorithm has found the shortest path from stt_0 to stt_rwd, the path will be repeated with no change in the following episodes. In this case, the exploration is ready to be halted. (End of proof)
Algorithm implementation
Based on Theorem 1 described in the previous section, we propose a framework for RL that does not need to be concerned with the dilemma of exploration and exploitation. There are two major components in the framework, namely the CEG and the incompletely explored states, and there are two iterative steps, as illustrated in Fig. 3: i.
Based on the current CEG, an action decision, in the form of a single decision or a chain of multiple decisions, is made to guide the next exploration. ii. The CEG is updated with the new knowledge acquired from the latest exploration. In a static or nearly static environment, exploration will continue to grow the CEG, while in a changing environment, CEG members can be added or deleted according to the exploration result. Note that when the CEG is updated, nodes or edges can be added or deleted from the graph. In a static environment, as exploration progresses, the number of nodes and edges tends to increase, while in a dynamic environment, the number of nodes and edges may increase or decrease.
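One way to write the per-episode loop of this two-step framework is sketched below; ceg, sfw_planner, explore_step, and is_terminal are placeholders standing in for the components described above, not an API defined in the paper.

```python
def run_episode(start_state, ceg, sfw_planner, explore_step, is_terminal):
    """One episode of the two-step framework (sketch): (i) plan the next decision chain
    on the current CEG, (ii) execute it and fold the newly observed states and
    transitions back into the graph. All names here are illustrative."""
    stt = start_state
    while not is_terminal(stt):
        chain = sfw_planner(ceg, stt)          # step (i): decision chain from the CEG
        for act in chain:
            stt, new_nodes, new_edges = explore_step(stt, act)
            ceg.add(new_nodes, new_edges)      # step (ii): update the CEG
            if is_terminal(stt):
                break
    return ceg
```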
Shortest path search in dynamic environment
The standard Floyd-Warshall algorithm calculates the matrices DIST and NEXT in a batch for every vertex pair, organized by the length of the shortest path (the number of relay vertices). For a graph whose vertices and structure change constantly while exploration is ongoing, a more efficient method is needed; such a method is proposed in this section.
During exploration in reinforcement learning, the completely explored states are discovered in sequence and subsequently added to the CEG, after which the corresponding edges are also added. In addition, if the agent is to adapt to a dynamic environment, the removal of vertices must also be taken into account. In this section, a modified Floyd-Warshall algorithm (SFW) is presented, which is able to search for the shortest path in a graph that represents a dynamic environment. In short, SFW serves as the SSA(CEG) of the proposed framework.
In SFW, each time a new vertex is added, the algorithm does not only add the shortest paths associated with the new vertex directly to the two matrices defined by Floyd-Warshall; it also compares the length of each new path introduced by the new vertex against that of the shortest path obtained in the prior iteration. These comparisons may result in updates to the two matrices.
Guided exploration
As proved in Section 3, exploration finally converges to the shortest path that connects to the target state. Since exploration and exploitation essentially produce the same result, our algorithm only needs to consider a single task, exploration.
The steps of how to guide exploration are listed in Tab. 1. There are several major steps in the algorithm listed in Tab. 1, and a sketch of this decision logic is given after the list:
Step 1: If stt_rwd is a neighbor of the current state stt, the agent can take the corresponding action directly, which transitions the state to stt_rwd.
Step 2: If stt is not a member of SE, a decision is made randomly by calling a random action-selection routine.
Step 3: If stt is an edge (boundary) state of SE, a decision is likewise made randomly.
Step 4: If stt is a member of SE but not a member of SEE, the shortest path out of the explored region is obtained using SFW. This path represents the decision chain by which the agent can exit in the most efficient way.
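The decision logic of Tab. 1 can be summarized in the following sketch; the parameter names are illustrative, and the call to random choice stands in for whatever random action-selection routine the algorithm uses.

```python
import random

def guide_exploration(stt, stt_rwd, se, see, neighbors, action_to, actions, shortest_exit_chain):
    """Sketch of the guided-exploration decision in Tab. 1 (illustrative names only).
    Returns a decision chain: a list of actions for the agent to execute next."""
    if stt_rwd in neighbors(stt):            # Step 1: the reward state is adjacent
        return [action_to(stt, stt_rwd)]
    if stt not in se:                        # Step 2: current state not completely explored
        return [random.choice(actions)]
    if stt in see:                           # Step 3: current state is on the boundary of SE
        return [random.choice(actions)]
    return shortest_exit_chain(stt)          # Step 4: use SFW to leave SE most efficiently
```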
Update of the CEG
Before exploration starts, SE is empty and the agent has no a priori knowledge of the environment. Denote stt_0 as the start state of each exploration episode. Once exploration begins, from the initial state stt_0, for each state stt_i an action act_i is selected from the action set according to SFW, after which the agent moves to the next state stt_{i+1}, which is obtained from the feedback of the environment. Exploration is then repeated. Whenever a new state is found, it is added to the graph GU immediately. When the current state becomes completely explored, it is added to the set SE, and sometimes to the SEE simultaneously. This algorithm is listed in Tab. 2. One can see that whenever a new completely explored state, corresponding to a vertex in the graph, is added to the CEG, it generates some action decisions reflected as edge changes in the graph. The new SEE can, by definition, be readily derived from the updated CEG.
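A minimal sketch of this bookkeeping (in the spirit of Tab. 2) might look as follows, assuming observed transitions are stored in a table; note that a full implementation would also re-check the SEE membership of states whose successors change later.

```python
def update_graphs(stt, act, nxt, gu_edges, transitions, actions, se, see):
    """Sketch of the CEG bookkeeping (illustrative names). gu_edges is the edge set of GU,
    transitions[(stt, act)] records observed outcomes (None = no valid move)."""
    transitions[(stt, act)] = nxt
    if nxt is not None:
        gu_edges.add((stt, nxt))                       # grow GU with the new transition
    tried_all = all((stt, a) in transitions for a in actions)
    successors = {transitions[(stt, a)] for a in actions if transitions.get((stt, a))}
    if tried_all:                                      # stt is now completely explored
        se.add(stt)
        if not successors <= se:                       # some successor lies outside SE
            see.add(stt)                               # stt belongs to the envelope SEE
        else:
            see.discard(stt)
```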
Notes on the proposed algorithm
If the current state of the agent is in SE, the shortest path to the boundary of the explored region is selected, as seen from the algorithm listed in Tab. 1. As far as the completely explored states are concerned, our approach traverses them once instead of exploring them repeatedly. In the classical Q-learning algorithm, a large number of states have to be traversed repeatedly. This subtle difference makes our algorithm more computationally efficient, as evidenced by the experimental results reported in the next section.
Compared to value-oriented algorithms, the experience obtained from the exploration history in our approach is recorded in GU and SE rather than derived from the value distribution. The construction of the two matrices in SFW relies on GU and SE, and their structure contains the shortest paths for all pairs of states in SE. Therefore, in the case of a changing environment, modest modifications of GU and SE and updates of the two matrices make the proposed algorithm more adaptive to the new, changing environment. This feature can be clearly seen from the results reported in Section 5.3.
Experimental results
The new graph-based algorithm detailed in Section 4 has been applied to solve a maze. Maze solving has been widely adopted for testing reinforcement learning algorithms. The agent in the experiment can be seen as a ground robot roaming in a maze, and it can always sense its current position (state) as it moves around. At the beginning of the experiment, the agent knows nothing about the maze, and it needs to find the reward (target) position and complete its journey by passing through a path from a specified start position.
Setup of experiment
The maze has a size of 16 rows by 16 columns, for a total of 256 blocks. There are 4 types of blocks, namely target, trap, obstacle and ordinary pass. The fixed start position is treated as a normal pass block. The agent gets a reward of 1 when it reaches the target, but if the agent falls into a trap, it gets a reward of -1. Both conditions lead to the end of the current episode, and the agent has to return to the start position and restart its exploration. Note that the agent can keep the exploration information from all the previous episodes. There are 975 mazes in the experiment, and they differ from each other in terms of the locations of the obstacle blocks. In our experiment, there are 46 obstacles in each of the 975 mazes. For each maze, the initial position of the agent is at the upper left corner (1,1), and the target position is set to be (9,9). There are 4 fixed traps, located at (4,4), (12,4), (4,12) and (12,12). Tab. 3 summarizes the main characteristics of the maze. Fig. 4 illustrates a sample of mazes, whose respective reference numbers are 25, 36, 159, 256, 377, 512, 666 and 908. In these mazes, the red circle represents the agent, gray blocks represent obstacles, black blocks represent the traps, the yellow block represents the target, and the rest are normal pass blocks. The proposed algorithm, referred to as SFW, is compared against the classical Q-learning algorithm (ql) and an improved Q-learning algorithm (qlm). Tab. 4 tabulates the main parameter values for ql. The main improvement of qlm over ql is that qlm can remember the locations of the obstacles and traps found during exploration and avoid them during the subsequent explorations. Even if the next action is randomly selected based on some probability, qlm can filter out the obstacles and traps.
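For reference, the maze parameters stated above can be collected into a small configuration together with the reward signal that ends an episode; how the 46 obstacles are placed in each of the 975 mazes is not specified in the text, so that part is left out of this sketch.

```python
# Maze configuration as described in the setup (values taken from the text / Tab. 3).
MAZE_CONFIG = {
    "rows": 16, "cols": 16,
    "start": (1, 1),
    "target": (9, 9),
    "traps": [(4, 4), (12, 4), (4, 12), (12, 12)],
    "n_obstacles": 46,
    "reward_target": 1,
    "reward_trap": -1,
}

def episode_reward(position, config=MAZE_CONFIG):
    """Reward signal: +1 at the target, -1 in a trap, 0 otherwise;
    both non-zero cases end the current episode."""
    if position == config["target"]:
        return config["reward_target"]
    if position in config["traps"]:
        return config["reward_trap"]
    return 0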
Performance comparison with Q-learning algorithms
Single maze comparison
All three algorithms are compared in terms of the number of steps per episode when they are applied to solve all the mazes, and the results from mazes 25 and 908 are plotted in Fig. 5 and Fig. 6, respectively. In solving both mazes, SFW is found to converge more quickly than the other two algorithms, and it requires fewer steps during the exploratory process. As expected, qlm's performance is better than that of classical ql.
Statistical performance comparisons for all mazes
The experiments in this section include all 975 mazes. The X axis of each figure corresponds to the maze number. The comparison of convergence speed for every maze is shown in Fig. 9 and Fig. 10, where ql and qlm are compared with SFW separately. The Y axis of each figure is the number of episodes at which the agent first reaches convergence. In both figures, one can see that the proposed algorithm converges more quickly than the other two algorithms, especially when a large number of episodes is required. In fact, the SFW algorithm converges after no more than 20 episodes, while the other two algorithms need as many as 100+ episodes. The exploration efficiency obtained from solving every maze is shown in Fig. 13. One can see that SFW outperforms qlm in this regard, and both algorithms are significantly better than ql. The X axis represents the maze number. The Y axis is the ratio of the total number of explored states to the total number of steps when the agent first reaches convergence.
The maze in the dynamic environment
Changes of the environment are categorized as obstacle changes and target position changes. We will in the following subsections examine how these changes affect the performance of the three algorithms.
Obstacle change
Taking Maze #8 as an example, the changes of the obstacles are tabulated in Tab. 5.
Computation efficiency
All three algorithms are compared for their respective computation efficiency on the same computation platform. The hardware used in the experiments has an Intel(R) Core(TM) i5-3210M CPU running at 2.50 GHz and 8 GB of RAM. The operating system is 64-bit Ubuntu. The tools used to measure CPU time and memory occupation are line_profiler and memory_profiler, respectively. The average CPU time reported in Tab. 7 is the average time for solving all 975 mazes. The basic memory usage in Tab. 7 refers to the stable memory usage collected from solving 62 selected mazes. One can see that SFW requires more memory space than the other two algorithms; the memory usage of ql and qlm is comparable. The peak memory usage of SFW is also higher than that of ql or qlm. Unlike the classical Q-learning algorithm and the improved Q-learning algorithm, the proposed algorithm does not struggle with the exploration vs. exploitation tradeoff, as it was proved that the two tasks of exploration and exploitation actually converge in the decision-making process. As such, the proposed graph-based algorithm finds the shortest path during exploration, which gives higher efficiency and faster convergence than the Q-learning algorithm and its variant. Another big advantage of the proposed algorithm is that it can be applied to a dynamic environment, where value-oriented algorithms fail to work. The efficiency and convergence performance of the proposed algorithm comes at the cost of increased computational complexity. Future work will focus on confining the computational complexity and, in particular, the memory usage.
Figure 2: An example of a completely explored graph. Nodes represent states and directed edges between nodes represent actions. The shadowed area that includes all the state nodes (colored yellow) and all the associated directed edges represents the CEG. The unfilled nodes outside the shadowed area represent incompletely explored states, even though they connect to the CEG.
Suppose the complete action set A = {act_0, ..., act_m} is known. State stt_i is a completely explored state if every reachable next state of stt_i, denoted stt_{i+1} and obtained by taking a possible action act ∈ A, has been traversed. Let GU<V, E> represent a graph that contains all the traversed states, including both the completely and the partially explored states. We define the predecessor state set Predecessor(stt) as Predecessor(stt) = {stt_p : (stt_p ∈ V(GU)) ∧ ((stt_p, stt) ∈ E(GU))} and the successor state set Successor(stt) as Successor(stt) = {stt_s : (stt_s ∈ V(GU)) ∧ ((stt, stt_s) ∈ E(GU))}, where SE denotes the set V(CEG). If we denote an environment feedback function by Env, then for a given action act_i, the next state stt_{i+1} can be determined as stt_{i+1} = Env(stt_i, act_i).
Figure 5: The number of steps per episode in the #25 maze. The X axis is the episode number, and the Y axis represents the number of steps in each episode. (Figure 6 shows the same comparison for the #908 maze.)
Figure 11: Steps length of convergence comparison: ql and SFW
Figure 13: Exploration efficiency comparison
Fig. 14 shows the snapshot of exploration, convergence, environment change and adaptation. The maze has undergone three major changes to the locations of the obstacles in the experiments. The green squares in Fig. 14 represent the members of SE, and the purple squares represent dynamically added obstacles that are located within the current convergent path.
Figure 14: Dynamic obstacles, Maze #8.
Changes of target positions
Tab. 6 summarizes the changes that occur to maze 243. Other mazes have gone through similar changes. One can see that the target position is changed once, relocated from the center of the maze to its lower left corner.
The major steps of SFW are summarized as follows. If the current state stt is to be added to the set SE, do the following steps: i. Add and initialize a new row in matrices DIST and NEXT. ii. Add and initialize a new column in matrices DIST and NEXT. iii. Update the new column by computing the shortest paths from all the vertices to this new vertex. iv. Update the new row by computing the shortest paths from the new vertex to all the other vertices. v. Update matrices DIST and NEXT by comparing, for each vertex pair, the length of the old shortest path recorded in the matrices with that of the new path passing through the newly added vertex,
DIST(stt_i, stt_j) ← min { DIST(stt_i, stt_j), DIST(stt_i, stt) + DIST(stt, stt_j) }.
Denote Len(stt_i, Predecessor(stt)) as the length of the shortest path from an arbitrary state stt_i to Predecessor(stt) as defined in Section 2:
Len(stt_i, Predecessor(stt)) = min { Len(stt_i, stt_p) : stt_p ∈ Predecessor(stt) }   (5)
Matrices DIST and NEXT are updated by performing the following operations:
DIST(stt_i, stt) = Len(stt_i, Predecessor(stt)) + 1   (6)
NEXT(stt_i, stt) = stt_p*   (7)
where stt_p* is the minimizer of min { Len(stt_i, stt_p) : stt_p ∈ Predecessor(stt) } as given in Eq. (5). In the same token, one can update the entries from stt to the other states using stt's successor state set Successor(stt). That is,
Len(Successor(stt), stt_j) = min { Len(stt_s, stt_j) : stt_s ∈ Successor(stt) }
DIST(stt, stt_j) = Len(Successor(stt), stt_j) + 1
NEXT(stt, stt_j) = stt_s*   (12)
where stt_s* is the corresponding minimizer.
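A sketch of steps i-v for a single vertex insertion is given below, assuming unit edge lengths (consistent with Eq. (6)) and dictionary-of-dictionaries DIST/NEXT matrices using the next-hop convention from the earlier Floyd-Warshall sketch; preds and succs are the predecessor and successor states of the new vertex observed so far.

```python
import math

def sfw_add_vertex(new_v, DIST, NEXT, preds, succs):
    """Incremental vertex insertion (steps i-v of SFW, sketch with illustrative names)."""
    states = list(DIST.keys())
    # i-ii: add a new row and a new column
    DIST[new_v] = {v: math.inf for v in states + [new_v]}
    NEXT[new_v] = {v: None for v in states + [new_v]}
    DIST[new_v][new_v] = 0
    for v in states:
        DIST[v][new_v] = math.inf
        NEXT[v][new_v] = None
    # iii: shortest paths from all vertices to the new vertex (via its predecessors, Eqs. 5-7)
    for v in states:
        best = min(((DIST[v][p], p) for p in preds if p in DIST),
                   key=lambda t: t[0], default=(math.inf, None))
        if best[0] + 1 < DIST[v][new_v]:
            DIST[v][new_v] = best[0] + 1
            NEXT[v][new_v] = new_v if v == best[1] else NEXT[v][best[1]]
    # iv: shortest paths from the new vertex to all other vertices (via its successors)
    for s in succs:
        if s in DIST:
            DIST[new_v][s] = 1
            NEXT[new_v][s] = s
    for v in states:
        best = min(((DIST[s][v], s) for s in succs if s in DIST),
                   key=lambda t: t[0], default=(math.inf, None))
        if best[0] + 1 < DIST[new_v][v]:
            DIST[new_v][v] = best[0] + 1
            NEXT[new_v][v] = best[1]
    # v: relax every existing vertex pair through the new vertex
    for i in states:
        for j in states:
            if DIST[i][new_v] + DIST[new_v][j] < DIST[i][j]:
                DIST[i][j] = DIST[i][new_v] + DIST[new_v][j]
                NEXT[i][j] = NEXT[i][new_v]
```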
Table 3: Maze design parameters
Table 7: Algorithm complexity comparison
In this paper, a new graph-based method was presented for reinforcement learning. | 7,316.4 | 2019-01-01T00:00:00.000 | [
"Computer Science"
] |
Kinetics and mechanism of the OH-radical and Cl-atom oxidation of propylene carbonate †
Rate coefficients have been measured at 298 K and atmospheric pressure for the reaction of OH radicals and Cl atoms with propylene carbonate. The measurements were performed in a large volume photoreactor using in situ FTIR spectroscopy for the analysis. The following rate coefficients (in units of cm^3 per molecule per s) were obtained: k(OH + PC) = (2.52 ± 0.51) × 10^-12 and k(Cl + PC) = (1.77 ± 0.43) × 10^-11. Product studies performed on the OH-radical and Cl-atom mediated oxidation of propylene carbonate in air support that the major fate of the intermediate cyclo-methyl-pentoxy radicals, formed in the degradation reaction sequence, is unimolecular decomposition. The FTIR product spectra, in combination with the absence of other potential products, suggest that the decomposition probably results to a large extent in the formation of acetyl formyl carbonate, CH3C(=O)OC(=O)OCH(=O). In product studies performed in N2, in which ppm levels of O2 are present, formation of acetic acid was observed in addition to acetyl formyl carbonate. It is postulated that the acid formation occurs via a pathway involving a 1,3-hydrogen shift mechanism with an intermediate alkoxy radical, which is able to compete with the unimolecular decomposition pathway of the alkoxy radical at very low O2 partial pressures.
Introduction
Propylene carbonate (PC) is a carbonate ester, derived from propylene glycol, with the empirical formula CH3C2H3O2CO and a cyclic carbonate molecular structure. It is a colourless to yellowish and odourless liquid with a high boiling point [2][3]. It is chiral but is used exclusively as the racemic mixture.
The production of propylene carbonate and its widespread use as a solvent and chemical intermediate will result in fugitive releases to the environment. Propylene carbonate has a vapour pressure of 0.045 mm Hg at 25 °C (ref. 4), and based on this, assessments of its possible environmental fate have concluded that if released to the atmosphere it will exist solely as a vapour.5 As for the majority of volatile organic compounds (VOCs) in the atmosphere, vapour-phase propylene carbonate will be degraded in the atmosphere to a large extent by reaction with hydroxyl radicals.6 Direct loss of propylene carbonate by photolysis is also potentially possible, since it contains a functional group that can absorb light at wavelengths greater than 290 nm; however, nothing is currently known about the atmospheric photolysis frequency of propylene carbonate that would allow an evaluation of the importance of this loss process compared to reaction with OH. Using a structure-activity relationship,7 a rate coefficient of 3.78 × 10^-12 cm^3 per molecule per s has been estimated for the reaction of OH radicals with propylene carbonate, which corresponds to an atmospheric lifetime for the compound of between 3 and 4 days. To the best of our knowledge there have been to date no experimental determinations of the OH rate coefficient for the reaction.
[12][13][14][15] These findings suggest that Cl-atom mediated VOC oxidation chemistry may be much more prevalent than previously thought. Since a rate coefficient for the reaction of Cl atoms with propylene carbonate does not exist in the literature, and the reaction may possibly have some atmospheric significance, it has been investigated in this work.
In summary, the objectives of the present work have been to investigate for the first time the kinetics and mechanism of the OH-radical and Cl-atom mediated photooxidation of propylene carbonate and to assess any possible environmental consequences. Apart from any atmospheric relevance of the results, the present work also provides mechanistic insight into the gas-phase fate of the cyclo-methyl-pentoxy radicals that are formed in the OH- and Cl-initiated photooxidation of propylene carbonate.
Experimental
The experiments were performed in a 1080 L quartz-glass photoreactor in synthetic air at a total pressure of 760 Torr (760 Torr = 101.325 kPa). Since the photoreactor is described in detail elsewhere,16 only a brief general description is given here. A pumping system consisting of a turbo-molecular pump backed by a double stage rotary fore pump allows the photoreactor to be evacuated to 10^-3 Torr. Magnetically coupled Teflon mixing fans are mounted inside the chamber to ensure homogeneous mixing of the reactants. Two types of lamps are available for photolysis of the radical/atom precursors: 32 super actinic fluorescent lamps (Philips TL 05/40 W: 320 nm < λ < 480 nm, λ_max = 360 nm) and 32 low-pressure mercury lamps (Philips TUV/40 W, λ_max = 254 nm). The lamps are distributed evenly around the photoreactor, are wired in parallel, and can be switched individually. A White-type multiple-reflection mirror system with a total optical path length of 484.7 ± 0.8 m is mounted inside the photoreactor for sensitive in situ long path absorption monitoring of reactants and products in the IR spectral range 4000-700 cm^-1. IR spectra were recorded with a spectral resolution of 1 cm^-1 using a Nicolet Nexus FT-IR spectrometer equipped with a KBr beam splitter and a liquid-nitrogen-cooled mercury-cadmium-telluride (MCT) detector.
Rate coefficients for the reactions of OH radicals and Cl atoms with propylene carbonate were determined using the relative kinetic technique. Hydroxyl radicals were produced by the photolysis of hydrogen peroxide using the mercury lamps:
H2O2 + hν → 2 OH   (1)
Chlorine atoms were generated by photolysis of molecular Cl2 with the fluorescent lamps:
Cl2 + hν → 2 Cl   (2)
In the presence of OH radicals or Cl atoms, propylene carbonate and the reference compounds decay through the following reactions:
OH (Cl) + propylene carbonate → products   (3)
OH (Cl) + reference compound → products   (4)
Provided that the reference compounds and the propylene carbonate are lost only by reactions (3) and (4), it can be shown that:
ln([PC]_0/[PC]_t) = (k_PC/k_ref) ln([reference]_0/[reference]_t)
The relative rate technique relies on the assumption that both propylene carbonate and the reference compounds are removed solely by reaction with either OH radicals or Cl atoms. In order to verify this assumption, various tests were performed. Mixtures of propylene carbonate and the reference compounds with either H2O2 or molecular chlorine were prepared and allowed to stand in the dark for 30 minutes, the duration of a typical experiment. Neither reaction of the radical precursors (H2O2/Cl2) with propylene carbonate nor with any of the reference compounds was observed. Wall loss of all the substances was also insignificant. To test for possible photolysis loss of propylene carbonate, it was irradiated alternately in air with the fluorescent and mercury lamps. Neither type of lamp caused photolytic loss of propylene carbonate.
The initial concentrations of propylene carbonate and the reference compounds methanol, n-butane and ethene were 2-4 ppmV (1 ppmV = 2.46 × 10^13 molecules per cm^3 at 298 K and 760 Torr total pressure). The initial concentrations of H2O2 and Cl2 were typically around 10 and 5 ppmV, respectively. The experiments were performed in 760 Torr of synthetic air at (298 ± 2) K. In a typical experiment, 60 interferograms were co-added per spectrum over a period of approximately 1 minute, and 15-20 such spectra were recorded per experiment. The first 5 spectra were always recorded without lamps to check that wall loss of the reactants remained negligible.
Reactants and products were quantified by comparison with calibrated reference spectra contained in the IR spectral databases of the laboratory in Wuppertal. Quantitative spectral subtraction was accomplished using the spectral subtraction option in the OMNIC Software Suite 8.0 from Thermo Scientific. The reactants were monitored at the following infrared absorption frequencies (in cm^-1): propylene carbonate 1866, n-butane 2965, methanol 1033 and ethene 950. The reaction products were monitored at the following infrared absorption frequencies (in cm^-1): formaldehyde 2766, acetic acid 3581 and 1798, formic acid 1776 and 1105, and carbon monoxide 2162.
Kinetic study
Fig. 1(A) shows exemplary plots of the kinetic data obtained for the reaction of OH radicals with propylene carbonate measured relative to n-butane and ethene. Fig. 1(B) shows an exemplary plot of the kinetic data obtained for the reaction of Cl atoms with propylene carbonate measured relative to methanol and ethene. Reasonable straight lines were obtained for both reactions using the two reference compounds. The rate coefficient ratios k_PC/k_ref obtained from linear regression analyses of the plots of the kinetic data are given in Table 1 for the reactions of OH and Cl with propylene carbonate. The ratios are the averages from at least three experiments with each reference compound and the errors are the 2σ standard deviation. The rate coefficients k_PC given in Table 1 were put on an absolute basis using the following values for the reactions of the reference compounds (in units of cm^3 per molecule per s): k(OH + n-butane) = 2.36 × 10^-12;17 k(OH + ethene) = 7.9 × 10^-12;18 k(Cl + CH3OH) = 5.5 × 10^-11;18 k(Cl + ethene) = 1.1 × 10^-10.18 Since the values of the rate coefficients obtained for the reaction of OH with propylene carbonate using n-butane and ethene as reference compounds are in relatively good agreement, we prefer to quote a final rate coefficient for the reaction of (2.52 ± 0.51) × 10^-12 cm^3 per molecule per s, which is the average of all the individual determinations. Similarly, for the reaction of Cl with propylene carbonate, because of the good agreement between the determinations with the two reference compounds, we prefer to quote a final rate coefficient for the reaction of (1.77 ± 0.43) × 10^-11 cm^3 per molecule per s, which is the average of all the determinations. There are no other kinetic studies in the literature with which the measured rate coefficients for the reactions of OH radicals and Cl atoms with PC can be compared. The OH structure-activity relationship (SAR) of Kwok and Atkinson19 predicts a value of 3.78 × 10^-12 cm^3 per molecule per s for the reaction of OH with PC, which is ~60% higher than the measured value. However, since the substituent factor F(-OC(=O)R) was used in the calculation to represent the carbonate group (-OC(=O)O-) and the ring-strain factor for a C5 ring was used, the agreement between experiment and estimate can be considered as reasonable. The rate coefficient for the reaction of OH with PC can be compared to that of OH with γ-valerolactone, for which a value of (2.81 ± 0.34) × 10^-12 cm^3 per molecule per s has been reported,20 i.e. approximately 16% higher than that of OH with PC. These compounds differ in that the O atom adjacent to the -CH2- entity in PC is a -CH2- entity in γ-valerolactone; therefore, one would expect a somewhat higher OH rate coefficient for γ-valerolactone compared to PC because of the presence of the extra -CH2- entity. Just how much higher the rate coefficient for the reaction of OH with γ-valerolactone should be is hard to gauge because of the uncertainty in the factors for substituent effects and the ring strains for PC and γ-valerolactone. The OH SAR of Kwok and Atkinson19 predicts that both compounds should be equally reactive toward OH. This prediction serves to show that the rate coefficient determined for the reaction of OH with PC in this study is of the correct order of magnitude.
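The relative-rate analysis described above amounts to fitting the slope of ln([PC]_0/[PC]_t) against ln([ref]_0/[ref]_t) and multiplying by the reference rate coefficient. The sketch below shows this calculation with made-up decay data; the numbers are purely illustrative and are not the measured concentrations from this work.

```python
import numpy as np

def relative_rate_slope(pc_conc, ref_conc, k_ref):
    """Fit ln([PC]0/[PC]t) vs ln([ref]0/[ref]t) through the origin; the slope is
    k_PC/k_ref, so k_PC = slope * k_ref. Inputs are time-ordered arrays starting at t=0."""
    x = np.log(ref_conc[0] / np.asarray(ref_conc))
    y = np.log(pc_conc[0] / np.asarray(pc_conc))
    slope = np.sum(x * y) / np.sum(x * x)      # least-squares slope, zero intercept
    return slope, slope * k_ref

# Illustrative (not measured) decays for PC and for ethene as the reference compound:
pc = [2.0, 1.8, 1.62, 1.46]       # ppmV
ethene = [2.0, 1.42, 1.02, 0.73]  # ppmV
ratio, k_pc = relative_rate_slope(pc, ethene, k_ref=7.9e-12)  # k(OH + ethene) in cm^3 s^-1
```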
A similar comparison can also be made for the reactions of Cl with PC and γ-valerolactone. A rate coefficient of (3.74 ± 0.22) × 10^-11 cm^3 per molecule per s has been measured for the reaction of Cl with γ-valerolactone in our laboratory.21 In this case the reaction of Cl with γ-valerolactone is just over a factor of two faster than the value measured for Cl with PC in this study. In an attempt to estimate the rate coefficient for the reaction of Cl with PC, we have used the approach adopted in the OH SAR of Kwok and Atkinson,19 i.e. we have used a substituent factor F(RC(O)O-) as a surrogate for the carbonate -O-C(O)O- entity. Using the parameters given in the Cl SAR of Aschmann and Atkinson22 and the substituent factor F(RC(O)O-) = 0.066 reported by Xing et al.23 for the reaction of Cl with esters, we estimate a value of 1.64 × 10^-11 cm^3 per molecule per s for the reaction of Cl with PC. This value is in surprisingly good agreement with the experimental value and supports the view that using the substituent factor F(RC(O)O-) as a surrogate for the carbonate -O-C(O)O- entity is justified.
Product study
The products formed in the Cl-atom initiated photooxidation of propylene carbonate have been investigated in one atmosphere of synthetic air and nitrogen. Fig. 2, trace (A) shows a spectrum of propylene carbonate and trace (B) shows the product spectrum obtained after reaction with Cl and subtraction of residual propylene carbonate. Traces (C)-(E) show reference spectra of HCl, CO and acetic anhydride. In the product spectrum the formation of HCl and CO, in the spectral regions around 3000 and 2000 cm^-1 respectively, is clearly visible. A product spectrum with the absorptions of HCl and CO removed is not shown, since they do not interfere with the main product absorptions below 2000 cm^-1 and product absorptions above 2000 cm^-1 are negligible.
The product spectrum is relatively simple, indicating the probable dominance of one major product. The spectrum contains a broad peak in the carbonyl region from 1900 to 1725 cm^-1 with two apparent maxima at approximately 1815 and 1844 cm^-1. The fingerprint region is dominated by 3 absorptions with maxima at 1232, 1314 and 1007 cm^-1. Also present in the spectrum, but not visible in trace (B), are weak absorptions due to formic acid (HC(O)OH). The concentration-time profiles of propylene carbonate and the identified products HCl, CO and HC(O)OH are shown in Fig. 3. The errors on the product concentrations were typically ~5%; for better clarity they have not been included in Fig. 3. Since the reaction of Cl with propylene carbonate proceeds by H-atom abstraction, the formation of HCl is expected. Although the formation of HC(O)OH appears to be primary in nature, we cannot think of a plausible mechanism for a primary formation route and think it may stem from the rapid decomposition of an unstable primary product such as acetyl formyl carbonate (see below). The small amount of CO observed in the system is definitely being formed in secondary reactions. The strong product absorptions in the carbonyl and fingerprint regions all correlate linearly with the absorption of propylene carbonate over most of the reaction period; however, when most of the propylene carbonate has been consumed, loss (probably to the wall) of the product(s) giving rise to the absorptions is evident. Fig. S1, panel A, in the ESI† compares the absorbance-time behaviour of the propylene carbonate carbonyl absorption at 1867 cm^-1 with that of one of the product absorbances at 1009 cm^-1. In Fig. S1,† panel B, the absorbance of the propylene carbonate carbonyl absorption at 1867 cm^-1 is plotted against the product absorption at 1009 cm^-1 and demonstrates the linear correlation over most of the reaction period. Very similar results were obtained with OH as the oxidant; however, since (i) both OH and Cl react by similar mechanisms with propylene carbonate, i.e. H-atom abstraction,5 (ii) the conversions of propylene carbonate were much lower and (iii) the OH product spectra were difficult to analyse due to strong absorptions from H2O2 and water, we are only presenting here the results with Cl as oxidant.
The OH SAR of Kwok and Atkinson19 predicts contributions of around 4, 31 and 65% for H-atom abstraction from the primary, secondary and tertiary hydrogens in propylene carbonate. It is not possible to estimate accurately the corresponding percentages for H-atom abstraction by Cl atoms, since reliable substituent factors are not available to account for the effect of the cyclic -OC(O)O- functionality. However, the good agreement between the product spectra obtained using both Cl and OH and the similarity in reaction mechanism suggest that H-atom abstraction from the secondary and tertiary hydrogens will also dominate for the reaction of Cl with propylene carbonate. This is borne out by the interpretation of the results discussed below.
The radicals formed from H-atom abstraction from the primary, secondary and tertiary hydrogens in propylene carbonate will add O2 to form the corresponding peroxy radicals. The main, but not sole, fate of the peroxy radicals will be self-reaction and reaction with other peroxy radicals to form the corresponding alkoxy radicals,24,25 which, in the cases of the radicals formed from secondary and tertiary H-atom abstraction, will be cyclo-methyl-pentoxy radicals. Reaction channels forming molecular products are also possible,24,25 but, as will be discussed below, these are thought to be relatively minor for the cyclic peroxy radicals involved in the degradation of propylene carbonate. The alkoxy radicals that can be formed in the reaction of Cl/OH with propylene carbonate are shown in Fig. 4.
Scheme 1 outlines the possible reaction channels for the alkoxy radical formed from H-atom abstraction at the methyl group in propylene carbonate. As depicted in Scheme 1, the radical could react with O2 to form an aldehydic carbonate and/or decompose to form a carbonate alkyl radical and HCHO. Further reactions of the alkyl radical could form a cyclic keto carbonate or glyoxal. If the carbonate-group-containing products were being formed to any appreciable extent, a strong carbonyl absorption from this group should occur at around 1870 cm^-1;26,27 for example, the carbonyl absorption of propylene carbonate occurs at 1866 cm^-1 in the gas phase. However, in the product spectrum the carbonyl absorption is very weak in this region. Formation of HCHO and glyoxal was also not observed, indicating that the decomposition pathways are negligible. Based on these observations we conclude that product formation from H-atom abstraction at the methyl group in propylene carbonate is very minor.
Scheme 2 outlines possible reaction routes for the alkoxy radical formed from H-atom abstraction from the methylene group in propylene carbonate. The radical can react with O2 to form a keto carbonate compound or cleave the C-C bond in the ring to form the linear alkyl radical shown in Scheme 1. It is well established that the major fate of the cyclopentoxy radical is ring-opening rather than reaction with O2 (ref. 28-30), and it is expected that this is also the case for the alkoxy radical formed at the methylene group in propylene carbonate. The absence of any strong carbonyl absorption at 1870 cm^-1, as discussed above, also supports that formation of the molecular product through reaction of the radical with O2 is negligible.
The alkyl radical could decompose with formation of acetaldehyde or add O2 and, through a sequence of peroxy-peroxy reactions etc., eventually form diformyl carbonate (CH3C(O)OC(O)OC(O)H) and acetic formyl carbonate (CH3C(O)OC(O)OC(O)CH3). Since formation of acetaldehyde is not observed, and the further reactions of the CH3 radicals would result in the formation of HCHO and CH3OH, both of which were also not observed, it would appear that the major pathway must be formation of acetic formyl carbonate. The carbonyl absorption frequencies of open-chain carbonates occur at lower frequencies than those of the cyclic analogues.26,27 A shift to lower carbonyl frequencies compared to propylene carbonate is observed in the product spectrum obtained on reacting Cl with propylene carbonate (Fig. 2, trace (B)). The structure of acetic formyl carbonate contains an anhydride entity, CH3-C(O)-O-C(O)-, and this should be reflected in the product spectrum. The product spectrum is compared with a reference spectrum of acetic anhydride in Fig. 2, traces (B) and (E), respectively. It can be seen that the positions of the carbonyl absorptions and also those in the fingerprint region match very well. Acetic anhydride has two absorption maxima in the carbonyl region, which are due to the symmetrical and asymmetrical stretching vibrations of the carbonyl groups. The carbonyl stretching region in the product spectrum from the reaction of Cl with propylene carbonate also shows the existence of different carbonyl stretching absorption maxima. The resolution of the carbonyl maxima, which is clearly evident in the infrared spectrum of acetic anhydride, is probably lost in the infrared spectrum of acetyl formyl carbonate because of the presence of the additional carbonyl functionality in acetic formyl carbonate.
Scheme 3 outlines possible reaction routes for the alkoxy radical formed from abstraction of the tertiary H-atom in propylene carbonate. The alkoxy radical could eject a methyl group and form a keto-cyclo-carbonate; however, the lack of a carbonate absorption in the product spectrum, and also the absence of HCHO and CH3OH, which would be formed from further reactions of the methyl radical, support that this reaction pathway is negligible. The major reaction pathway for this radical will be ring-opening, for which there are two possibilities, i.e. either C-O or C-C bond cleavage. The C-O bond cleavage route would result in the formation of methyl glyoxal; however, as this is not observed in the product spectrum, this pathway is considered to be negligible. The major pathway must then be C-C bond cleavage with formation, once again, of acetyl formyl carbonate.
In summary, the evidence from the product study supports that H-atom abstraction from both the secondary and tertiary hydrogens in propylene carbonate leads predominantly to the formation of acetyl formyl carbonate.
A product study has been performed on the reaction of Cl with propylene carbonate in one atmosphere of nitrogen. It should be borne in mind that, although the reaction was performed in N2, in large volume photoreactors such as that used in this work, ppm levels of O2 in the reaction system are unavoidable. The product spectrum obtained on irradiation of a propylene carbonate/Cl2/N2 reaction mixture is shown in Fig. S2, trace (A), in the ESI.† Although the spectrum looks very similar to that obtained in air, on closer inspection it is clear that another product is being formed that contains a carbonyl and a hydroxyl entity. The product has been positively identified as acetic acid (CH3C(O)OH), a reference spectrum of which is shown in Fig. S2,† trace (B). The residual product spectrum which results on subtracting acetic acid from the product spectrum shown in Fig. S2,† trace (A) is shown in trace (C). The resulting spectrum is virtually identical to the product spectrum obtained in air and is attributed to the formation of acetyl formyl carbonate.
Fig. S3† shows the concentration-time profiles for the decay of propylene carbonate and the formation of acetic acid in N2. Also shown are the profiles for HC(O)OH and CO, which were also formed. In N2 the yield of acetic acid was (42 ± 3)%. We have examined the formation of acetic acid as a function of the O2 partial pressure in the reaction system, and the results are shown in Fig. S4.† It can be seen that the yield falls from ~42% in N2 to zero by an O2 partial pressure of ~20 Torr. Unfortunately we have no means of estimating just how large the trace levels of O2 are for the experiments performed in N2, but they are obviously sufficiently large that a significant fraction of the reaction leads to formation of acetyl formyl carbonate via the pathways outlined in Schemes 2 and 3.
We propose that the process leading to the formation of acetic acid at low O2 partial pressures involves an alternative reaction pathway for the alkoxy radical formed through H-atom abstraction from the tertiary carbon in propylene carbonate. We suggest that the process involves a 1,3-hydrogen shift from the methylene group to the alkoxy oxygen, as shown in Scheme S1 in the ESI.† The newly formed radical can undergo peroxy-peroxy reactions and eventually decompose to form acetic acid, CO2 and HO2 radicals. At present this is the only potentially viable route to the formation of acetic acid which we can think of. It is not possible to tell from the experiments whether the H-shift is thermal or photochemical.
Conclusions
Rate coefficients have been determined for the reaction of OH radicals and Cl atoms with propylene carbonate at room temperature and atmospheric pressure. Using a hydroxyl radical concentration of [OH] = 2 × 10^6 radicals per cm^3 (ref. 31) in combination with the OH rate coefficient determined in this work gives a tropospheric lifetime for propylene carbonate, with respect to reaction with OH, of around 24 days. The product study indicates that the main product of the atmospheric photooxidation of propylene carbonate will be acetyl formyl carbonate, which does not appear to be particularly stable. Reaction of acetyl formyl carbonate with OH radicals will be very slow and, since neither the carbonate nor the anhydride entities in its structure absorb in the tropospheric solar actinic region (λ > 290 nm),32,33 photolysis loss will also be negligible. Deposition on surfaces with formation of acetic and formic acids will probably constitute the main atmospheric fate of acetyl formyl carbonate. Therefore, the atmospheric degradation of propylene carbonate is likely to add to the environmental acidification burden close to the point of its atmospheric in situ production.
Fig. 1 Plot of the kinetic data for the reaction of (A) OH with propylene carbonate measured relative to n-butane and ethene and (B) Cl with propylene carbonate measured relative to methanol and ethene.
Fig. 2 Products formed in the irradiation of a propylene carbonate/Cl2 mixture in synthetic air. Trace (A) is a spectrum of propylene carbonate, trace (B) is a product spectrum after irradiation and subtraction of excess propylene carbonate and traces (C)-(E) are reference spectra of HCl, CO and acetic anhydride.
Fig. 3 Concentration-time profile for the decay of propylene carbonate and the formation of products on irradiation of a propylene carbonate/Cl2/air mixture.
Fig. 4 Alkoxy radicals formed through H-atom abstraction by Cl atoms or OH radicals from the primary (a), secondary (b) and tertiary (c) hydrogens in propylene carbonate.
Scheme 2 Possible reaction channels for the alkoxy radical formed after H-atom abstraction from the methylene group in propylene carbonate. Major suspected products are shown in brackets.
Table 1: Rate coefficient ratios k_PC/k_ref and absolute rate coefficients k_PC for the reactions of OH and Cl with propylene carbonate (PC) obtained from analysis of the kinetic data for the reactions, together with a comparison of the k_PC values with those obtained with structure-activity relationship (SAR) methods. Entries legible from the table include k_PC/k_ref = 0.179 ± 0.018 with k_PC = (1.97 ± 0.20) × 10^-11, and an average of (1.72 ± 0.39) × 10^-11 cm^3 per molecule per s. (a) Calculated using the OH SAR of Kwok and Atkinson.19 (b) Estimated using the Cl SAR of Aschmann and Atkinson22 and a substituent factor of F(RC(O)O-) = 0.66 for esters reported by Xing et al.23 (see text). | 6,126.4 | 2016-10-12T00:00:00.000 | [
"Chemistry"
] |
Evaluation of Directional Dependence of Sensitivity for Room-Temperature Magnetic Flux Sensors With Wide Sensitivity Region
Recently, the room-temperature (RT) magnetic flux sensors have begun to be applied to biomagnetic measurements. The RT sensors have a large advantage that they can be placed closer to the magnetic sources and obtain larger signals, unlike the superconducting quantum interference device (SQUID) flux sensors. However, when an RT sensor is placed closer to the sources, the dimension of the sensitivity region cannot be negligible, and the directional dependence of sensitivity should be considered to estimate the theoretical magnetic signals for magnetic source analysis. We proposed a method to evaluate the directional dependence of the sensitivity of the RT sensors in response to adjacent magnetic sources using the array of coils arranged along a circular arc. Consequently, it was revealed that a specific magnetoresistance device-based flux sensor with a certain spatial extent had the directional dependence of sensitivity with a bell-shaped profile. We also proposed the multiple sensitivity points model for the bell-shaped profile and estimated the sensitivity distribution over the sensitivity region, which is expected to be effective in improving the accuracy of the magnetic source analysis using an RT sensor array.
I. INTRODUCTION
Biomagnetic measurement is a promising tool for the non-invasive investigation of electric activities in body tissues such as neurons or muscles. The electric currents generated by these tissues induce weak magnetic fields that can be detected using highly sensitive magnetic flux sensors arranged over the body surface. The original electric activities are estimated by conducting an appropriate magnetic source analysis of the obtained magnetic field distribution. The electric activities reflect the function of the tissues and provide significant clinical information. Two applications of biomagnetic measurement, the magnetoencephalogram and the magnetocardiogram, which are effective for the non-invasive functional imaging of the brain and heart, respectively, are already commercialized and introduced to hospitals [1].
Recently, the performance of room-temperature (RT) magnetic flux sensors such as magnetoresistance (MR)-device-based sensors, magnetoimpedance (MI) sensors, or fluxgates has been improved, and they have begun to be applied to detect biomagnetic fields that are extremely small and were previously detected only using superconducting quantum interference device (SQUID) flux sensors [2]-[6]. The magnetic field resolution of the RT sensors is still inferior to that of the SQUID sensors. However, the RT sensors have two well-known advantages for biomagnetic measurements other than not requiring cooling. One is the flexibility of the sensor arrangement, and the other is that the sensor can be placed closer to the
magnetic sources, because the cryostat that separates the source from the sensors is removed, unlike the SQUID sensors. When the flux sensor and the magnetic source are close to each other, a higher signal intensity can be expected because the intensity of the magnetic field increases as the distance from the source to the sensor decreases. However, if the flux sensors have a wide sensitivity region, for example when using a flux concentrator, the directional dependence of the sensitivity must be considered. If the magnetic field B at the sensor position r is assumed to be uniform and/or the dimension of the sensor is negligibly small, the sensor sensitivity region is represented by a single point, and the theoretical output signal s from the sensor is expressed using the scalar product s = g B(r)·n, or s = g|B(r)||n|cosθ, where g, n, and θ represent the sensor sensitivity, the orientation of the sensor, and the angle between the sensor orientation and the magnetic field direction, respectively. Most algorithms used to conduct magnetic source analysis are based on this assumption. However, when the magnetic flux sensors are placed adjacent to the sources of the magnetic field and the size of the sensor is not negligible, the non-uniformity of the magnetic field to which the sensor is exposed must be considered. Consequently, the sensor sensitivity area is not represented by a single point and the theoretical output signal cannot be calculated using the scalar product. For example, in the case of a magnetic flux sensor coupled with a pickup coil to improve its sensitivity, like a SQUID sensor as shown in Fig. 1(a), when the magnetic sources are positioned sufficiently far from the pickup coil, the diameter of the pickup coil (a) becomes negligible, and the theoretical magnetic signal versus θ, corresponding to the directional dependence of the sensitivity and called the theta profile, traces a cosine curve. However, if the distance between the source and the pickup coil (d) is short, the surface integral must be applied over the area of the pickup coil to obtain the theoretical magnetic signal. The theta profile then does not trace the cosine curve as θ changes, as shown in Fig. 1(b). For the precise estimation of magnetic sources adjacent to the sensor, it is essential to clarify the theta profile, i.e., the directional dependence of the sensitivity.
In the case of some types of commercially available MR-based sensors, an MR element is coupled with a flux concentrator and packaged as a sensor module whose output is linearized with respect to the input magnetic field. The flux concentrator is made of a high-permeability metal and gathers the external magnetic field to improve the sensitivity. More than a few reports show that the directional dependence of the sensitivity of a single sensor element traces a cosine-like curve in a uniform magnetic field, as indicated, for example, in [7]. However, if the dimensions of the flux concentrator coupled with the sensor element are significant compared to the distance between the sensor and the magnetic source, the sensor module captures magnetic fields over a certain volume in which the fields cannot be considered uniform. The directional dependence of the sensitivity in response to a magnetic source close to the sensor module is therefore not necessarily cosine-like. In order to obtain the theta profile in the same way as for the sensor coupled with a pickup coil described in the previous paragraph, the detailed structure of the flux concentrator is necessary; however, it is not usually disclosed by the sensor manufacturer. Also, even if the design of the flux concentrator is known, obtaining the theta profile is useful for calibration purposes.
In this study, we experimentally investigated the directional dependence of the sensitivity of commercially available RT sensors. To explain the obtained theta profiles responding to an adjacent magnetic source, a model called "multiple sensitivity points model" was proposed.
A. Evaluation of the Theta Profile
To evaluate the theta profile of the RT sensors, a coil array was fabricated, as shown in Fig. 2. A set of 23 solenoid coils was arranged along a circular arc with a radius of 20 mm. Each solenoid coil had an inner/outer diameter of 0.83/1.75 mm and a length of 30 mm, with 1000 turns. Each coil was oriented in the radial direction, and the angular interval between adjacent coils was 10°. A channel was excavated along the midline (θ = 0°) of the coil array. An RT sensor was inserted in the channel so that its orientation was aligned along θ = 0°.
The theta profiles of two commercially available RT sensors were examined. One was an MR-based sensor (pT-MR, TDK, Japan) [8]. The outer dimensions of the plastic package of the sensor module are 8 mm × 8 mm × 69 mm. The package contains an MR device coupled with a flux concentrator made of a high-permeability metal, together with electronics that linearize the sensor output. However, the structure and dimensions of the flux concentrator are not disclosed by the manufacturer. The other was an MI sensor (MI-CB-1DH-B, AICHI STEEL, Japan) [9]. A piece of amorphous magnetic wire core corresponds to the sensitivity region of this sensor; its dimensions are ϕ0.8 mm × 6 mm.
The 23 coils were excited one after another using a sinusoidal burst current with a frequency of 80 Hz for 300 ms. The amplitude of the current was 0.5 mA (peak to peak), and the intensity of the induced magnetic field at the sensor was estimated to be several nanoteslas. The output impedance of the current driver connected to each coil was sufficiently high, so the voltage induced in the non-excited coils adjacent to the excited one hardly affected the measurement. The sensor was exposed to the magnetic field from different orientations with −110° < θ < 110°. The position of the RT sensors was adjusted along the channel, that is, the y-axis, so that the signals from the coils at θ = ±90° became zero or less than the noise level. The output signal corresponding to the magnetic field from each coil was digitally recorded. The excitation sequence for the 23 coils was repeated at least 64 times, and the repeatedly obtained signals were averaged to improve the signal-to-noise ratio. The 80 Hz component was extracted from the obtained signals using the fast Fourier transform. Thus, we obtained a set of 23 magnetic signals from the solenoid coils, one for each angle, corresponding to the theta profile V(θi) (i = 0, ..., 22).
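As an illustration of the signal-processing chain just described (burst averaging followed by extraction of the 80 Hz component), a minimal sketch is given below; the sampling rate, array shapes and variable names are assumptions made for the example and are not taken from the paper.

```python
# Sketch of the processing chain described above: average repeated excitation
# bursts, then extract the 80 Hz component with an FFT.  Sampling rate and
# array shapes are illustrative assumptions, not values from the paper.
import numpy as np

FS = 8000.0            # assumed sampling rate [Hz]
F_EXC = 80.0           # excitation frequency [Hz]

def burst_amplitude(records):
    """records: array of shape (n_repeats, n_samples) for one coil."""
    avg = records.mean(axis=0)                    # averaging improves the SNR
    spectrum = np.fft.rfft(avg)
    freqs = np.fft.rfftfreq(avg.size, d=1.0 / FS)
    k = np.argmin(np.abs(freqs - F_EXC))          # bin closest to 80 Hz
    return 2.0 * np.abs(spectrum[k]) / avg.size   # amplitude of the 80 Hz component

# Synthetic check: 64 noisy repeats of a 0.3 s, 80 Hz burst
t = np.arange(int(0.3 * FS)) / FS
records = 1e-9 * np.sin(2 * np.pi * F_EXC * t) + 1e-10 * np.random.randn(64, t.size)
print(burst_amplitude(records))   # ~ 1e-9
```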
B. Multiple Sensitivity Points Model and Calibration
To understand the theta profile of the RT sensors, which did not trace the cosine curve, the multiple sensitivity points model was introduced. We assumed that the sensor was a line segment with a certain length and assigned multiple sensing points along the line at a regular interval. Each sensing point had an individual sensitivity gn toward the direction n and provided a cosine theta profile (see Fig. 3).
We estimated a set of gn from the obtained theta profile using a numerical search. The position and orientation of each solenoid coil were given in advance. When the position of the sensor was assumed, the "off-axis" theoretical magnetic field from the ith solenoid coil at the nth sensing point, Bth,n(θi), was estimated using a generalized complete elliptic integral [10], and the theoretical signal Vth(θi) was then calculated by summing, over the N sensing points, the contribution of each point sensitivity gn along the sensor orientation n. A set of sensitivities and the position of the sensor were obtained using the least squares method, optimizing these parameters to minimize the error E, defined as the sum of squared differences between Vth(θi) and the measured signal Vmeas(θi) obtained from the ith coil as described in Section II-A. Preliminarily, we applied the numerical search described above, setting the number of sensing points to 3, 5, 7, 9, and 11. When we assumed the number of sensing points to be 5, we obtained the most stable result from the numerical search. Therefore, the number of sensing points was set to 5 in this study.
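A rough sketch of this numerical search is given below. It is not the authors' implementation: the solenoid field is replaced by a crude point-dipole stand-in for the generalized complete elliptic integral calculation, and all names, geometry details and starting values are assumptions made only to illustrate the forward model (a sum over sensing points of g_n times the field component along the sensor orientation) and the least-squares fit.

```python
# Hypothetical sketch of the multiple-sensitivity-points fit described above.
import numpy as np
from scipy.optimize import least_squares

N_POINTS = 5          # number of sensing points (value chosen in the paper)
L = 0.030             # assumed length of the sensing segment [m]
R_ARC = 0.020         # radius of the coil arc [m]
coil_angles = np.deg2rad(np.arange(-110, 111, 10))  # 23 coils, 10 deg apart

def coil_field(sensing_pos, theta):
    """Placeholder dipole-like field of the coil at angle theta, evaluated at
    sensing_pos (3-vector).  Replace with the elliptic-integral field model."""
    coil_pos = R_ARC * np.array([np.sin(theta), np.cos(theta), 0.0])
    m = np.array([np.sin(theta), np.cos(theta), 0.0])   # coil axis (unit dipole)
    r = sensing_pos - coil_pos
    d = np.linalg.norm(r)
    return 3.0 * r * np.dot(m, r) / d**5 - m / d**3

def model_signal(params, thetas):
    """Theoretical theta profile: sum over sensing points of g_n * (B . n)."""
    g = params[:N_POINTS]               # individual sensitivities g_n
    y0 = params[N_POINTS]               # sensor position along the channel (y axis)
    n_hat = np.array([0.0, 1.0, 0.0])   # sensor orientation (along theta = 0)
    offsets = np.linspace(-L / 2, L / 2, N_POINTS)
    out = []
    for th in thetas:
        s = sum(g[k] * np.dot(coil_field(np.array([0.0, y0 + offsets[k], 0.0]), th), n_hat)
                for k in range(N_POINTS))
        out.append(s)
    return np.array(out)

def fit_sensitivities(v_meas, thetas):
    """Least-squares estimate of {g_n} and the sensor position minimizing E."""
    x0 = np.concatenate([np.ones(N_POINTS), [0.0]])
    res = least_squares(lambda p: model_signal(p, thetas) - v_meas, x0)
    return res.x

# Example with synthetic data:
true_params = np.array([0.5, 0.9, 1.2, 0.9, 0.5, 0.002])
v_synth = model_signal(true_params, coil_angles)
print(fit_sensitivities(v_synth, coil_angles)[:N_POINTS])
```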
Here, by applying variable separation, we introduced the base sensitivity g and the relative sensitivity un with gn = g·un (the un being normalized), and searched for the set of un and the position of the sensor that minimize the corresponding error function. We did not know the detailed structure of the flux concentrator, but we assumed the length of the line segment L to be 30 mm, as shown in Fig. 3, for the following reason. When we calibrated a sensor array composed of the same MR-based sensors using a set of calibration coils arranged relatively far from the sensor array [11], assuming the sensing area to be a single point, the sensing point of each sensor was localized approximately 15 mm away from one end of the sensor module package. Assuming that the flux concentrator had a symmetric structure, the sensing point should be localized at the center of the flux concentrator. Therefore, L was set to 30 mm.
We examined the theta profiles of six MR-based sensors and estimated their sensitivity based on the multiple sensitivity points model to check their variability. Fig. 4 shows the obtained theta profiles of the MR-based sensor and the MI sensor. As shown in Fig. 4(a), the theta profile of the MR-based sensor does not trace a cosine curve but a bell-shaped curve. This theta profile indicates that the flux concentrator collected the magnetic field oriented in the assumed sensor direction (θ = 0°) more effectively than the magnetic field at the side of the sensor. When estimating the theoretical magnetic field from a source adjacent to the MR-based sensor, the directional dependence of the sensitivity should therefore be considered. Meanwhile, the theta profile of the MI sensor shows a cosine-like shape, as shown in Fig. 4(b), because the sensitivity region of the MI sensor is sufficiently small compared to the distance between the magnetic source and the sensor. After obtaining the theta profiles of the six MR sensors, the sets of relative sensitivities un and the sensor position that minimize E in (4) were estimated assuming the multiple sensitivity points model under the constraints u2 = u−2 and u1 = u−1, which reflect the symmetrical structure of their flux concentrator. Fig. 5 shows the plot of the averaged relative sensitivity and its variability along the sensitivity region of the MR sensor. According to the estimated distribution, the sensitivity at the center of the sensitivity region is relatively higher than at both ends.
III. RESULTS AND DISCUSSION
Using the set of the relative sensitivity shown in Fig. 5 and (1), the theoretical theta profiles were estimated when the distance between the center of the sensitivity region and magnetic source was assumed to be 20, 40, 80, and 160 mm. Fig. 6 shows the estimated theoretical theta profiles for the magnetic sources at various distances. The plot for d = 20 mm shows the bell-shaped profile corresponding to Fig. 4(a). In contrast, when d = 40 mm or above, the theta profiles show cosine-like curves approximately. This indicates that if we apply the scalar product to the analysis of the magnetic sources placed at more than 40 mm from the sensor, for example, in the case of magnetocardiogram, the effect of the wide sensitivity region would not be significant. The magnetic fields over the sensitivity region could be regarded as approximately uniform.
However, when the source of the magnetic field is positioned at a distance of less than 20 mm, as is the case when targeting peripheral nerves or skeletal muscles, we will need to consider the bell-shaped theta profile of the MR-based sensor and application of the multiple sensitivity points model to magnetic source analysis because the depth of these magnetic sources is often less than 10 mm from the body surface.
In biomagnetic measurements, a set of small coils attached to anatomical points on the body surface of subjects, called marker coils, is often used to determine the position of the subject relative to the sensor array [12]. The marker coils induce specific magnetic fields that can be detected using a sensor array, and the position of the marker coils relative to the sensor array is estimated from the detected magnetic field distribution using a source localization algorithm. In this way, the position of the subject relative to the sensor array is obtained via the position of the marker coils. These coils are usually in close contact with the sensor array because they are attached to the body surface of the subjects. The distance between the marker coils and the sensors should be less than 20 mm. Therefore, when applying marker coil localization to biomagnetic measurements using an array of MR-based sensors, the bell-shaped theta profile should be considered. Otherwise, the marker coil localization, and the source reconstruction based on its result, will include an unavoidable error.
The method to evaluate the directional dependence of the sensitivity and the multiple sensitivity points model proposed in this study will also be effective for other magnetic flux sensors with wide sensitivity regions such as an orthogonal fluxgate [13].
An approach similar to the multiple sensitivity points model proposed in this study is also effective for SQUID magnetometers with a pickup coil. As mentioned in Section I, when the magnetic source is located adjacent to the sensor, we need to apply the surface integral over the area of the pickup coil to estimate the theoretical magnetic fields. However, the computational cost of the surface integral is quite high and often makes the magnetic source analysis time consuming. We can reduce the computational cost by applying the multiple sensitivity points model instead of the surface integral, with a number of sensitivity points arranged over the area of the pickup coil.
IV. CONCLUSION
In this article, we proposed a method to evaluate the directional dependence of the sensitivity of an RT magnetic flux sensor when the magnetic sources are located very close to the sensor. The specification of the directional dependence of commercially available sensors provided by their manufacturers is often evaluated in a uniform magnetic field and is not suited for biomagnetic applications, because the distance between the sensor and the source is quite short and the magnetic field over the sensor sensitivity region cannot be regarded as uniform. The proposed method clearly revealed that the directional dependence of the sensitivity of the tested MR-based sensors had a bell-shaped profile, different from that of the MI sensor, even though the structure of the flux concentrator in the sensor module package was unknown. The multiple sensitivity points model for the RT sensors and the estimation of a set of relative sensitivities are effective in analyzing the directional dependence of the sensitivity, even though the assumption about the structure of the flux concentrator was not rigorous. This model is expected to improve the accuracy of the analysis of magnetic sources adjacent to the sensors, for example, marker coil localization for biomagnetic measurements with an RT sensor array. The accuracy improvement of the magnetic source analysis based on the multiple sensitivity points model and the validity of the model itself will be evaluated in future work. | 4,268.6 | 2021-02-01T00:00:00.000 | [
"Physics"
] |
A model of non-Maxwellian electron distribution function for the analysis of ECE data in JET discharges
Recent experiments performed in JET at high levels of plasma heating, in preparation of and during the DT campaign, have shown significant discrepancies between electron temperature measurements by Thomson Scattering (TS) and Electron Cyclotron Emission (ECE). In order to perform a systematic analysis of this phenomenon, a simple model of bipolar distortion of the electron distribution function has been developed, allowing analytic calculation of the EC emission and absorption coefficients. Extensive comparisons of the modelled ECE spectra (at both the 2nd and the 3rd harmonic extraordinary mode) with experimental measurements display good agreement when bulk electron distribution distortions around 1-2 times the electron thermal velocity are used, and prove useful for a first level of analysis of this effect.
Introduction
Discrepancies between electron temperature measurements by Thomson Scattering (TS) and Electron Cyclotron Emission (ECE) have often been observed in high-temperature tokamak plasmas, in particular on TFTR [1], JET [2] and FTU [3]. Such observations, made on different machines, by different types of instruments, using different calibration methods, are too ubiquitous to be ascribed to instrumental effects; they rather call for explanations based on physics phenomena. The hypothesis that the discrepancy could be associated with non-Maxwellian bulk electron distributions has been put forward in the past [4,5] and appears as a plausible explanation in the case of a plasma heated by EC waves, as in FTU [3]. For TFTR and JET, electron heating rather takes place because of the interaction of the electron distribution either with a fast ion tail driven by Neutral Beam Injection (NBI) and/or Ion Cyclotron Resonance Heating (ICRH), or with energetic alpha particles produced by fusion reactions in DT (Deuterium-Tritium) plasmas. Two mechanisms are known to produce small bipolar distortions of the electron distribution in the presence of energetic ions: collisional relaxation [6] or Landau damping of kinetic Alfvén waves, as observed in the magnetosheath [7,8].
Recent experiments performed at JET at high levels of plasma heating, in preparation of and during the DT campaign, have again shown TS-ECE discrepancies on an extensive database [9]. ECE is observed to be higher or lower than TS, depending on the plasma scenario. Moreover, ECE measured by a Martin-Puplett interferometer over a broad frequency range displays differences between the 2nd and 3rd harmonic extraordinary (X) modes, which, at high temperatures (> 4 keV) and high densities, are expected to yield the same radiation temperature. In order to perform a systematic analysis of this effect, a simple model of bipolar distortion of the electron distribution function has been developed, allowing analytic calculation of the EC emission and absorption coefficients. Bulk electron distribution distortions around 1-2 times the electron thermal velocity are considered for a first level of analysis of this effect. In this paper, the model is described in Sec. 2. Comparisons of the modelled ECE spectra (at both the 2nd and the 3rd harmonic) with experimental measurements are presented in Sec. 3. Conclusions and the possible impact on ITER are presented in Sec. 4.
Electron distribution function model
A toy model of an isotropic perturbation f1(p) of the electron distribution function f(p), where p is the modulus of the electron momentum, is developed as follows. We take the relativistic Maxwellian as the unperturbed distribution, where m is the electron rest mass, c the speed of light, Te the electron temperature and K2 the modified Bessel function of the second kind. The perturbed distribution function is defined as the sum of the Maxwellian and a suitable bipolar isotropic perturbation f1, given as a function of three parameters: the amplitude f0, the location p0 and a width. Various types of anisotropic forms can also be defined by multiplying f1u by functions of the pitch angle. Two examples of isotropic and anisotropic perturbed distribution functions of this kind are shown in Fig. 1. All these functions allow analytical calculation of the electron cyclotron absorption and emission coefficients for perpendicular propagation. The general expressions of the absorption and emission coefficients for an arbitrary electron distribution function, as momentum-space integrals, are well known and available in a number of papers. Starting, for instance, from Eqs. 10-13 of Ref. [10], for the extraordinary wave and n > 1, these expressions are written in terms of the harmonic number n, the wave frequency ω (times 2π), the plasma and electron cyclotron frequencies ωp and ωc, the cold refractive index Nx and the cold plasma dielectric tensor; B(x,y) is the Beta function and H the Heaviside function.
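For reference, the unperturbed distribution referred to above is the standard relativistic Maxwellian (Maxwell-Jüttner) form, which can be written as fM(p) = ne exp(−γ/Θ) / [4π m³c³ Θ K2(1/Θ)], with Θ = Te/(mc²) and γ = (1 + p²/m²c²)^(1/2); the normalization convention may differ slightly from the one adopted in the paper's own equation.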
The function Qn depends on the choice of the perturbed distribution function. In the following, only the isotropic perturbation f1u is considered.
Using these expressions for the emission and absorption coefficients, the radiation temperature measured along a line of sight in the equatorial plane (as is approximately the case for JET) is obtained by integrating the radiation function over the major radius coordinate R across the plasma, where a and R0 are the minor and major radii. An example of the absorption and emission coefficients, as well as of the radiation function Frad, i.e., the emission coefficient multiplied by the exponential reabsorption factor, which is the integrand appearing in the radiation temperature expression, at various wave frequencies and for typical JET parameters, is presented in Fig. 2. This figure shows that the absorption and emission coefficients are broad functions of R; nevertheless, the emission is well localised in space owing to the exponential reabsorption term that multiplies the emission coefficient in the radiation function expression.
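For reference, the line-of-sight expression alluded to above follows from the standard solution of the radiative transfer equation; writing the emission and absorption coefficients as η and α (a notation introduced here for convenience), the radiation temperature is proportional to the integral over R of Frad(R) = η(R) exp(−∫ α dR'), where the integral in the exponent runs from the emission point towards the observer. The proportionality constant depends on the intensity-to-temperature convention and is therefore not reproduced here.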
Owing to the resonance condition ω = nωc(R)/γ, for a given wave frequency the localisation in R corresponds to a localisation in electron kinetic energy E = mc²(γ − 1) = mc²(Rc/R − 1), where Rc is the cold resonance location, defined by nωc(Rc) = ω. The location of the maximum of the radiation function Frad in normalised momentum p/pth (where pth = mvth = (mTe)^(1/2)) depends on the harmonic number and on the electron temperature, as shown in Fig. 3, where the maximum is plotted together with the widths at half height. This figure illustrates a fundamental property of the ECE diagnostic: different harmonics probe different parts of the electron distribution function, with both the momentum and the width decreasing with the electron temperature. This means that ECE at the 2nd and 3rd harmonics can be used to constrain the electron distribution function in the region pth < p < 2pth, in a more and more precise way for higher and higher temperatures.
Two examples of the previously defined isotropic model perturbation are shown in Fig. 4, together with the ranges seen by the 2nd and 3rd harmonics, for Te = 7 keV.
It appears that the two harmonics can be differently affected by perturbations localised at different momenta, i.e., with a different p0 parameter. In general, the smaller the absolute value of the distribution function derivative in a given harmonic range, the larger the corresponding radiation temperature. Moreover, since the locations and widths of the momentum regions probed by the two harmonics vary with temperature, the same perturbation will affect the Te profile measured by ECE more or less significantly, depending on the temperature. An example is shown in Fig. 5, where the 2nd harmonic ECE profiles are computed for typical JET parameters and two different values of the central temperature Te0: 3 and 10 keV. For the same perturbation parameters (f0 = 0.03, p0/pth = 1, and a perturbation width of 0.25 pth), the radiation temperature is significantly affected at high temperature, but practically unaffected at low temperature, in agreement with experimental observations [1,2].
The strong sensitivity of ECE to very small perturbations of the electron distribution (a few percent in the example of Fig. 5) is due to the presence of the exponential term in Frad and to the fact that the absorption coefficient is an integral in momentum space of perpendicular derivative of the distribution function [10]. The same perturbation would have very little effect on the Thomson scattering measurement, which is simply proportional to the distribution function (see, e.g., Eqs. 5.8, 5.9 of [11]). This is illustrated in Fig. 6, where Frad is plotted as a function of the normalised electron kinetic energy, together with the equivalent quantity for Thomson scattering (scattered radiation spectrum). Clearly, the impact is completely different. This gives the main key to understand why the two measurements of the electron temperature can give different results if the distribution function is not Maxwellian.
Comparison with ECE and TS data
The distribution function model can now be used as an analysis tool of the extensive JET database described in Ref. [9]. Examples of various subsets of the database are shown, corresponding to specific experimental scenarios.
In Figs. 7-9, ECE central temperatures measured at both the 2nd and 3rd harmonics by means of the JET Martin-Puplett interferometer [12] are shown vs the corresponding temperature measured by Thomson scattering via the so-called LIDAR system [13] (left panels). These temperatures are averaged over a region covering 10% of the minor radius around the centre. In the right panels, the central temperatures measured at the 3rd harmonic are directly plotted vs the corresponding 2nd harmonic ones. In Fig. 7 only, error bars are also shown. On both panels, the corresponding quantities computed using the above described model are also shown, choosing (by trial and error) a set of distribution function parameters that optimise the agreement with data, identical for both harmonics. The perturbation set of parameters used is displayed at the top of the figures. In all cases, a wall reflection coefficient of 0.55 has been assumed, according to previous evaluations for the JET machine [12]; however, the results are very weakly dependent on this parameter. The three figures refer to three different subsets of data, corresponding to different experimental scenarios [9]:
- Fig. 7: DD baseline discharges with low gas, low Neon injection and pellets, characterised by small ELMs and a partially detached divertor [14].
- Fig. 8: DT discharges in the baseline scenario.
- Fig. 9: DT discharges in the hybrid scenario.
In all these figures, the 3rd harmonic points are well below the corresponding TS measurements at low temperature (Te0 < 4 keV, approximately), because of the low optical thickness. In Figs. 7 and 8, it appears that the temperature measured by the 3rd harmonic, in the optically thick range, is close to the LIDAR temperature, or slightly lower. Following the discussion of Figs. 3 and 4, this means that the distribution function is nearly unperturbed in the velocity range covered by the 3rd harmonic (~ 2vth). On the other hand, the 2nd harmonic temperature is significantly higher than the TS one at high Te0, meaning that some flattening of the distribution function takes place around vth. Indeed, the optimum fit is obtained for p0/pth = 1.1-1.2. This situation is very similar to that of past JET experiments [2]. In Fig. 9 (DT hybrid scenario), the 3rd harmonic tends to yield lower temperatures and the optimum fit is obtained for p0/pth = 1.4. There are also cases in which the 3rd harmonic is strongly affected by the perturbation, whereas the 2nd one is weakly affected. In general, phases with ICRH only have these characteristics. As an example, it is interesting to show how the model behaves for the simulation of an individual pulse (96850), characterised by low density and, in the high temperature phase, ICRH only. Figure 10 shows that in this case the 2nd harmonic is nearly unperturbed, whereas the 3rd harmonic has non-Maxwellian features that are well reproduced by a broad perturbation located at p0/pth = 2. The fact that ICRH tends to produce perturbations at somewhat higher momenta is a general trend that can be illustrated by considering all the DT discharges (belonging to both baseline and hybrid scenarios) and selecting data points with NBI only and with ICRH only. There is a significant number of them. Results for these two data sets are shown in Figs. 11 and 12, respectively. ICRH only phases of the DT discharges are clearly perturbed at higher momenta than NBI only ones, i.e., p0/pth = 1.7 with respect to 1.35.
The intensity of the perturbation is also higher (f0 = 0.05 instead of 0.03). This is possibly related to the different types of ion tails that the two heating systems produce and/or to their different mechanisms and intensity of direct interaction with the electrons.
These examples have shown that the most sensitive parameter of the model is the perturbation location in momentum space, p0. This suggests a possible use of the model as an analysis tool, in order to detect trends in the database with respect to various distinctive quantities. This is illustrated in Fig. 13. Using the full database, the p0 value is determined by fitting the measured ECE central temperature (at both harmonics) for selected values of four quantities: central density, ratio of heating power and central density, Alfvén velocity normalised to electron thermal velocity and fast ion beta (this quantity is not a measured one, but is obtained from results of NBI and ICRH modelling codes that are available in the JET database). Slight adjustments of the other model parameters (perturbation width and intensity) are made in some cases. Figure 13 shows that a regular behaviour of p0 is obtained in all these cases; the trends observed can be used to guide the search for an interpretation of the experimentally observed non-Maxwellian features.
Conclusions
The discrepancy observed in JET plasmas [2,4,9] between ECE and TS measurements has been analysed under the hypothesis that it could be associated with a non-Maxwellian bulk electron distribution function. This hypothesis had already been formulated in the past [4,5]; however, it remained to be quantitatively assessed on an extensive database. In order to perform such a quantitative comparison, a toy model of the non-Maxwellian electron distribution function has been developed, allowing analytical computation of the X-mode ECE spectra and extensive comparison with the JET central temperature database [9], with more than 12000 data points.
Using this model, various points have been clarified:
- an electron distribution function perturbation of a few percent, localised around 1-2 thermal velocities, is sufficient to explain the level of discrepancy observed;
- such a perturbation would be practically invisible to TS diagnostics, at least those of JET and other existing machines; however, specific TS systems can be conceived to this end [15];
- even if such a perturbation is present at any temperature, it becomes visible and more and more significant at higher and higher temperatures, because the radiation function becomes narrower and shifts to lower velocities;
- at high temperature (Te0 > 4-5 keV), X-mode measurements at both the 2nd and 3rd harmonics (possibly also higher harmonics and/or 1st harmonic O-mode) are essential in order to properly constrain the distribution function in different velocity ranges.
An important open question is whether a perturbation of this kind could significantly affect measurements of the ECE profile in ITER. Of course, since the cause of the perturbation is at present still unknown, there is no reason to assume that the same mechanisms acting in JET (and TFTR) could also be present and significant in an ITER plasma. The model only allows quantifying whether a given perturbation would affect the ECE temperature profile in a sizeable way. This is what is shown by the example of Fig. 14. Two non-Maxwellian distributions are considered, with perturbations of the same intensity as those observed in JET plasmas, similar width and two different momentum localisations: p0/pth = 0.75 and p0/pth = 1. The impact on the 2nd harmonic ECE profile is shown in the left panel (red and blue stars, respectively). It appears that at the high temperatures expected in ITER (25 keV in this example) the effect can be stronger and act in two opposite ways, depending on the momentum localisation of the perturbation. Because of the large temperature variations on an ITER profile, both effects can be observed on the same ECE profile: note that in the 10-15 keV range, the blue ECE profile is higher than the Maxwellian, whereas it becomes lower beyond 15 keV. Therefore, it would not be just a matter of the central temperature value, but a distortion of the full profile.
The model described here should be regarded as an analysis tool only. It is not linked to a theoretical explanation about the origin of the electron distribution function perturbation. At present, we can only formulate hypotheses. Because of the nature of the JET high performance discharges analysed [9], which are analogous to those of TFTR [1], likely explanations involve the fast ion population, which is ubiquitous in these discharges (driven by NBI, ICRH and the fusion reactions themselves for DT pulses). In addition, ICRH also directly interacts with the electrons. Fast ions might interact with the electrons via two mechanisms: collisional relaxation of the fast ion tail on the electron distribution, or interaction of fast-ion-driven MHD modes with the electrons (e.g., via Landau damping).
About the first mechanism, it is generally assumed that the electron distribution function basically remains Maxwellian, because interaction with the fast ion tail takes place at low velocities (around thermal or subthermal), where collisions are very strong. However, we have demonstrated that a tiny perturbation (~ a few percent) is sufficient to explain the observed effects. In this velocity range, asymptotic expansions of the collision operator cannot be used and, looking for small effects, even the usual linearization of the collision operator could be questionable. Therefore, a solution of the kinetic equation with the full integro-differential collision operator is likely to be needed, which is not generally available in the literature. In Ref. [6], this type of problem is solved for inertial fusion applications and the resulting perturbation has a clear bipolar structure, in the thermal velocity range. Work is now ongoing in order to solve the problem numerically in the JET parameter range and for various input fast ion distributions (NBI, ICRH driven tails or alpha particle distributions can be computed by means of Monte-Carlo or Fokker-Planck codes).
About the second mechanism, direct observation by probes in the magnetosheath have revealed the presence of bipolar distortions of the electron distribution function, around the electron thermal velocity [7]. Gyrokinetic simulations have provided a convincing interpretation of such observations in terms of Landau damping of kinetic Alfvén waves [8], which are also known to be present in tokamak plasmas. Nevertheless, magnetosheath plasmas have very different characteristics with respect to tokamak plasmas (for instance, they are nearly collisionless), therefore, these results cannot be easily generalised to the case of interest here. However, they have inspired analogous gyrokinetic calculations that are now in progress, for parameters close to those of the JET experiments. Note that, in principle, other MHD modes (not necessarily directly excited by fast ions) could also interact with the electrons and cause distribution function distortions.
In conclusion, as for the diagnosis of the electron temperature profile in high temperature plasmas, TS appears rather insensitive to small perturbations of the electron distribution function, therefore, in this respect, it is expected to provide a reliable measurement of the electron temperature. Conversely, ECE can definitely be affected by tiny perturbations of the electron distribution (a few percent) localised in the range 1-2 vth, in a different way at different harmonics and in different temperature ranges. Therefore, the temperature measurement can be considered as less reliable than that of TS. However, this high sensitivity of ECE can be exploited to constrain the electron distribution function in order to extract information on its detailed shape and explore fundamental physics effects, such as, e.g., those related to fast ion physics. | 4,585 | 2023-01-01T00:00:00.000 | [
"Physics"
] |
Improving the Precision and Speed of Euler Angles Computation from Low-Cost Rotation Sensor Data
This article compares three different algorithms used to compute Euler angles from data obtained by an angular rate sensor (e.g., a MEMS gyroscope): the algorithms based on a rotation matrix, on transforming angular velocity to time derivatives of the Euler angles, and on a unit quaternion expressing rotation. The algorithms are compared by their computational efficiency and the accuracy of the Euler angle estimation. If the attitude of the object is computed only from data obtained by the gyroscope, the quaternion-based algorithm seems to be most suitable (having similar accuracy to the matrix-based algorithm, but taking approx. 30% fewer clock cycles on the 8-bit microcomputer). Integration of the Euler angles' time derivatives has a singularity and is therefore not accurate over the full range of the object's attitude. Since the error in every real gyroscope system tends to increase with time due to its offset and thermal drift, we also propose some measures based on compensation by additional sensors (a magnetic compass and accelerometer). Vector data from the mentioned secondary sensors have to be transformed into the inertial frame of reference. While transformation of a vector by the matrix is slightly faster than doing the same by quaternion, the compensated sensor system utilizing the matrix-based algorithm can be approximately 10% faster than the system utilizing quaternions (depending on implementation and hardware).
Introduction
Micro-Electro-Mechanical systems (MEMS) represent the integration of mechanical elements, sensors, actuators, and electronics on a common silicon substrate through the utilization of microfabrication technology [1]. The number of MEMS used in various applications is permanently growing due to the small dimensions, light weight, lower power consumption, higher reliability, and relatively low cost which makes them commercially available. Typical MEMS-based low-cost products are accelerometers, gyroscopes, pressure sensors, microphones, digital mirror displays, micro pumps, etc. For the purpose of low-cost navigation solutions MEMS-based inertial sensors (accelerometers and gyroscopes) have been developed since orientation of an object in the three-dimensional space is key information needed for navigation, guidance and control tasks. MEMS inertial sensors may be found in variety of applications from traditional ones (navigation and positioning of various transport means and/or robots) to sensing of human body walking and movement [2][3][4][5], daily life surveillance [6] or new commercial applications available through smart phones [7]. Most studies on MEMS gyroscopes are focused on their performance, and common methods to improve the performance [8]. Unlike non-micro devices MEMS sensors experience more errors that build up over time, corrupting the precision of the measurements and eventually rendering the navigation solution useless [9,10]. Thus the first and easiest-to-measure performance criterion of a gyro is its static readout as a function of time. Accuracy is usually limited by electrical noise, systematic errors and/or mechanical thermal noise [11,12]. The static compensation of sensor inaccuracies can be enabled by proper calibration methods designed for MEMS gyroscopes and accelerometers [13]. The principle of recently developed micro-machine gyroscopes, their structures and classification can be found in [14].
Generally, gyroscopes measure rotational rate, which can be integrated to yield changes in orientation. An effective method most used to parametrize the orientation space is based on usage of so called Euler angles. Euler angles are used as a framework for formulating and solving the equations for conservation of angular momentum. This article has been written with motivation to analyze and show how precision and speed of computations of Euler angles could be improved when processing data from the MEMS gyroscope. It is organized as follows: Section 1 (Introduction) describes theoretically several methods of notation to express rotation of a body (particularly the rotation matrix, Euler angles, rotation around arbitrary axis, and quaternion). Section 2 (Experimental Section) is focused on comparison of errors occurring when algorithms utilizing described notations process data from the gyroscope. If applicable, more versions of the same algorithm are considered (focused either on accuracy or fastness of computation). At the end of the section there is discussion on how errors presented in real gyroscopes could be compensated. Section 3 (Results and Discussion) summarizes analyzed properties and gives final comparison and overview of obtained results. Finally, Section 4 gives the conclusions. The article is an extended version of the conference paper [15], elaborated and supported by the VEGA1/0453/12 grant and used with kind permission of Springer Science + Business Media. Article extensions resulted from the work under another project as stated in the Acknowledgments section.
The purpose of the inertial navigation in the 3D space is to determine six independent variables: translation of an object in three axes and its rotation in three axes, relative to the inertial frame of the reference body. In this article we describe possible ways how to express rotation (attitude) of the object and calculate it from angular velocity measured by the gyroscope.
We consider the Cartesian (orthonormal) right-hand coordinate system oriented by convention NED, i.e., North-East-Down. Moving object axes' orientations are x → forward, y → right and z → down ( Figure 1). Two reference frames are used: • Frame of reference joined with Earth (considered to be approximately inertial), marked S. All variables measured with respect to Earth will be marked without a dash.
• Frame of reference joined with rotating object, marked S'. All variables measured onboard the moving object will be marked with a dash. First we will analyze four used methods of notation that allow us to express rotation of a body. Differences among those individual approaches can be seen in data redundancy and consumption of computer time during processing of raw data from the gyroscope and during conversion from one notation to another (which has direct impact on algorithm efficiency).
Euler Angles
Euler angles are expressing rotation of the object as a sequence of three rotations around objects' local coordinate axes. This way of rotation expression is most interpretative and has zero data redundancy because only three real numbers are needed. Different sequence of axis rotation produces different resultant rotation; therefore Euler angles are defined according to chosen sequence (convention). In aviation the most used convention is z-y-x convention (sometimes called Yaw-Pitch-Roll convention or 3-2-1, see Figure 2): 1. Rotate the object around its z-axis by angle Yaw (marked γ); 2. Rotate the object around its new y1-axis by angle Pitch (marked β); 3. Rotate the object around its new x2-axis by angle Roll (marked α).
The rotation order of the z-y-x convention can be expressed by a composite rotation operator; the inverse rotation is given by applying the rotations in reversed order with inverted angles. The main disadvantage of representing the object's rotation by Euler angles is the lack of a simple algorithm for vector transformation. This can be realized by converting the Euler angles to the rotation matrix by Equation (10) and subsequently applying Equation (4). Trivial chaining (adding) of two rotations represented by Euler angles is not possible.
The Rotational Matrix
The rotation matrix defines the change of coordinates of the object in the coordinate system S during rotational movement. It is a typical representation of the object's attitude (very often used, e.g., in computer graphics). It is clear that this form has the greatest data redundancy, as nine real numbers have to be stored. Transformation of coordinates from the system S to the system S' is done by multiplying the position column vector r by the rotation matrix. The result of rotation R1 followed by R2 is given by matrix multiplication, and the inverse rotation is given by the transposed matrix. Because the original vector r must have the same length as the resultant vector r', the rotation matrix has to be orthogonal with its determinant equal to 1. The matrix is orthogonal when all its row or column vectors are perpendicular to each other. An algorithm to normalize the matrix so that it is purely rotational [16] starts by calculating the deviations eik from orthogonality of the matrix columns. Conversion from 3-2-1 Euler angles to the rotation matrix is given by Equation (10) [17], and conversion from the rotation matrix back to 3-2-1 Euler angles can be done with an algorithm based on the atan2 function. The function atan2(y, x) is a four-quadrant inverse tangent, i.e., the arctangent function extended to the output angle interval from −π to π. Inputs x and y are the coordinates of any point in the 2D plane; the output is the oriented angle between the x-axis and the vector [x, y]. The function is supported as standard by many programming languages (e.g., the C language) and takes two arguments. The purpose of using two arguments instead of one is to gather information on the signs of the inputs in order to return the appropriate quadrant of the computed angle, which is not possible for the single-argument arctangent function.
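A minimal sketch of these conversions for the 3-2-1 convention is given below, assuming the matrix maps S coordinates to S' coordinates as in r' = R r; the element signs should be checked against Equation (10), and the function names are illustrative only.

```python
# Sketch of the 3-2-1 (yaw-pitch-roll) Euler <-> rotation matrix conversions
# described above; assumes r' = R r maps inertial-frame to body-frame coordinates.
import math

def euler_to_matrix(alpha, beta, gamma):
    """Roll alpha, pitch beta, yaw gamma (radians) -> 3x3 rotation matrix."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [
        [cb * cg,                cb * sg,               -sb],
        [sa * sb * cg - ca * sg, sa * sb * sg + ca * cg, sa * cb],
        [ca * sb * cg + sa * sg, ca * sb * sg - sa * cg, ca * cb],
    ]

def matrix_to_euler(R):
    """Inverse conversion using the four-quadrant arctangent (atan2)."""
    beta = -math.asin(R[0][2])                # pitch
    alpha = math.atan2(R[1][2], R[2][2])      # roll
    gamma = math.atan2(R[0][1], R[0][0])      # yaw
    return alpha, beta, gamma

def transform(R, r):
    """r' = R r : coordinates of vector r expressed in the rotated frame S'."""
    return [sum(R[i][j] * r[j] for j in range(3)) for i in range(3)]

# Round-trip check:
R = euler_to_matrix(0.1, 0.5, -0.3)
print(matrix_to_euler(R))   # ~ (0.1, 0.5, -0.3)
```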
Rotation around Arbitrary Axis
According to the Euler theorem it is possible to replace every rotation representation by simple rotation around angle θ around the arbitrary axis given by the unit vector n = n' = [nx, ny, nz] (length of the axis vector is |n| = 1). Note that the axis vector has the same coordinates in the inertial system S and the body-fixed system S'.
Transformation of the vector r from the system S to S' is expressed by the Rodrigues rotation formula (13). Inverse rotation is expressed by the identical axis n and the opposite angle −θ. Chaining of two rotations around non-parallel axes cannot be implemented trivially; transformation to another type of representation is needed.
Quaternion
Quaternion (invented by Sir William Rowan Hamilton in 1843) is a modification of the rotation-around-arbitrary-axis representation utilizing the algebra of complex numbers expanded to three imaginary dimensions with the complex units i, j, k. Based on the expanded Euler formula, the rotation quaternion is built from the rotation angle and the axis [nx, ny, nz]. While the axis n is a unit 3D vector, the quaternion must satisfy the unit-norm constraint to be purely rotational. Normalization of a quaternion is done in a similar way as normalization of any vector; an approximate formula (as for matrix normalization) can be used only if normalization is performed after each update of the quaternion. The advantage of quaternions is quick computation of the chaining of rotation q1 followed by q2 using the Hamilton product (18). There are two basic variants of vector transformation utilizing a quaternion: the first one treats the transformed vector as a quaternion with zero scalar part; concerning speed, it is better to use the second formula, where q is the vector part of the quaternion. Conversion from 3-2-1 Euler angles to a unit quaternion is given by a closed-form trigonometric formula.
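The quaternion operations described above can be sketched as follows; whether q or its conjugate corresponds to the S to S' transformation depends on the sign convention adopted, so the sketch only demonstrates the generic Hamilton product and a standard vector rotation.

```python
# Sketch of basic unit-quaternion operations: Hamilton product, normalization,
# and rotating a vector as v' = q (0, v) q*.  The correspondence between q (or
# its conjugate) and the S -> S' frame transformation is a convention choice.
import math

def hamilton(q1, q2):
    """Hamilton product q1 * q2, quaternions as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q."""
    p = (0.0, *v)
    w, x, y, z = hamilton(hamilton(q, p), conjugate(q))
    return (x, y, z)

def from_axis_angle(axis, theta):
    """Unit quaternion for rotation by theta about the unit axis n."""
    s = math.sin(theta / 2.0)
    return (math.cos(theta / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

# 90 degrees about z maps the x-axis onto the y-axis:
q = from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
print(rotate(q, (1.0, 0.0, 0.0)))   # ~ (0, 1, 0)
```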
Experimental Section
In this section we compare errors of Euler angle estimation caused by algorithms processing gyroscopic data and being based on different rotation notations. These errors increase during run-time and depend on sampling frequency. The gyroscope firmly joined with the moving object S' is measuring angular velocity as a tri-component vector . These data are sampled with given sample frequency fsample = 1/ΔT. The sensor system has to process data sample by sample in real-time ( Figure 3). As mentioned above, outputs of the algorithm are Euler angles α, β, γ, the system should also provide utility of the transformation of the vector from the S to S' coordinate system.
In order to eliminate influence of the sensor itself a model of the ideal digital-output gyroscope with the following properties was used for algorithm testing: • Gyroscope output in each axis is a signed integer with 16-bit precision (like in many of available low-cost gyroscopes). Full-scale range of the output angular rate is ±500°/s.
• No noise is present at gyroscope output; also sampling frequency is absolutely precise (we want to examine errors of data processing algorithms, not precision of data itself). Therefore, data simulation was used instead of real experiment. In order to obtain comparable results, simulated movement of the object has to be exactly the same for all experiments. Therefore the pre-defined non-random movement has to be simulated. As a test input for algorithms we used a model of precession motion with perpendicular precession axis (see Figure 4). Such rotational movement is easy to define and also it is possible to analytically compute object's attitude (Euler angles) at any time.
The angular velocity of the primary rotation and precession was chosen as A = 1 rad·s−1. The simulated angular velocity of the object (measured in its frame of reference) then follows from this precession motion. The simulation time corresponds to 20 turns (tend = 40π/A ≈ 2 min). Euler angles during the simulated movement are shown in Figure 5. The initial rotation is {α0 = 0°, β0 = 60°, γ0 = 0°}, and the Euler angles (3-2-1 convention) during the defined movement can be computed analytically. Note that the difference between two angles has to be computed as an angular difference (e.g., the difference between 180° and −180° is zero), so the maximal shown error is 180°. Figure 5. Euler angles during one turn of the simulated movement.
The Algorithm Based on Updating of the Rotational Matrix
The first version of the algorithm for processing the measured angular velocity utilizes a matrix as the primary expression of rotation. The principle of this method is shown in Figure 6. The update matrix defines the rotation of the object between two recent samples of the angular velocity vector ω' (samples ωn−1 and ωn) with time span ΔT. The update matrix can be created from the angular velocity in two ways: precise and fast. The fast version uses a linear approximation of the sine and cosine functions, which significantly reduces computational demands; the precise version uses the non-linear trigonometric functions. It is also possible to use a higher-order Taylor series as an approximation of the sine and cosine functions.
Precise Version
We can assume that between two samples the angular velocity is constant, so its direction defines the rotation axis and its magnitude multiplied by the sample period ΔT defines the angle of rotation; the corresponding update matrix then follows from this axis-angle pair. In the fast version, because of the linearity of its equations (there is no need to calculate trigonometric functions or to normalize the axis vector), the computation is the fastest of all mentioned methods (about 3 times faster than the precise version, depending on the hardware used). However, its main disadvantage is low accuracy, which restricts this algorithm to systems with high sampling frequency. Figure 7 compares the fast and precise versions by their relative errors with respect to sampling frequency. Expression of rotation based on rotation matrices does not contain any singularities; therefore it works with constant precision for every tilt. A further advantage is the quick algorithm for vector transformation. In order to maintain rotation matrix orthogonality, normalization is strongly recommended if the fast version of the matrix-based algorithm is used. The results shown are computed after normalization in each step.
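A minimal sketch of the precise update step is given below, assuming the axis-angle construction described above and the Rodrigues form of the update matrix; the composition order of the update and the previous attitude matrix is a convention that should match the paper's own equation.

```python
# Sketch of the precise matrix-update step: between two samples the angular
# velocity is assumed constant, the rotation axis is omega/|omega| and the
# rotation angle is |omega|*dT; the update matrix follows the Rodrigues form.
import numpy as np

def update_matrix(omega, dt):
    """Rotation matrix for a constant body rate omega (rad/s) over dt seconds."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    n = omega / np.linalg.norm(omega)
    K = np.array([[0, -n[2], n[1]],
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])          # cross-product matrix of the axis
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def step(R_prev, omega, dt):
    """One update of the attitude matrix; the composition order is a convention
    choice and should match the paper's update equation."""
    return update_matrix(omega, dt) @ R_prev

R = np.eye(3)
R = step(R, np.array([0.0, 0.0, np.deg2rad(90.0)]), 1.0)   # 90 deg/s about z for 1 s
print(np.round(R, 3))
```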
The Algorithm Based on the Integration of the Euler Angle Rates
Using this algorithm it is possible to avoid an intermediate expression of rotation (e.g., by the matrix) and the subsequent conversion to Euler angles. The principle is shown in Figure 8. This version uses the relation between the angular velocity ω' measured in the coordinate system S' and the time derivatives of the Euler angles (Euler angle rates); by numerically integrating the Euler angle rates, we get the resulting Euler angles. There are two algorithms of numerical integration used in real-time processing: step integration and trapezoidal integration. Although trapezoidal integration is usually more precise than simple step integration, according to Figure 9 step integration is slightly more precise in the case of Euler angle rates. This is caused by the non-linearity of the transformation in Equation (34). The algorithm is precise enough only at high sampling frequency. The main disadvantage of this algorithm is the singularity of Equation (34) when cos β = 0, called gimbal lock, which represents the state when the x-axis is pointing downwards or upwards (β = 90° or β = −90°, respectively). In the surroundings of this singularity the numerical error rises. In case the attitude reaches this singularity, information about two DoF is lost (see Figure 10). This error can be avoided by early conversion to another Euler convention which reaches the singularity at other points (for example conversion to the 1-2-1, 1-3-1, 2-3-1, 3-1-2, 3-1-3 or 3-2-3 Euler angle convention [18]). After calculation of the Euler angles in the substitute convention, they are transformed back to the primary convention. Accuracy is then achieved over the whole angle range. This is a computationally demanding non-linear operation [18].
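A sketch of one step-integration update is given below, using the standard 3-2-1 kinematic relation between body rates and Euler angle rates; the 1/cos β term makes the gimbal-lock singularity explicit. Variable names are illustrative.

```python
# Sketch of the Euler-angle-rate integration step for the 3-2-1 convention.
# Body rates (wx, wy, wz) are mapped to Euler angle rates and integrated with
# a simple step; note the 1/cos(beta) term -> gimbal lock at beta = +-90 deg.
import math

def euler_rates(alpha, beta, wx, wy, wz):
    """Time derivatives (alpha_dot, beta_dot, gamma_dot) from body rates."""
    sa, ca = math.sin(alpha), math.cos(alpha)
    cb = math.cos(beta)
    tb = math.tan(beta)
    alpha_dot = wx + (wy * sa + wz * ca) * tb
    beta_dot = wy * ca - wz * sa
    gamma_dot = (wy * sa + wz * ca) / cb      # singular when cos(beta) -> 0
    return alpha_dot, beta_dot, gamma_dot

def step_integrate(angles, omega, dt):
    a, b, g = angles
    da, db, dg = euler_rates(a, b, *omega)
    return a + da * dt, b + db * dt, g + dg * dt

angles = (0.0, math.radians(60.0), 0.0)
print(step_integrate(angles, (0.0, 0.1, 0.05), 0.001))
```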
The Algorithm Based on Quaternion
The third possibility is to utilize primary expression of rotation using quaternion. The principle is expressed by Figure 11. Similarly as in the case of the rotational matrix, two variants of calculation are possible.
Precise Version
It is an analogy of the precise matrix-based algorithm. The update quaternion is built from the rotation angle and axis between two samples; the new attitude quaternion qn is then obtained by composing the previous quaternion qn−1 with the update quaternion qupdate through the Hamilton product (38). Figure 11. Principle of the quaternion-based algorithm.
Fast Version
Neglecting higher-order terms and using small-angle approximations of the sine and cosine functions, we obtain the update quaternion in a linearized form. By integrating the quaternion time derivative we get the resulting rotation quaternion. Figure 12 compares the precision of the fast and precise versions of the algorithm. Like the fast matrix-based algorithm, the fast quaternion-based algorithm requires normalization of the quaternion after each step. Normalization of the rotation quaternion is described by Equation (17). The presented results are obtained with normalization. Figure 12. Errors of the quaternion-based algorithms during simulated movement.
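Both update variants can be sketched as follows; the composition order of the Hamilton product should match Equation (38), and the numerical values in the example are illustrative only.

```python
# Sketch of the per-sample quaternion update: the precise form builds the
# update quaternion from the axis-angle of omega*dT, the fast form uses the
# small-angle approximation; normalization keeps the quaternion unit length.
import math

def hamilton(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def update_precise(q, omega, dt):
    wx, wy, wz = omega
    mag = math.sqrt(wx*wx + wy*wy + wz*wz)
    if mag < 1e-12:
        return q
    half = 0.5 * mag * dt
    s = math.sin(half) / mag
    q_upd = (math.cos(half), wx * s, wy * s, wz * s)
    return normalize(hamilton(q, q_upd))   # composition order: check Eq. (38)

def update_fast(q, omega, dt):
    """Small-angle approximation: cos(x) ~ 1, sin(x) ~ x."""
    wx, wy, wz = omega
    q_upd = (1.0, 0.5 * wx * dt, 0.5 * wy * dt, 0.5 * wz * dt)
    return normalize(hamilton(q, q_upd))

q = (1.0, 0.0, 0.0, 0.0)
for _ in range(1000):                       # 1 kHz samples, 1 rad/s about z for 1 s
    q = update_fast(q, (0.0, 0.0, 1.0), 0.001)
print(q)   # ~ (cos(0.5), 0, 0, sin(0.5))
```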
Compensation of MEMS Gyroscope Data Using a MEMS Accelerometer and Magnetic Compass
The results given above are valid in the ideal case when the gyroscope data are absolutely precise. Real MEMS gyroscope readings are noisy and sensitive to vibrations. The offset of the gyroscope has the greatest impact on the precision of the Euler angle estimation. Because the parameters of an electro-mechanical system vary with temperature, the offset is also temperature dependent. The aim is to use secondary sensors (an accelerometer and a magnetic compass) to compensate for the increasing (offset-caused) error of the gyroscope-only system.
The accelerometer senses its acceleration (3D vector) relative to the inertial frame of reference. In a gravitational field the accelerometer senses gravity as an acceleration pointing upwards. The reading of the accelerometer (see Figures 13 and 14) is the sum of a', g' and anoise, where a' is the object's own acceleration expressed in the coordinate system S', g' is the vector of gravitational acceleration (depending on the locality near Earth) transformed to the coordinate system of the object S' based on the object's rotation, and anoise is the noise caused by:
• Vibrations of the object
• Thermal noise of the sensor
• Quantization noise of the A/D converter
Figure 13. Accelerometer and magnetic compass readings at non-zero pitch β and yaw γ. Acceleration aacc is measured by the on-board accelerometer as a sum of the gravity acceleration g and the object's acceleration a. Earth's magnetic field induction B has inclination θ, declination δ, and its horizontal component points to magnetic North.
According to Figure 14, the roll and pitch angles can be obtained from the components of the measured acceleration. If we assume that the noise anoise has zero mean value and a lower limiting frequency fmin, then the noise can be effectively suppressed by a low-pass filter.
Since we cannot determine the rotation around the vertical z'-axis (yaw γ) from accelerometer data, it is necessary to add a magnetic compass to the sensor system. To ensure proper function of the system for all rotations of the object, the magnetic sensor has to determine the magnetic induction B' of the Earth's magnetic field in all three axes (the compass output is the vector [B'x, B'y, B'z]). The yaw rotation calculated from readings of the magnetic sensor depends on the magnetic declination δm (the offset between the magnetic and geographic north directions, depending on the actual position on Earth) and on Bx1 and By1, the components of the measured magnetic induction after transformation to the coordinate system S1 (inverted x- and subsequent y-rotation; see the definition of the Euler angles). To avoid computing this partial inverse rotation, it is more convenient to determine the difference between the yaw γgyro calculated from gyroscope data and the yaw from the magnetic compass γmag, using Bx and By, the components of the measured magnetic induction after transformation to the coordinate system S (inverted x-, y- and z-rotation). For fusion of the Euler angles measured by the gyroscope as a primary sensor and the accelerometer and magnetic compass as secondary sensors, we can use the algorithm shown in Figure 15. The gain K << 1 expresses the relative weight of the accelerometer with respect to the gyroscope (if K = 0, the accelerometer does not affect the output Euler angles). The delay block and gain form a first-order discrete low-pass filter in the accelerometer signal path, whose cutoff frequency is determined by K and the sampling period. The fusion scheme does not filter out any noise from the gyroscope reading; it suppresses the long-term growth of the estimated Euler angle error caused mainly by offset.
Since the schematic in Figure 15 contains the reverse conversion block from Euler angles to the rotation matrix, normalization of the matrix is no longer needed.
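A much simplified sketch of the blending step with gain K is given below. It assumes that the accelerometer reports a vector pointing along +z (down) when the object is level and that the reading is dominated by gravity; the sign conventions of a real sensor must be checked against its datasheet, and the yaw correction from the magnetic compass is omitted for brevity. In the real loop the gyro-integrated angles are updated from the gyroscope each sample before the correction is applied.

```python
# Sketch of the gyro/accelerometer fusion step with a small gain K.  Assumes
# the accelerometer vector points along +z (down) when the object is level and
# is dominated by gravity; real-sensor sign conventions may differ.
import math

K = 0.01          # relative weight of the accelerometer (K << 1)

def accel_to_roll_pitch(ax, ay, az):
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

def fuse(roll_gyro, pitch_gyro, accel):
    """Pull the gyro-integrated angles toward the accelerometer estimate."""
    roll_acc, pitch_acc = accel_to_roll_pitch(*accel)
    roll = (1.0 - K) * roll_gyro + K * roll_acc
    pitch = (1.0 - K) * pitch_gyro + K * pitch_acc
    return roll, pitch

# Example: the gyro-integrated roll has drifted by ~2 degrees; the accelerometer
# (object at rest, roll = 10 deg, pitch = 0) slowly corrects it.
g = 9.81
accel = (0.0, g * math.sin(math.radians(10)), g * math.cos(math.radians(10)))
roll, pitch = math.radians(12.0), 0.0
for _ in range(500):
    roll, pitch = fuse(roll, pitch, accel)
print(math.degrees(roll))   # -> approaches 10
```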
Results and Discussion
Effect of sensor fusion is more significant after longer time (especially at low angular velocities). Figure 16 shows effect of using fusion of gyroscope, accelerometer and magnetic sensor readings. Simulated rotation was slowed down 100-times (A = 0.01, compare with Equation (24)). Fusion gain was K = 0.01, noise in secondary sensor data has SNR = 0dB. The precise quaternion-based algorithm at sampling frequency 1 kHz was used. Due to gyroscope offset the estimation error continuously increases with time. The low pass filter within the data fusion algorithm suppresses noise in secondary sensor data and roll angle obtained by fusion slightly oscillates around actual roll.
As can be seen in Figure 17, sensor fusion with a weak coupling to a secondary absolute but noisy sensor can effectively suppress the estimation error caused by sensor offsets. The fusion gain K has to be set according to the offset variance of the gyroscope (the more precise the gyroscope, the smaller the fusion gain can be). The second important aspect of the algorithm is its computational time. Two types of reference hardware were used:
• an 8-bit low-cost microprocessor (Atmel ATmega1284P running at 20 MHz);
• a 32-bit microprocessor with FPU and DSP support (Atmel UC3C1512C running at 48 MHz).
Table 1 compares the computational time of the algorithms in terms of CPU cycles of the 8-bit low-cost microprocessor. The mentioned cycle counts are average values from 1000 random inputs, using the mathematical library optimized for AVR 8-bit microcontrollers. The algorithms use software-implemented single-precision floating-point arithmetic (according to IEEE 754) because AVR microcontrollers do not contain a floating point unit (FPU). Using a highly optimized implementation of the matrix-based algorithm, including fusion of the gyroscope with the accelerometer and magnetic compass, allows an algorithm sampling rate up to approximately 200 Hz (running on an AVR 8-bit core @ 20 MHz). Table 2 shows the same algorithms running on the 32-bit microprocessor. Utilization of the 32-bit microcontroller with FPU significantly reduces the count of needed clock cycles (for addition and multiplication of real numbers approx. 30 times, depending on the processor used). Since the representation of numbers is the same for all architectures (32-bit floating-point number), the accuracy of the algorithm does not depend directly on the used microcontroller. However, decreasing the time needed for one cycle of the algorithm allows a higher maximal sample rate (up to the maximal sample rate of the gyroscope itself). Increasing the sample rate will improve accuracy significantly. For example, increasing the sampling rate from 200 Hz to 1 kHz will decrease the error caused by the algorithm by approx. 50% (see Figures 5, 7 and 10). Table 1. Comparison of methods in terms of 8-bit AVR processor clock cycles.
If the microcontroller with hardware support of floating-point calculations is used, linear (fast) versions of algorithms are much faster than the precise non-linear algorithms. Results given in Tables 1 and 2 strongly depend on implementation of the discussed algorithms (execution speed can be improved by using optimized mathematical libraries for hardware supporting floating-point calculations). Number of the clock ticks is shown mainly for simple comparison purposes.
The Euler angle rates integration can be faster than the remaining two algorithms, but it has significantly worse accuracy at the same sampling frequency and also has an intrinsic singularity. Therefore the choice should be between the matrix- and quaternion-based algorithms. If the sensor system has to quickly transform many vectors between the inertial and local frames of reference, the matrix-based algorithm can be the better choice (vector transformation is 2-3 times faster than with a quaternion).
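As a rough illustration of why the matrix form transforms vectors faster, compare the two rotation routines below (a minimal Python sketch with hypothetical names): the matrix product needs 9 multiplications and 6 additions per vector, while the quaternion rotation, even in its efficient cross-product form, needs roughly twice as many floating-point operations. Actual cycle counts depend on the implementation and hardware.

```python
import numpy as np

def rotate_by_matrix(R, v):
    """Rotate vector v by a 3x3 rotation matrix R: 9 multiplications, 6 additions."""
    return R @ v

def rotate_by_quaternion(q, v):
    """Rotate vector v by a unit quaternion q = (w, x, y, z) using the
    efficient form v' = v + w*t + qv x t, where t = 2*(qv x v)."""
    w, x, y, z = q
    qv = np.array([x, y, z])
    t = 2.0 * np.cross(qv, v)
    return v + w * t + np.cross(qv, t)
```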
Conclusions
By comparing the relative errors of the discussed algorithms, we can see that the worst algorithm is direct integration of the Euler angles, due to its singularity. The precise version of the quaternion-based algorithm is slightly faster than the precise matrix-based algorithm. The fast version of the quaternion-based algorithm is also more accurate than the matrix-based algorithm at lower sampling frequencies (see Table 3). The difference in accuracy between the fast and precise versions of the same algorithm decreases with sampling frequency (see Figures 7 and 12). The choice of the proper algorithm depends on:
• Available computational power (CPU) and the maximal sampling frequency of the sensors (which is reflected in the overall cost of the sensor system and its accuracy). At lower sampling frequencies the fast quaternion-based algorithm is more precise than the fast matrix-based algorithm.
• Precision requirements (in order to achieve long-term stability the compensated system with sensor fusion has to be used).
• Number of vectors to be transformed from the non-rotated coordinate system to the rotated one and vice versa (transformation by the matrix is faster). | 5,867.4 | 2015-03-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Microbial Regulation of p53 Tumor Suppressor
p53 tumor suppressor has been identified as a protein interacting with the large T antigen produced by simian vacuolating virus 40 (SV40). Subsequent research on p53 inhibition by SV40 and other tumor viruses has not only helped to gain a better understanding of viral biology, but also shaped our knowledge of human tumorigenesis. Recent studies have found, however, that inhibition of p53 is not strictly in the realm of viruses. Some bacterial pathogens also actively inhibit p53 protein and induce its degradation, resulting in alteration of cellular stress responses. This phenomenon was initially characterized in gastric epithelial cells infected with Helicobacter pylori, a bacterial pathogen that commonly infects the human stomach and is strongly linked to gastric cancer. Besides H. pylori, a number of other bacterial species were recently discovered to inhibit p53. These findings provide novel insights into host–bacteria interactions and tumorigenesis associated with bacterial infections.
Author Summary
This review focuses on a novel aspect of host-bacteria interactions: the direct interplay between bacterial pathogens and tumor suppression mechanisms that protect the host from cancer development. Recent studies revealed that various pathogenic bacteria actively inhibit the major tumor suppression pathway mediated by p53 protein that plays a key role in the regulation of multiple cellular stress responses and prevention of cancerogenesis. Bacterial degradation of p53 was first discovered in the context of Helicobacter pylori infection, which is currently the strongest known risk factor for adenocarcinoma of the stomach. This phenomenon, however, is not limited to H. pylori, and many other bacterial pathogens inhibit p53 using various mechanisms. Inhibition of p53 by bacteria is linked to bacterial modulation of the host cellular responses to DNA damage, metabolic stress, and, potentially, other stressors. This is a dynamic area of research that will continue to evolve and make important contributions to a better understanding of host-microbe interactions and tumorigenesis. These studies may offer new molecular targets and opportunities for drug development.
Historical Perspective of Microbial Inhibition of p53
p53 protein has been receiving significant attention for more than 30 years. This interest originates from the protein's prominent role in tumor suppression that was eloquently paraphrased in the scientific literature as "the guardian of the genome" [1]. p53 is a key component of the cellular mechanisms controlling cellular responses to various cellular stresses, including DNA damage, aberrant oncogene activation, loss of normal cell-cell contacts, nutrient deprivation, and abnormal reactive oxygen species (ROS) production. Following cellular stresses, p53 is activated and primarily functions as a transcriptional regulator of expression of multiple effector proteins and miRNAs, which, in turn, regulate key cellular processes such as apoptosis, cellular proliferation, and autophagy. Since regulation of cellular stress responses is tightly intertwined with metabolic regulation, there is an interplay between p53 and multiple pathways involved in regulation of metabolism and cellular homeostasis that is complex and not fully understood. One prominent example is a reciprocal signaling between p53 and mTOR [2]. The latter pathway plays a key role in cell growth and proliferation. p53 is also directly involved in regulation of the cellular energy metabolism and the redox balance regulating glycolysis, oxidative phosphorylation, and the pentose phosphate pathway (PPP). Through multiple mechanisms, p53 can dampen glycolysis and the PPP and promote oxidative phosphorylation. The metabolic functions of p53 are likely to significantly contribute to its tumor suppression activity (Fig 1).
Inactivation of p53 is a hallmark of tumorigenic changes. More than half of all tumors carry p53 mutations, rendering the p53 gene (tp53) the most mutated gene in human tumors. p53 can also be inhibited by mutation-independent mechanisms. Inhibition of wild-type p53 by the SV40 virus was one of the first reported examples. SV40 is a small DNA tumor polyomavirus that induces cellular transformation in cell culture and an array of different tumors in animals. In infected cells, viral protein (SV40 large T antigen [T-Ag]) binds p53 and inhibits p53-dependent transcription, resulting in accumulation of inactivated p53 protein [3,4]. Inhibition of p53 by large T-Ag is closely linked to the ability of the SV40 virus to induce tumorigenic transformation; SV40 mutants, which are defective in inhibition of p53, are also defective in cellular immortalization and transformation [5,6].
p53 by itself was originally identified as a protein binding to SV40 large T-Ag [7,8]. Later studies have shown that SV40 T-Ag is not unique in this sense, and other small tumor DNA viruses (adenoviruses and papillomaviruses) also produce similar proteins (E1B-55K and E6) that interact with p53 [9,10]. Although adenoviral protein E1B-55K and human papillomavirus (HPV) protein E6 are different in their amino acid sequences, they converge at the same function, forming protein complexes with p53 to inhibit its activity. HPV and adenovirus (Ad) can also induce ubiquitination and proteasomal degradation of p53 [11]. The ability to degrade p53 varies among viruses. For example, high-risk genital HPV types 16 and 18, which cause around 70% of cervical cancers, efficiently degrade p53, while low-risk viruses such as HPV types 6 and 11 are unable to do so [12,13]. Similarly, p53 is degraded by human adenovirus serotypes 12 and 5 (Ad12, Ad5), while Ad9 and Ad11 do not have this ability [14,15]. To degrade p53, both HPV and Ad use the host protein degradation machinery. HPV E6 protein interacts with the host E3 ubiquitin ligase, E6AP, causing its substrate specificity to be altered so that it ubiquitinates p53 and induces its degradation by the 26S proteasome [16]. In Ad-infected cells, viral proteins E1B-55K and E4orf6 interact with cellular proteins Cullin5 (or Cullin2), Rbx1, and Elongins B and C to form a Cullin-containing E3 ubiquitin ligase that targets p53 for proteasomal degradation [14,17,18]. A similar degradation strategy is also used by the Epstein-Barr virus (EBV), which forms a complex containing viral protein BZLF1 and cellular Cullin2/5-containing E3 ubiquitin ligase to degrade p53 [19].
Due to a relatively simple organization of the viral genomes, viruses have to rely on host resources for most aspects of their life cycle. In the process of interacting with host cells, they alter the intracellular environment to make it suitable for viral replication. These drastic alterations, however, may cause cellular stress and activate p53, resulting in cell cycle arrest or apoptosis of host cells; both outcomes are detrimental to viral replication. It is plausible that inhibiting p53 may provide advantages to viruses that have evolved to do so. Recently, this concept was further expanded to include additional microorganisms. These novel data are discussed in this review, focusing on specific mechanisms of bacterial inhibition of p53.
If Viruses Can Do It, Why Can't Other, More Complex Microorganisms?
Recent studies have found that it is not only viruses, but also some pathogenic bacteria, that actively inhibit p53 and induce its degradation. This phenomenon was initially described in gastric cells co-cultured with Helicobacter pylori [20]. H. pylori is a gram-negative, spiral-shaped pathogen that lives in the stomachs of approximately half of the world's population. The infection is typically acquired during childhood and causes lifelong chronic infection. Because of the association between H. pylori infection and the incidence of gastric cancer, the International Agency for Research on Cancer (IARC) has classified this bacterium as a Group 1 carcinogen. H. pylori infection is considered to be the strongest known risk factor for gastric cancer, and epidemiological studies have estimated that, in the absence of H. pylori, 75% of gastric cancers would not occur [21].
Fig 1. Outline of the regulation of cellular stresses by p53. p53 protein is induced by multiple cellular stresses, leading to transcriptional up-regulation of p53 target genes that are involved in regulation of apoptosis, proliferation, metabolism, and immune response. Under normal (unstressed) conditions, levels of p53 protein are tightly controlled by HDM2 E3 ubiquitin ligase, which ubiquitinates p53, leading to its proteasomal degradation. The p14ARF tumor suppressor, which functions upstream of HDM2 and p53, is required for accumulation of p53 under oncogenic stress. The role of p14ARF is to inhibit proteasomal degradation of p53 by sequestering the HDM2 protein in the nucleoli and inhibiting its E3 ligase activity.
Pathogenesis associated with H. pylori infection is determined by interactions between bacterial factors and host cells. The most well characterized bacterial virulence determinants are the vacuolating cytotoxin A (vacA) and the cag pathogenicity island (cag PAI). The cag PAI is a 40 kb region of DNA that encodes a type IV secretion system (T4SS) that forms a syringe-like pilus structure used for the injection of a bacterial protein CagA (cytotoxin-associated gene A) into gastric cells. Following the delivery, intracellular CagA is localized to the plasma membrane and triggers complex alterations of the host signaling pathways [22], including activation of cellular oncogenes (Fig 2). CagA itself functions as an oncoprotein. In laboratory tests, CagA promoted anchorage-independent growth and, when transgenically expressed in mice, led to spontaneous development of gastrointestinal and hematopoietic neoplasms [23,24]. Oncogenic potential of CagA has also been demonstrated using Drosophila and zebrafish experimental models [25,26].
H. pylori infection results in conditions of cellular stress because the bacteria induce DNA damage and disturb normal cellular homeostasis (including aberrant activation of multiple oncogenic pathways), all of which are conditions that typically activate p53 [27,28]. However, initial studies of the p53 stress response revealed that H. pylori is able to dampen activity of p53 protein by inducing its rapid degradation [20]. The ability of H. pylori to suppress the p53 response was also demonstrated when DNA damage was experimentally induced by DNA-damaging agents [20,29,30]. The bacteria specifically target p53, as p73, another member of the p53 protein family with significant functional and structural similarities to p53, is not down-regulated by H. pylori but rather induced [31]. The ability to induce degradation of p53 varies between H. pylori strains, with CagA-positive bacteria being more potent [20,29]. Although CagA likely does not directly bind to p53, it induces its degradation [29]. Notably, ectopic transfection of CagA is sufficient to inhibit p53 activity and induce its degradation [20,30]. Recent studies pointed out the complex nature of CagA-p53 interactions. It was shown that the levels and natural variability of CagA protein strongly affect p53 degradation [32]. Among other bacterial factors, VacA was also reported to regulate p53 [33][34][35]. Down-regulation of p53 was found to facilitate autophagy in infected cells [35].
Fig 2. Interaction between H. pylori and gastric epithelial cells results in cellular stress. After adherence, H. pylori translocates CagA protein into host cells using the T4SS. Translocated CagA is rapidly tyrosine phosphorylated by host kinases c-Src and c-Abl and binds to SHP2 phosphatase, leading to alteration of intracellular signaling, including activation of multiple oncogenic pathways and cytoskeletal rearrangement [22]. H. pylori also produces VacA toxin, which binds to the cell surface and forms oligomers. VacA is internalized and forms anion-selective channels in the membranes of endocytic compartments, resulting in cell vacuolation. In addition, H. pylori compromises the integrity of the host genome by inducing oxidative DNA damage and DNA double-strand breaks [27,28]. Insert: An electron microphotograph of H. pylori attached to the surface of AGS human gastric epithelial cells. AGS cells were co-cultured with H. pylori strain 26695, and cag T4SS pili were visualized by scanning electron microscopy (white arrows). doi:10.1371/journal.ppat.1005099.g002
The kinetics of p53 in infected cells in vivo appears to be complex. In infected Mongolian gerbils, which are commonly used for studies of H. pylori infection, expression of p53 was changed in a bimodal fashion, with an accumulation after initial infection that was followed by a rapid down-regulation of p53 protein in gastric epithelial cells. A second peak of p53 was observed later, when gastritis (inflammation of the lining of the stomach) developed. These findings led to a hypothesis that, at a certain time, levels of p53 reflect a balance between p53 degradation induced by the bacteria and p53 induction caused by cellular stress [20]. A downregulation of p53 protein, but not p53 mRNA, was observed in H. pylori-infected mice [36].
In contrast to small DNA tumor viruses, H. pylori takes advantage of host mechanisms normally regulating p53 [20,35]. The bacteria enhance proteasomal degradation of p53 mediated by E3 ubiquitin ligase HDM2 by increasing its phosphorylation at serine 166. An increased phosphorylation of HDM2 was found in gastric epithelial cells co-cultured with H. pylori in vitro and H. pylori-infected animals and humans in vivo [20,35,37]. Inhibition of HDM2 activity with siRNA or chemical inhibitor Nutlin3 suppresses bacterial degradation of p53 [20,35,38]. A similar effect can be achieved by inhibition of Akt and Erk kinases, showing that these enzymes mediate phosphorylation of HDM2 protein in infected cells [35,38]. Expression of HDM2 was found to correlate with phosphorylated Akt (pAkt) in patients infected with H. pylori [37]. In addition to HDM2, recent studies reported that another cellular E3 ubiquitin ligase, Mule/ARF-BP1, is involved in degradation of p53 in H. pylori-infected cells [32]. It remains unclear how this enzyme is activated by the bacteria.
p14ARF tumor suppressor (termed p19ARF in rodents and p14ARF in humans), which functions upstream of p53, was found to be a critical modulator of p53 protein stability in infected cells [32], as ARF inhibits activities of both HDM2 and ARF-BP1 proteins [39][40][41]. It was shown that cells expressing functional ARF are significantly more resistant to degradation of p53 (Fig 3). However, when ARF protein levels are decreased due to hypermethylation or deletion of the ink4a/ARF locus, H. pylori efficiently degrades p53 [32]. Loss of ARF occurs during gastric tumorigenesis and can be found in gastric precancerous lesions. Methylation of the p14ARF gene is also increased with age [42]. Given these findings, it was hypothesized that older people with gastric precancerous lesions, who are infected with H. pylori, may be particularly vulnerable to degradation of p53 [32].
Among other cellular factors, ASPP2 protein (apoptosis-stimulating protein of p53), which normally activates p53, was identified to regulate p53 in H. pylori-infected cells [29]. Buti et al. showed that binding of CagA protein to ASPP2 results in inhibition of transcriptional and proapoptotic activities of p53 and induction of proteasomal degradation of p53.
Recent studies suggest that bacterial degradation of p53 may contribute to gastric tumorigenesis. It was reported that clinical isolates of H. pylori varied greatly in their ability to degrade p53, but that, generally, isolates associated with a higher gastric cancer risk more strongly affect p53 when compared to low-risk counterparts [32].
H. pylori inhibits p53 through multiple mechanisms, implying that inhibition of p53 activity is an important factor for successful infection. The bacteria not only induce degradation of p53, but also alter the expression profile of p53 isoforms [43]. Interaction of H. pylori with gastric epithelial cells, mediated via the cag PAI, induces N-terminally truncated Δ133p53 and Δ160p53 isoforms, which inhibit transcriptional and proapoptotic activities of p53, resulting in activation of NFkB. Induction of proinflammatory cytokine Macrophage Migration Inhibitory Factor (MIF) by H. pylori was suggested to inhibit p53 by decreasing its phosphorylation [44]. It was also shown that H. pylori can facilitate mutagenesis of the p53 gene. Infection with H. pylori leads to aberrant induction of activation-induced cytidine deaminase (AID), which deaminates cytosine residues, leading to accumulation of p53 mutations in gastric tissues [45]. Interestingly, AID and other cytidine deaminases are induced by a number of viruses such as HPV, HTLV-1, HCV, and others [46][47][48]. SV40 and influenza A viruses have been shown to affect expression of p53 isoforms [49,50].
A new and exciting development in this area is that other bacteria induce degradation of p53 using a similar mechanism to that of H. pylori (Fig 4). Two research groups have recently reported that the intracellular bacterial pathogen Chlamydia trachomatis, and potentially other Chlamydia species, induces degradation of p53 by activating HDM2 protein [51,52]. C. trachomatis is a common cause of bacterial sexually transmitted disease (STD) and blinding trachoma. Similar to H. pylori, C. trachomatis activates the PI3K/Akt pathway and increases phosphorylation of HDM2 (Ser166), leading to activation of HDM2 and proteasomal degradation of p53. Down-regulation of p53 allows Chlamydia to enhance activity of the PPP that provides bacteria with necessary metabolites, such as nucleotide precursors, and protects against oxidative stress by increasing the cellular NADPH pool [52]. Enforced expression of p53 in infected cells results in strong inhibition of chlamydial growth, while overexpression of glucose-6-P-dehydrogenase, a key enzyme in the PPP that is inhibited by p53, rescues the bacterial growth. The authors reported that degradation of p53 by Chlamydia interferes with the host's response to genotoxic stress and may contribute to cancerogenesis in the female genital tract [51,52].
Fig 3. Activation of HDM2 and ARF-BP1 E3 protein ligases induces a rapid degradation of p53 protein; binding of CagA to ASPP2 protein facilitates this process [29]. Degradation of p53 is strongly suppressed in cells expressing functional p14ARF, since ARF inhibits activities of MDM2 and ARF-BP1 proteins [32].
Inhibition of p53 through the HDM2-dependent mechanism is also employed by the enteropathogen Shigella flexneri, which causes bacillary dysentery in humans. Infection with Shigella is accompanied by strong genotoxic stress and cellular damage [53]. To prevent activation of p53, Shigella causes rapid degradation of p53 using two distinct mechanisms. During the early phase of infection, the bacterial virulence effector IpgD promotes activation of the host PI3K/Akt pathway and phosphorylation of HDM2 at serines 166 and 186, causing activation of HDM2 and degradation of p53. The second mechanism for p53 inhibition comes into play during the late phase of infection. p53 is proteolytically cleaved by the calpain protease system, whose activation is facilitated by the Shigella virulence effector VirA. VirA activates calpain by promoting proteolysis of the calpain inhibitor calpastatin. Bergounioux et al. suggested that Shigella inhibits p53 to prevent apoptotic cell death, which saves energy and preserves its own epithelial niche [53]. Interestingly, not all enteric pathogens inhibit p53. Activation of p53 was reported in the context of Salmonella typhimurium infection [54]. Outside the Enterobacteriaceae family, down-regulation of p53 protein was reported in studies of Neisseria gonorrhoeae, which is responsible for the sexually transmitted gonorrhea that may increase the risk of genital neoplasms [55]. Similar to the aforementioned pathogens, N. gonorrhoeae causes strong genotoxic stress and induces both single- and double-strand DNA breaks. The mechanism of p53 down-regulation is not fully understood, but Vielfort et al. reported that the bacteria can inhibit transcription of the p53 gene [56].
Inhibition of p53 may provide certain benefits to bacteria. One particular mechanism that may be targeted by bacteria is the p53 DNA damage response. Inhibition of p53 may allow bacteria to subvert the host cell cycle control and apoptosis mechanisms, resulting in inhibition of cell death and survival of host cells damaged by infection. This is in agreement with the findings of antiapoptotic and prosurvival effects produced by bacterial pathogens, which inhibit p53 [20,29,52,53]. In the case of H. pylori, expression of the CagA virulence factor is sufficient to inhibit p53 and extend short and long term survival of gastric epithelial cells that underwent DNA damage [20]. Besides the DNA damage response, bacteria may also target the metabolic control of p53. Inhibition of the p53 metabolic regulation may be particularly important for obligatory intracellular pathogens such as Chlamydia. As described above, degradation of p53 allows C. trachomatis to release inhibition of the PPP elicited by p53. When bacterial degradation of p53 was experimentally inhibited, the development and formation of infectious progeny was blocked, suggesting that metabolic control of p53 provides antibacterial protection. It is possible to draw a parallel between Chlamydiae and viruses since both are obligatory intracellular pathogens, which strictly rely on the host resources. Similar to viruses, inhibition of p53 allows Chlamydia to reprogram the host cell signaling to create a metabolic environment necessary for chlamydial survival and growth. To some extent, this may also be applied to obligate parasitic Mycoplasma bacteria, which inhibit activity of p53 [57]. A more complex picture emerges in regards to the role of the p53 signaling in the context of chronic infections with extracellular pathogens such as H. pylori. One proposed possibility is that inhibition of p53 helps H. pylori to compromise the gastric epithelial barrier, allowing the bacteria to acquire nutrients from the host or get access to the lamina propria. This concept is supported by recent findings showing that H. pylori inhibits activation of p53 induced by disruption of the adherens junctions, which stabilize cell-cell adhesion [38]. It was also suggested that suppression of p53 responses may help H. pylori adapt during the early phase of infection and prevent the host immune response [20]. The p53 pathway is known to affect immune response [58]. Among direct transcription targets of p53 are a number of proteins regulating innate immunity and cytokine and chemokine production. p53 is also known to affect NF-κB activity and proinflammatory signaling. Although immunomodulatory function may play a role, there is no direct evidence yet that bacterial inhibition of p53 affects the host immune response. Additional studies are needed to further explore these mechanisms.
Summary
Interaction of bacterial pathogens with the host cells induces DNA damage, alters intracellular signaling, and profoundly affects normal cellular homeostasis. To prevent the cellular stress response, which may be detrimental to a successful infection, some bacteria have evolved to inhibit p53, a key component of the stress response machinery. Bacteria inhibit p53 through multiple mechanisms, including protein degradation, transcriptional inhibition, and posttranslational modifications. Current research revealed that p53 has a role in controlling bacterial infections and that inhibition of p53 may confer certain selective advantages to bacteria. Unfortunately, this may have grave consequences for the hosts, increasing the risk of tumor development. This is particularly relevant to prolonged chronic infections. Initial experiments with inhibition of protein degradation of p53 demonstrate that p53 activities can be restored in infected cells using specific chemical inhibitors. These findings may offer new and exciting opportunities for therapeutic targeting of p53 in infected cells. Future studies of the bacterial regulation of p53 hold the promise of a better understanding of pathogenesis and tumorigenesis associated with bacterial infections. | 5,087.6 | 2015-09-01T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Identification of early molecular markers for breast cancer
Background The ductal carcinoma in situ (DCIS) of the mammary gland represents an early, pre-invasive stage in the development of invasive breast carcinoma. Since DCIS is a curable disease, it would be highly desirable to identify molecular markers that allow early detection. Mice transgenic for the WAP-SV40 early genome region were used as a model for DCIS development. Gene expression profiling was carried out on DCIS-bearing mice and control animals. Additionally, a set of human DCIS and invasive mammary tumors was analyzed in a similar fashion. Enhanced expression of these marker genes in human and murine samples was validated by quantitative RT-PCR. In addition, marker gene expression was validated by immunohistochemistry of human samples. Furthermore, in silico analyses using an online microarray database were performed. Results In DCIS mice, seven genes were identified that were significantly up-regulated in DCIS: DEPDC1, NUSAP1, EXO1, RRM2, FOXM1, MUC1 and SPP1. A similar up-regulation of homologues of the murine genes was observed in human DCIS samples. Enhanced expression of these genes in DCIS and IDC (invasive ductal carcinoma) was validated by quantitative RT-PCR and immunohistochemistry. Conclusions By comparing murine markers for the ductal carcinoma in situ (DCIS) of the mammary gland with genes up-regulated in human DCIS samples, we were able to identify a set of genes which might allow early detection of DCIS and invasive carcinomas in the future. The similarities between gene expression in DCIS and invasive carcinomas in our data suggest that the early detection and treatment of DCIS is of utmost relevance for the survival of patients who are at high risk of developing breast carcinomas.
Background
Early diagnosis and administration of effective treatment is the best strategy to combat cancer [1]. Starting in the early 1980s, the increasing use of mammography screens has resulted in an increase in diagnosis of the ductal carcinoma in situ (DCIS), especially among women more than 50 years of age [2]. DCIS represents 20-45% of all new cases of mammographically detected breast cancer, and about 10% of all breast carcinomas [3]. Up to 50% of DCIS lesions progress to invasive breast cancer, but there is tremendous variability in the time of progression to invasive disease [4]. Today, most DCIS cases are identified as suspicious microcalcifications through mammography. However, the accuracy of mammography in diagnosing DCIS is suboptimal [4]. The main drawback with respect to DCIS is that mammography often underestimates both the pathologic extent of DCIS and the number of tumour foci in patients with multifocal disease [2]. Early detection of DCIS is very important because it is a highly curable disease, with a 10-year cancer-specific survival rate of over 97% [3]. Therefore, biomarkers for DCIS are needed. In many types of carcinomas, biomarkers have enhanced our ability for diagnosis, prognosis, and therapy prediction. In general, an appropriate biomarker should be useful in defining risks and identifying the early stages of carcinogenesis. Furthermore, biomarkers can be analyzed in a noninvasive and economical way, and therefore it is worth investing in the search for more biomarkers [5].
The use of microarray technologies for gene expression profiling provides insight into the molecular basis of DCIS. Only a few gene expression profiling studies of DCIS have been published to date and most focus on the identification of progression-associated genes by comparison of in situ and invasive disease [6][7][8]. Gene expression profiling of DCIS is hindered by the limited numbers of samples available. To overcome the latter problem, our study used a transgenic mouse model for DCIS [9]. Mice were transgenic for the WAP-SV40 early genome region, so that expression of the SV40 oncogene is activated by lactation. The use of these transgenic animals offers the possibility of determining tumour-initiating factors and investigating gene expression at different stages of tumour development.
In the present work, we identified molecular markers for the ductal carcinoma in situ. Marker genes identified in the WAP-TNP8 mouse model were further investigated in a small human DCIS cohort. Identification of markers for DCIS and early invasive tumours is important for early detection and the development of improved therapeutic strategies.
Materials and methods
Mice
WAP-TNP8 animals, which selectively synthesize the T/t-antigen under the control of the WAP promoter in mammary gland epithelial cells, were used for this study [9]. In these mice the SV40 large tumour antigen is specifically induced by lactation. As a consequence of continuous expression of the oncogene, the animals develop multifocal DCIS and consequently invasive carcinoma. In general, the SV40-Tag system has very well documented intraluminal lesions which have been thoroughly analyzed with histology, immunohistochemistry, whole mounts and electron microscopy. These early lesions are typically solid masses of poorly differentiated cells with relatively compact hyperchromatic nuclei and scanty cytoplasm. They resemble some forms of human intraductal carcinomas [10]. WAP-TNP8 mice show rapidly growing, palpable tumours which are evident on average 4 months after induction. DCIS lesions of the transgenic mice exhibit distinct architectural and cytological features which closely resemble those commonly present in humans. The tumours mostly display a poorly differentiated solid or even anaplastic morphology; well differentiated tumours are rarely found. More precisely, WAP-TNP8 mice show a cribriform morphology of in situ carcinoma [9].
Wildtype mice and transgenic mice before lactation were used as negative controls, so that changes simply related to the transgenic profile could be ruled out. Mice were analysed one month after lactation (abbreviated as 1 m), two months after lactation (2 m), three months after lactation (3 m), four months after lactation (4 m) and five months after lactation (5 m). In this way we were able to study the development of DCIS at different time points. Similarly, invasive ductal carcinomas (IDC) were investigated and served as a positive control. Invasive tumors were obtained from mice taken at 4 or 5 months after lactation. Each group consisted of at least seven mice. For subsequent analysis, mice were sacrificed and mammary glands were dissected. From each mouse four milk ducts were prepared. One part of each mammary gland was cryopreserved in liquid nitrogen and stored at -80°C for RNA preparation and another part was fixed overnight in 5% formaldehyde and embedded in paraffin.
Human tissue
Nineteen freshly frozen human breast tumour samples were obtained from the Robert-Rössle-Biobank at the ECRC (Experimental and Clinical Research Center). Tissue samples were cryopreserved immediately after surgery in liquid nitrogen and stored at -80°C. All participants have given written, informed consent. The study was approved by the local ethics committee (Charité Universitätsmedizin Berlin). The patient cohort consisted of nine DCIS, five invasive ductal carcinoma (IDC) and five healthy control samples obtained from patients with breast reduction surgery. A second panel consisting of human formalin-fixed paraffin-embedded (FFPE) tissue samples was used for immunohistochemical stainings. The panel consisted of 5 healthy, 10 DCIS and 5 IDC. DCIS samples were distinguished according to their grade (5 low grade DCIS/5 high grade DCIS). All samples were reviewed for histological classification according to nuclear grade and classified as low, intermediate, and high nuclear grade; additionally, the TNM-Stage and hormone receptor status were determined.
RNA isolation, amplification and microarray analysis
RNA extraction from murine samples was performed using Qiagen RNeasy mini kit (Qiagen, Hilden, Germany) with on column DNAse I digestion in accordance with the manufacturer's guide. Human RNA was isolated with RNeasy Lipid Tissue Mini Kit (Qiagen). RNA quality was checked on Agilent 2100 Bioanalyzer (Agilent Technologies, Böblingen, Germany). For further analysis only samples with a RIN (RNA integrity number) of more than seven were taken.
Two-round linear amplification, using 50 ng total RNA, was carried out for the murine samples according to the GeneChip® Two-Cycle Target Labelling protocol (Affymetrix, Santa Clara, CA, USA). In human samples, cRNA was amplified from 1 μg of total RNA using the GeneChip® One-Cycle Target Labelling Kit (Affymetrix). Quantities of in vitro transcription and fragmentation products were assessed using the Agilent 2100 Bioanalyzer. Labelled and fragmented cRNA was hybridized for 16 h at 45°C on Affymetrix oligonucleotide Murine Genome 430 2.0 or Human Genome U133 plus 2.0 Arrays. Hybridized arrays were scanned using the GeneChip Scanner 3000.
Statistical analysis
An initial analysis was performed using the Affymetrix Microarray Suite 5.0 (MAS5) software. The percentage of present calls, background noise, the scaling factor, and the ratio of 3' to 5' hybridization for GAPDH and β-actin were used to assess quality of hybridization. Raw image data were converted to CEL files using the Affymetrix GeneChip Operating Software (GCOS). For subsequent analyses of microarray data, the GeneSpring GX 10.0 Software (Agilent Technologies) was used. GCRMA (GC robust multiarray average) was used to perform background correction and normalization. The mouse data is deposited as GEO series GSE21444, http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=btetzoskmeo-guzg&acc=GSE21444, and the human as GSE21422, http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=lhsfdsoicaekcho&acc=GSE21422.
In order to identify differentially expressed genes between controls and samples taken at early time points (months 2-3 after lactation), as well as between controls and tumours, probe sets were filtered using the Welch test (unpaired t-test; unequal variance) with the Benjamini-Hochberg false discovery rate correction. The fold-change threshold was 5.0, and the corrected p-value was set to ≤ 0.01. Volcano plots visualize all probe sets according to corrected p-value and fold change. Using a Venn diagram, probe sets present in both lists were selected. The annotations of each probe set were obtained from the Affymetrix NetAffx™ database. Two-dimensional unsupervised and supervised hierarchical clustering, using Euclidean distance as the distance function and complete linkage, was performed. This method groups samples on the basis of similarity in their expression patterns.
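A minimal sketch of the probe-set filtering described above is given below (Python, with scipy and statsmodels; the variable names are ours, and the expression matrices are assumed to be on a linear scale — on log2-scale GCRMA values the fold change would instead be 2 raised to the mean difference):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def de_probe_sets(expr_a, expr_b, fc_threshold=5.0, alpha=0.01):
    """expr_a, expr_b: arrays of shape (probe sets, samples) with normalized
    expression values for two groups. Returns a boolean mask of probe sets that
    pass the Welch test (BH-corrected p <= alpha) and the fold-change threshold."""
    # Welch's t-test (unpaired, unequal variance) per probe set
    t, p = stats.ttest_ind(expr_a, expr_b, axis=1, equal_var=False)
    # Benjamini-Hochberg false discovery rate correction
    _, p_adj, _, _ = multipletests(p, method="fdr_bh")
    fold_change = expr_b.mean(axis=1) / expr_a.mean(axis=1)
    fc_pass = np.maximum(fold_change, 1.0 / fold_change) >= fc_threshold
    return (p_adj <= alpha) & fc_pass

# Probe sets present in both comparisons (controls vs. early time points and
# controls vs. invasive tumours), mirroring the Venn-diagram intersection:
# candidates = de_probe_sets(ctrl, early) & de_probe_sets(ctrl, idc)
```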
Quantitative RT-PCR
Quantitative RT-PCR was performed using TaqMan® Gene Expression Assays and the ABI Prism™ 7900 HT Sequence Detection System (Applied Biosystems, Foster City, CA, USA). Gene Expression Assay IDs are listed in additional file 1 (Tables S1 and S2). For the murine samples, the RNA UltraSense™ One-Step Quantitative RT-PCR System (Invitrogen, Carlsbad, CA, USA) was used. The procedure was performed in accordance with the manufacturer's guide. For human RNA, cDNA synthesis was done using Oligo(dT) primers and SuperScript II. For the relative quantification of gene expression, triplicate reactions were conducted. The expression of β-actin served as an internal control because β-actin expression levels were consistent across all samples in the cDNA microarray data. Relative expression was calculated according to the ΔΔCt method [11] using an internal reference sample as calibrator.
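The ΔΔCt calculation referred to above [11] can be written compactly. The sketch below (hypothetical variable names) assumes mean Ct values from the triplicate reactions, β-actin as the endogenous control, and the internal reference sample as calibrator:

```python
import numpy as np

def relative_expression(ct_target, ct_actin, ct_target_cal, ct_actin_cal):
    """2^-ΔΔCt relative quantification.
    ct_target, ct_actin: triplicate Ct values for the sample of interest;
    ct_target_cal, ct_actin_cal: the same quantities for the calibrator."""
    d_ct_sample = np.mean(ct_target) - np.mean(ct_actin)            # ΔCt, normalized to β-actin
    d_ct_calibrator = np.mean(ct_target_cal) - np.mean(ct_actin_cal)
    dd_ct = d_ct_sample - d_ct_calibrator                           # ΔΔCt
    return 2.0 ** (-dd_ct)                                          # fold change vs. calibrator
```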
Immunohistochemistry and H&E staining
Thin paraffin sections of the murine mammary glands (2-4 μm) were stained with haematoxylin and eosin according to standard procedures and histomorphologically evaluated by light microscopy. After deparaffinisation and rehydration, human tissue samples were boiled in citrate buffer (pH 6.0) for 5 min. Endogenous peroxidase was blocked using the DAKO Biotin Blocking System (DAKO, Glostrup, Denmark). Primary antibodies (additional file 1, Table S3) were mostly applied (1:100) for 1-2 h at room temperature. For each antibody, internal and external controls were included in the experiments. In negative controls the primary antibody was omitted. Sites of antigen-antibody binding were detected using biotinylated anti-mouse/rabbit/goat antibodies (Vector Laboratories, Burlingame, CA, USA). The chromogen used was Neufuchsin (Merck, Darmstadt, Germany). Slides were counterstained with haematoxylin and after dehydration were mounted in Entellan.
For each protein multiple immunohistochemical stainings were performed (5 healthy, 5 low grade DCIS, 5 high grade DCIS and 5 IDC). A semi-quantitative scoring system was used for the evaluation of the immunohistochemical staining (Table 1).
Identification of murine DCIS markers
Gene expression patterns of control samples, of samples taken at different time points after lactation, and of invasive breast tumours (IDC) from 40 mice (five samples per group) were analysed. Animals examined one month after activation of the oncogene were excluded from further analysis because of artifacts due to lactation. Histological investigations of all groups were performed. The majority of DCIS arises by month three or later. First, a t-test was conducted comparing the control groups (wild type mice + mice before lactation) with mice taken two and three months after lactation. This comparison revealed 230 probe sets which are differentially expressed between control samples and mice in which the development of DCIS had already been induced. A second t-test was conducted in order to compare controls and invasive mammary tumours. This procedure resulted in a list of 2398 probe sets which were differentially expressed between controls and invasive mammary tumours. To obtain tumour-specific genes that are already up-regulated in DCIS, only genes present in both lists were used for further analysis. A total of 173 probe sets met these criteria and were considered as potential candidate genes for early DCIS detection. These 173 probe sets cover 140 genes (additional file 1, Table S4).
Table 1. Staining pattern of the immunohistochemical analysis of different human mammary tissue samples using a semi-quantitative scoring system (columns: healthy control, low grade DCIS, high grade DCIS, IDC).
Supervised hierarchical clustering using the 140 candidate genes revealed tight clustering of murine samples of the same month after lactation (Figure 1A). The vast majority of the 140 candidate genes were up-regulated in DCIS and tumour samples. As the pattern and length of the branches reflect the relatedness of the samples, these 140 genes clearly distinguish between control samples and malignant samples. In addition, the samples of the late time points after lactation (3-5 months) exhibited an expression of the 140 genes similar to that of the invasive tumour samples.
In order to identify a minimal set of genes as final candidates, the distribution of the expression values of the 140 significantly changed candidate genes was investigated. Only genes showing an enhanced expression in the malignant samples were considered. Genes which showed constant up-regulation during DCIS development and low variance within the groups were chosen as final marker genes. These are: MUC1, SPP1, RRM2, FOXM1, EXO1, NUSAP1 and DEPDC1. Using these seven genes for supervised hierarchical clustering allowed us to separate healthy control samples from all other samples. Again, the tumour samples clustered in the same branch as most of the samples of the late time points (3, 4 and 5 months) (Figure 1B).
To confirm the microarray results, the expression of the seven marker genes was validated by quantitative RT-PCR (Figure 2A). Each group consisted of seven murine samples. The results confirmed the findings of the microarray analysis very well. A comparison of microarray and qRT-PCR box plots showed nearly identical pictures, hence only the RT-PCR results are shown here. With the exception of two cases, the expression of the marker genes was already significantly up-regulated two months after lactation, although in histological investigations almost no DCIS was found. In the case of FOXM1 and DEPDC1, up-regulation in month two was not significant, but this had changed by month three. For most of the genes there was a continuous increase in expression, which reached its highest point in the IDC.
Analysis of human DCIS samples
As a next step we investigated the gene expression of human DCIS samples. To this end we used a set of 19 samples consisting of five healthy controls, five invasive tumours and nine DCIS samples. Expression profiles were recorded by Affymetrix U133 plus 2.0 GeneChips. An unsupervised hierarchical clustering of the human samples shows the healthy samples separated from the DCIS and IDC samples. The DCIS samples showed an expression profile similar to that of the invasive breast carcinomas (data not shown). The human data were analyzed in the same fashion as the murine samples. However, we focused on the markers already found in the murine analysis. Statistical analysis revealed a strong up-regulation of the seven previously identified marker genes in human DCIS as well. This led us to conclude that these marker genes can also be used as early detection markers for human DCIS. Hierarchical clustering using these seven genes showed that DCIS and invasive carcinomas were clearly separated from healthy samples (Figure 3). Within the malignant branch, DCIS and invasive carcinomas could not be distinguished.
We also analysed genes which were significantly upregulated only in DCIS but not in IDC. In the murine samples no such genes could be identified. In the human samples 5 genes were found which showed significant up-regulation in DCIS but not in IDC in comparison to healthy samples. The most interesting gene was WNT5A. Recent work in a wide range of human tumours has pointed to a critical role for the Wnt signaling molecule Wnt-5a in malignant progression, but there is conflicting evidence whether Wnt-5a has a tumour-promoting or -suppressing role [12]. Expression of WNT5A was not further investigated in the present contribution.
Microarray results for the seven candidate genes described above were validated by quantitative PCR. Expression differences were highly significant between healthy controls and DCIS samples (Figure 2B). In Table 2 the most important reported functions of each of the seven marker genes are depicted.
In order to further investigate the expression of these candidate genes at the cellular level in vivo, we performed immunohistochemical analyses in a panel of healthy human mammary gland tissue samples, DCIS and invasive breast tumours. To do so we used another set of formalin-fixed paraffin-embedded human tissue samples. For each protein multiple immunohistochemical stainings were performed (five samples per group). Representative examples are shown in Figure 4. For EXO1 no specific antibody was found. Immunoreaction of the marker genes in healthy tissues was negative or very weak. However, immunoreaction in DCIS and IDC samples in the majority of cases was very intense. The expression of the protein was indicated by pink staining (see arrowhead for an example). Positive staining was predominantly visible within the lumina of the ducts; mainly epithelial cells showed a positive signal (see arrows for examples). A positive staining was already visible in the low grade DCIS samples. The staining pattern was cytoplasmic for SPP1, RRM2, FOXM1, DEPDC1 and NUSAP1. Membranous as well as cytoplasmic staining was visible for MUC1.
Discussion
The identification of gene expression signatures or molecular markers in DCIS is hindered by difficulties in obtaining sufficient numbers of frozen DCIS samples from the hospital. Thus, we first approached the problem using a mouse model. We chose the WAP-TNP8 mouse model of Schulze-Garg et al. [9] because it is a well-described model for DCIS and exhibits long latency in developing invasive tumours. This animal model has been used for detection of different tumour growth kinetics by flat-panel volume computed tomography [13], for the analysis of cell type-specific expression of Casein kinase 1 epsilon (CK1e) [14] and for a molecular imaging study of extradomain-b fibronectin (EDB-FN) targeting neoangiogenesis by near-infrared fluorescence [15]. In our study, we used this model for determining tumour-initiating factors and investigating gene expression profiles at different stages of tumour development. Gene profiling was confirmed within two panels of human DCIS samples. A panel of fresh frozen human samples was used for another gene expression profiling analysis in order to verify whether the expression of the marker genes identified in the murine samples agrees with that found in the human samples. A second panel of human FFPE samples, including high but also low grade DCIS, was used for a validation of the expression of the candidate genes on the protein level.
In this study, we identified seven marker genes which are overexpressed in DCIS and invasive carcinomas and allowed us to distinguish between healthy and DCIS samples. Our marker genes include MUC1, SPP1, RRM2, FOXM1, EXO1, NUSAP1 and DEPDC1. Some of these markers are already known to be related to DCIS; others are completely novel for DCIS and even for breast cancer. In the future, such molecular markers may allow an early detection of DCIS.
Epithelial mucin 1 (MUC1) is an accepted serum tumour marker and cellular tumour antigen [16]. According to immunohistological studies, MUC1 protein expression is particularly high in tumours, where it undergoes changes in glycosylation and distribution [17]. However, a low level of expression of MUC1 is also found in healthy, undifferentiated (non-lactating) breast tissue [18]. The correlation between MUC1 expression and the clinical outcome of the patients is still under debate. While some in-vitro studies showed that MUC1 overexpression promotes cellular invasion [19,20], investigations of MUC1 expression in breast carcinomas have shown a better outcome for patients overexpressing MUC1 [21]. MUC1 was found to be commonly upregulated in both DCIS and IDC [7]. Our results also confirmed earlier findings showing that MUC1 is also up-regulated on the protein level in DCIS [22].
Similarly, overexpression of Osteopontin (SPP1) has been found in a variety of cancers, including breast, lung, colorectal, stomach, ovarian cancers and melanoma [5,23]. SPP1 is a phosphorylated glycoprotein secreted by several cell types, including those involved in bone turnover and cells of the immune system [5,24]. SPP1 has been associated with breast cancer progression, invasion and metastasis [24][25][26][27][28][29] and is present at elevated levels in the blood and plasma of some patients with metastatic cancers [5]. We have found SPP1 to be significantly up-regulated in DCIS. Previously, Reinholz et al. investigated the expression of SPP1 in normal, non-invasive, invasive and metastatic human breast cancer specimens by RT-PCR [30]. They showed that the mRNA level of SPP1 increased in non-invasive, invasive and metastatic breast tumour tissue compared to normal breast tissue. We found an increase in staining intensity for SPP1 in DCIS samples compared to healthy controls, which confirms a study by Oyama et al., who detected positive staining of SPP1 using immunohistochemistry on paraffin-embedded tissues in most cases of low-grade cribriform and high-grade comedo-type ductal carcinoma in situ [31].
RRM2, a ribonucleotide reductase (RR) subunit, was shown to be overexpressed in human breast carcinoma tissue (DCIS) [32]. RR is responsible for the de novo conversion of ribonucleoside diphosphates to deoxyribonucleoside diphosphates that are essential for DNA synthesis and repair [33,34]. RR consists of two subunits, M1 (RRM1) and M2 (RRM2). It is known that alterations in RR levels can have significant effects on the biological properties of cells, including tumour promotion and tumour progression. In our findings, RRM2 was significantly up-regulated on the RNA as well as on the protein level.
Likewise, the transcription factor forkhead box M1 (FOXM1) was found to be differentially expressed in most solid tumours [35]. FOXM1 stimulates proliferation and cell cycle progression by promoting entry into both S-phase and mitosis. In addition, it plays a role in the proper execution of mitosis. FOXM1 is implicated in the tumourigenesis of more than 20 types of human tumours and contributes to both tumour initiation and progression [36]. FOXM1 is broadly expressed in breast epithelial cell lines and seems to be significantly increased in transformed breast epithelial cell lines. Consistently, FOXM1 expression is specifically elevated in breast carcinomas [37]. Using immunohistochemistry, Bektas et al. analysed FOXM1 expression in human invasive breast carcinomas and normal breast tissues on a tissue microarray [38]. In contrast to what could be expected from GO-analysis (Table 2), they found a strong cytoplasmic expression of the transcription factor FOXM1, resulting most likely from its strong overexpression. Additionally, using RT-PCR, FOXM1 was found to be overexpressed in breast cancer in comparison to normal breast tissue both on the RNA and protein level. Furthermore, FOXM1 was found to be overexpressed during progression from DCIS to invasive breast cancer [7]. Our findings confirm these results. FOXM1 was already significantly overexpressed at the DCIS level and was expressed even more highly in IDC.
In contrast, overexpression of EXO1, NUSAP1 and DEPDC1 in IDC and DCIS had not yet been described. We found these genes significantly up-regulated in DCIS as well as in IDC. EXO1 (exonuclease 1) has been implicated in a multitude of eukaryotic DNA metabolic pathways that include DNA repair, recombination, replication, and telomere integrity. This makes EXO1 a logical target for mutation during oncogenesis [39]. However, Rasmussen et al. have shown high expression levels of human EXO1 transcripts in liver cancer cell lines and in colon and pancreas adenocarcinomas, but not in the corresponding non-neoplastic tissue [40]. This is a first hint that EXO1 is up-regulated in tumours. Nucleolar spindle-associated protein (NUSAP1) was identified in 2003 as a novel 55-kD vertebrate protein with selective expression in proliferating cells [41]. mRNA and protein levels of NUSAP1 peak at the transition of G2 to mitosis and abruptly decline after cell division. Interestingly, NUSAP1 was found to be upregulated in melanoma cells by gene expression profiling of a series of melanoma cell lines [42]. Proteins such as NUSAP that show little or no expression in G1 and G0 may be reliable histochemical markers for proliferation and might therefore be useful for cancer prognosis [41]. NUSAP1 expression was significantly increased in DCIS and IDC in our study and is therefore a promising new tumour marker. DEPDC1 (DEP domain containing 1) is also a newly detected gene. Kanehira et al. identified DEPDC1 as a novel gene that is highly overexpressed in bladder cancer samples, but not expressed in any human organs (heart, liver, kidney, lung) except the testis [43]. Our findings show that DEPDC1 is significantly up-regulated in DCIS and IDC. Preliminary results from a study of the functional relevance of DEPDC1 show that it seems to be an important gene for proliferation as well as for migration and invasion (C.S. manuscript in progress).
We found that the seven putative marker genes are strongly up-regulated in mice and in human DCIS samples. This reveals that the mouse model we used reflects human breast cancer development. Previously, Klein et al. [44] compared the expression profile of 24 human breast tumours and six WAP-SVT/t mice breast tumours. They found 597 genes which are overexpressed in breast cancer in mice [44]. Their list also contains DEPDC1, NUSAP1, MUC1, EXO1, and RRM2. Some of our marker genes have been described previously in human breast cancer. In a 22-gene signature investigated by Martin et al. [45], FOXM1 and RRM2 were included. This signature accurately predicts breast cancer outcome [45]. Additionally, Ma et al. developed a gene expression index for tumour grade in breast cancer patients which included RRM2 [6]. This is further evidence that the candidate genes we identified are important in tumour development.
Candidate genes were further validated using Oncomine http://www.oncomine.org, a database for online cancer gene expression analysis. In the data set of Richardson et al., which compared normal breast tissue with IDC, six of our seven marker genes are significantly up-regulated in IDC [46]. Additionally, using Oncomine to search for tumour grade and prognostic impact, we found that all the marker genes except MUC1 were significant for prognosis in this database. Using a p-value of 0.001, these genes are up-regulated in multiple expression analyses in patients with a poor prognosis. This is an indication that our panel of marker genes could also be useful as a prognostic tool. Looking at the tumour grade, all the genes except MUC1 and SPP1 were significantly up-regulated in samples with a high tumour grade in Oncomine. Thus, the marker genes might indicate a high grade of malignancy. One explanation for this could be that in the analysis of the human samples, we used predominantly samples with a high tumour grade. On the other hand, in the case of the murine samples, the specimens we investigated were from a very early time point, where no (or few) DCIS were pathologically found.
In accordance with recent gene expression studies, our data support the hypothesis that critical molecular events which have a profound influence on development, progression and outcome of human breast cancer occur at an early stage. Despite significant morphologic differences between the different stages, expression profiles of early lesions are highly similar to the more advanced, invasive lesions [47]. This has also been demonstrated on the protein level [48]. Sorlie et al. claimed that extensive studies of DCIS and other preinvasive stages of tumours will enhance this hypothesis and substantiate the value of gene expression-based classification in the prognosis of breast cancer at an early stage [49]. Furthermore, Ma et al. [50] showed that the tumour microenvironment of invasive breast tumours also participates in tumourigenesis even before tumour cells invade into the stroma. This is a further hint that changes during breast cancer development occur at a very early time point and that the tumour microenvironment also plays an important role in the transition from preinvasive to invasive growth. We took a step in this direction by showing on the RNA level as well as on the protein level that the marker genes we found are already significantly up-regulated on the level of DCIS and likewise later on the IDC level.
Conclusions
Summing up, we found seven putative tumour markers which are strongly expressed at a very early stage of premalignancy and preneoplasia of breast carcinomas. In the future, the identified marker genes might allow an early diagnosis of DCIS and thereby improve prognosis of breast cancer. One next step will be to couple specific probes for these marker genes to near-infrared-dyes and examine whether early lesions can be detected also in an in-vivo animal model.
Additional material
Additional file 1: Table S1. Assays on demand (Applied Biosystems) used for the human RT-PCR. Table S1 gives an overview of the Assays on demand used for the RT-PCR on the human samples. Table S2. Assays on demand (Applied Biosystems) used for the murine RT-PCR. Table S2 gives an overview of the Assays on demand used for the RT-PCR on the murine samples. Table S3. Primary antibodies used for immunohistochemical staining. Table S3 gives an overview of the antibodies used for the immunohistochemistry on the human tissue samples. The table includes information about the dilution, the company and the catalog number of each antibody. Table S4. 173 probe sets significantly changed between controls and DCIS/IDC in WAP-TNP-8 mice. Table S4 shows all the genes found to be differentially expressed between control mice and DCIS/IDC in the WAP-TNP8 mice. | 6,904.8 | 2011-02-11T00:00:00.000 | [
"Biology",
"Medicine"
] |
EXPERIMENTAL RESPONSE OF A LOW-YIELDING, SELF-CENTERING, ROCKING COLUMN BASE JOINT WITH FRICTION DAMPERS
ABSTRACT
INTRODUCTION
Eurocodes require structures to be designed to achieve minimum performance levels under a set of design load combinations [1,2]. Current design procedures are based on structural checks for Serviceability Limit States (SLS), related to the most frequent conditions occurring during the lifetime of the structure, and for Ultimate Limit States (ULS), for which the structure, in case of rare seismic events, can be designed to dissipate energy in selected zones.
The modern seismic protection strategies implemented into international building codes are based, in case of destructive seismic events, on the absorption of the seismic energy in dissipative zones, which are detailed to sustain cyclic inelastic rotation demands [3]. In the case of steel Moment Resisting Frames (MRFs), this strategy is traditionally applied by properly overstrengthening columns and connections, enforcing in this manner the development of plastic hinges at the beam ends and at the base of the columns. Additionally, to maximize the energy dissipation, the plastic zones are spread along the elevation of the building, promoting the development of a global failure mode through the application of member hierarchy criteria and the design of full-strength connections [4][5][6][7]. Therefore, owing to the assumptions made in design, traditional procedures typically lead to structures characterized by weak beams and column bases, with strong joints. This approach, if on the one hand it provides benefits, such as the development of stable plasticization and the reduction of the inter-storey drifts under serviceability loading conditions, on the other hand leads to significant shortcomings. The most substantial weakness is intrinsic in the design strategy itself. Although the damage is needed to absorb the input earthquake energy, it also represents one of the main sources of economic loss [8][9][10][11]. Since the dissipative zones are constituted by sections or elements belonging to the structural system, after severe seismic events the structure is affected by significant damage and, because of permanent plastic deformations, it is characterized by a pattern of residual drifts. In general, the magnitude of this out-of-plumbness may be significant in view of the actual possibility to repair the structure after a destructive seismic event.
Aiming to design structures undergoing minimal damage, special typologies of dissipative partial-strength joints based on the inclusion of friction dampers in connections have been proposed and, recently, extensive studies have been carried out in research programs worldwide [12][13][14][15]. These connections were initially proposed by Grigorian and co-authors in 1993 [16] and, subsequently, many other theoretical, experimental and modelling works, as well as practical applications, were carried out, especially in New Zealand, developing the so-called Sliding Hinge Joint (SHJ). This connection is characterised by very simple details based on the inclusion of Asymmetric Friction Connections (AFC) or Symmetric Friction Connections (SFC) at the bottom beam flange, with friction pads made of mild steel, aluminium, brass or, in the most recent versions, abrasion-resistant steel (e.g. [16][17][18][19][20][21][22]).
Similar solutions were also patented in 2000 in Japan [23,24] while, more recently, other alternatives have been proposed suggesting new layouts, in which the friction damper is conceived as a separate element fabricated in the shop and fastened on site to the beam bottom flange [25][26][27][28]. This layout, which is probably not as simple as the SHJ, provides the possibility to realise the whole damper in the shop, allowing a better control on the quality of the materials (e.g. higher control of the surface conditions, continuous factory controls on the production, control on the quality of the employed bolts) and on the application of rigorous bolt installation procedures complying with the relevant European standards [28][29][30][31][32][33][34]. The layout of the typical beam-to-column joint, recently proposed in Europe for application in semi-continuous steel Moment Resisting Frames (MRFs), represents an alternative to a stiffened Double Split Tee connection (DST) where, in place of the bottom Tee, a slotted friction device with a haunch slipping on friction shims pre-stressed with pre-loadable high strength bolts (Fig. 1) is realised. All the elements of the connection constitute a Symmetrical Friction Connection (SFC) which is, as already underlined, a friction damping device fabricated as a standalone element in the shop. With such a detail the beam is forced to rotate around the pin located at the base of the upper T-stub web and the energy dissipation is ensured by the alternate slippage of the lower beam flange on the friction shims (Fig. 1).
Fig. 1 - Typical layout of one of the connections studied in [28]
This connection, similarly to the SHJ, should be implemented to behave rigidly at the SLS and to allow the beam-to-column inelastic rotation at the ULS. Additionally, through the application of proper hierarchy criteria, both at the global and local level, it can easily be designed to be the only source of energy dissipation of the whole structure. Within this framework, considering the encouraging outcomes of previous research projects dealing with the application of such connections, in this paper the problem of self-centering structures equipped with dissipative friction joints is analysed. In fact, due to permanent deformations in the friction dampers, similarly to what occurs when plastic zones are concentrated in the beams or in yielding connections, significant out-of-plumbness displacements can remain after a severe ground motion [15,42-44]. Indeed, although these connections are very effective from the point of view of damage avoidance, they still present significant problems related to their low self-centering capacity. This drawback is mainly due to the high unloading stiffness of the friction dampers in tension or compression.
To avoid this undesired behaviour, as already proposed in several past studies [34][35][36][37][38][39][40][41], a supplemental re-centering system can be adopted. Specifically, in this paper the attention is focused on the problem of self-centering the column base joint, by studying a detail consisting of a column splice equipped with friction dampers and threaded bars with Belleville disk springs, located just above a traditional full-strength base plate joint. The main advantages of the proposed layout are that: i) the self-centering capability is obtained with re-centering elements (threaded bars and Belleville springs) which have a small size, similar to the dimension of the column-splice cover plates; ii) all the re-centering elements are moved far from the concrete foundation. The work reports the main results of an experimental investigation and preliminary analyses of MRFs equipped with re-centering FREEDAM column base joints. The obtained results are hereinafter critically discussed, showing the promising performance of the proposed column base connection.
Friction dampers and re-centering systems
Being an effective way of dissipating energy, dampers based on the principles of dry friction have become very popular and are largely used in high-risk seismic zones. In the last decades, the application of this concept has been the subject of numerous studies [35][36][37][38][45] and many friction dampers have been proposed for practical purposes. This damper typology usually dissipates energy through the alternate slippage of at least two surfaces in contact, on which a transversal clamping force is applied with hydraulic systems [46], electromagnetic forces [47] or, in the simplest case, by means of mechanical devices such as high strength bolts. This last clamping method is the most common in civil engineering practice.
The cyclic behaviour of friction dampers is normally characterized by a rigid-plastic hysteresis which depends only on two parameters: the clamping force and the friction coefficient of the interfaces in contact. The first parameter is usually governed by the applied tightening procedures, which are based essentially on the control of the nut rotation (displacement-controlled procedure), the applied torque (force-controlled procedure) or on the employment of specific devices which fail or squash at the achievement of the proof preloading level (e.g. DTI or squirter DTI washers and HRC bolts) [26]. Conversely, the second parameter (namely the friction coefficient) is predicted by means of physical modelling or experimental testing. In the former case, the attention is focused on the modelling of complex and microscopic phenomena such as adhesion and ploughing, which depend upon the surface topography, the material hardness, the mechanical properties and the effects of interface layers. In the latter case, which is the most common in structural engineering practice, the properties of the friction interface are studied by means of experimental testing which, for seismic engineering purposes, is generally considered sufficient to provide the information needed for designing the devices. A general discussion dealing with the main factors influencing the friction interaction is reported in [10,48,49].
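As a minimal illustration of the two parameters mentioned above, the sketch below evaluates the slip force of a generic bolted friction damper and its idealized rigid-plastic response; all numbers are placeholders, not values from the devices discussed here.

```python
# Sketch: rigid-plastic idealization of a bolted friction damper (placeholder values).
def slip_force(mu, preload_per_bolt, n_bolts, n_surfaces):
    """Slip resistance = friction coefficient x total clamping force x number of interfaces."""
    return mu * preload_per_bolt * n_bolts * n_surfaces

def damper_force(sliding_sign, F_slip):
    """Idealized rigid-plastic response: the damper force opposes sliding at +/- F_slip."""
    return -sliding_sign * F_slip

F_slip = slip_force(mu=0.6, preload_per_bolt=50e3, n_bolts=4, n_surfaces=2)
print(F_slip / 1e3, "kN")           # 240 kN with these placeholder numbers
print(damper_force(+1, F_slip))     # force during positive sliding
```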
The main proposals for the application of friction dampers in steel structures refer to bracing systems or beam-to-column connections. One of the first devices based on friction was that developed in [50], which introduced brake lining pads between the steel sliding surfaces at the intersection of braces. One of the simplest forms of friction damper was proposed in [51], who adopted simple bolted slotted plates located at the end of a conventional bracing member. The brace-to-frame connection was designed to slip while the braces remained fully elastic. Another friction damper for chevron braces was proposed in [52]. Concerning connections, as previously said, applications of the principles of dry friction were first developed by Grigorian and co-authors [16] and subsequently extensive studies were carried out in New Zealand by the research group at the University of Auckland [9,10,17-22,53,54] and in other countries, applying these principles also to other structural typologies [55,56].
More recently, other works on a specific type of sliding hinge joint have also been performed in Europe, in a research activity regarding the analysis of friction materials, the bolt installation procedures, the long-term response due to relaxation of the slip force, the robustness assessment, the FE modelling and the experimental analysis of real-scale structures or sub-assemblies of joints [26,28].
While, as herein summarized, friction dampers in beam-to-column connections have been largely investigated, the application of friction dampers in column base joints of steel structures is only a recent proposal and little knowledge is currently available. The idea to dissipate energy in the base plate with friction devices comes from the field observation of damage after the earthquakes of Northridge (1994), Kobe (1995) and Tohoku (2011). In fact, during the technical surveys, in many cases severe damage involving the plate and the anchor bolts was observed. Additionally, past experimental tests have indicated that traditional base plate connections are prone to the development of damage in elements which are not easy to replace, such as the base section of the column (in case of full-strength connections) or the base plate/anchors (in case of partial-strength connections) and, due to residual deformations, may give rise to a pattern of residual lateral displacements in the whole building. Therefore, in general, owing to the limited dissipative capacity and difficult reparability of the base joint (base joints are typically hidden by the flooring of the first storey), the occurrence of damage at the base of the building represents a significant shortcoming, both in view of the actual reparability of the building and in terms of the economic cost to be sustained in the aftermath. All these issues have recently motivated a significant number of research activities worldwide dealing with the development of innovative base plate connections equipped with dampers able to limit damage, while preserving the ability of the structure to dissipate energy in case of rare seismic events. These connections, in some cases, have also been equipped with re-centering elements able to restore the columns to the initial position.
Two layouts were proposed by McRae and co-authors [58], while in [59] the efficiency of the dissipation of seismic energy through column base solutions was studied by carrying out a series of experimental tests on different low-damage steel base connections. Within this work, two new design solutions were tested: the weak-axis aligned asymmetric friction connection, where the friction surfaces are parallel to the web on plates outstanding from the column flange, and the strong-axis aligned asymmetric friction connection, where the friction surfaces are parallel to the column flange. It is worth noting that, as evidenced in [58], critical phenomena occurring with conventional full-strength connections can be mitigated by means of friction column base connections, such as that proposed in this paper. In fact, while in traditional frames axial shortening of the column may occur due to yielding of the column base section and local buckling phenomena [58,60], with damage-free connections, such as the double friction base columns suggested in [58,59], the axial shortening and its detrimental effects can be completely avoided owing to the absence of plastic deformations in the column. Recently, a novel type of rocking damage-free connection has been proposed in [15]. This column base has a circular hollow section welded to a thick plate, four post-tensioning tendons to give a self-centering capacity to the connection, and friction dampers to dissipate energy.
Other practical cases of self-centering systems proposed in the literature usually include a tendon, applied in the joint or over the entire extension of the structure. In [39] it was proposed to add friction ring springs to the SHJ, obtaining a flag-shape behavior of the connection. A similar approach in terms of re-centering was proposed in [40], which employed as re-centering components rods applied at the tips of the whole beam, rather than only at the joint. In this work, the introduction of an "active link" was suggested and the connection to the beam, at both ends, was achieved by means of pre-tensioned rods; in the second work, the employment of a set of rods going through the entire segment of the beam and attached in the joint section was proposed.
Self-centering base connections have also been developed in [41] using post-tensioned rods anchored to the column foundation. The aim is to ensure the possibility of movement, prestressing the rods within their elastic capacity. However, the proposed solutions, based on anchoring the rods to the foundation, can be less effective in a replacement situation. Friction systems can also show a self-centering ability when employed with an asymmetric configuration of the damper [21]. However, such capability is usually limited, and additional components are normally needed to restore the connection itself or the structure. A significant practical implementation of the damage avoidance design strategy is described in [57]. In this project, the building is designed in the transverse direction with tension-limited rocking shear walls and in the opposite direction with Sliding Hinge Joint MRFs. In this application, the rocking shear walls are equipped with Ringfeder springs to obtain the self-centering, ensuring hinge formation under a stable rocking mechanism. Conversely, the MRF bays are equipped with conventional SHJs without self-centering devices. The similarities between the solutions adopted in [57] and the application described in this paper are related to the adoption of heavy-load springs to adjust the capacity of the structure and the introduction of friction dampers in the column base. Nevertheless, as a difference, the connection hereinafter presented proposes to introduce in the column base a simple system of threaded bars with sets of Belleville washers acting as a spring to provide the needed self-centering action. This proposal aims to keep the layout of the connection as simple as possible while providing, in addition to the self-centering capacity, further benefits such as the absence of interaction with the concrete foundation and the limited size of the connection, which is, overall, similar to or smaller than the size of the cover plates employed to realize a traditional column splice connection.
Proposed Solution
The proposed connection consists of a slotted column splice equipped with friction pads, located above a traditional full-strength base plate joint (Fig. 2a) [38]. In particular, symmetrical friction dampers are realized by slotting the upper part of the column above the splice and adding cover plates and friction pads pre-stressed with high strength pre-loadable bolts on both the web and the flanges. To allow the gap opening, the slotted holes are designed to accommodate a minimum rotation of 40 mrad [60], which is the benchmark rotation established by AISC 341-16 for Special Moment Frames (SMFs). Similar provisions are given in EC8 [3], which requires a rotation of 35 mrad for Ductility Class High frames. Between the steel plates and the column, friction pads are inserted. It is worth noting that the layout presented in this paper does not explicitly consider the possibility to accommodate a similar rotation also in the weak direction, which is, instead, a situation rather common in practice.
Nevertheless, to provide a biaxial rotation capacity to the connection it would be sufficient to [...]. To provide a self-centering capability, pre-loaded threaded bars are introduced (Fig. 2b). Additionally, to provide sufficient deformability to the bar, a system of disk springs arranged in series and in parallel is installed in the assembly.
To assess the overall response of the connection (sub-assembly of Fig. 3a), the behaviour of the whole system (connection, flange and web friction pads, re-centering bars and column) can be idealized by means of the simplified mechanical model shown in Fig. 3b. The rotational spring Cb accounts for the flexural stiffness of the cantilever column of length equal to l0 (Fig. 3a), and is a function of Es, the steel modulus of elasticity, l0, the column length up to the splice section, and Ic, the moment of inertia of the column profile. The translational spring Ff models the friction pads on the column flanges. The stiffness of this component can be assumed infinite up to the achievement of the slip force and equal to zero when this value is achieved. Similarly, Fw models the friction pads on the column web. The translational spring Ftb models the axial behaviour of the threaded bars, which work in series with the system of disk springs, whose resistance is defined as Fds. The stiffness of the threaded bars and the stiffness of the disk springs depend on nb, the number of bars employed in the connection symmetrically with respect to the centroid of the column, npar, the number of disk springs in parallel, nser, the number of disk springs in series, and Kds1, the stiffness of the single disk spring. Considering this mechanical model, it is easy to verify that the typical moment-rotation behaviour of the connection can be represented by a flag shape (Fig. 3c). The moment M2 represents the decompression moment, which corresponds to the attainment of the slippage force in all the friction pads. The first branch of the moment-rotation curve is characterized by an infinite stiffness of the connection and, therefore, the rotational stiffness of the whole system is equal to KCb. The second branch corresponds to the gap opening. In this phase, the slippage of the friction pads occurs and the rotational stiffness of the system is due to the stiffness of the threaded bars, disk springs and column in bending, and depends also on the column height hc. Branches 3 and 4 are characterized by the same stiffness as branches 1 and 2, respectively. The bending moment M0 represents the decompression moment due to the sum of the axial load in the column and the pre-stress of the threaded bars, while the bending moment M1 represents the contribution to the bending moment due to the friction pads, which depends also on the thickness tfc of the column flange. Considering these equations, it is easy to verify that, from a design point of view, the re-centering of the connection can be guaranteed by imposing the condition given in Eq. (7).
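To make the flag-shape model above concrete, the sketch below assembles the main quantities of the simplified model. Since the original closed-form expressions are not reproduced in this text, the formulas used here (3EI/l0 for the column spring, EA/l per bar, npar·Kds1/nser for the disk spring stack, and the lever arms adopted for M0 and M1) are plausible assumptions consistent with the component definitions, not the authors' exact equations; all numerical inputs are placeholders.

```python
# Sketch of the flag-shaped model of the column splice (assumed expressions, units N and mm).

def k_column(E_s, I_c, l0):
    """Rotational spring C_b of the cantilever column below the splice (assumed 3EI/l0)."""
    return 3.0 * E_s * I_c / l0

def k_bars(n_bars, E_s, A_bar, l_bar):
    """Axial stiffness F_tb of the re-centering threaded bars (assumed EA/l per bar)."""
    return n_bars * E_s * A_bar / l_bar

def k_disk_stack(n_par, n_ser, K_ds1):
    """Disk spring stack: n_par springs in parallel, repeated n_ser times in series."""
    return n_par * K_ds1 / n_ser

def decompression_moments(N_col, F_bar_pre, F_flange, h_c, t_fc):
    """M0 from axial load + bar pre-stress, M1 from the flange friction pads
    (lever arms h_c/2 and h_c - t_fc are assumptions of this sketch)."""
    M0 = (N_col + F_bar_pre) * h_c / 2.0
    M1 = F_flange * (h_c - t_fc)
    return M0, M1

K_cb = k_column(E_s=210e3, I_c=112.6e6, l0=1550.0)            # HEB240-like column
K_tb = k_bars(n_bars=2, E_s=210e3, A_bar=245.0, l_bar=300.0)  # bar length assumed
K_ds = k_disk_stack(n_par=3, n_ser=7, K_ds1=80e3)             # ~34 kN/mm
M0, M1 = decompression_moments(N_col=730e3, F_bar_pre=200e3,
                               F_flange=400e3, h_c=240.0, t_fc=17.0)
print(round(K_tb), round(K_ds), "N/mm;", round(M0 / 1e6, 1), round(M1 / 1e6, 1), "kNm")
print("re-centering condition satisfied:", M0 > M1)           # qualitative check of Eq. (7)
```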
DESIGN OF THE SPECIMENS FOR EXPERIMENTAL TESTS
With the set of equations previously reported, starting from the definition of the design actions, a column base connection has been designed. Owing to reasons of compatibility of the specimen capacity with the available equipment, the axial load has been limited to 25% of the squash load, while the bending moment acting at the splice has been set equal to 95% of the plastic bending moment of the column. The shear load derives from the testing scheme, which is a cantilever representing, approximately, half a column of the first storey of the building. Therefore, starting from an HEB240 column profile, steel class S275, the following design values have been calculated: where l0 = 1.55 m is the distance between the force at the top of the column and the splice (Fig. 3a), Npl is the column squash load, ν is the axial load ratio, Md is the assumed design bending moment for the column base connection and Vd is the design value of the shear force.
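As a minimal sketch of the design-action calculation just described, the lines below use nominal HEB240 section properties (A about 106 cm², Wpl about 1053 cm³) and fy = 275 MPa; these are catalogue values assumed here for illustration, so the resulting figures are only indicative of the order of magnitude of the test actions.

```python
# Sketch: design actions for the specimen (nominal HEB240 in S275), units N and mm.
A    = 106e2     # cross-sectional area [mm^2] (nominal catalogue value, assumed)
W_pl = 1053e3    # plastic section modulus [mm^3] (nominal catalogue value, assumed)
f_y  = 275.0     # yield strength [MPa]
l0   = 1550.0    # distance between the applied force and the splice [mm]

N_pl = A * f_y                 # squash load of the column, ~2900 kN
N_d  = 0.25 * N_pl             # axial load limited to 25% of the squash load
M_d  = 0.95 * W_pl * f_y       # design bending moment at the splice, ~275 kNm
V_d  = M_d / l0                # shear force from the cantilever testing scheme

print(round(N_d / 1e3), "kN,", round(M_d / 1e6), "kNm,", round(V_d / 1e3), "kN")
```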
Based on the design shear load Vd, the web component has first been designed by imposing that the slippage force on the web resists the applied shear load. All plates are of steel class S275. The friction pads have been chosen according to the results of previous tests on friction materials [62]. Based on these results, a friction coefficient μ = 0.6 has been assumed.
Considering four bolts for both the upper and lower sides of the web connection, the pre-load Fwp for each bolt has been determined as: where Fw is the slip resistance of the web friction dampers, μ is the design value of the friction coefficient, Fwp is the preloading force of the web bolts, nb is the number of web bolts and ns is the number of friction interfaces (in this case, considering the symmetrical configuration, equal to two). Considering the design resistance, M14 HV bolts of class 10.9 have been selected (HV stands for "Hochfeste Bolzen mit Vorspannung", i.e. high-strength bolts for preloading). In order to design the re-centering threaded bars, according to Eq. (7), it has to be considered that the force in the bars depends on the slippage force of the flange friction pads. Therefore, imposing the global equilibrium between the internal and external bending moment at the splice, the following system can be written to design Ftb (the preloading force of the threaded bars) and Ff (the slip resistance of the flange dampers), where hc is the column depth and tfc is the column flange thickness. For the sake of simplicity, if the lever arm of the friction force of the column flange friction dampers is approximated with hc, the system of equations (12) leads to the following simple design formulation: Considering the design actions, for the specimens, two M20 threaded bars, having a maximum pre-loading capacity of 171.5 kN, have been adopted for the re-centering system.
Considering this capacity, the bar preload has been fixed equal to 100 kN. Therefore, system (12) provides the following value of the design slippage force of the column flange friction pads: Considering four bolts for both the upper and lower sides of the column flange connection, the necessary pre-load Ffp for each bolt is: In this case, M20 HV bolts of class 10.9 have been selected. The last step of the design procedure consists in the design of the disk springs. Assuming a maximum rotation for the joint equal to 40 mrad, the maximum gap opening at the level of the re-centering bar is 4.8 mm (0.04 x 120 mm). Adopting standard disk springs with a diameter of 45 mm, a thickness of 5 mm and an internal cone height of 1.4 mm, three disk springs in parallel are necessary to resist the bar yielding force. The resistance of each disk spring is about 80 kN, while the stiffness (Kds1) is about 80 kN/mm. Considering the previously defined maximum displacement, Eq. (3) provides a minimum number of 21 disk springs, to be arranged in sets of 3 springs in parallel (so-called "nested" configuration), 7 times in series (so-called "back-to-back" configuration), leading to an overall stiffness equal to Kds = 35.36 kN/mm.
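The bolt-preload and disk-spring calculations described above can be condensed into a few lines, as sketched below. The design shear used for the web dampers, the usable deflection per spring and the bar yield force are indicative assumptions consistent with the values quoted in the text, not the exact figures of the original design (with the nominal 80 kN/mm single-spring stiffness the stack stiffness comes out slightly below the 35.36 kN/mm reported).

```python
import math

# --- Web friction damper: required preload per bolt (assumes F_w = V_d) ---
mu   = 0.6        # design friction coefficient of the pads
n_b  = 4          # bolts per side of the web damper
n_s  = 2          # friction interfaces (symmetric configuration)
V_d  = 177e3      # design shear [N] (indicative value from the cantilever scheme)
F_wp = V_d / (mu * n_b * n_s)            # ~37 kN per bolt

# --- Re-centering bars and Belleville (disk) spring stack ---
theta_max = 0.040                        # design joint rotation [rad]
lever     = 120.0                        # bar distance from the rotation pin [mm]
gap       = theta_max * lever            # maximum gap opening ~ 4.8 mm

F_bar_y  = 245.0 * 900.0                 # M20 class 10.9 bar yield force [N] (A_s = 245 mm^2)
F_spring = 80e3                          # capacity of a single disk spring [N] (nominal)
K_ds1    = 80e3                          # stiffness of a single disk spring [N/mm] (nominal)
defl_use = 0.7                           # usable deflection per spring [mm] (assumed)

n_par = math.ceil(F_bar_y / F_spring)    # springs in parallel ("nested")     -> 3
n_ser = math.ceil(gap / defl_use)        # groups in series ("back-to-back")  -> 7
K_ds  = n_par * K_ds1 / n_ser            # stack stiffness ~ 34 kN/mm

print(round(F_wp / 1e3, 1), "kN per web bolt")
print(n_par, "x", n_ser, "disk springs, K_ds =", round(K_ds / 1e3, 1), "kN/mm")
```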
CYCLIC AND PSEUDO-DYNAMIC TESTS
The testing equipment is depicted in Fig. 4. Regarding the measurement devices, a torque sensor Futek TAT430 has been used to measure the initial torque applied to the bolts with the torque wrench, while four load cells Futek LTH500 (capacity equal to 222 kN) have been installed in the connection to monitor the tensile forces in the threaded bars and in two bolts of one of the flange friction dampers (Fig. 5c). Additionally, LDT displacement transducers (max. 50 mm) have been adopted in order to measure the vertical displacements on both column sides (Fig. 5c). Regarding the bolt tightening procedure, the initial pre-load, according to EN 1090-2 specifications, was increased by 10% to account for the random variability of the bolt tightening and the initial installation losses. Thus, a torque of 180 Nm was applied in the flanges and 60 Nm in the web. In the different tests, axial load ratios equal to 25% and 12.5% have been applied. The axial loads were selected in a reasonable range of variation considering the typical size of MRFs designed according to EC8. Specific values, in general, obviously depend on the building plan and frame configuration. Nevertheless, values ranging from 10% to 30% seem representative of MRFs designed in DCH [60,63]. The adoption of a constant axial force clearly does not reproduce the real loading situation of all the columns of a moment resisting frame. In fact, due to overturning bending moments, especially the external columns of MRFs usually undergo axial force variations during the earthquake. The choice to adopt a constant axial force was made only to simplify the equipment used, as is normally done in the literature for similar tests [64]. From the practical point of view, this situation better reproduces the behaviour of internal columns which, typically, undergo lower axial load fluctuations during the seismic event. In the tests with lower values of the column axial load, the total axial load in the re-centering bars has been increased to 280 kN, which is still compatible with the preloading capacity of the threaded bars but not sufficient to respect Eq. (7) for guaranteeing the flag-shape behaviour. In Table 1, a summary of the main values related to the loading conditions of the specimens is given. In Fig. 6, the hysteretic curves of the experimental tests are reported. In particular, in Fig. 6a the experimental tests with higher axial load are depicted, while Fig. 6b shows the tests with lower axial load ratio. The response of the connections reflected the expected behaviour, highlighting in the different cases the effect of the re-centering bars. In fact, from the results presented in Fig. 6, it can be clearly observed that the threaded bars have played an important role. Tests 1 and 2 were carried out with the higher value of the axial load ratio (25%), while tests 3 and 4 were carried out with a reduced axial force (12.5%). In the first two tests the self-centering behaviour was expected (because the size of the threaded bars was defined considering an axial load ratio equal to 25%), while in the third and fourth tests the self-centering could not be achieved because the initial tension in the bars was about half the preload needed to achieve the theoretical self-centering condition. The 3rd and 4th tests were carried out mainly to highlight the role of the re-centering threaded bars, even though in these cases, to obtain full self-centering, as already evidenced, higher capacity re-centering systems should have been employed. The cyclic moment-rotation curve of test 1 (Fig. 6a, red line) highlights that the connection, with the axial force considered in the design phase, was able to return almost to the initial position with a very low value of the residual rotation (2.1 mrad), while in the case of test 3 (Fig.
6b, red line) the residual rotation was higher (31 mrad) and well beyond the constructional drift normally accepted in the execution of steel structures (usually lower than 5 mrad, depending on the number of columns and the height of the building) or the tolerance limit to be accepted accounting for the issues related to the building functionality (which, accounting for the existing literature, can be assumed equal to 5 mrad, as suggested in [65]). In any case, comparing Fig. 6a with Fig. 6b, the role of the re-centering bars can be clearly noticed in both cases. The pseudo-dynamic testing method combines the on-line numerical solution of the equation of motion with experimental data regarding restoring forces and corresponding displacements due to the quasi-static application of loads, to provide realistic dynamic response histories even in case of non-linear behaviour of severely damaged structures [66]. Its main advantage is that it adopts essentially the same equipment of a conventional quasi-static test, in which prescribed load or displacement histories are imposed on the specimen by means of servo-hydraulic actuators (Fig. 4). The structure to be tested has been idealized as a discrete-parameter system consisting of one degree of freedom, controlled by the actuator.
The classical equation of motion is solved by means of a direct step-by-step integration scheme in which the mass and the viscous damping properties of the structure are modelled analytically, while the displacements, and consequently the restoring forces developed by the structure, are measured with the external transducers positioned on a reference frame.
In the experimental test the MTS hydraulic actuator was used to apply the displacement history to a system with a fictitious mass equal to 74t.The test was carried out neglecting any additional viscous damping and applying a loading velocity equal to 0.1 mm/s.
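For illustration, the central-difference loop below mimics the pseudo-dynamic algorithm described above for a single degree of freedom with the 74 t fictitious mass and no added viscous damping. In the real test the restoring force is the value measured on the specimen at each imposed displacement; here it is replaced by a placeholder linear spring and a fictitious sine ground motion, so the sketch only illustrates the structure of the algorithm, not the actual test software.

```python
import numpy as np

def pseudo_dynamic(ground_acc, dt, mass, restoring_force, c=0.0):
    """Explicit central-difference pseudo-dynamic scheme for a SDOF specimen.
    restoring_force(u): in the real test, the force measured on the specimen
    after the displacement u has been imposed by the actuator."""
    n = len(ground_acc)
    u = np.zeros(n)
    u_prev = 0.0
    for i in range(1, n - 1):
        r = restoring_force(u[i])                 # measured restoring force at step i
        p = -mass * ground_acc[i]                 # effective seismic load
        a = mass / dt**2 + c / (2.0 * dt)
        b = p - r + mass * (2.0 * u[i] - u_prev) / dt**2 + c * u_prev / (2.0 * dt)
        u_prev, u[i + 1] = u[i], b / a
    return u

# Toy usage: elastic "specimen" (k = 3 kN/mm) and a fictitious 0.5 Hz sine ground motion.
t = np.arange(0.0, 20.0, 0.01)
acc = 3.0 * np.sin(2.0 * np.pi * 0.5 * t)                        # [m/s^2]
u = pseudo_dynamic(acc, dt=0.01, mass=74e3, restoring_force=lambda x: 3e6 * x)
print(round(float(np.max(np.abs(u))) * 1000.0, 1), "mm peak displacement")
```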
In order to perform the tests, the Kobe (Japan, 1995) (record of 16.1.1995, N-S direction) and Spitak (Armenia, 1988) (record of 12.7.1988, N-S direction) earthquake records were selected as ground motions. Record scale factors equal to 1.4 (PGA = 0.35 g) for the Kobe earthquake and equal to 1 (PGA = 0.199 g) for the Spitak earthquake were considered. The selection of a few earthquake records for a limited number of pseudo-dynamic tests is always, under many points of view, arbitrary and cannot be representative of all the possible real cases. In this activity, these two specific records were selected to compare earthquakes with different features. In fact, as can also be noticed from the response of the specimens, while Kobe is a seismic event inducing a high number of large amplitude cycles, Spitak is characterized mainly by two large reversals and many low amplitude cycles. The scale factor of the seismic events was selected in order to achieve in the connection, approximately, a rotation of 40 mrad.
In Fig. 7 the moment-rotation plots of the pseudo-dynamic tests are reported. These pictures confirm the improved performance of the proposed column base connections in terms of reduction of the residual rotations (Table 1). Also in this case, the comparison between the moment-rotation curves of the column base connection with and without the re-centering threaded bars (Fig. 8a) evidences the improvement obtained with the adoption of the re-centering bars. This effect is also evidenced by the reduction of the residual displacement after the simulated earthquake (Table 1). In Fig. 8 the time-histories of the displacements at the top of the column are shown for the three pseudo-dynamic tests. It can be observed that the column with the proposed base connection with re-centering bars is characterized by a residual displacement after the earthquake always lower than 5 mrad [65].
SIMULATIONS OF MRFs
In order to assess the effect of the adoption of the proposed re-centering column base connections on a structure, a preliminary time-history analysis of a MRF has been carried out. The case study structure is a four bays-six storeys scheme designed according to the Theory of plastic mechanism control [67]. This methodology allows the column sizes to be selected applying the upper bound theorem and the concept of the mechanism equilibrium curve. The considered layout has inter-storey heights equal to 3200 mm, except for the first level whose height is equal to 3500 mm, while the bays all have a span of 6000 mm. Regarding the loads, a uniform dead load gk = 4 kN/m² and a uniform live load qk = 2 kN/m² (value given by the code for residential buildings) have been considered. Since the analysed frame is the perimeter frame of the building and the assumed transverse bay span Lt is equal to 6000 mm, a uniform dead load Gk = gk · Lt/2 = 12.00 kN/m and a uniform live load Qk = qk · Lt/2 = 6 kN/m have been considered, so that the design gravity load distribution has been determined in accordance with EC8, i.e. qd = 1.35 Gk + 1.5 Qk = 25.20 kN/m. With reference to the seismic combination, the load is determined as qE = Gk + ψ2 Qk (where ψ2 is the coefficient for the quasi-permanent value of the variable actions, equal to 0.3 for residential buildings) and, as a consequence, the applied reduced gravity load is qE = 12 + 0.3 · 6 = 13.8 kN/m. To assess the influence of the proposed connection on the global response, a first comparison has been performed, modelling the frame with the software for dynamic analysis SeismoStruct [68] and analyzing the same MRF twice: once assuming a fixed base and once introducing a set of springs able to accurately reproduce the typical behaviour of the proposed self-centering connection. In both cases, the assumed beam-to-column connections are the bolted joints with friction dampers already tested in the European research project FREEDAM, whose response is described in [28] (Fig. 9). The hysteretic model parameters adopted for the re-centering connection are summarized in Table 2 (notation explained in Fig. 11b). With the calibrated parameters, a time-history analysis of the MRF has been performed, considering the first accelerogram of a set of eight natural records selected to match the EC8 reference elastic pseudo-acceleration spectrum (Fig. 14). The fundamental natural period of the structure has been assessed through a modal analysis, determining a value of 1.6 seconds.
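The gravity-load combination used for the case study reduces to a few lines of arithmetic; the sketch below reproduces it, assuming a tributary width of half the transverse bay for the perimeter frame and ψ2 = 0.3 as stated above.

```python
# Sketch: gravity loads on the perimeter MRF of the case study (values from the text).
g_k, q_k = 4.0, 2.0            # dead and live surface loads [kN/m^2]
L_t      = 6.0                 # transverse bay span [m]
trib     = L_t / 2.0           # tributary width of the perimeter frame [m] (assumed)

G_k = g_k * trib               # 12.0 kN/m
Q_k = q_k * trib               #  6.0 kN/m
q_d = 1.35 * G_k + 1.5 * Q_k   # ULS gravity combination -> 25.2 kN/m
q_E = G_k + 0.3 * Q_k          # seismic combination (psi_2 = 0.3) -> 13.8 kN/m
print(q_d, q_E)
```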
The two structures (fixed bases and self-centering connections) have the same period, since the proposed connection, due to its high initial stiffness, is nominally rigid. A damping ratio of 5% was considered. The results given in Fig. 15 highlight the enhanced response, showing that with the proposed column base joints it is possible to obtain a significant improvement in terms of residual drifts. In fact, while with traditional full-strength column base connections the residual sway displacement at the top of the building is equal to 350 mm (corresponding to 18 mrad of average inclination of the columns), with the employment of the proposed connections it reduces by about 85%, achieving a residual top displacement at the end of the simulated seismic event of 60 mm, corresponding to an average inclination of the building of about 3 mrad. This points out that, while with a traditional solution the actual reparability of the building would be compromised (18 mrad > 5 mrad), with the proposed self-centering connections the residual drift reduces significantly, falling within the prescribed limits [65].
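The residual inclinations quoted above follow directly from the storey heights (3.5 m + 5 x 3.2 m = 19.5 m); a quick check is sketched below.

```python
# Sketch: average residual inclination from the residual roof displacement.
H = 3.5 + 5 * 3.2                                      # building height [m] = 19.5 m
for label, d_res in (("fixed base", 0.350), ("self-centering base", 0.060)):
    print(label, round(d_res / H * 1000, 1), "mrad")   # ~17.9 and ~3.1 mrad
```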
Fig. 2. Concept of the proposed solution
Fig. 5. (a) Experimental layout; (b) Connection during the assembly; (c) View of the joint before the test
Fig. 10. a) Beam-to-column connection with friction dampers; b) hysteretic response and calibrated model
Fig. 12. Comparisons between experiments and numerical model for different values of the axial load
Table 1. Main test data | 7,981.4 | 2019-01-01T00:00:00.000 | [
"Engineering"
] |
Effective formulation and processing of nanofilled carbon fiber reinforced composites
This work describes a successful approach toward the development of a carbon fiber-reinforced composite based on an optimized nanofilled resin for industrial applications. The epoxy matrix is prepared by mixing a tetrafunctional epoxy precursor with a reactive diluent, which allows reduction of the viscosity of the epoxy precursor and facilitation of the dispersion of 0.5% wt multiwall carbon nanotubes. The proper choice of the viscosity value and the infusion technique allow improvement of the electrical properties of the panels. The obtained in-plane electrical conductivity is about 20 kS m⁻¹, whereas a value of 3.9 S m⁻¹ is achieved for the out-of-plane value. Such results confirm that the fibers govern the conduction mechanisms in the direction parallel to the fibers, whereas the percolating path created by the effective distribution of carbon nanotubes, achieved by the resin formulation and the adopted processing approach, leads to a significant enhancement of the overall electrical performance of the composites.
Introduction
In recent years the use of carbon fiber-reinforced composites (CFRC) has continuously expanded, particularly in weight-sensitive applications, such as aircraft and space vehicles. In particular, the increasing application of epoxy-based thermosetting composite materials in the aircraft industry was driven by the possibility to attain a significant weight reduction with respect to traditional metallic materials in the fabrication of structural parts. However, composites based on epoxy resins exhibit some rather inherent unsatisfactory characteristics, such as poor electrical conductivity. Epoxy resins are known, in fact, for their good or excellent properties covering an extensive range of applications, 1-3 but at the same time for their undesired electrical insulating behavior, which limits their applicability as aerospace and aeronautical materials and, in general, where antistatic properties are required. One attempt to increase their application range is to incorporate nanoscale conductive fillers that are characterized by intrinsically high electrical conductivity. [4][5][6][7][8][9][10][11][12] In order to choose an effective epoxy mixture, the intended application and consequently the properties required for the finished product have to be carefully considered. In the case of the epoxy matrix it is known that the structure of the resin strongly governs its chemical and some of the physical properties. The number of reactive sites in the epoxy precursors controls the functionality, directly acting on the cross-linking density. This, combined with the nature of the hardener agent, the functionality, the stoichiometry and the curing cycle, determines the finished properties of the cured resin, especially in terms of mechanical and thermal properties. In order to obtain high mechanical and electrical performance, in this work multi-wall carbon nanotubes (MWCNTs) were embedded inside an epoxy resin based on a mixture of tetraglycidylmethylenedianiline (TGMDA) and 1,4-butandioldiglycidylether (BDE). This particular epoxy formulation has proven to be very effective for improving nanofiller dispersion due to a decrease in the viscosity [13][14][15][16] and, in addition, it has been found to reduce the moisture content, which is a very critical characteristic for aeronautic materials. 17 The chemical composition of this epoxy formulation reduces the sorption at equilibrium of liquid water (C eq) by about 35%. This percentage is very relevant for epoxy mixtures to be applied as structural materials in aeronautics; in fact, absorbed moisture reduces the matrix-dominated mechanical properties. Absorbed moisture also causes the material to swell. In addition, during freeze-thaw cycles, the absorbed moisture expands during freezing and can crack the material. The amount of MWCNTs inside the epoxy mixture used to impregnate plies of carbon fiber (CF) cloths was chosen by studying the electrical behavior of the nanofilled resin alone (without CFs). The electrical percolation threshold (EPT), i.e. the value of filler content ensuring the transition from insulating to conducting behaviour of the composite, was found to be in the range [0.1; 0.32]% wt. Also the AC measurements confirmed that the EPT ranges between [0.1; 0.32]% wt.
An amount of 0.5% wt, beyond the EPT, was then adopted to prepare the nanofilled epoxy mixture used to manufacture carbon-fiber reinforced panels. An economic and efficient means of producing high performance fiber-reinforced panels containing nanofilled resin which impregnates the CFs is adopted, which is especially useful when, as in the considered case, the initial viscosity of the epoxy precursors/mixture appears to be very high for an infusion process. This, indeed, is a non-trivial problem in the case of aeronautic epoxy mixtures containing a percentage of conductive nanofillers able to obtain samples in the nanofiller concentration range beyond the EPT.
The technique implemented to infuse the nanofilled resin into a carbon fiber dry preform allows remarkable electrical properties to be obtained for the manufactured panels. A detailed morphological analysis is also presented, which explains the mechanisms leading to the very good values found for the electrical conductivity.
Experimental
2.1. Materials. 2.1.1. Nanofilled resin. The epoxy matrix was prepared by mixing an epoxy precursor, tetraglycidylmethylenedianiline (TGMDA) (epoxy equivalent weight [117; 133] g per eq.), with an epoxy reactive monomer, 1,4-butanedioldiglycidyl ether (BDE), that acts as a reactive diluent. The curing agent investigated for this study is 4,4-diaminodiphenyl sulfone (DDS). The epoxy mixture was obtained by mixing TGMDA with the BDE monomer at a concentration of 80 : 20% (by wt) epoxide to flexibilizer. The hardener agent was added at a stoichiometric concentration with respect to all the epoxy rings (TGMDA and BDE); this mixture will be named hereunder the T20BD formulation. This epoxy formulation hardened with DDS has been shown to be characterized by a good flame resistance, with a limiting oxygen index of 27%, even without the addition of antiflame compounds. 18 The MWCNTs (3100 Grade) were obtained from Nanocyl S.A. Transmission electron microscopy (TEM) investigation has shown for the MWCNTs an outer diameter ranging from 10 nm to 30 nm. The length of the MWCNTs ranges from hundreds of nm to some micrometers. The number of walls varies from 4 to 20 in most nanotubes. The specific surface area of the MWCNTs determined with the BET method is around 250-300 m² g⁻¹; the carbon purity is >95% with a metal oxide impurity <5%, as results from thermogravimetric analysis. The epoxy blend and DDS were mixed at 120 °C and the MWCNTs were added and incorporated into the matrix by using ultrasonication for 20 min. An ultrasonic device, Hielscher model UP200S (200 W, 24 kHz), was used. The epoxy mixture used to manufacture the panels was filled with 0.5% wt of MWCNTs. This nanofilled sample will be named hereunder the T20BDCNTs formulation. This concentration was chosen because the curves of DC volume conductivity vs. MWCNTs concentration highlight that the electrical percolation threshold (EPT) is lower than 0.32% wt; therefore, for this amount of MWCNTs the nanofilled formulation is beyond the EPT. 17,19 This formulation is also characterized by good dynamic mechanical properties. 20 2.1.2. CFRCs - manufacturing process. CFRCs were manufactured by Resin Film Infusion (RFI) using a non-usual technique to infuse a nanofilled resin into a carbon fiber dry preform. 21 This new technique allows drawbacks related to non-optimal values of the viscosity for a RFI process to be overcome and is therefore particularly advantageous for nanofilled resins, where the presence of nanofillers increases the viscosity values, hindering the injection of the nanofilled epoxy. A well-known process for the manufacturing of high performance resin-based composite materials is resin film infusion. In this process, a dry carbon fiber preform is placed in a vacuum bag and the resin is injected from one edge of the preform while the vacuum is vented at the other, so that the resin flows through the length of the preform. The scheme can be seen in Fig. 1.
The optimum value of viscosity required for this process is lower than 0.3 Pa s. In the literature, it is possible to find a theoretical maximum limit of 0.8 Pa s. 22 Often a nanofilled resin exceeds the limit of 0.8 Pa s; then the usual liquid infusion becomes unfeasible. To overcome this critical point, a thin layer of liquid epoxy mixture containing MWCNTs (0.5% wt) was spread on a release film (Release Ease 234 TFP-HP Airtech); then a dry preform (400 mm × 400 mm), made by laminating 7 plies of carbon fiber cloths (SIGMATEX (UK) LDT 193 GSM (grams per square meter)/PW (plain wave)/HTA40 E13 3K (3000 fibers each tow)), was placed on the mixture, forcing it to flow through the thickness of the preform using an external supplementary pressure inside an autoclave. In this way the length of the impregnation path is considerably reduced and the process can be forced by means of the pressure application. A further advantage of this technology is associated with the smaller length of the infiltration path, which reduces the effects of infiltration through the preform, ensuring in this way a more uniform distribution of the nanofiller through the panel thickness. In fact, the edges of the preform were sealed to force the resin to flow only through the thickness (see Fig. 2A). The laminate was covered by a porous release film and a distribution medium to allow the resin to escape from the upper side, and a breather medium to receive the excess resin (see Fig. 2B). Finally it was placed in a vacuum bag and the laminate was transferred into the autoclave (see Fig. 2C).
Morphological investigation
Nanofilled resin T20BDCNTs. Fracture surfaces of the composite specimens T20BDCNTs were investigated after an etching procedure to remove a fraction of resin and better [...]. Panels of CFRCs. Strips of CFRCs were cut out from the panels and analyzed in the directions parallel (i.e. in plane) and perpendicular (i.e. out of plane) to the panel plane. Some of the samples were treated with a strong etching reagent and observed using conventional techniques of Field Emission Scanning Electron Microscopy (FESEM, mod. LEO 1525, Carl Zeiss SMT AG, Oberkochen, Germany). The etching reagent was prepared by stirring 1.0 g potassium permanganate in a solution mixture of 95 ml sulphuric acid (95-97%) and 48 ml orthophosphoric acid (85%). The filled resins were immersed into the fresh etching reagent at room temperature and held under agitation for 36 hours. Subsequent washings were done using a cold mixture of 2 parts by volume of concentrated sulphuric acid and 7 parts of water. Afterwards the samples were washed again with 30% aqueous hydrogen peroxide to remove any manganese dioxide. The samples were finally washed with distilled water and kept under vacuum for 5 days at room temperature. The samples were placed on a carbon tab previously stuck to an aluminum stub (Agar Scientific, Stansted, UK). The samples were covered with a 250 Å-thick gold film using a sputter coater (Agar mod. 108 A).
2.2.2. Panels of CFRCs - electrical measurements. Carbon fiber composites can be classified on the basis of the length (short or continuous) of the used fibers. Continuous carbon fibers, aligned unidirectionally or forming a woven fabric, have a stronger effect on the mechanical, electrical and thermal properties and give rise to composites characterized by a higher anisotropy than that found with short fibers. 23 Therefore, for the conductivity of the continuous CFRCs analyzed here, two types of measurements of the volumetric DC conductivity can be obtained: the in-plane conductivity, σv∥, and the out-of-plane conductivity, σv⊥. The measurement of σv∥ is performed by using a strip sample, named DST07, of about 1.7 × 4.0 × 0.17 cm³, whereas σv⊥ is measured on a square sample, named DLS01, of about 6 × 6 × 0.17 cm³.
Before performing the electrical measurements, the samples are cleaned with acetone and thermally pretreated at 80 °C for 24 h. Then, contacts made with silver paint (Alpha Silver Coated Copper Compound Screening, with a thickness of about 50 μm and a resistivity of 0.7 Ω/square) are deposited. In order to obtain the conductivity values, two multimeters (HP 34401A, HP 3408A) and an electrometer (Keithley 6514A) are used. A BINDER climatic chamber Model FP 53 has been used to analyze the temperature dependence of the conductivities in the range [...]. The measurement of σv∥ is performed by using the 4-probe method, as schematically shown in Fig. 3.
In Table 1 the geometrical dimensions of the electrode configuration are reported, together with the relative uncertainty in the geometric dimensions in the form x ± Δx. In this case the conductivity of the material samples under test is calculated using the following expression: where Rc = V2/IH is the measured resistance of the sample and a, b and c are the geometric quantities of Fig. 3. The measurement of σv⊥ is performed by using the 3-probe method with the configuration given in Fig. 4. This technique is adopted in order to avoid any superficial leakage currents.
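Because the explicit expression and the exact roles of the quantities a, b and c of Fig. 3 are not reproduced here, the sketch below simply assumes the standard four-probe relation σ = l/(R·A), with l the spacing of the inner (voltage) probes and A the strip cross-section; both the geometry and the resistance value are placeholders chosen to give a result of the order of the measured in-plane conductivity.

```python
# Sketch: in-plane (4-probe) DC conductivity of a CFRC strip, standard relation assumed.
def sigma_in_plane(R_c, l_probes, width, thickness):
    """sigma = l / (R * A), with A the cross-section of the strip [SI units]."""
    return l_probes / (R_c * width * thickness)

# Placeholder geometry (strip ~1.7 x 4.0 x 0.17 cm, assumed 2 cm inner probe spacing).
sigma = sigma_in_plane(R_c=35e-3, l_probes=0.020, width=0.017, thickness=0.0017)
print(round(sigma / 1e3, 1), "kS/m")     # ~20 kS/m with these placeholder values
```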
In Table 2 the geometrical dimensions of the electrode configuration for the square sample DLS01 are reported, together with the relative uncertainty in the geometric dimensions in the form x ± Δx.
In Fig. 4, D1 and D2 are the diameters of the inner and outer electrodes of the top side, respectively, D3 is the electrode diameter of the bottom side and s is the thickness of the CFRC sample.
By setting g and Kv equal to the quantities defined for this configuration, the out-of-plane conductivity of the CFRC is obtained from an expression in which Rc = Vm/I1 is the measured resistance of the sample. 2.2.3. Determination of the volumetric fraction of carbon fiber. The volumetric fraction of carbon fiber is a fundamental value for evaluating the final properties of a carbon fiber article. In aeronautic applications the mechanical performance of a composite is maximized at optimal values of the volumetric fraction of carbon fiber very close to 60%. The volume fraction was calculated using the following formula: in which n is the number of plies in the composite configuration, gsm is the actual weight of the carbon fabric before the impregnation (grams per square meter), s is the thickness (cm) of the composite and ρ is the density of the fiber (IM7), ρ = 1.78 g cm⁻³.
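The areal-weight relation described above for the fiber volume fraction can be written compactly as follows; the cured-panel thickness entered in the example is an assumed placeholder, chosen only to show how values in the 0.55-0.56 range arise.

```python
# Sketch: fiber volume fraction from the fabric areal weight (standard relation).
def fiber_volume_fraction(n_plies, gsm, thickness_cm, rho_fiber):
    """V_f = n * gsm / (10^4 * s * rho): gsm in g/m^2, s in cm, rho in g/cm^3."""
    return n_plies * gsm / (1.0e4 * thickness_cm * rho_fiber)

# 7 plies of 193 g/m^2 fabric, fiber density 1.78 g/cm^3, assumed thickness 0.137 cm.
print(round(fiber_volume_fraction(7, 193.0, 0.137, 1.78), 3))   # ~0.554
```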
2.2.4. Rheology. A TA Instruments AR 2000 dual-head rheometer was used to analyze the viscoelastic behavior in oscillation tests. A parallel-plate geometry with a diameter of 40 mm was selected as appropriate and the gap was set at a value of 300 μm. Strain sweeps were carried out at a constant frequency of 1 Hz in order to find the linear viscoelastic region. A temperature sweep, varying the temperature in the range [60; 120] °C at a rate of 3 °C min⁻¹, was performed at a frequency of 1 Hz and a strain of 5%, determined from the strain sweeps, within the linear viscoelastic region.
Results and discussion
3.1. Nanofilled epoxy mixture. 3.1.1. Morphological investigation. In order to analyze the homogeneity of the nanofiller dispersion in the polymeric matrix, the sample with MWCNTs at 0.32% wt was investigated by means of scanning electron microscopy (see Fig. 5). A careful observation evidences a good level of MWCNT interconnections, able to form conductive paths even at the low nanofiller percentage of 0.32% wt. This assertion is confirmed by the data related to the electrical conductivity of the nanofilled samples.
3.1.2. Electrical behavior. In a previous paper 19 the authors have shown that, for the proposed epoxy mixture at different filler loadings φ, the DC conductivity vs. the filler concentration shows the typical behaviour predicted by percolation theory: 12,19 σ_DC = σ_0 (φ − φ_c)^t for φ > φ_c, where φ_c is the electrical percolation threshold (EPT), σ_0 is the theoretical conductivity at high filler concentrations and t is an exponent linked to the structural dimensionality of the filler distribution inside the matrix. In particular, the measured data have been used to extract n_c = 100 × φ_c = 0.22%, that is, the minimum weight percentage wt% ensuring that the percolation regime has been achieved within the nanofilled resin. This means that the electrical behaviour of the nanofilled epoxy mixture changes from that of an insulating material, similar to that of the T20BD matrix hosting the filler, to that of a conducting material whose electromagnetic behaviour is governed by the paths formed by the MWCNTs inside the matrix. From the same data an estimation of the morphological parameter is also obtained, whose high value (t = 2.2) indicates that the MWCNTs, as also confirmed by the morphological investigations illustrated by the SEM image of Fig. 5, are organized in a three-dimensional network. The volume DC conductivity of the nanofilled epoxy mixture is σ_v = 0.142 S m⁻¹ for the sample at a loading of 0.5 wt%. The obtained value confirms that the mixture is above the EPT. Also the measurements of the electrical properties in the frequency range [0.1; 1000] kHz, not shown here, confirm that the percolation threshold is in the range [0.1; 0.32]% wt and that at 0.5% wt the composite performs as a conducting material. In fact, the AC conductivity is constant in the analysed frequency range and close to the DC value. Moreover, the measured real part of the relative permittivity is about two orders of magnitude greater (ε_r ≈ 1000) than the value obtained for the unfilled T20BD matrix, with a frequency dependence showing the dispersion typical of a lossy material.
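The percolation law recalled above can be evaluated with a few lines; in the sketch below the threshold and the exponent are those reported in the text, while σ0 is a placeholder back-calculated so that the 0.5 wt% sample matches the measured 0.142 S/m.

```python
# Sketch: percolation law sigma = sigma0 * (phi - phi_c)^t above the threshold.
def sigma_dc(phi_wt, sigma0, phi_c_wt=0.22, t=2.2):
    """phi_wt in wt%; returns 0 below the electrical percolation threshold."""
    return sigma0 * (phi_wt - phi_c_wt) ** t if phi_wt > phi_c_wt else 0.0

sigma0 = 0.142 / (0.5 - 0.22) ** 2.2        # placeholder fitted to the 0.5 wt% sample
for phi in (0.1, 0.32, 0.5, 1.0):
    print(phi, "wt% ->", round(sigma_dc(phi, sigma0), 3), "S/m")
```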
Carbon fiber reinforced panels
3.2.1. Morphological investigation. Table 3 shows the values of the viscosity η* of the unfilled and nanofilled formulations at different temperatures.
The data shown in Table 3 highlight that, although the viscosities of the unfilled and nanofilled resins have been strongly lowered by mixing the epoxy precursor with the reactive diluent, the usual liquid infusion process is not feasible. The technique described in the section "CFRCs - Manufacturing Process" allows these criticalities to be overcome. The impregnation conditions (curing cycle and pressure) of the CFRC panels are shown in Fig. 6.
An optical picture of the manufactured panels is shown in Fig. 7.
All the manufactured panels are characterized by a calculated volume fiber fraction Vf between 0.55 and 0.56. Fig. 8a and b show SEM images, at two different magnifications, of some cross-sectional areas of the manufactured panel (not etched sample). Fig. 8a also clearly shows the number of plies.
Because the investigated region is comprised between two nodes of the fabric mesh and relates to a sectional area (where the sample was punched out), we can deduce that a good interfacial bonding between the carbon fiber fabric and the epoxy resin has been achieved. A FESEM image of the top side of the CFRC panel surface was acquired after a strong etching procedure and is shown in Fig. 9. In this figure the dry preform fabric is also shown on top to allow a comparison. The image highlights that the carbon fibers are well impregnated by the nanofilled resin and therefore part of the surrounding resin is retained even after the strong etching procedure. From the micrographs in Fig. 10a-d, at larger magnification, it is possible to observe that the single fibers are completely impregnated by resin layers.
This effect is also well observable in Fig. 10, which shows a comparison between a single fiber of the fabric (without resin, Fig. 10a) and the impregnated fibers (Fig. 10b-d).
The observation that the single fibers are well coated by the nanofilled resin, even after a strong etching procedure, highlights strong attractive interactions between the carbon fibers and the nanofilled resin. The carbon nanotubes embedded in the epoxy resin can be observed when high magnifications, such as those chosen for the micrographs of Figs. 11 and 12, are set up. Fig. 11 evidences the presence of some carbon nanotubes arranged preferentially through the section of the panel, in the direction perpendicular to its plane. This particular morphological feature may be the key factor causing the high out-of-plane electrical conductivity. The reason why this peculiar morphology is obtained is most likely the adopted infiltration process, in which the nanofilled resin is forced to flow through the thickness of the preform. Fig. 12 shows successively higher magnifications (from left to right) of the etched surface on the top side of the CFR panel surface, in the region which has not been broken by the cutting. The images of Fig. 12 highlight a noteworthy result: the carbon nanotubes, dragged by the resin in which they are embedded, are able to pass through all the plies with no observable sieving effect. This result is certainly due to the non-usual technique adopted to infuse the nanofilled resin into the carbon fiber dry preform.
Although the image of the final composite is different from that of the nanofilled resin of Fig. 5 (the composite is much more compact due to the presence of the carbon fibers), it is possible to observe CNTs bridging all the epoxy-rich regions between the plies, giving rise to an efficient network, as for the nanofilled resin (without CFs). The presence of this network between CNTs is confirmed by the electrical measurements. In fact, the out-of-plane conductivity (across the sample thickness) reaches the value of 3.9 S m⁻¹. This value is among the highest reached until now for panels of carbon fibers impregnated with nanofilled resins and obtainable through the proposed simple manufacturing process, which can be scaled up to industrial levels.
3.2.2. Electrical properties.
The suitability of the developed carbon fiber/epoxy laminate composites as materials for structural aeronautic applications has been verified through the electrical characterization. Carbon fiber/epoxy laminate composites are heterogeneous materials, and physical properties such as the electrical conductivity depend on the carbon fiber ply orientation. If no strategies are adopted to improve the transversal conductivity across the composite section, the thickness of the inter-laminar epoxy layer could be enough to electrically insulate the successive carbon fiber layers. Therefore, the electrical performance of the adopted impregnating epoxy system and the effectiveness of the implemented manufacturing process were evaluated by measuring the electrical conductivity of the anisotropic manufactured panels. In particular, the in-plane volume DC conductivity (σ_v∥) and the out-of-plane conductivity (σ_v⊥) have been determined and compared with data from the literature (Table 4). In the observed range of temperature a σ_v∥ going from 19.5 kS m⁻¹ to 19.7 kS m⁻¹ is detected by performing measurements on the strip sample. A volume conductivity of around 20 kS m⁻¹ for the CFR panel is a typical value for aeronautic composite materials. 24 Moreover, as assumed in, 25,26 the in-plane volume DC conductivity is equal to σ_v∥ = σ_f V_f, where σ_f and V_f are the conductivity and volume fraction of the fiber, respectively. At room temperature σ_f = 39 kS m⁻¹, and this value is consistent with that reported in the literature. 25,27 The variation of σ_v∥ with the temperature is in the range of the measurement accuracy (Δσ_v∥ = 2.8%, calculated by considering a confidence interval of 95%, which corresponds to ±0.5 kS m⁻¹), showing that temperature does not affect the electrical behaviour of the CFRC in the direction parallel to the fiber plane. These results are in good agreement with those by Keiji and Yoshihiro, 28 who found that the conductivity along the fiber direction (in angle) at 0° and 45° for composites reinforced with continuous carbon fibers is almost independent of temperature. In the same range of temperature, the out-of-plane DC volume conductivity σ_v⊥ also shows a very small variation, comparable with that associated with the measurement error. In fact, the conductivity assumes a value of 3.9 S m⁻¹ at T = 30 °C and a maximum of 4.1 S m⁻¹ at T = 60 °C, whereas the measurement accuracy of about 4% leads to a tolerance of less than 0.2 S m⁻¹. Moreover, it is possible to observe that σ_v⊥ is higher than the value found for the nanofilled composite resin TG20BDCNTs alone (without carbon fibers), 19 i.e. 0.03 S m⁻¹ at T = 30 °C. This means that, as shown in Fig. 12, the CNTs are effective in establishing conductive paths across the fiber planes. As already evidenced, σ_v⊥ = 3.9 S m⁻¹ is among the highest values reached until now for very simple manufacturing processes. A value of the same order of magnitude has been reached only by adopting the electrophoresis technique. 29 In fact, in the approach of ref. 29, a first stage involving an electrophoresis process was carried out for the selective deposition of MWCNTs or single-walled carbon nanotubes (SWCNTs) on woven carbon fabric (Magnamite IM7, 10 cm × 15 cm). The carbon fabric panels were subsequently infiltrated with epoxy resin using vacuum-assisted resin transfer molding (VARTM) to fabricate multiscale hybrid composites in which the nanotubes were completely integrated into the fiber bundles.
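As a rough cross-check of the rule of mixtures σ_v∥ = σ_f V_f quoted above, the sketch below back-computes the fiber conductivity from the measured in-plane panel value and the reported fiber volume fraction; the numbers are those given in the text and the script is purely illustrative.

```python
# Sketch: rule of mixtures for the in-plane conductivity, sigma_par = sigma_f * V_f,
# used here to back-compute the fiber conductivity from the measured panel value.

sigma_parallel = 19.6e3   # S/m, measured in-plane volume DC conductivity (19.5-19.7 kS/m)
V_f = 0.555               # fiber volume fraction (0.55-0.56 reported for the panels)

sigma_f = sigma_parallel / V_f
print(f"Estimated fiber conductivity: {sigma_f / 1e3:.1f} kS/m")   # ~35-39 kS/m ballpark

# Out-of-plane anisotropy ratio for the same panel (values quoted in the text)
sigma_transverse = 3.9    # S/m at T = 30 degC
print(f"In-plane / out-of-plane ratio: {sigma_parallel / sigma_transverse:.0f}")
```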
In our case, a processing technique more similar to those currently used by aircraft manufacturers was employed, and no pre-treated carbon fibers coated with CNTs were used. Table 4 also reports literature data for CNT-filled epoxies used as matrix for a conventional glass fibre-reinforced panel (GFRP), manufactured via the Resin Transfer Moulding (RTM) process. The GFRP containing 0.3 wt% of amino-functionalized double-wall carbon nanotubes (DWCNT-NH2) was found to exhibit an anisotropic electrical conductivity, with the in-plane conductivity of the fiber fabric one order of magnitude higher than the transversal conductivity. Piche et al. in ref. 24 presented different experimental approaches to characterize the electric behaviour of carbon fiber composites used in the aeronautic industry, together with a numerical model to support test definition and material characterization. The validation of the models with experimental tests allowed the extraction of the electrical conductivities of fiber-reinforced composite samples, once the appropriate method to perform the test is selected. The developed approach, applied to a composite having 12 plies with a 0/90 stacking sequence, gave an out-of-plane volumetric conductivity σ_v⊥ = 1.3 S m⁻¹, and in-plane values of σ_x∥ = 40.5 kS m⁻¹ and σ_y∥ = 0.20 kS m⁻¹. In our case (see column 2 in Table 4), the type of fabric used and the chosen layup direction lead to an in-plane value independent of the axis. Lonjon A. et al. 30 obtained an electrical conductivity improvement of aeronautical carbon fiber reinforced polyepoxy composites by CNT inclusion. Carbon fiber composites were prepared from prepregs, with the epoxy multicomponent resin alone and with the epoxy multicomponent resin filled with a small CNT weight fraction. Palmitic acid was used as dispersing agent for the CNTs in the epoxy systems, which allows the percolation threshold to be reached at low loadings (about 0.4 wt% of CNTs). For the CF composites, 8 prepregs were assembled with unidirectional carbon fibers oriented in the same direction (0°). Electrical measurements of the volumetric conductivity of the unfilled and filled laminates were performed in the frequency domain [10⁻²; 10⁶] Hz, at room temperature. The results highlight a poor conductivity value (σ_v⊥ ≈ 7.1 × 10⁻³ S m⁻¹) in the out-of-plane measurement for the laminate obtained with the unfilled resin. The addition of CNTs to the epoxy resin causes a large increase in the conductivity of the laminate composites (σ_v⊥ ≈ 0.2 S m⁻¹). Summing up, by considering the values shown in Table 4, we can conclude that the out-of-plane conductivity values exhibited by our panels are among the highest obtained up to now, with the exception of ref. 29, where higher values were achieved using a more complex manufacturing process. Although it is very hard to draw direct conclusions from the comparison between the different data discussed above and shown in Table 4, because of the relevant differences in the epoxy chemical composition, nanofiller nature, concentration and manufacturing process, such a comparison is the only practical way to address technological solutions towards the required targets in the aeronautic field. Experiments are in progress to better understand the effect of the nature of the components and of the manufacturing process on the electrical properties.
Conclusions
In this paper we have shown the first results obtained using a non-usual technique to manufacture CFRCs. This technique is particularly advantageous for impregnating CFs with aeronautic resins filled with conductive nanofillers. In particular, the impregnation can be achieved even with an epoxy mixture characterized by viscosity values higher than 0.3 Pa s.
The anisotropic volumetric DC conductivity of the CFRC is almost independent of temperature in the range [30; 90] °C and is about 20 kS m⁻¹ for the in-plane value and 3.9 S m⁻¹ for the out-of-plane value at T = 30 °C. This last value is among the highest found for CFRCs impregnated with resins loaded with carbon nanotubes and manufactured by means of very simple processes.
Moreover, the in-plane value confirms that the conduction mechanism is governed by the fibers. In fact, the obtained volumetric conductivity of the strip sample is close to that of the carbon fibers alone, whose conductivity is around 10⁴ to 10⁵ S m⁻¹. The value achieved for the out-of-plane conductivity, which is almost one order of magnitude higher than that of the nanofilled resin alone, shows that the electrical conduction is improved by the contribution of the percolating paths created by the MWCNTs inside the resin. | 6,797 | 2015-01-01T00:00:00.000 | [
"Materials Science"
] |
Economic evaluations and their use in infection prevention and control: a narrative review
Background The objective of this review is to provide a comprehensive overview of the different types of economic evaluations that can be utilized by Infection Prevention and Control practitioners with a particular focus on the use of the quality adjusted life year, and its associated challenges. We also highlight existing economic evaluations published within Infection Prevention and Control, research gaps and future directions. Design Narrative Review. Conclusions To date the majority of economic evaluations within Infection Prevention and Control are considered partial economic evaluations. Acknowledging the challenges, which include variable utilities within infection prevention and control, a lack of randomized controlled trials, and difficulty in modelling infectious diseases in general, future economic evaluation studies should strive to be consistent with published guidelines for economic evaluations. This includes the use of quality adjusted life years. Further research is required to estimate utility scores of relevance within Infection Prevention and Control.
Background
Health-care associated infections (HAI) are common. In Canada approximately 200,000 patients will develop a HAI each year with 8000 associated deaths [1]. In the United States (US), there are over two million HAI annually [2]. HAIs are also extremely costly, with the overall annual direct costs to hospitals in the US ranging from $35.7 to $45 billion [3,4]. There are numerous Infection Prevention and Control (IPC) interventions that can be utilized in hospitals to prevent HAI and their spread [5]. IPC activities include programs such as surveillance, hospital investigations when there are outbreaks, measures to prevent spread of contagious organisms, education for healthcare employees, patients and family members, and reporting of HAI to national organizations [6].
While many IPC programs and activities can be justified given that HAIs result in patient morbidity and lengthy hospital admissions [7], not every activity or program within IPC should be funded. When determining which programs should be implemented, the efficiency and effectiveness of such programs must be considered. Economic evaluations can determine which IPC strategies are cost-effective and provide reasonable value for money [7]. We sought to conduct a narrative review of the various types of economic evaluations, recommendations by national institutions, and factors to consider for economic evaluations, particularly the use of quality adjusted life years (QALYs) within IPC and their associated challenges.
Economic evaluations
All economic evaluations assess value for money by comparing the impact of competing interventions on both costs and consequences simultaneously [8]. There are a variety of types of economic evaluations described briefly below.
Cost-minimization analysis
A basic form of analysis is cost-minimization analysis, which is used when the clinical effectiveness of two interventions is the same, so the choice between them relates to their relative costs [8]. For example, if two programs to promote hand hygiene were identical in terms of effectiveness, then only the comparison of costs would be relevant.
Cost-effectiveness analysis
A cost-effectiveness analysis (CEA) compares consequences using natural clinical units such as life years gained or infections avoided. The advantage of CEAs is that they are easier to conduct and the consequences are simple to understand clinically; however, it can be difficult to compare the results of different evaluations if the same measure of clinical outcome is not used (for instance, how to compare a study reporting a cost per pneumonia avoided with a study reporting a cost per heart attack prevented) [8]. An example of a CEA is a recent study which compared universal and targeted decolonization for methicillin-resistant Staphylococcus aureus (MRSA) in intensive care unit patients with screening and then isolating patients who were colonized with MRSA [9]. It was more cost-effective (i.e. lower cost and more MRSA infections prevented) to implement universal and targeted decolonization than screening and isolation [9].
Cost-utility analysis
A cost-utility analysis (CUA) is an extension of a CEA, where the measure of health benefit considers both length and quality of life. This is often represented by the QALY, calculated by multiplying the utility (a measure of preference for a person's overall quality of life) of a given health state by the time spent in that health state [8]. The QALY allows for direct comparison between economic evaluations of different types of interventions and health conditions, rendering cost per QALY estimates more comparable. For example, a study compared the cost per QALY of rectal culture-guided antibiotic prophylaxis with standard ciprofloxacin prophylaxis [10]. The culture-guided group gained 0.0002 QALYs and saved $24 per patient by preventing more infections [10].
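A minimal sketch of the QALY arithmetic underlying a CUA, with hypothetical utilities and durations: each health state contributes its utility multiplied by the time spent in it.

```python
# Minimal sketch of the QALY calculation used in a cost-utility analysis:
# each health state contributes (utility) x (years spent in it).
# Utilities and durations below are hypothetical placeholders.

def qalys(trajectory):
    """trajectory: list of (utility, years_in_state) pairs; returns total QALYs."""
    return sum(utility * years for utility, years in trajectory)

# Hypothetical patient: 0.6 utility for half a year with an infection,
# then 0.9 utility for 9.5 years after full recovery.
print(qalys([(0.6, 0.5), (0.9, 9.5)]))   # 0.30 + 8.55 = 8.85 QALYs
```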
Cost benefit analysis
Cost-benefit analysis (CBA) is another form of economic evaluation where all consequences, including clinical outcomes, are expressed in monetary terms. CBAs can be a challenge in health care evaluations because many of the health benefits can be difficult to quantify in monetary terms, though they can be helpful when there are non-health benefits of interventions that require inclusion (for instance, the benefits of receiving health care locally vs travelling or when there are other process benefits that wouldn't be quantified through the usual QALY rubric) [8]. In a recent cost-benefit analysis of a supplementary measles immunization program in a highly immunized population [11], the authors noted that the management of 187 cases of measles cost $864,000 (hospitalization costs, case management and earnings lost). In order for supplemental vaccination to be considered cost-effective, they estimated that the vaccination would need to cost less than $66 to $1877 per patient (depending on different scenarios) [11]. As the authors concluded that such a program would be unlikely to exceed these costs, supplementary measles immunization was considered cost-effective [11].
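To make the break-even logic of the measles example concrete, a minimal sketch follows: the cost per managed case is derived from the figures quoted above, while the number of people that must be vaccinated to avert one case is a hypothetical scenario input (the actual scenario definitions of ref. [11] are not reproduced here).

```python
# Rough sketch of the cost-benefit break-even logic for supplementary measles
# immunization. Total outbreak cost and case count are those quoted in the text;
# 'people_vaccinated_per_case_averted' is a hypothetical scenario input.

total_outbreak_cost = 864_000   # $ for managing 187 measles cases
cases = 187
cost_per_case = total_outbreak_cost / cases   # roughly $4,600 per case

def breakeven_cost_per_person(people_vaccinated_per_case_averted):
    """Maximum vaccination cost per person for the program to break even."""
    return cost_per_case / people_vaccinated_per_case_averted

# Two hypothetical scenarios differing in how many vaccinations avert one case
for n in (2.5, 70):
    print(f"n = {n}: break-even cost per person ~ ${breakeven_cost_per_person(n):.0f}")
```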
Guidelines for conducting economic evaluations
The most recent guidelines for the conduct of economic evaluations from the Canadian Agency for Drugs and Technologies in Health were published in 2017 [12]. These guidelines are meant to improve the quality of economic evaluations for health technologies, and to ensure comparability across different analyses. They recommend that a CUA be utilized with the outcomes expressed as QALYs. Any other type of economic evaluation must be justified.
The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) also has guidelines on the reporting of economic evaluations which suggest the use of QALYs whenever possible, and are used widely in Canada, the United States and internationally [13].
Estimating and using the quality adjusted life year
As described above, a CUA is frequently the recommended type of economic evaluation, and the outcome suggested for use by all contemporary guidelines is the QALY. A QALY is calculated by multiplying the utility (a measure of preference for a person's overall quality of life, usually varying on a scale from 0 to 1, where 1 represents perfect health and 0 is equivalent to death) for a given health state by the time spent in that health state [14]. Utility scores can be determined in a variety of ways. They can be measured directly, through use of the visual analogue scale, the time trade-off and the standard gamble [14], or they can be measured indirectly, through questionnaires like the Euroqol EQ-5D or the Health Utilities Index.
In the visual analogue scale participants with a certain health condition are presented with a scale ranging from worst to best imaginable health state and on that scale they place where they feel their current health state is [15]. This provides subjective weights and an ordinal ranking of health outcomes, but it does not invoke the notion of trade-off [15]. In the time trade-off method individuals have a choice between living the rest of their life (t) in a particular health state (i) or living for a shorter time period (x) but in perfect health. The time is varied until the participant feels ambivalent about the two options and then the preference score for i is x/t [15]. In the standard gamble individuals have to choose between the certainty of remaining in a given health state i and an alternative with two outcomes of perfect health with the probability p and death with the probability 1-p. The p is varied until the participant is indifferent between the two choices and then the score for state i for time t is p [15]. If a participant places a higher value on state i then a higher probability of perfect health will be needed for the individual to be indifferent between i and the gamble of having perfect health [15]. In both cases a score between 0 and 1 results, with higher scores reflecting better overall quality of life.
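Once the indifference point is found, both direct methods reduce to simple ratios; the sketch below implements the scoring for hypothetical participant responses.

```python
# Sketch of utility scoring for the two direct elicitation methods described above.
# Inputs are hypothetical indifference-point responses from a participant.

def time_tradeoff_utility(x_years_full_health, t_years_in_state):
    """Time trade-off: the utility of state i is x / t at the indifference point."""
    return x_years_full_health / t_years_in_state

def standard_gamble_utility(p_perfect_health):
    """Standard gamble: the utility of state i equals the indifference probability p."""
    return p_perfect_health

# Hypothetical participant: indifferent between 10 years in state i and 7 years healthy,
# and indifferent at a 0.72 chance of perfect health (0.28 chance of death).
print(time_tradeoff_utility(7, 10))    # 0.7
print(standard_gamble_utility(0.72))   # 0.72
```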
As the above direct methods can be time consuming and challenging for patients, several instruments have been developed that can generate utilities from scores on a variety of domains, for instance the EQ-5D, the SF-6D and the health utilities index [14]. These generic measures are used for valuing health related quality of life based on health status within certain areas [14]. The responses to these questionnaires are then converted into a single utility value.
We illustrate the use of QALYs with an example from the IPC literature which compared different IPC programs designed to prevent surgical site infections [16]. The authors used hypothetical data to estimate how surgical site infections would impact morbidity and mortality, and how the influence of these infections could be measured in QALYs. Two different scenarios were represented: in the first, patients either did or did not develop an infection, and those who did develop an infection died shortly after surgery. The patients who did not develop an infection had 7.575 more QALYs than those who did acquire an infection. In the second scenario, a patient develops an infection but recovers and after several months improves to the same health state as a patient who never develops an infection. This patient has 7.475 QALYs after surgery [16].
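The structure of the two scenarios can be reproduced schematically as below; the utilities and durations are hypothetical placeholders chosen only to mirror the shape of the example, not the inputs used in [16] (which yield the 7.575 and 7.475 QALY figures quoted above).

```python
# Schematic reproduction of the two surgical-site-infection scenarios above.
# Utilities and durations are hypothetical placeholders, not the inputs of ref. [16].

def qalys(trajectory):
    return sum(u * years for u, years in trajectory)

no_infection = [(0.85, 10.0)]                       # uneventful recovery
fatal_infection = [(0.40, 0.1)]                     # infection, death shortly after surgery
recovered_infection = [(0.40, 0.5), (0.85, 9.5)]    # infection, then return to baseline

for label, traj in [("no infection", no_infection),
                    ("fatal infection", fatal_infection),
                    ("recovered infection", recovered_infection)]:
    print(f"{label:20s}: {qalys(traj):.3f} QALYs")
```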
Economic evaluations within infection prevention and control
Positive and negative economic evaluations
Health care systems universally work within the confines of cost containment, and it can be difficult to reconcile programs such as IPC interventions that improve patient quality of care with their associated costs. Economic evaluations can enhance the evidence base demonstrating how certain IPC programs, despite their expense, improve care for patients in a cost-effective manner and in some cases result in cost savings. For example, a Canadian study demonstrated that after an IPC program became regionalized with standardized policies and procedures, with a budget of $6.7 million over four years, the program resulted in cost savings of $9.1 million with a reduction of 4739 HAI cases [17]. It is evidence such as this that justifies the existence and continued funding of health care quality improvement programs such as IPC. Alternatively, economic evaluations will also demonstrate when IPC programs are not cost-effective. A study from the United States looked at the effectiveness of universal screening for MRSA to prevent hospital-acquired MRSA infections [18]. The authors determined that while the rate of detection of MRSA was higher with universal screening, it was also more costly and did not significantly reduce the rate of hospital-acquired MRSA infections compared to targeted screening. Therefore, implementation of a universal screening program was not advised [18].
Is a full economic evaluation required?
A systematic review from 2005 examining 70 different studies which performed economic analyses of HAI [2], a systematic audit of economic evidence linking HAI and IPC interventions from 1990 to 2000 [19], and a recent systematic review from 2016 [20], all found that frequently only partial economic evaluations were completed. Simple cost analyses of infection were commonly utilized, guidelines for economic evaluations were not followed, and the quality of reporting according to ISPOR was low [2,19,20]. This likely reflects that health economics in IPC is still a relatively new area.
If a strategy that improves outcomes also saves money in the short-term, then there is potentially no need for a complete economic evaluation assuming proper methodology was utilized. However, if there is additional cost associated with the intervention then other factors do need to be considered, such as infections prevented, or the impact on QALYs, allowing for comparison between different interventions [8].
Use of quality adjusted life years in economic evaluations within infection prevention and control
With respect to the important question of whether to invest in IPC programs, a question to be asked is: what are the benefits in terms of improvements to patients' quality of life and survival, as well as the impact on overall costs? The purpose of IPC interventions is to prevent infections, thus improving patient outcomes and resulting in additional QALYs [7]. Without randomized controlled trials (RCTs), this can be difficult to assess, as the economic evaluation needs to be able to estimate what care for patients would have cost if they had not developed any infection [21].
The study used previously to describe the calculation of QALYs examined a model with six different IPC programs designed to prevent surgical site infections using infection-related costs and QALYs [16]. The authors considered the outcomes of the cost of the IPC program, the cost of infection to the hospital, the cost to community services and the patient-borne costs of the infection. The different IPC programs had varied upfront costs, but some had more cost savings, due to more infections prevented, resulting in different incremental QALY estimates. This reinforces that merely assessing the change in costs related to preventing infection does not fully answer the question posed by their study. Changes in health benefits must also be considered [16]. The authors then determined which IPC strategy had the lowest cost per QALY, and their conclusions contrast with the decisions that might be made when only considering the evaluation from the viewpoint of costs spent and saved [16].
In another recent study, the authors created a HAI cost-effectiveness policy model simulating elderly patients admitted to the intensive care unit (ICU). The objective was to determine the cost-effectiveness of hospitals' continuing investments in HAI prevention in ICUs. Multiple health states following the ICU admission were considered, including bloodstream infection related to an access line, ventilator-associated pneumonia, the conditional probability of inpatient deaths due to each specific HAI type, as well as the incremental hospital costs associated with each infection [22]. A five-year time horizon was used, which included QALYs and healthcare costs (dependent on whether or not a HAI developed) [22]. Published literature was used to determine the costs and rates of HAI, and Medicare data were used to estimate the monthly conditional probabilities of health states for those who had or had not developed a HAI [22]. The authors determined that by continuing to use an IPC program to prevent HAI in ICUs, it would cost $14,250 per life year gained and $23,277 per QALY gained (US dollars) compared to not using an IPC program [22]. While not cost-saving, the IPC program to prevent HAI was still considered cost-effective [22].
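Figures such as "$23,277 per QALY gained" follow from the usual incremental cost-effectiveness ratio (ICER); the sketch below shows the arithmetic, with incremental cost and incremental QALYs as hypothetical inputs chosen only to reproduce the reported order of magnitude.

```python
# Sketch of the incremental cost-effectiveness ratio (ICER) behind statements such as
# "$23,277 per QALY gained". Incremental values are hypothetical placeholders.

def icer(delta_cost, delta_effect):
    """ICER = (cost with program - cost without) / (effect with program - effect without)."""
    return delta_cost / delta_effect

delta_cost_usd = 2_560.0   # hypothetical extra cost per patient with the IPC program
delta_qalys = 0.11         # hypothetical QALYs gained per patient

print(f"ICER ~ ${icer(delta_cost_usd, delta_qalys):,.0f} per QALY gained")
# The ICER is then compared against a willingness-to-pay threshold to judge value.
```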
Challenges with quality adjusted life years and economic evaluations in infection prevention and control
A systematic review of CUA related to Infectious Diseases including IPC was published in 2005 [23], noting only 122 CUA publications over a 21 year time period. There are many reasons for this, including a focus by payers on costs, a lack of RCTs in IPC, the complexity of modelling in infectious diseases, and few estimates for utility measures in IPC. In general, hospitals implementing IPC programs care about the costs spent and saved, and considering costs on their own is a simple accounting exercise.
There are few RCTs in IPC, and utilities as well as costs and benefits are frequently determined alongside these trials. This has likely impacted the number of informative CUAs in IPC. The lack of RCTs may in part be due to the difficulty in comparing the multitude of different IPC interventions, rendering it extremely difficult to complete one succinct RCT [16]. Therefore, in order to model the costs and benefits of IPC programs, multiple sources of information may need to be funneled into one model, creating a more difficult modelling scenario [16].
In addition to the issues just described, modelling for infectious diseases in general can be quite complex and requires specific expertise [23]. For example, the lack of an IPC program may lead to a patient developing an infection with antimicrobial resistant bacteria via transmission from another patient causing adverse consequences and increased costs. However, it can be difficult to model this continued transmission.
The estimation of utilities in infectious diseases appears to be challenging for several reasons. Indeed, studies that have estimated utilities in infectious diseases have noted a very wide interquartile range compared to 13 other disease categories [23]. The broad range of utilities available may at first glance render a CEA, which examines costs per life year gained or per infection avoided, more appealing to conduct.
Another limitation of using a QALY in HAI, which are typically transient [24], is that it can be difficult to determine a trade-off between quality and quantity of life [25]. Transient health states are those that last for a specific brief time, often less than one year, followed by a return to full health [26]. While traditional methods for measuring quality of life may not be appropriate for these transient health states, there are techniques adapted from the conventional methods which can be utilized [26]. While it could be argued that with such a short health state any influence on QALYs would be minimal, there may be longer lasting effects from even a short-term impact on health such as infection. In studies looking at health valuation specifically in transient health states such as dentistry and infection following hip arthroplasty, valuation of the health state can still be accomplished which subsequently can be turned into QALYs allowing for comparison between programs [27,28].
Conclusions
The majority of economic evaluations in IPC are partial evaluations only considering costs. Given finite resources and fixed budgets, conducting rigorous economic evaluations of IPC programs will help policy makers understand where and how to spend scarce healthcare dollars. Utilizing a CUA and QALYs will allow comparison to other programs where healthcare dollars might be spent and ensure that resources are being used in sustainable and cost-effective programs.
The utilization of the QALY is not without its difficulties. It necessitates the use of more complex models, and many of the necessary inputs, including utilities and the time spent in particular health states, lack reliable estimates within the realm of IPC.
Despite these challenges, given that IPC interventions were designed to improve patient safety and quality of care it is appropriate to not only consider costs saved from the hospital standpoint during admission but also the benefit to health through the use of QALYs.
Future economic evaluations in the area of IPC interventions should aim to follow rigorous guidelines for economic evaluations and justify when they are not used. CEAs that demonstrate cost savings and improved outcomes can inform funding decisions, but in most situations, use of the QALY when comparing different IPC interventions should be carefully considered.
Finally, the difficulty with estimating utilities in infectious diseases, in the context of this review, and in particular IPC should be addressed through additional studies that collect utilities for different health states related to HAI [23].
As more health economists work alongside infectious diseases and IPC specialists, it is likely that the outcomes relevant to health economics will be included, making economic evaluations more common. Collaborating with health economists can help in addressing the difficulties in uncertainty and amalgamating data when there are multiple sources of information and a lack of RCTs. Additionally, they can aid in creating valid economic models that take into account the transient nature of HAI and encourage the use of proper techniques to obtain utilities and subsequent QALYs. | 4,586.4 | 2018-02-27T00:00:00.000 | [
"Economics",
"Medicine"
] |
Regular Bulk Solutions in Brane-worlds with Inhomogeneous Dust and Generalized Dark Radiation
From the dynamics of a brane-world with matter fields present in the bulk, the bulk metric and the black string solution near the brane are generalized, when both the dynamics of inhomogeneous dust/generalized dark radiation on the brane-world and inhomogeneous dark radiation in the bulk as well are considered -- as exact dynamical collapse solutions. Based on the analysis on the inhomogeneous static exterior of a collapsing sphere of homogeneous dark radiation on the brane, the associated black string warped horizon is studied, as well as the 5D bulk metric near the brane. Moreover, the black string and the bulk are shown to be more regular upon time evolution, for suitable values for the dark radiation parameter in the model, by analyzing the physical soft singularities.
I. INTRODUCTION
Brane-world models with a single extra dimension [1,2] are decidedly a 5D phenomenological realization of Hořava-Witten supergravity solutions [3], if the moduli effects from compact extra dimensions can be ignored (for a review, see e.g. [4]). The Hořava-Witten solution [3] can be thought of as being effectively 5D, with an extra dimension that can be large when compared to the fundamental scale. They provide the basis for the well-known Randall-Sundrum (RS) brane-world models [1,2], which comprise the mirror symmetry as well as a brane with tension, which counterbalances the effect of the negative bulk cosmological constant on the brane, encompassing furthermore the brane's self-gravity [4]. In RS brane-world scenarios, our Universe is embedded in a 5D bulk of type AdS_5 [2]. The formalism to be used hereon employs a general metric for the brane-world, instead of the Minkowski metric in the standard RS model [2].
Brane-world black holes were comprehensively studied in Randall-Sundrum-like brane-world cosmologies [5][6][7], where the dynamical equations on the brane differ from those of general relativity. In fact, the brane-world framework presents terms that encode both the effects of the free gravitational field in the bulk and of the embedding of the brane in the bulk. The imprint of the nonlocal gravitational field of the bulk on the brane provides a splitting into anisotropic stress, flux, and nonlocal energy density, the last of which determines the tidal acceleration off the brane, possibly opposing the formation of singularities [4]. Unlike the nonlocal energy density and flux, the nonlocal anisotropic stress is not determined by any evolution equation on the brane. In particular, this puts the existence of an FRW background consistent with the isotropy of the cosmic microwave background at risk. Adiabatic density perturbations are furthermore coupled to perturbations in the bulk field, making an open system on the brane [7].
Moreover, consequences of the gravitational collapse were proposed in the context of brane-world scenarios in, e. g., [24][25][26][27][28]. In addition, dark matter was investigated already in [29] as a bulk effect on the brane.
Black strings can be thought of as extended objects endowed with an event horizon, in low-energy string theory [30]. The bulk metric near the brane and the black string warped horizon along the extra dimension are here reviewed, based on previous developments [4,31]. Originally, a Schwarzschild black hole on the brane-world was shown to be a black string in a higher-dimensional spacetime, which leads to the usual astrophysical properties of black holes being recovered in this scenario [31]. In this prototypical context, the Kretschmann curvature invariants diverge when the black string event horizon is approached along the axis of the black string. Several generalizations provide attempts to preclude singularities both in the bulk and on the brane-world.
When variable brane tension scenarios are taken into account, the brane tension can control the Kretschmann scalars involved. For instance, regular bulk solutions and black strings were obtained in Friedmann-Robertson-Walker brane-worlds under the Eötvös law [32], where the singularities related to the McVittie metric can be partially controlled as the cosmological time elapses. Indeed, for this type of metric the 5D physical soft singularities in the bulk are alleviated as time elapses, providing a regular 5D bulk solution, as the 5D Kretschmann invariants do not diverge. When other metrics are taken into account, for instance the Casadio-Fabbri-Mazzacurati metric [33], black strings can still be emulated [34]; however, the related singularities in the bulk remain regardless. In order to accomplish this, effective/perturbative approaches are usually employed, where the black string is made to evolve from the brane-world [4,31,35].
Brane solutions of static black hole exteriors with 5D corrections to the Schwarzschild metric have been found, for instance, in [36][37][38], and furthermore in the context where the bulk singularities can be removed [39]. The (Schwarzchild) black string is unstable near the AdS 5 horizon, defining the so called Gregory-Laflamme instability [40,41]. This scenario might be drastically altered by the inhomogeneous dust and the dark radiation. In order to accomplish this effect, we use a procedure to calculate both the metric near the brane and the 5D black string horizon [15], uniquely from a brane-world black hole metric and the associated Weyl tensor. Based on the knowledge of both the Sasaki-Shiromizu-Maeda effective field equations on the brane and upon the 5D Einstein and Bianchi equations [4,7,35,42,43], both the bulk metric near the brane and in particular the black string warped horizon can be designed, by using a Taylor expansion along the extra dimension. Such procedure provides information about all the bulk metric components [15].
Indeed, the bulk spacetime may be either given, by solving the full 5D equations or alternatively obtained by evolving the brane-world black hole metric off the brane, what encompasses the imprint from the bulk via the Weyl tensor. Numerical methods have been employed to find black hole solutions in the context of black strings and fluid/gravity correspondence [41]. Similar methods involving expansions of the metric have been used in the context of black strings [44], disposing the black string metric as the leading order solution in a Taylor expansion.
The bulk shape of the black string horizon has been investigated only in very particular cases [4,35], and latterly the standard black string was studied in the context of a brane-world with variable tension [15]. Moreover, realistic models that take into account a post-Newtonian parameter on the Casadio-Fabbri-Mazzacurati black string [45], and the black string in a Friedmann-Robertson-Walker Eötvös brane-world [34], also represent interesting applications.
Recently, regular black string solutions associated with a dynamical brane-world have been obtained in the context of a variable brane tension [32]. The analysis of the 5D Kretschmann invariants makes it possible to attenuate the bulk physical singularities along some eras of the evolution of the Universe for the McVittie metric on an Eötvös fluid brane-world. This paper is devoted to a framework with dark radiation and inhomogeneous dust. The 5D physical singularities in the bulk are shown to be inherited from the 4D brane-world, and no additional singularity appears in the bulk for some range of parameters in our model. Nevertheless, the bulk physical soft singularities can be unexpectedly controlled in the bulk upon time evolution, which yields a regular 5D bulk solution in most ranges of the dark radiation parameter. This paper is organized as follows: in Section II the dark radiation dynamics on the brane is analyzed and reviewed. Starting with the Lemaître-Tolman-Bondi (LTB) metric on the brane, the effective field equations for dark radiation on the brane are solved. The dynamical radiation model is shown to mimic a 4D cosmological constant on the brane. Both the black string solution and the bulk metric are obtained thereupon. After obtaining the standard dark radiation model, a generalized framework is proposed, and both associated metrics are derived. In Section III the 5D bulk metric near the brane and the generalized black string are derived and studied, and in Section IV the black string warped horizon in the context of inhomogeneous dust and generalized dark radiation is studied. Moreover, the black string physical singularities are analyzed from the Kretschmann invariants. The 5D physical singularities in the bulk reflect the 4D brane-world physical singularities. We further analyze the Kretschmann scalars generated by higher-order derivatives of the Riemann tensor, and the respective physical soft singularities show that the 5D bulk solution is regular in some ranges of the dark radiation parameter.
In order to fix the notation, hereupon µ, ν = 0, 1, 2, 3 and M, N = 0, 1, 2, 3, 5; let n be a time-like covector field normal to the brane and y the associated Gaussian coordinate. The brane metric components g_µν and the corresponding components of the bulk metric ǧ_µν are related by ǧ_µν = g_µν + n_µ n_ν [4]. With these choices we can write g_55 = 1 and g_µ5 = 0, and thus the 5D bulk metric reads ds²_5 = ǧ_MN dx^M dx^N = g_µν(x^α, y) dx^µ dx^ν + dy², where the indices M, N effectively run over 0, 1, 2, 3 in the first term.
The initial paradigm concerning a perturbative method for obtaining the black string solution consisted in assuming the Schwarzschild form for the induced brane metric, on an RS brane-world.
Subsequently, a sheaf of such solutions is disposed along the extra dimension [31] (the standard form of the resulting metric is recalled below), where ℓ = √(−6/Λ_5) denotes the curvature radius of the bulk AdS_5 in which the RS brane-world is embedded. Each space of constant y is a 4D Schwarzschild spacetime with a singularity along r = 0 for all y: the well-known (Schwarzschild) black string.
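For reference, the line element of this sheaf of solutions takes the standard black string form quoted in the literature (cf. [31]); it is recalled here under the assumption that the conventions of Eq. (2) coincide with the usual ones:

$$ ds^2 = e^{-2|y|/\ell}\left[-\left(1-\frac{2GM}{r}\right)dt^2 + \left(1-\frac{2GM}{r}\right)^{-1}dr^2 + r^2\, d\Omega^2\right] + dy^2 . $$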
As will become clearer in Section III, the areal radius of the sheaf of such solutions along the extra dimension is called the black string warped horizon, which shall be precisely defined there.
II. DARK RADIATION DYNAMICS ON THE BRANE
Henceforward, some results concerning the dark radiation dynamics on the brane will be briefly revisited [46][47][48], in order to introduce the framework used to obtain both the bulk metric near the brane and the black string encompassing the dark radiation parameter and the effective cosmological constant. New black string solutions are derived here in the scenario provided in [46][47][48][49][50][51]. A solution for the black string that has as its limit a tidal Reissner-Nordström black hole solution on the brane was obtained in [46], in the Randall-Sundrum scenario. In order to work with the effective Einstein equations on the brane, some conditions on the projected Weyl tensor are usually assumed, in order to provide a closed system. Besides, by taking into account a system of equations where a specific state equation leads to an inhomogeneous density that has precisely the dark radiation form [52][53][54] (and its generalizations for thick branes [13,55]), and by solving the effective Einstein equations, the LTB metric can be derived. Subsequently, both the bulk metric near the brane and the black string solution are obtained in this context.
From the dynamics of a brane-world with matter fields present in the bulk [47], the associated black string solution will be shown to present a generalized dark radiation form. The way the LTB metric is obtained here is essentially different from that of [46]: the inhomogeneous density is associated with conformal bulk fields, instead of being related to the electric part of the Weyl tensor. The black string solution is similarly obtained by a change in the coordinate system, and its final form generalizes the first case. Thereafter, two new black string solutions will be presented and subsequently used in the construction of the horizon profile in the bulk. An alternative approach to the LTB space-time is based on evolution equations for covariant objects, such as the density, the expansion scalar, the electric Weyl tensor, the shear tensor and the spatial curvature. The dynamics is reduced to scalar equations, and the FRW spacetime is recovered when the two scalars associated with the shear tensor and the electric Weyl tensor vanish. This formulation is based on a 1 + 3 covariant description [56], which can be further applied to the LTB model [57,58]. In general, the applications of these models involve black holes, galaxy clusters, superclusters, cosmic voids, supernovas and the redshift drift, for instance [59]. Initially found by Lemaître [49], the LTB metric describes a spherically symmetric inhomogeneous fluid with anisotropic pressure; solutions with a cosmological constant are present, for instance, in the Tolman model [50]. To derive the LTB solution, in comoving coordinates the general form of the line element is given by ds² = −dt² + A²(r, t) dr² + R²(r, t) dΩ², where the 2-sphere area element is denoted by dΩ², and the energy-momentum tensor is written in terms of the energy density ρ and the brane cosmological constant Λ_4, with associated energy density ρ_Λ = κ_4⁻² Λ_4, where κ_4 is the 4D gravitational coupling constant. The Einstein equations for the diagonal space components, Eqs. (4) and (5), then follow. The function A(r, t) = g(r) ∂_r R satisfies Eq. (5). By setting g(r) = (1 + f(r))^(−1/2), the usual form of the LTB metric is hence obtained: ds² = −dt² + (∂_r R)²/(1 + f(r)) dr² + R²(r, t) dΩ², where f(r) > −1. The function f can be interpreted as the energy of each shell, f(r) = 2E(r). The function g(r) is a geometric factor such that when g(r) = 1 the spatial sections are flat. Eqs. (4) and (5) are not independent, and lead to an expression in which M = M(r) is an arbitrary function of integration that gives the gravitational mass within each comoving shell of coordinate radius r. By the definition of mass in [51], one can write 2 dm/dr in a form whose first term is interpreted as kinetic energy, whose second term stands for the Newtonian potential, and in which f is twice the energy of the system when Λ_4 = 0. Hence M is the relativistic generalization of the Newtonian mass. When f is negligible, namely in the non-relativistic limit, the spatial sections are flat when g = 1. The function g provides the energy in each spatial section, and thus carries the information about the curvature of each section.
where t_N is known as the "bang time". Eq. (8) can be used to classify the LTB models into three classes. When Λ_4 = 0, the classification follows from the sign of f; when Λ_4 ≠ 0, the potential V(R) = 2M/R + (Λ_4/3)R² leads to a different classification, depending on the sign of Λ_4.
B. LTB Solution on the Brane
In this Subsection the LTB solution associated with dark radiation on the brane is reviewed, starting with the projected Einstein equations on the brane in vacuum. Unlike the Reissner-Nordström black hole, this new solution has a specific dark radiation tidal charge Q. The 4D and 5D coupling constants are related by κ_4² = (1/6)λκ_5⁴. The field equations of the 5D Einstein theory lead to the projected equations on the brane [42,60] (recalled schematically below), where Π_µν is a term quadratic in the energy-momentum tensor T_µν and provides high-energy corrections arising from the extrinsic curvature of the brane, which increase the pressure and effective density of collapsing matter. The term E_µν is the projection of the bulk Weyl tensor and provides Kaluza-Klein corrections originating from 5D graviton stresses [28], such as the massive modes of the graviton in the linearized regime. For observers on the brane such stresses are nonlocal, in the sense that local density inhomogeneities on the brane generate Weyl curvature in the bulk, which backreacts nonlocally on the brane [4,7,16,17,53,54]. Therefore, in the vacuum equations the projected Weyl tensor can be identified with a trace-free effective energy-momentum tensor, determined by the effective energy U, an anisotropic stress tensor P_µν, the effective energy flux Q_µ, a 4D velocity vector v_µ satisfying v_µ v^µ = −1, and the tensor h_µν such that v^µ h_µν = 0. A non-static spherically symmetric brane-world can be described by the line element (12), with Q_µ = 0. The anisotropic stress tensor P_µν can be represented by P_µν = P(r_µ r_ν − (1/3) h_µν), where P = P(r, t) is a scalar field and r_µ is the unit radial vector. With these assumptions the electric part of the Weyl tensor yields an effective fluid with ρ = U, 3p_r = U + 2P and 3p_T = U − P. By assuming the brane field equation ∇^µ E_µν = 0 [4] and by considering the corresponding state equation, the constant Q appearing in the solution is the dark radiation tidal charge [46][47][48]. As ρ = U, the energy density in this case is related to the inhomogeneous density. The 4D Einstein equations (10) then follow. By solving them, the component G_tr leads to the expression ∂_t B ∂_r R − B ∂_t ∂_r R = 0, and therefore the function B = ∂_r R/H satisfies this relation, with H = H(r). Considering such an expression for B in the trace equation [46], it is possible to write an expression similar to Eq. (8); integrating Eq. (17), one obtains an expression analogous to Eq. (9). It is thus possible to write (12) in the LTB form given by (7).
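The projected equations referred to above are the effective Einstein equations on the brane; their schematic form, with the usual conventions (to be checked against [42,60]), reads

$$ G_{\mu\nu} = -\Lambda_4\, g_{\mu\nu} + \kappa_4^2\, T_{\mu\nu} + \kappa_5^4\, \Pi_{\mu\nu} - E_{\mu\nu}\,, $$

so that in vacuum (T_µν = 0 = Π_µν) the brane curvature is sourced solely by Λ_4 and by the nonlocal term E_µν.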
By assuming the brane field equation ∇ µ E µν = 0 [4] and by considering the state equation where the constant Q is the dark radiation tidal charge [46][47][48]. As ρ = U , the energy density in this case is related to the inhomogeneous density. Thus the 4D Einstein equations (10) read It follows, by solving the above equations, that the component G tr is obtained by the expression ∂ t B∂ r R − B∂ t ∂ r R = 0, and therefore the function satisfies this relation with H = H(r). Considering thus such expression for B in the trace equation [46] it is possible to write the expression as follows which is similar to Eq.(8). Integrating Eq. (17), it reads which is analogous to Eq. (9). It is thus possible to write (12) in the LTB form given by (7).
Making the transformation of the LTB coordinates (t, r) to curvature coordinates (T, R) as the following 4D metric is finally obtained: This metric is known as the inhomogeneous static exterior of a collapsing sphere of homogeneous dark radiation [28,62]. Note that when Λ 4 = 0, the solution (20) is formally analogous to the Reissner-Nordström solution, when one identifies the electric charge to the dark radiation tidal charge.
In what follows this solution will be generalized, by considering a generalized dark radiation term with dark radiation charge Q η where η is a parameter characterizing the model of the dark radiation [47]. The dynamics of a spherically symmetric brane-world is also analyzed, when the bulk a) carries matter fields; and b) when its warp factor characterizes a global conformal transformation consistent with Z 2 symmetry. Finally, it is possible to study the bulk metric and the black string solutions with a term analogous to the black hole solution with cosmological constant on the brane.
In this framework, the energy-momentum tensor encompasses conformal bulk matter fields, whose dynamics provide a specific state equation [46][47][48].
Consider now a general conformal spherically symmetric metric ds²_5, consistent with Z_2 symmetry along the extra dimension, of the form ds²_5 = Ω²(t, r, z)[−A² dt² + B² dr² + R² dΩ² + dz²], where z stands for the conformal extra-dimensional coordinate, and A, B, R, and Ω are general functions of the coordinates (t, r, z); Ω denotes the conformal factor. The Einstein field equations are given by Eq. (22), where g̃_MN denotes the components of the bulk metric, g_MN is the induced metric, and T̃_MN stands for the components of the energy-momentum tensor representing the bulk fields. The brane is localized at z = z_0.
Under the conformal transformation T̃_MN = Ω^s T_MN, the energy-momentum tensor is assumed, as usual, to have weight s = −4 [46]. By using the relation between g̃_MN and g_MN through the conformal factor in the conformal Einstein tensor, Eq. (22) hence leads to Eqs. (24)-(27); in particular, Eqs. (26) and (27) require that 2T^z_z = T^µ_µ. Now, by considering the energy-momentum tensor above, the state equation ρ − p_r − 2p_T + 2p_z = 0 holds. It implies that ∇_z T^z_z = 0 and subsequently that ∂_z p_z = 0. Thus ρ, p_r and p_T must be independent of z.
The system of inhomogeneous dark radiation and an effective cosmological constant is defined by conformal bulk matter with equations of state in which η characterizes the dark radiation model and Λ is a bulk quantity that mimics a 4D cosmological constant on the brane. The components of the Einstein tensor can thus be evinced. By taking the divergence of T^µ_ν, one is led to a relation in which Q_η = const is interpreted as a generalized dark radiation tidal charge, behaving like (14) when η = −1. Regarding the energy conditions [63] ρ ≥ 0 and ρ + p_i ≥ 0, respectively known as the weak, strong and dominant energy conditions, for ρ = 0 (by taking the weak energy condition) the equality ρ_DR = −κ_5⁻² Λ holds, such that Λ = −Q_η R^(2η−2) ≤ 0 for Q_η ≥ 0. One can realize that ρ = −p_r regards the weak condition, and ρ + p_T = (1 − η)ρ_DR holds if η ≤ 1, where ρ > 0 satisfies the first condition. By a similar analysis in 5D we see that η ≤ 0 (weak), η ≤ 0 (4D and strong), η ≤ −1/3 (5D and strong), and |η| ≤ 1 (4D and dominant). The resulting evolution equation generalizes Eq. (17), being identical to it when η = −1. By performing one more integration one obtains an expression in which ±t refers to expansion or collapse and τ corresponds to the evaluation of the function at t = 0. The condition R(0, r) = r is taken on the hypersurface t = 0. The radial equation leads to H = √(1 + f) [47], and thus the metric has the 4D LTB form. Hence the 5D conformal line element reads ds²_(5) = Ω²_RS[−dt² + (∂_r R/H)² dr² + R² dΩ² + dz²], where Ω_RS is the Randall-Sundrum warp factor [2]. In the dynamical dark radiation models the marginal bound f = 0 actually corresponds to static solutions. Finally, by the transformation from LTB to curvature coordinates, the black string solution is obtained. The models that describe the inhomogeneous static exterior of a collapsing sphere of homogeneous standard dark radiation require the value η = −1 [28,46,62]. In general the exterior spacetime is not static in the brane-world scenario [28]; however, the collapse of a homogeneous Kaluza-Klein energy density is static, and can be identified with the dark radiation. Hence in the case η = −1 the model of homogeneous dark radiation is recovered. It is worth emphasizing that the exterior is static solely when the system has tidal charge and cosmological constant, the physical mass being equal to zero. When Λ = 0, the zero-mass limit of the tidal Reissner-Nordström black hole is obtained [36]. The event horizon is determined by the solutions of Eq. (42) on the brane. Thus for Λ = 0 it implies that R_h^(2η) = (2η + 1)/Q_η. When Λ ≠ 0, the exact location of the horizons cannot be obtained in closed form, except for the values η = −1 and η = 1/2; the respective horizons are given by Eqs. (44). The conditions Λ < 0 and Q_(1/2) > 0 imply two horizons, an inner and an outer one. For two specific values of Q_(1/2) the singularity is naked, thus violating the cosmic censorship hypothesis [64]. Yet, if Λ > 0 and Q_(1/2) ≷ 0, there is a single horizon R_h^(±).
III. BULK METRIC AND THE BLACK STRING
In this Section, the bulk metric near the brane as well as the black string associated to a black hole on a brane-world are briefly introduced [4,15,42]. Eq.(2) represents the black string metric.
By denoting by ⁽⁵⁾R_µνσρ the components of the 5D Riemann tensor, the 5D Kretschmann invariant grows as e^(4|y|/ℓ) and is unbounded as y → ∞ [4,31] (an explicit form is recalled below); thus the Schwarzschild solution is not a good candidate either for a brane-world black hole or for trying to remove at least some of the bulk singularities. Hence a well-established perturbative method is employed to find both the bulk metric near the brane and, in particular, the black string warped horizon along the extra dimension. In what follows such a framework is revisited. This shall be further accomplished in Section IV in the context of inhomogeneous dust and generalized dark radiation on the brane-world, and inhomogeneous dark radiation in the bulk as well. There we shall evince that in such a scenario the bulk (and the black string) can be regular for certain ranges of the dark radiation parameters.
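For the Schwarzschild black string, the growth mentioned above is explicit in the commonly quoted form of the Kretschmann scalar, recalled below with the coefficients as usually reported in the black string literature (the constant piece is the pure AdS_5 contribution; the exact normalization should be checked against [31]):

$$ {}^{(5)}R_{ABCD}\,{}^{(5)}R^{ABCD} = \frac{40}{\ell^4} + \frac{48\,G^2 M^2}{r^6}\, e^{4|y|/\ell}\,, $$

which diverges both along r = 0 for any y and as y → ∞.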
In brane-worlds with Z_2 symmetry, the junction conditions imply that the extrinsic curvature of the brane is given by [4] K_µν = −(κ_5²/2)[T_µν + (1/3)(λ − T)g_µν]. The trace-free and symmetric components of the bulk Weyl tensor C_µνσρ are respectively given by B_µνρ = g_µ^τ g_ν^σ C_τσρβ n^β and E_µν = C_µνσρ n^σ n^ρ, where n^α denotes the components of a vector field normal to the brane-world. Hereupon we denote by R_µνσρ = ⁽⁵⁾R_µνσρ(x^α, 0) the components of the 5D Riemann tensor computed on the brane.
One thus finds that the above expansion, up to fourth order, is given by [15] g_µν(x^α, y) = g_µν − κ_5²[(1/3)(λ − T)g_µν + T_µν]|y| + ⋯ , where H ≡ H^µ_µ and H² ≡ H_ρσ H^ρσ for any rank-two tensor H appearing in the higher-order terms.
The black string warped horizon [66] can be studied from g_θθ(x^α, y) in Eq. (52). In fact, for a spherically symmetric 4D metric modelling a brane black hole, the usual 4D areal radial coordinate r is related to the 5D metric in Eq. (52) by the expression r² = g_θθ(x^α, 0) [15,35]. The black string has a warped horizon on the brane of radius r̄ = √(g_θθ(x^α, 0)), where r̄ denotes the coordinate singularity which solves g_rr⁻¹(r̄) = 0 (see Eq. (II.6) of Ref. [67]). The radius of the black string warped horizon is hence r̄(y) = √(g_θθ(x^α, y)). The term g_θθ(x^α, y) in Eq. (52), for µ = ν = θ, corresponds to the bulk (squared) areal radius, and includes both the black string horizon for r = r̄ and, in particular, the brane black hole horizon for y = 0.
IV. BLACK STRINGS AND DARK DUST
In this Section we investigate the black string related to the induced black hole on the brane given by Eq. (41). The black string warped horizon is the component g_θθ(x^α, y) in Eq. (52), evaluated at the event horizons in (44). Let us first calculate such component for an arbitrary x^α. In order to accomplish it, the 4D energy-momentum tensor on the brane is given by the brane-world components in Eq. (28); the expressions for the energy-momentum, the projected Weyl tensor components, the extrinsic curvature, and the Riemann tensor are respectively given by (A1a)-(A4c) in the Appendix. Hence the θθ component of the metric, corresponding to the black string horizon along the bulk (Eq. (52)), can be written as in Eq. (56). The latter expression does not explicitly include the terms of order y⁴/4!, as they are extensive and awkward, although we shall consider such terms in our subsequent analysis.
Our first analysis regards the variable T = T(r, t), which appears solely through the term exp(−αT) at order |y|³/3! in Eq. (56). The graphic in Fig. 1 shows the dependence of the component g_θθ(x^α, y) on T = T(r, t), from which our results near the brane follow. Besides the black string warped horizon, Eq. (52) more generally provides the bulk metric near the brane. In order to check whether the bulk is regular, we study the 4D and the 5D Kretschmann invariants, related respectively to the black hole on the brane and to the black string in the bulk.
Our analysis of the bulk physical singularities is independent of the perturbative method (52), as the curvature invariants are independent of it.
The Gauss equation relates the 4D and 5D Riemann curvature tensors, and hence the 4D and 5D Kretschmann invariants; for the generalized dark radiation model they are related accordingly. The physical singularities are located at R = 0 and at the solutions R_h of Eq. (42) for the values η = −1 and η = 1/2, given by Eqs. (44). No new singularities are introduced, nor are the existing ones (R = 0 and R = R_h in Eqs. (43) and (44)) removed from the bulk.
We are going to show in what follows, by the analysis of the physical soft singularities, that the bulk can be regular. Indeed, the 4D invariant ξ (here ∇_µ denotes the covariant derivative on the brane) is very soft, since it takes invariants involving at least two derivatives of the curvature to detect it. Consequently, the 5D version of the invariant ξ is constructed with D_a denoting the 5D covariant derivative. It is worth pointing out that a, b are effectively 4D spacetime indexes, as the 5D covariant derivative can be split into ∇_µ along the brane and ∇_5 along the extra dimension y. The 5D Kretschmann invariant (5)ξ then follows [45]. Based on the values of the metric (41) and the extrinsic curvature components given in (45), the 5D Kretschmann invariants (5)ξ for the bulk can be calculated. Due to the awkwardness of the expressions for these invariants, we opt to analyze our results through the graphics in Figs. 8, 10, 12, and 14. Fig. 7 has to be compared with Fig. 8, which describes the 5D invariant (5)ξ. In Fig. 7, since η ∼ 0, the 4D invariant ξ diverges, independently of the value of R. On the other hand, Fig. 8 depicts that for η ∼ 0 the 5D Kretschmann invariant (5)ξ goes to infinity for most values of R, but when 0.94 ≲ R ≲ 1.06 the 5D Kretschmann invariant (5)ξ does not diverge. Hence, for this range of physical soft singularities present on the brane, the bulk is regular. Moreover, Fig. 7 shows that ξ → +∞ for R → 0; however, Fig. 8 evinces that for 0 ≲ η ≲ 0.5 the limit ξ → +∞ on the brane alters to (5)ξ → −∞ for values of R ∼ 0. The range −1 ≲ η ≲ −0.5 is analyzed for the 5D Kretschmann invariant as well. Now the case Λ = 1 is regarded. Fig. 9, regarding the 4D invariant ξ, must be compared to Fig. 10, describing the 5D invariant (5)ξ. A similar pattern is realized in this case, where now Fig. 9 shows that ξ → +∞ for R → 0. The Λ = −1 case is regarded in the corresponding figures as well.
V. CONCLUDING REMARKS
The bulk metric near the brane was obtained, and in particular the black string warped horizon, for inhomogeneous dust and generalized dark radiation on the brane-world as well as inhomogeneous dark radiation in the bulk. The standard dark radiation [46,52-54] is analyzed as a particular case, in which the dark radiation parameter is η = −1 and the mimicked cosmological constant Λ equals zero, corresponding to a vanishing 5D pressure as well.
By analyzing the 4D and 5D standard Kretschmann invariants, respectively defined by Eqs. (58) and (59), the Gauss equation is shown to imply that the bulk associated with brane-world models with inhomogeneous dust and generalized dark radiation inherits the brane-world physical singularities at R = 0 and at R = R_h, where R_h is the solution of Eq. (42). Although the black string warped horizon is obtained as a particular case of the 5D bulk metric near the brane via an effective method, the analysis of the 5D bulk singularities relies on an exact method provided by the Gauss equation (57).
The 4D and 5D Kretschmann-type invariants are respectively given by Eqs. (61) and (62). The Appendix collects the components of the energy-momentum tensor, the extrinsic curvature obtained from Eq. (45), and the necessary components of the Riemann tensor.
"Physics"
] |
A Techno-Economic Optimization and Performance Assessment of a 10 kWp Photovoltaic Grid-Connected System
The system under consideration in this paper consists of a photovoltaic (PV) array with a 10 kWp capacity, battery storage, and a connection to the grid via a university grid network. The system meets a local load of 4-5 kVA. The system is located in Ethiopia, and the authors give details of the location and solar resource so that its performance can be assessed. However, the performance assessment is specific to the details of the installation and the operational rules, including the variable nature of the load profile, the charging and discharging of the battery storage, and importing from and exporting to the university grid. The nearby load is mostly supplied from the PV and grid sources, and hence the installed battery is found to be idle, showing that the PV system together with the storage battery was not utilized in an efficient and optimized way. This in turn resulted in inefficient utilization of sources, increased dependency of the load on the grid, and hence unnecessary operational expenses. Therefore, to alleviate these problems, this paper proposes a means for techno-economic optimization and performance analysis of an existing photovoltaic grid-connected system (PVGCS) by using data collected from the plant data logger for one year (2018) with a model-based Matlab/Simulink simulation and the hybrid optimization model for electric renewables (HOMER) software. According to the simulation result, a PVGCS with a 5 kWp PV array is recommended as the optimized system, which provides a net present cost (NPC) of €5770 and a cost of energy (COE) of 0.087 €/kWh, compared to the existing 10 kWp PV system, which results in an NPC of €6047 and a COE of 0.098 €/kWh. Therefore, the resulting 5 kWp PV system connected with a storage battery was found to be more efficient and techno-economically viable as compared to the existing 10 kWp PVGCS plant.
Introduction
Growing energy demand and the availability of solar resources have led countries to establish large distributed energy systems to secure their energy requirements. The photovoltaic grid-connected system (PVGCS) under study was a demonstration project in Ethiopia installed in October 2010/11 GC at Bahir Dar University, located 578 km northwest of the capital city Addis Ababa. Ethiopia, located in the eastern part of Africa between 3° and 15° north and 33° and 48° east, has abundant solar energy resources and has already implemented a PVGCS at Bahir Dar University as well as different standalone systems in rural areas.
Since Ethiopia is located near the equator, it has significant potential to use solar energy. The national annual average irradiance is estimated to be 5.2 kWh/m²/day, with seasonal variations that range from a minimum of 4.5 kWh/m²/day in July to a maximum of 5.9 kWh/m²/day in March. Though its size is smaller than average, the installed 10 kWp PVGCS under study was the first installed photovoltaic (PV) plant in the country, used as a milestone with the objectives of establishing a small-scale center of excellence in renewable technologies, organizing and providing teaching facilities for undergraduate (UG) and postgraduate (PG) students, and supporting researchers in data registry and system analysis. Besides, this plant aimed to encourage the national strategy for the country to launch renewable energy technologies and energy efficiency programs, planning for the installation of thousands of megawatts of photovoltaic (PV) plants [1]. The country is currently trying to establish a large system of 100 MW PV plants in one of its regions, called "Methehara", with the company group Enel [2]. The grid-connected photovoltaic system under study was installed and has been functional since October 2010. The PVGCS plant was mounted to supply power for the local load and feed excess energy to the grid, providing 4.5-5 kW of power to the commercial loads, while the remaining 5 kW is fed to the nearby university's utility network. The existing PVGCS consists of a total of 56 Webel Solar W1750 PV modules (each rated at 180 W) [3], 24 cells connected in series with a total of 48 V of battery storage, and a charge controller. Renewable energy sources connected to the utility network need proper evaluation of their performance, as they impose perturbations on the grid due to their intermittent nature. The researchers mentioned in Section 2 assessed the performance of grid-connected photovoltaic plants and hence their negative and positive impacts on the grid, while the factors which affect their performance were also studied. In addition to the technical performance analysis of the PV system, the plants' economics was also studied. The economic performance evaluation of a building-integrated PV (BIPV) system installed with battery energy storage was conducted for a southern Norwegian house to evaluate its contribution to minimizing the annualized energy cost, and showed that a BIPV system with energy storage is cost-effective with a levelized cost of energy (LCOE) of 0.439 NOK/kWh [4]. A feasibility assessment of PVGCS systems for residential buildings in Saudi Arabia was conducted considering their techno-economic viability, and found that the systems were feasible with a levelized cost of energy of 0.0382 $/kWh and a net present value of $4378 [5]. A techno-economic evaluation of PVGCS for households was performed with a feed-in tariff and time-of-day tariff technique using a hybrid optimization model for electric renewables (HOMER) [6], and the result indicates that the NPC and COE approach zero for low-range household consumption with a BIPV application. In [7], Chiacchio et al. proposed a model-based approach to analyze the techno-economic feasibility of grid-connected PV power plants using ambient variables and stochastic hybrid fault tree automation as input variables, and the system was evaluated using the net present value and the payback time. Hoppmann et al.
[8] studied the economic viability of integrating battery storage with small-scale PV plants for residential applications. A techno-economic simulation was utilized for evaluating the profitability of battery storage integrated with PV systems. The net present value as a function of storage and PV system size for different electricity scenarios was evaluated, which showed that the use of battery energy storage was economically viable. Nge et al. [9] proposed a method for maximizing revenue over a given period. A real-time energy management system connected with the smart grid was used, focused on a reactive real-time control mechanism. The simulation of the system considered the days and solar irradiance profiles as input variables. Liu et al. [10] conducted an optimization of a photovoltaic system connected with battery energy storage and electric vehicle charging stations. The method utilized a cumulative prospect theory and particle swarm optimization algorithm to determine an optimal photovoltaic/battery energy storage/electric vehicle charging station (PBES) system. Besides the optimization of PV plants, D'Adamo et al. [11] studied the economic viability of PV plants installed in public buildings using a discounted cash flow methodology. With this method, input variables of the insolation level, plant size, self-consumption share, and electricity purchase price were identified, and accordingly the net present value and payback time periods were estimated. In addition to these variables, the operational performance of grid-connected PV systems also depends on the climatic conditions, the orientation and inclination of the installed PV array, the load profile, and the inverter efficiency [12-14]; these factors were considered in the current study.
In this paper, the performance and techno-economics of the PVGCS integrated with the battery storage system were assessed. The main contribution of this paper includes the use of a techno-economic model for evaluating the performance status of the existing PVGCS plant and performing a comparative analysis of this plant against the proposed 5 kWp PV plant in terms of their respective net present cost and cost of energy values. This, in turn, enables the identification of the optimal and economical sizing of the entire plant components and the efficient utilization of the storage battery to minimize excess electricity. Using the available solar irradiance and dynamic load profiles as inputs to the model, the optimization and sensitivity analysis results give the most optimal combination of battery storage, PV array, converter, and grid network usage. To conduct the operational performance analysis of the existing plant, different performance indicators related to the energy yield of the entire plant and the performance ratio (Pr) were used. The techno-economic analysis of the PVGCS plant was conducted using a model-based simulation with the hybrid optimization model for electric renewables (HOMER Pro) software. This software is a professional micro-grid analysis tool with Pro edition features, version 3.10.6499.25914, manufactured by HOMER Energy LLC, Denver, Colorado, United States of America. In addition to the techno-economic study, the entire PVGCS system was modeled in the Matlab simulation environment to analyze the dynamic response of the plant to the grid network. The results of this study can be used as valuable input for other photovoltaic plants to be installed and connected with the grid system and hence enable efficient utilization of PV plants in Ethiopia.
The rest of the paper is organized as follows: Section 2 discusses a state-of-the-art review on techno-economic assessments of grid-connected PV plants. Section 3 describes the proposed model-based simulation methodology for techno-economic analysis and optimization; an overview of the existing infrastructure of the PVGCS is also discussed. Section 4 describes the existing performance status and techno-economic result of the existing 10 kWp plant compared with the proposed 5 kWp PVGCS system. Section 5 presents the discussion. Finally, the conclusion, limitations, and recommendations are offered in Section 6.
State of the Art Review on Techno-Economic Studies of Grid-Connected Photovoltaic Systems
Photovoltaic based renewable energy systems can be found in two forms: standalone and grid-connected systems. Standalone PV systems are the primary source of energy for the load, while grid-connected PV systems are utilized when a shortage of energy occurs in the grid network [15]. However, PV energy systems connected to the grid can impose instability problems due to their intermittent nature and high penetration [16]. Therefore, technical and economic aspects need to be studied before the connection of PV systems to the grid network as well as during their operational period after they are connected. So far, the scholars mentioned in this section have studied the performance assessment, modeling, and techno-economic advantages of photovoltaic plants and their impacts on the grid network. Zhou et al. [17] developed a hybrid optimization model for electric renewables (HOMER) based simulation and optimization technique for evaluating the techno-economic aspects of a rooftop solar PV system in different climate zone conditions. Based on the analysis made, the results showed that grid-connected PV systems are technically and economically feasible for all climate zones considered. The excess electricity, net present cost, and cost of energy values of the grid/PV systems increased with an increase of PV penetration. This paper lacks an evaluation of the plant performance in terms of energy production and an impact assessment on the grid, and this was identified as a gap. In addition to the techno-economic study, Lau et al. [18] analyzed the effects of individual component costs and feed-in tariffs on grid-connected PV systems in a Malaysian residential application. Based on the analysis made, the results indicate that grid-connected PV systems were feasible for a PV system with costs of $1120/kW or lower. Though the authors provide details of the PV plant's economic aspects, an analysis of the storage battery and its impact on reducing grid dependency is not well addressed. In contrast to the focus of the above authors, Emmanuel et al. [19] presented a performance and economic analysis of a 10 kWp PVGCS system installed in Wellington, New Zealand. The economic analysis of the PV system was performed by considering the net present cost (NPC), cost of energy (COE), and payback period, and this gives promising values for the plant. In line with the techno-economic study of PV plants, Irwan et al. [20] conducted an optimization of a photovoltaic grid-connected system, which was performed by considering two scenarios. The first investigated the maximum yield factor as a technical matter, and the second involved maximizing the net present cost value for the economic case. For solving the sizing and optimization issues, a method called the evolutionary programming sizing algorithm (EPSA) was applied. This algorithm mainly uses the PV module and inverter model as the decision variables with an objective function of maximizing the technical or economic performance of the system. In addition, the operational optimization of grid-connected PV plants and the evaluation of small-scale PVGCS were studied (Table 1). Sidrach et al.
[21] assessed the performance of a 2 kWp grid-connected PV system without an energy storage system in Spain in 1998. A single-phase inverter is used to convert the DC output to AC and connect with the grid network, which is also used for maximum power point tracking (MPPT) purposes. Based on the study made, the system supplied 2678 kWh to the grid within one year, with 7.4 kWh of average daily energy, and a monthly average value of daily system efficiency between 6.1% and 8% was attained. For an annual final yield of 1361 kWh/kWp, the daily final yield ranged between 2.2 kWh/kWp in January and 4.8 kWh/kWp in March. The annual performance ratio of this system was 64.5%. A lack of evaluation of the battery storage's contribution to the system is the gap observed in this paper. Sabounchi et al. [22] investigated the performance of a 36 kWp installed-capacity photovoltaic based distributed generation system connected to the 400 V low-voltage side of a distribution network, based on the actual weather conditions of temperature and solar radiation availability. Considering the plant's one-year performance data, it was concluded that the system performance and efficiency were affected by dust particle deposition on the PV modules' surfaces. Therefore, to attain a reasonable power production, monthly cleaning of the PV module surfaces on a regular basis was recommended as a solution. Though the power system performance status and the factors affecting it were presented in detail, the grid's positive and negative impacts were not mentioned. Common performance evaluation indices were used by Sharma et al. [23], who also conducted a performance assessment of a 190 kWp grid-integrated solar photovoltaic power plant installed at Khatkar-Kalan, India. Based on the study, the final yield, reference yield, and performance ratio vary from 1.45 to 2.84 kWh/kWp/day, 2.29 to 3.53 kWh/kWp/day, and 55% to 83%, respectively. The average annual energy yield of the plant was found to be 812.76 kWh/kWp with a system efficiency of 8.3%. The study also showed that March, September, and October are the months where the maximum energy is generated, and minimum energy is attained in January. Besides, the study suggested that installing solar panels at an optimized tilt angle is very important in providing economic benefits, but the dynamic response of the plant related to transient effects on the grid was not included in the paper.
Kymakis et al. [24] conducted a performance analysis of a grid-connected photovoltaic park located in Crete, Greece. The photovoltaic park has a peak power of 171.36 kWp. One year of operation data of the plant was considered for estimation of the performance ratio and the various causes of power losses, including those due to temperature, soiling, the internal network, and power electronics. The PV park injected 229 MWh of energy into the grid during 2007, ranging from 335.48 to 869.68 kWh throughout the year. The final yield (YF) ranged from 1.96 to 5.07 h/d and provided an annual performance ratio of 67.36%. Similarly, the dynamic response of the plant was not included in the paper. Edalati et al. [25] performed a comparative analysis between mono- and polycrystalline PV module performance under semi-moderate and dry climate conditions. The study was conducted on an existing 11.04 kWp PVGCS system. Besides, an experimental investigation of this plant was done using one year of meteorological and performance data of the plant. Accordingly, an average daily final yield (YF) of 5.24 kWh/kWp/day and a performance ratio (Pr) of 80.81% were found. Ayompe et al. [26] conducted a performance assessment of a 1.72 kWp building-integrated system, and the temperature and solar radiation data were used for the analysis of the final yield, array yield, and performance ratio results. Based on the results found, the average daily final yield and performance ratio were found to be 2.41 kWh/kWp/day and 81.5%, respectively. The annual total energy generated was also found to be 885.1 kWh/kWp. The authors utilized experimental and meteorological data for analysis purposes but lacked verification of the result by comparing it with standards or previous studies. Province [27] analyzed a photovoltaic grid-connected system with 500 kWp capacity that was investigated for its actual performance. Based on the assessment done, 383,274 kWh of system energy was generated in the first eight months and the daily average energy production was 1695.9 kWh. The final yield was found to be in the range of 2.91 to 3.98 h/d, and the performance ratio (Pr) ranged from 70% to 90%. Mondol et al. [28] assessed a 13 kWp roof-mounted grid-connected photovoltaic system in Northern Ireland, which was analyzed on an hourly, daily, and monthly basis using parameters of yield and loss indices. Based on the analysis result, the monthly average performance ratio was consistent, found to be 70% for the DC and 61% for the AC system. A comparative analysis between two different PVGCS plants was performed by Micheli et al. [29]. Different module technologies were considered and their actual performance assessed using temperature and irradiance effects. The two types of plants investigated confirmed a good performance, with a performance ratio of 89.1% for CTB (Centrale Tecnologica di Basovizza), found on the campus of Basovizza, northern Italy, and 82.7% for the Q2 building. Accordingly, the best technology type was recommended, but which external factors can affect each technology type was not explicitly stated. The combination of module technologies provides an array used in PV plants, and the significance of different PV arrays was compared and investigated by El et al.
[30] for a 15 MWp grid-connected PV system installed in Nouakchott, Mauritania. Analysis and monitoring results between arrays were compared with PV systems installed in other locations. Besides, performance indices were used for evaluation purposes and a mean performance ratio of 67.96% was achieved. Drif et al. [31] analyzed a 200 kWp PVGCS system using production data monitored at the plant through 2000-2003. According to the study, the annual average energy production was 168.12 MWh per year, representing 6.40% of the university campus's total consumption. An average daily annual performance value of 65% and an average energy yield of 3.91 were found. Shukla et al. [32] conducted a feasibility study of a 110 kWp grid-connected photovoltaic plant to be applied for water pumping, lighting, and other electrical appliances of a selected hospital building. Its feasibility was mainly proved with the help of the energy production and performance ratio outputs. The PV modules were modeled and simulated to determine performance ratios and energy yield, and it was found that the performance ratio (PR) of the PV systems varied from 70% to 88% and their energy yields ranged from 2.67 kWh/kWp to 3.36 kWh/kWp. The authors addressed the feasibility and performance of the plant in terms of yield factors, but the economic viability of the system was not addressed for the dedicated application.
In addition to the technical aspects summarized above, the economics of PV hybrid energy supply systems was assessed by different authors. Some of the studies conducted are briefly presented as follows. Peerapong [33] analyzed the increased utilization of photovoltaic resources in diesel-based hybrid energy systems to reduce the cost of electricity generation and decrease the harmful emissions from fossil fuels. The study used a method of net present cost (NPC) estimation to evaluate the optimum hybrid system. The result shows that the hybrid system reduces NPC and COE. The hybrid system can also reduce all air pollutants for sustainable electricity in rural islands. The COE decreases from $0.429/kWh to $0.374/kWh compared to the existing diesel-based system, and the emissions of carbon dioxide can be decreased by 796.61 tons/year and those of other gases by 21.47 tons/year. The hybrid PV/diesel system also reduces diesel fuel consumption by 302,510 L per year due to an optimal 41% PV resource share in this system. The authors describe the significance of using a hybrid system compared to diesel, but the contribution of each component, including the storage battery, was not mentioned. Mamaghani et al. [34] presented an evaluation of an off-grid energy supply system that consisted of a diesel generator, solar panels, and wind turbine units. A dynamic model of the plant was developed with HOMER software to perform a complete parametric analysis of the system configurations and select the most convenient one based on economic perspectives. Accordingly, the net present cost (NPC), initial capital cost, and cost of energy (COE) were used as economic indicators to define the techno-economic feasibility of the hybrid energy supply system. The gap observed in this paper is the lack of verification of the grid extension distance results against standards or references. A comparative study of a hybrid energy system with standalone diesel was conducted by Dursun et al. [35], considering the techno-economic feasibility of hybrid renewable energy sources with a battery over that of a standalone diesel system to supply a load at a remote location in Turkey. HOMER software was used for the analysis by considering solar and wind data sources for the hybrid system over the diesel system, utilizing different solar global irradiances, wind speeds, and diesel prices. The result suggested that the hybrid system reduces the total NPC and COE and the dependency on the diesel system. Using this hybrid system, the COE decreased by almost 25%. Though the authors presented the advantage of using the hybrid system to reduce diesel dependency, the results on the contribution of the storage battery system and the payback period of the system are lacking.
From the literature reviewed above, the following points were summarized:
• The use of numerical analysis and simulation is essential to optimize the techno-economic benefits of the PVGCS system, which in turn reduces costs.
• The utilization of external factors like temperature, GHI, and tilt angle helps to determine the correct simulation output of the system.
• A hybrid system provides a better output based on the net present cost and cost of energy.
Different researchers have studied the techno-economic analysis of grid-connected photovoltaic plants under different weather conditions in different countries. However, the performance evaluation parameters and indices utilized in the above-reviewed papers were not all-inclusive.
In this paper, all primary performance evaluation indices mentioned as a gap in some of the above-reviewed papers were considered to conduct a performance assessment and study the techno-economic optimization of the PVGCS system.
Conducting the techno-economic optimization of the PVGCS integrated with a storage battery system provides major significance in terms of:
• Evaluating the techno-economic aspect and performance status of the existing 10 kWp grid-connected PV system.
• Proposing a techno-economically viable combination for the new PVGCS system.
• Identifying the contribution of the PV plant to the grid network.
• Using it further as an input for PV plants connected with the country's grid network in the future.
This paper considers the features of the above-reviewed articles and proposes an analytical and model-based techno-economic evaluation method for the PVGCS plant under study.
Methodology and Overview of the Study
The PVGCS system installed on a 160 m² area near the postgraduate laboratory rooms is shown in Figure 1. The PV system is composed of 56 Webel Solar W1750 PV modules. The PV array is installed at an optimal inclination angle of 16° and an azimuth angle of 0°. Each electrical module provides a short circuit current of 5.35 A, an open circuit voltage of 44.5 V, a current at the maximum power point (IMPP) of 4.96 A, and a voltage at the maximum power point (VMPP) of 36.3 V [3]. Under these conditions, the nominal power of the PV array is approximately 10 kW at its peak. The PV plant is connected to the grid, and the electrical energy produced by the PV system is fed to the nearby load while the remaining 4.8 to 5 kW is injected into the grid. The storage battery connected to the plant has a voltage of 48 V, with 24 cells connected in series and a capacity of 800 Ah. From the datasheet, a Hoppecke OPzS 800 lead-acid battery is installed at the plant with a nominal discharge capacity of 800 Ah (Appendix B, Table A3). The discharge capacity with a 10 h discharge is considered to be a maximum of 915 Ah at a cell voltage of 2 V. A cycling capability of 1500 cycles at 80% depth of discharge (DOD) and 20 years of life expectancy are provided. As shown in Figure 2, there are two arrays, each providing a 5 kWp capacity; when the array outputs exceed the local load, the excess energy is dedicated to charging the battery, and any remaining excess energy is fed to the grid after the local load and the battery charging condition are satisfied. The second array always gives priority to charging the battery whenever it is fully discharged. The battery is discharged whenever there is no power from the PV system.
The solar radiation availability and potential of different cities of the country is presented in Figure 3. The plant is installed at a geographic location of 11°35′46″ N, 37°23′39″ E at an elevation of 1789 m.
In addition to the above map, the data presented in Appendix A (Table A1) show the average daily global radiation on the horizontal surface of different cities, including Bahir Dar, which is the site under study. The daily sun hour data of Bahir Dar was found to be around 5.8 kWh/m², as indicated. For the sake of comparative analysis, the percentage difference of radiation data between data centers is also provided. The data shown in Appendix A (Table A2) were also collected for the city of Bahir Dar at a latitude of 11°35′46″ (11.60°) north and a longitude of 37°23′39″ (37.40°) east. This includes the sunlight potential of the city and the clearness index, together with the respective ambient temperature.
Performance Parameters Formulation
The characterization of the PV plant production and overall yield was evaluated using the performance indices defined by the IEC 61724-3 standard [38] for assessing the plant performance status. This was achieved through analysis of the energy produced by the PV plant, the energy output of the battery, and the energy injected into the grid with the help of the performance evaluation parameters. Accordingly, daily values and monthly mean values are considered, and the results are provided in terms of the evaluation parameters summarized in Table 2:
• Reference Yield (Yr): the ratio between the horizontal inclined irradiance (Ginc or H) and the irradiance at nominal conditions (Go).
• Final Yield (YF): the ratio of the energy output injected into the grid (Einj or Eac) to the nominal power of the PV array.
• Capture Losses (Lc): the losses occurring during the operation of the PV modules, indicating the amount of time the array would need to operate at its nominal power to make up for these losses.
• System Losses (Ls): obtained as the difference between the PV array productivity and the overall productivity.
• Performance Ratio (Pr): the ratio of the generated energy to the energy that would be generated by a lossless PV plant, which shows the performance status of the PV plant.
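As an illustration of how these indices can be computed from monthly monitoring data, the sketch below implements the definitions of Table 2 in Python. P0 (10 kWp nominal power) and Go = 1 kW/m² follow the definitions above, while the monthly irradiation and energy figures are hypothetical placeholders, not plant measurements.

```python
# Minimal sketch of the IEC 61724-3 style yield and loss indices of Table 2.
# The monthly figures below are illustrative placeholders, not plant data.

P0_kwp = 10.0      # nominal (peak) power of the PV array [kWp]
Go = 1.0           # reference irradiance at nominal conditions [kW/m^2]

# Hypothetical monthly monitoring data
H_ginc = 180.0     # global in-plane irradiation for the month [kWh/m^2]
E_array = 1300.0   # energy delivered by the PV array, Ea [kWh]
E_inj = 960.0      # energy injected into the grid, Einj [kWh]

Yr = H_ginc / Go           # reference yield [h] (equivalent sun hours)
Ya = E_array / P0_kwp      # array yield [kWh/kWp]
Yf = E_inj / P0_kwp        # final yield [kWh/kWp]

Lc = Yr - Ya               # capture losses [h]
Ls = Ya - Yf               # system losses [h]
Pr = Yf / Yr               # performance ratio [-]

print(f"Yr={Yr:.1f} h, Ya={Ya:.1f}, Yf={Yf:.1f} kWh/kWp")
print(f"Lc={Lc:.1f} h, Ls={Ls:.1f} h, Pr={Pr:.2%}")
```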
Modeling of Grid-Connected PV System
In order to model the PVGCS using Matlab, the PV module datasheet and model parameters at the plant were considered at standard temperature conditions of 1000 W/m² and 25 °C, and the necessary parameters were defined. The daily radiation and load profile data were used as inputs for the PV array and the nearby load when modeling the overall system. The main purpose of modeling the existing PVGCS system using the Matlab software is to study the plant's dynamic response and characteristics. The datasheet of the PV module used for the system modeling is presented in Table 3.
PV Array Performances
Before analyzing the production aspect of the PV plant, it is recommended to analyze the dynamic characteristics of the PV array in terms of its current, voltage, and power output. Together with the production and manufacturer data of the modules provided, important parameters of the modules' outputs were estimated, which in turn were used to evaluate the plant performance together with the formulated performance indices. Accordingly, the array voltage, current, and power output were checked for two different scenarios. The first case is when the temperature is fixed at 25 °C while the irradiance is varied over 1000 W/m², 500 W/m², and 100 W/m². The second case is when the irradiance is fixed at 1000 W/m² and the temperature is varied over 25 °C, 45 °C, and 50 °C.
As shown in Figure 4, the output current (which varies from 19 to 150 A) is more affected by irradiance than by temperature. In contrast, the voltage is slightly affected by temperature and has an inverse relation with it: the higher the temperature, the lower the voltage, and vice versa. The current versus voltage (I-V) and power versus voltage (P-V) output curves of the PV array are shown in Figures 4 and 5.
Figure 5 shows that the power output (which varies from 1100 to 10,000 W) is affected by both irradiance and temperature, while the voltage is slightly affected by temperature and has an inverse relation with it: the higher the temperature, the lower the voltage, and vice versa. Hence, the I-V and P-V simulation outputs of the PV array are coherent with the manufacturer datasheet curves.
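For readers who want to reproduce curves of the kind shown in Figures 4 and 5, the sketch below uses a common simplified explicit PV model (not the exact Simulink model of this study) fed with the Webel Solar W1750 datasheet values quoted above; the temperature coefficients and the series/parallel layout of the 56-module array are assumed here for illustration only.

```python
import numpy as np

# Simplified explicit PV model: I = Isc*[1 - C1*(exp(V/(C2*Voc)) - 1)].
# Module datasheet values quoted in the text (Webel Solar W1750):
Isc, Voc, Impp, Vmpp = 5.35, 44.5, 4.96, 36.3

# Assumed values, for illustration only (not from the paper):
alpha_i = 0.0005 * Isc        # current temperature coefficient [A/°C]
beta_v = -0.0034 * Voc        # voltage temperature coefficient [V/°C]
n_series, n_parallel = 14, 4  # hypothetical layout of the 56 modules

def module_current(G, T, V):
    """Module current at irradiance G [W/m^2], cell temperature T [°C], voltage V [V]."""
    isc = Isc * G / 1000.0 + alpha_i * (T - 25.0)
    voc = Voc + beta_v * (T - 25.0)
    C2 = (Vmpp / Voc - 1.0) / np.log(1.0 - Impp / Isc)
    C1 = (1.0 - Impp / Isc) * np.exp(-Vmpp / (C2 * Voc))
    return np.clip(isc * (1.0 - C1 * (np.exp(V / (C2 * voc)) - 1.0)), 0.0, None)

for G, T in [(1000, 25), (500, 25), (100, 25), (1000, 45), (1000, 50)]:
    v = np.linspace(0.0, Voc, 500)              # module voltage sweep
    i = module_current(G, T, v)
    p_array = n_series * n_parallel * (v * i)   # scale module power to the array
    k = np.argmax(p_array)
    print(f"G={G:4d} W/m^2, T={T:2d} °C -> "
          f"Pmpp~{p_array[k]:7.0f} W at array V~{n_series * v[k]:5.1f} V")
```

With the datasheet values above, the estimated array maximum power at 1000 W/m² and 25 °C comes out close to the 10 kWp nameplate, which is the consistency check referred to in the text.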
Methodological Steps for the Proposed Study
A summary of the methodology utilized for conducting the overall performance analysis of the PVGCS is shown in Figure 6. Modeling, simulation, optimization, and sensitivity analysis of the PVGCS for the selected site were done using Matlab and HOMER Pro, providing the typical data as input variables.
PVGCS System Modeling
The grid-connected photovoltaic system was modeled using the HOMER software; the model consists of a 10 kWp PV system, a grid-connected converter of 5 kW capacity with Hoppecke 8OPzS800 lead-acid battery storage, and approximately 5 kW of load.
The microgrid system, consisting of load, PV array, grid, battery, and converter components modeled with their respective actual ratings as installed at the plant, is described below. The schematic representation of the overall PVGCS system is shown in Figure 7.
Daily Load Profile
The daily electrical load profile of the area under study is based on the consumption of the nearby computer laboratory installed as a facility for students. According to the data collected at the plant, the selected primary load has a maximum energy consumption of 16 kWh/day with a peak demand of 4.97 kW. Based on the input data of the site's daily load profile, the average load profile obtained from HOMER is shown in Figure 8.
Solar PV Modules
As described in the above sections, the solar radiation data were taken from the plant and from NASA [43]. Based on the data collected from the plant and the analysis made, an average solar radiation of 5.8 kWh/m²/day and a clearness index of 0.6 were obtained, as shown in Figure 9. The clearness index is a measure of the clearness of the atmosphere, expressed as the fraction of the solar radiation that is transmitted through the atmosphere to strike the surface of the Earth. The power output of this photovoltaic plant depends on the atmospheric and geographical conditions. The PV array power output (P_PV) can be calculated as
P_PV = G_PV · f_PV · (I_T / I_T,STC) · [1 + α_P (T_C − T_C,STC)],   (1)
where I_T is the solar radiation incident on the PV array in kW/m², G_PV stands for the PV rated capacity in kW, f_PV represents the derating factor of the PV, I_T,STC represents the solar incident radiation at standard temperature conditions (STC), taken as 1 kW/m², T_C is the cell temperature of the PV in °C, α_P is the temperature coefficient of power (%/°C), and T_C,STC is the cell temperature under STC (25 °C) of the PV [44].
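A minimal sketch of this power-output relation is given below. The derating factor, cell temperature, and temperature coefficient are placeholder values chosen for illustration, not the parameters used in the HOMER model of this study.

```python
def pv_power_kw(g_pv_kw, f_pv, i_t, t_c,
                i_t_stc=1.0, alpha_p=-0.40, t_c_stc=25.0):
    """PV array output following the relation in Eq. (1) above.

    g_pv_kw : rated PV capacity [kW]
    f_pv    : derating factor [-]
    i_t     : incident solar radiation [kW/m^2]
    t_c     : cell temperature [°C]
    alpha_p : temperature coefficient of power [%/°C] (converted to a fraction below)
    """
    return g_pv_kw * f_pv * (i_t / i_t_stc) * (1.0 + alpha_p / 100.0 * (t_c - t_c_stc))

# Example with placeholder operating conditions for the 10 kWp array:
print(f"{pv_power_kw(10.0, 0.9, 0.8, 45.0):.2f} kW")
```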
Grid System
The photovoltaic grid-connected system (PVGCS) under study operates with the grid to increase the reliability of the supply system. Therefore, the load is able to draw its power whenever necessary. In the simulation for the case under study, the electricity price of the grid was considered to be $0.1/kWh and the grid net excess price was considered to be $0.05/kWh.
Battery
The Hoppecke 8OPzS800 lead-acid battery system existing at the plant is considered to supply the required energy when a power shortage occurs in the system. The excess electricity production in the microgrid is stored in the battery energy storage system, which provides power to the load when an energy shortage occurs in the system. The energy stored in the battery is given by [45]
B_bat = B_bat,0 + ∫ V_bat · I_bat dt,   (2)
where B_bat,0 denotes the initial battery charge while V_bat and I_bat are the voltage and current of the battery, respectively. The number of battery cells connected in series to attain the required voltage [45] is
N_bat = V_bus / V_bat,
where V_bus is the bus voltage of the microgrid system and V_bat is the voltage rating of a single battery cell. In addition, the maximum charge/discharge power of a single battery is described as
P_bat,max = V_bat · I_bat,max,
where I_bat,max represents the maximum charging current of the battery in amperes [46]. The battery consists of 24 cells connected in series, each having 2 V, for a total of 48 V.
Converter
The converter considered in this system is used as a bi-directional converter, changing direct current (DC) electric power to alternating current (AC) and vice versa. Its efficiency was assumed to be 95% for the size considered, and its lifetime is up to 15 years. The size of this converter mainly depends on the peak load demand (P_L,max) [45]. The rating of this converter is given as
P_conv = P_L,max / n_inv,
where n_inv stands for the inverter efficiency [46].
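The battery-bank and converter sizing relations above can be collected into a short sketch. The bus voltage, cell voltage, and peak load are the values quoted for this plant; the maximum charging current is an assumed placeholder, so the sketch is illustrative rather than the HOMER component model.

```python
# Series cell count, single-cell power limit, and converter rating,
# following the relations stated above. Values are those quoted in the text
# where available; everything else is an assumption for illustration.

v_bus = 48.0        # DC bus voltage of the microgrid [V]
v_bat = 2.0         # nominal voltage of one lead-acid cell [V]
i_bat_max = 80.0    # assumed maximum charging current [A] (placeholder)
p_load_max = 4.97   # peak load demand [kW]
n_inv = 0.95        # inverter efficiency

n_cells = round(v_bus / v_bat)                 # cells in series -> 24
p_bat_max_kw = v_bat * i_bat_max / 1000.0      # max power of a single cell [kW]
p_conv_kw = p_load_max / n_inv                 # converter rating [kW]

print(n_cells, f"{p_bat_max_kw:.2f} kW", f"{p_conv_kw:.2f} kW")
```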
Parameter Formulation of System Economics
The economic viability of batteries and microgrid systems was evaluated based on the levelized cost of energy (LCOE) and the total net present cost (NPC) of the system.
Estimation of Net Present Cost (NPC)
The total sum of the initial, replacement, and operation and maintenance costs, including the fuel cost, minus the revenue provides the value of the total net present cost (NPC). The total NPC was calculated as
NPC = C_ann,tot / CRF(i, n),
where C_ann,tot is the total annualized cost, CRF(i, n) represents the capital recovery factor, and n and i represent the number of years and the real annual interest rate, respectively [45]. The capital recovery factor is defined as
CRF(i, n) = i(1 + i)^n / [(1 + i)^n − 1],
with the real interest rate given by i = (i_nom − f)/(1 + f), where i_nom stands for the nominal interest rate and f represents the yearly inflation rate.
Estimation of Cost of Energy (COE)
The cost of energy (COE) is among the main parameters used for evaluating the economic effectiveness of a given energy system. The COE is defined as the annual cost of all system components divided by the total generated energy [47], and is given by
COE = C_A / E_S,
where E_S is the yearly energy supplied and C_A is the total annual cost [45]. The total annual cost is the sum of the operation and maintenance cost, the capital cost, and the replacement cost.
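The NPC and COE relations can be combined into a small sketch. The cost figures and rates below are illustrative placeholders, not the inputs or outputs of the HOMER optimization reported later in the paper.

```python
def crf(i, n):
    """Capital recovery factor for real interest rate i and project lifetime n [years]."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def real_interest(i_nom, f):
    """Real annual interest rate from nominal rate i_nom and yearly inflation rate f."""
    return (i_nom - f) / (1 + f)

# Illustrative placeholder figures (not the values of the studied plant):
c_annual_total = 520.0   # total annualized cost [EUR/yr]
e_served = 5400.0        # energy supplied per year [kWh/yr]
i = real_interest(0.08, 0.02)
n_years = 20

npc = c_annual_total / crf(i, n_years)   # total net present cost [EUR]
coe = c_annual_total / e_served          # cost of energy [EUR/kWh]
print(f"i={i:.3f}, NPC={npc:.0f} EUR, COE={coe:.3f} EUR/kWh")
```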
Results
After collecting the necessary radiation, temperature, and energy production data from the existing plant, a performance analysis was conducted by considering the yield, performance ratio, and loss indices. This helps to check the existing operation status and the contribution to the grid, including both positive and negative impacts. For the performance evaluation of the grid-connected plant, one year of data (2018) was collected from the plant data logger, and the analysis was performed based on monthly average values, as shown in Figures 10-14.
Meteorological Data at the Site
Relevant data, including the radiation and temperature, were collected and prepared for interpretation. The monthly global inclined radiation (H_Ginc) at the optimal angle of 16° versus the mean air temperature (Ta) is presented in Figure 10.
During the summer season (usually from June to September), the radiation is higher and the air quality is worse compared to the other months. It is also often very cloudy and rainy during this time in Ethiopia. During this period, the cell temperature rises above 60 °C and the average inclined radiation reaches 1000 W/m², with an annual average ambient temperature of around 25 °C. In addition to the inclined radiation and temperature relationship presented in Figure 10, the clearness index and radiation relationship shown in Figure 9 was used to analyze the effect of the atmosphere on the distribution of radiation.
Performance Analysis Result of the PV Plant
The energy delivered by the PV array (Ea) is the monthly PV array energy, and Einj is the energy injected into the grid. As shown in Figure 11, these two energies vary with the seasons in proportion to the monthly global radiation.
The difference between the two columns in Figure 11 gives an approximation of the energy consumed by the load, which ranges from 236 kWh per month in August to 616 kWh per month in March.
Array Yield (Ya), Reference Yield (Yr), and Final Yield (YF) of the PV Plant
The plant output is illustrated using common yield factors. The variation of the annual yield is also shown in line with the respective radiation availability at the site (Figure 12).
To find the maximum and optimal output from the plant, the final yield (YF) should equal the number of hours that the PV plant must operate at its nominal power of P0 = 10 kWp to produce the daily energy injected into the grid. The number of hours was found to be 58 h/month in July and 96 h/month (equivalently, kWh/kWp) in March. The final yield is higher in March and lower in July and August. This was caused by the higher and lower availability of radiation in the respective months of the year 2018.
Capture Loss and System Losses
The types of losses present in the PVGCS plant are shown in Figure 13, categorized as system losses and capture losses. These parameters enable us to identify the net production of the plant by taking the system losses into account.
The system losses (Ls), which affect the plant's final yield significantly compared to the losses caused by the ambient temperature and the excessively long wires across the entire plant, are most pronounced in December, March, and August, with losses of 60, 61, and 67 h/month, respectively. The capture losses in August are smaller compared to the other months due to the very low ambient temperature effect that occurs.
Performance Ratio (Pr) of the PV System
Among different performance evaluation indices, the performance ratio is the determinant factor in monitoring the plant's operational status (Figure 14).
The result of the performance ratio (Pr) was further interpreted and related to the PV system efficiency using the efficiency of the PV module considered at a standard temperature condition of 25 °C. The efficiency of the PV system is the product of the performance ratio (Pr) and the efficiency of the PV module at standard temperature conditions (STC), as indicated in Table 3.
From the performance assessment of the plant, it was found that the performance ratio value is 74%, which is a promising value compared to the results of other countries' experiences indicated in Table 1.
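A similarly small sketch shows how the performance ratio and the resulting system efficiency described above can be computed; the module efficiency used here is a hypothetical value, not the one from the paper's Table 3.

```python
def performance_ratio(yf, yr):
    """Performance ratio PR = final yield / reference yield."""
    return yf / yr

def system_efficiency(pr, eta_module_stc):
    """System efficiency = PR x module efficiency at STC."""
    return pr * eta_module_stc

pr = performance_ratio(96.0, 130.0)              # hypothetical monthly yields
print(round(pr, 3))                              # e.g. 0.738
print(round(system_efficiency(pr, 0.145), 3))    # e.g. 0.107 for a 14.5% module
```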
Simulation Result of the Entire PVGCS Using Matlab
Using a Simulink model of the entire grid-connected PV system under study, the dynamic characteristics and simulation output of the overall micro-grid system (the PV plant, the grid network, the battery power, the state of charge (SOC), and the load profile) are illustrated in Figure 15.
Figure 15 shows that, within 24 h of the day, the solar PV power generation is 0 W from 0 h to 6 h and from 20 h to 24 h and then reaches its peak power production of approximately 10,000 W (10 kW) from 14 h to 15 h. The maximum power output of the PV plant was found at the knee point where the current and voltage values reach their maximum. From the total power production, around 5 kW of power is fed to the load/battery and the rest (5 kW) is fed to the grid network. The storage battery, an 800 Ah lead-acid battery, supplies power when the output of the PV plant is insufficient and absorbs surplus power from the micro-grid when the PV power exceeds the electric load. In addition to supplying the loads, the battery is intended to serve as energy storage and for peak-shaving applications. The load consumption reaches its maximum of 5 kW at around 20 h, supplied by the battery storage system. From 0 h to 12 h and from 18 h to 24 h, battery control is performed by the battery controller, which performs tracking control of the current. From 12 h to 18 h, battery control is not performed, as during this time the state of charge (SOC) of the storage battery is fixed to a constant. Between 12 h and 18 h, the battery power is 0 W and, therefore, the load should get its power from the PV array or from the grid (power_secondary) system, as indicated on the relevant curves in Figure 15. At 12 h, the battery controller shifts the battery SOC value to a constant and the battery power becomes zero. Therefore, the grid (power_secondary) output curve should rise in line with the battery output curve, and the load consumption should increase as well. The battery output shows that the SOC of the battery decreased down to 75% as it discharged power to the load. The battery supplied the peak-hour load consumption of 5000 W at around 20 h, and its output decreased to 230 W at around 2 h. The last chart in Figure 15 represents the 24 h period of a day. The analytical and simulation results presented above cover the technical results of the existing 10 kWp grid-connected photovoltaic system. However, to evaluate the economic effectiveness and optimization of the plant, a HOMER software-based economic analysis was performed in the following sections.
Techno-Economic Optimization and Sensitivity Analysis Result
From the optimization result, the best optimal system configuration was identified with plant components including a 5 kW PV array, a 5 kW converter, and a battery bank with 24 strings of cells and a total voltage of 48 V. The result shows that, in the case of an energy shortage, the battery bank provides the necessary backup. The simulation results for both the existing plant with the 10 kWp PV system and the optimized 5 kWp PV system are summarized in Tables 4 and 5. HOMER Pro software provides the optimized results of the microgrid system assuming that, when there is a shortage of power from the PV and the battery, the load gets its power from the grid (power_secondary). From the sensitivity analysis result, the most optimal system type is identified and ranked on the basis of the total net present cost (NPC) and the levelized cost of energy (LCOE). The load demand, grid sales and purchase output, PV array power output and grid annual power generation, and state of charge (SOC) of the battery of the existing PVGCS system are shown in Figure 16. To identify the maximum and minimum energy production of the system, the months of maximum energy production (March) and minimum energy production (August) of the year are presented with sample one-week profiles. The power output of the PV array, grid sales and purchases, and demand requirements are shown in Figure 16, together with the storage battery state of charge conditions.
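Because the configurations above are ranked by the total net present cost (NPC) and the levelized cost of energy, a simplified sketch of how these two indices are commonly computed is given below; it uses generic textbook formulas with made-up cost figures and is not HOMER's internal routine or the plant's actual cost data.

```python
def crf(i, n_years):
    """Capital recovery factor for real discount rate i and project lifetime n."""
    return i * (1 + i) ** n_years / ((1 + i) ** n_years - 1)

def total_npc(annualized_cost, i, n_years):
    """Total net present cost from the total annualized cost."""
    return annualized_cost / crf(i, n_years)

def lcoe(annualized_cost, energy_served_kwh):
    """Levelized cost of energy per kWh actually served."""
    return annualized_cost / energy_served_kwh

c_ann = 500.0                                   # EUR/year, hypothetical
print(round(total_npc(c_ann, 0.06, 25)))        # NPC over a 25-year project
print(round(lcoe(c_ann, 5840.0), 3))            # EUR/kWh for 5840 kWh/year served
```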
Photovoltaic and Grid Electricity Production
A comparison between the PV and grid energy contributions to meeting the load's demand is presented in Figure 17. Accordingly, it is shown that the PV system provides the highest share of electricity production.
Battery State of Charge
As can be seen in Figure 18, the battery state of charge status is illustrated in terms of the hour of the day, day of the year, and month of the year. The results show that the SOC is kept within the specified limit of 20-100%. The maximum utilization of the battery storage is observed in the month of August.
Sensitivity Analysis Results
The effects of changing multiple variables, such as solar radiation, load demand, PV capital costs, grid outages, and related parameters that affect the economic output of the optimized PVGCS system, can be identified through the sensitivity analysis. The effect of these variables can be observed in terms of the variation of the NPC and COE values of the PVGCS system.
Effect of Solar Radiation Variations
A sensitivity analysis of the PVGCS system was performed considering solar radiation variation, ranging from 5.02 kWh/m²/day to 6.59 kWh/m²/day, for a load demand variation of 9.2 kWh/day to 23 kWh/day (an average of 16 kWh/day) and a grid power price of $0.05/kWh. From Figure 19, it is observed that the NPC value decreased as the solar radiation increased. On the other hand, the renewable fraction (RF) value increased as the solar radiation increased.
Effect of Load Change
As shown in Figure 20, the NPC and COE values were measured for the respective daily average load demand variations. The load demand variation ranged from 9.2 kWh/day to 23 kWh/day. From the results, it was found that, as the load demand increased, the NPC values also increased, whereas the COE decreased as the demand increased.
Discussion
In this paper, a techno-economic optimization study of the PVGCS system was conducted based on the data collected at the existing plant, together with meteorological sources and software-based simulation outputs. The dynamic response of the plant was studied using the Matlab/Simulink environment and, furthermore, the techno-economic aspects were analyzed using the hybrid optimization model for electric renewables (HOMER) software. As indicated in Figure 10, the performance assessment of the existing plant was analyzed using the radiation and temperature data and the corresponding performance evaluation indices. Accordingly, the meteorological data gathered were found to be coherent with those of the plant data acquisition system, which made the performance evaluation of the PV plant more effective. Figure 11 illustrates that the total annual energy produced by the PV array is 11,805 kWh, which varies from a minimum production of 561.8 kWh in July to a maximum of 1283 kWh in March. From the total energy produced, the yearly average energy fed into the grid in one year was approximately 5277 kWh, and the rest was supplied to the load. The difference between the two columns in Figure 11 gives an approximation of the energy consumed by the load, which ranges from 236 kWh to 616 kWh per month.
After analysis of the energy production by the source and the respective load consumption, the months with the highest and lowest operating hours of energy production were identified, as illustrated in Figure 12. Hence, it was found that the PV array yield (Ya) operated for 56 h/month (kWh/kWp) in June and about 128 h/month during March, when the average temperature was around 25 °C. For the reference yield (Yr), the maximum hours of operation at standard conditions were observed during the months of December, January, and March with 142, 140, and 144 h, respectively. The final yield (YF) should be equal to the number of hours that the PV plant must operate at a nominal power of P0 = 10 kWp in order to produce the daily energy quantity injected into the grid. The number of hours of operation in the month of July was 58 h/month, and the equivalent was 96 h/month (kWh/kWp) in March.
In addition to the production aspects, the system's losses were investigated, as indicated in Figure 13. For the capture losses (Lc), it can be observed that May, June, and July were the months with the highest losses, with values of 46, 54, and 49 h/month, respectively. This was commonly caused by the ambient temperature and the excessively long wires across the entire plant. On the other hand, system losses (Ls) were more commonly observed in December, March, and August, with losses of 60, 61, and 67 h/month, respectively. Such losses mainly occurred due to the overheating of inverters.
Evaluation of the PV plant performance ratio was also performed to identify the plant's operational efficiency; the ratio was found to be between 64% in June and 74% in the month of March, as indicated in Figure 14. This shows that the system is energy efficient, with a system efficiency range of 9.3% to 10.7%. The results of the performance ratio and efficiency indicate that the PVGCS plant can be categorized as a well-performing plant, as its overall performance ratio values were within the permissible limits compared to the values observed in other countries, which are summarized in Table 1. The plant dynamic response was also evaluated using Matlab software, as indicated in Figure 15. It showed the results of power production and battery response to be coherent with the plant output data. Hence, the plant dynamic response is good enough. In addition to charging the battery, the excess power from the PV plant is also sold to the grid based on an annual net metering system, which in turn reduces the average cost of electricity. The demand and the output power delivered by the PV are shown in Figure 16 for typical months. In addition, the power supplied by the PV and the battery state of charge are also illustrated. The PV power generation, the grid purchases and sales, the demand profile, and the battery state of charge were assessed for the months of August and March to see the minimum and maximum production in line with the effect of resource availability. The energy production of the PV and grid components considered in the PVGCS is also presented in Figure 17. Due to the high irradiance availability at the site where the plant is installed, the PV array took the lead in electricity production throughout the year compared to the grid. The average state of charge (SOC) output of the lead-acid battery is illustrated in Figure 18. In addition to the technical scenario, the economic aspect of the PVGCS system was analyzed using the hybrid optimization model for electric renewables (HOMER) software. As observed from Table 4, for the existing 10 kWp PV system, the yearly electricity production of 11,812 kWh/year from PV and 521 kWh/year from grid (power_secondary) purchases provides a total supply of 12,333 kWh/year. On the other hand, the electricity consumption of 5840 kWh/year from the AC primary load and 5292 kWh/year from grid sales provides a total consumption of 11,132 kWh/year. Furthermore, for the case of the optimized 5 kWp PV system, a yearly electricity production of 5906 kWh/year from PV and 976 kWh/year from grid purchases provides a total supply of 6882 kWh/year. On the other hand, the electricity consumption of 5840 kWh/year from the AC primary load and 1012 kWh/year from grid sales provides a total consumption of 6852 kWh/year, as indicated in Table 5. Here it is found that the production and consumption of energy are more balanced for the case of a 5 kWp PV array compared to a 10 kWp PV array system. Therefore, excess electricity is reduced from 14.47% to 1.04%, which is almost negligible for the optimized 5 kWp system. Based on the results, the total electricity production and consumption are more balanced and adequately optimized for the 5 kWp PV system than for the 10 kWp PV system under the same load condition. In addition to the plant's optimization output, a sensitivity analysis was also performed to indicate the effect of changing multiple values of input variables on the optimal system design. These variables can be the solar radiation, load demand, PV capital cost, and related parameters that might affect the optimal system's economic output. As shown in Figure 19, the effect of solar radiation variation was noticed in the NPC and RF value changes. The net present cost was found to decrease as the solar radiation increased. On the other hand, the effect of load demand changes is presented in Figure 20, which in turn resulted in the NPC and COE value changes of the optimized system. As the load increased, the NPC value increased but the COE decreased. The PVGCS system with a 5 kWp PV array provides an average NPC value of 5770 € and a COE of 0.087 €/kWh, compared to the 10 kWp PV system, which results in an NPC of 6047 € and a COE of 0.098 €/kWh. Moreover, the utilization time and production of the storage battery were also increased from 36.8 kWh to 76.5 kWh with the new optimized system. Therefore, from the study results, the entire plant was found to be more efficient and optimal, both technically and economically, with an optimized 5 kWp PV system compared to the existing 10 kWp PV system. According to the HOMER result, the optimal 5 kWp PV array configuration provides a system with efficient energy production satisfying the local load, a balanced system, negligible losses of electricity, and hence an economically viable system with better NPC and COE values. The methods of techno-economic analysis used in this paper can further be used for commercial sectors, while considering the respective resource and demand profiles available at the site to be investigated.
Conclusions and Recommendation
In this paper, a techno-economic and operational performance assessment of the PVGCS plant was conducted using collected data, simulation, and optimization of the entire plant. From the measured data at the plant, it was found that, for a yearly average radiation of 2378 kWh/m²/year, the PV plant produces an average energy of 11,805 kWh/year with consideration of losses. The results from the analytical performance evaluation technique show that the performance ratio value was found to be in the range of a minimum of 64% and a maximum of 74%. Accordingly, the system efficiency was found to be in the range of 9.3% to 10.7%. In addition, December, January, and March were the months in which the PV system was most efficient and produced the highest amount of energy. The performance status of the plant under study was compared with previous studies and verified as shown in Table 1, resulting in promising output levels. To evaluate the dynamic characteristics and response, the plant was modeled using Matlab, and it was verified that each component's output was within permissible limits and met the demand requirement as needed. Though the plant is performing well technically, according to the optimization and sensitivity results obtained from HOMER Pro, in terms of economic aspects the existing 10 kWp PV plant was not well optimized, leading to idle periods for the battery storage and resulting in excess electricity. Therefore, based on the simulation output, a 5 kWp PV system was found to be economically more optimal than the existing 10 kWp PVGCS system. Accordingly, the system with a 5 kWp PV array provides a net present cost (NPC) value of 5770 € and a cost of energy (COE) of 0.087 €/kWh, compared to the 10 kWp PV system, which results in an NPC of 6047 € and a COE of 0.098 €/kWh. The energy production of the storage battery also increased from 37 kWh to 77 kWh with a 5 kWp PV array system. Therefore, the entire plant was found to be more efficient and optimal, both technically and economically, with an optimized 5 kWp PV array system, demonstrating that the PV together with the battery storage unit needs to be properly sized for efficient operation of the system satisfying the variable load demand. In general, the results found from the plant performance assessment and the techno-economic analysis could be used as an input for intensive installation and effective utilization of PVGCS systems in Ethiopia.
In the future, the plant's overall dynamic properties and its stability impact on the grid can be further investigated by directly collecting solar radiation data at the plant for a couple of years; the plant performance and economics can then be evaluated more efficiently using a feed-in tariff estimation mechanism for electricity prices. In addition, the impact of a second-life battery energy storage system on reducing the levelized cost of energy for a grid-connected photovoltaic system could be investigated by comparing it with a fresh battery storage system.
... and an azimuth angle of 0°. Each electrical module provides a short-circuit current of 5.35 A, an open-circuit voltage of 44.5 V, a current at the maximum power point (IMPP) of 4.96 A, and a voltage at the maximum power point (VMPP) of 36.3 V [3]. Under these conditions, the nominal power of the PV array is approximately 10 kW at its peak.
Figure 4. I-V curve of the PV array.
Figure 5. P-V curve of the PV array.
Figure 6. Flowchart for a performance assessment of a photovoltaic grid-connected system (PVGCS).
Figure 7. Schematic diagram of the PVGCS system.
Figure 9. The global horizontal radiation HOMER result.
Figure 10. Monthly global inclined radiation and temperature.
Figure 11. PV array energy (Ea) and the energy injected to the grid (Eout).
Figure 12. Variation of Array yield (Ya), Reference yield (Yr), and Final yield (YF) in terms of hours per month.
Figure 13. Comparison between final yield and losses.
Figure 14. Performance Ratio of the PV system.
Figure 15. Power output curve of the PV, grid, load, battery, and battery SOC.
Figure 16. PV power generation and SOC of the battery for the load during: (a) the first week of August, (b) the third week of March.
Figure 17. Monthly average electricity generation of the PV and the grid.
Figure 18. State of charge (SOC) output of a lead-acid battery: (a) hours of the day vs. day of the year, (b) SOC vs. month of the year.
Figure 19. Effect of solar radiation variation on the NPC and RF of the PVGCS.
Figure 20. Effect of load demand variation on the NPC and COE of the PVGCS.
Table 1. Summary of the state-of-the-art review.
Table 2. Summary of the performance evaluation parameters.
Table 4. Optimization result summary of a PVGCS with a 10 kWp PV System.
Table 5. Optimization result summary of a PVGCS with a 5 kWp PV System. | 18,223.8 | 2020-09-16T00:00:00.000 | [
"Engineering"
] |
Kernel-Free Quadratic Surface Minimax Probability Machine for a Binary Classification Problem
In this paper, we propose a novel binary classification method called the kernel-free quadratic surface minimax probability machine (QSMPM), which makes use of the kernel-free techniques of the quadratic surface support vector machine (QSSVM) and inherits the advantage of the minimax probability machine (MPM) of having no parameters. Specifically, it attempts to find a quadratic hypersurface that separates two classes of samples with maximum probability. However, the optimization problem derived directly was too difficult to solve. Therefore, a nonlinear transformation was introduced to change the quadratic function involved into a linear function. Through such processing, our optimization problem finally became a second-order cone programming problem, which was solved efficiently by an alternate iteration method. It should be pointed out that our method is both kernel-free and parameter-free, making it easy to use. In addition, the quadratic hypersurface obtained by our method is allowed to be any general form of quadratic hypersurface. It has better interpretability than methods with kernel functions. Finally, in order to demonstrate the geometric interpretation of our QSMPM, five artificial datasets were implemented, including a demonstration of the ability to obtain a linear separating hyperplane. Furthermore, numerical experiments on benchmark datasets confirmed that the proposed method had better accuracy and less CPU time than corresponding methods.
Introduction
Machine learning is an important branch in the field of artificial intelligence, which has a wide range of applications in various fields of contemporary science [1]. With the development of machine learning, the classification problem has been widely concerned and studied in the fields of pattern recognition [2], text classification [3], image processing [4], financial time series prediction [5], skin disease [6], intrusion detection systems [7], etc. The classification problem is a vital task in supervised learning that learns a classification rule from a training set with known labels and then uses it to assign a new sample to a class.
At present, there are many famous classification methods. Among these existing methods, Lanckriet et al. [8,9] proposed an excellent classifier, called the minimax probability machine (MPM). For a given binary classification problem, the MPM not only deals with it in the linear case, but also in the nonlinear case by the kernel trick. It is worth noting that the MPM does not have any parameters, which is an important advantage. Therefore, it has been widely used in computer vision [10], engineering technology [11,12], agriculture [13], and novelty detection [14]. Moreover, many researchers have proposed a variety of improved versions of the MPM from different perspectives [14][15][16][17][18][19][20][21][22][23][24][25]. The representative works can be briefly reviewed as follows. In [15], Thomas and Gregory proposed MPM regression (MPMR), which transformed the regression problem into a classification problem, and then used the classifier MPM to obtain a regression function. To further exploit the structural information of the training set, Gu et al. [17] proposed the structural MPM (SMPM) by combining the finite mixture models with the MPM. In addition, Yoshiyama et al. [21] proposed the Laplacian MPM (Lap-MPM), which improved the performance of the MPM in semisupervised learning. However, the nonlinear MPM using kernel techniques lacks interpretability and usually depends heavily on the choice of a proper kernel function and the corresponding kernel parameters. Furthermore, choosing the appropriate kernel function and adjusting its parameters may require much computational time and effort. Therefore, it naturally occurs to us that the study of a kernel-free nonlinear MPM is of great significance.
For the first time, Dagher [26] proposed a kernel-free nonlinear classifier, namely the quadratic surface support vector machine (QSSVM), in 2008. It was based on the maximum margin idea, and the training points were separated by a quadratic hypersurface without a kernel function, avoiding the time-consuming process of selecting the appropriate kernel function and its corresponding parameters. Furthermore, in order to improve the classification accuracy and robustness, Luo et al. [27] proposed the soft-margin quadratic surface support vector machine (SQSSVM). After that, Bai et al. [28] proposed the quadratic kernel-free least-squares support vector machine for target diseases' classification. Following these leading works, some scholars performed further studies, e.g., see [29][30][31][32][33][34] for the classification problem, [35] for the regression problem, and [36] for the cluster problem. The good performance of these methods demonstrates that the quadratic hypersurface is an effective method to flexibly capture the nonlinear structure of data. Thus, it can be seen that it is very interesting to study the kernel-free nonlinear MPM using the above kernel-free technique.
In this paper, for the binary classification problem, a new kernel-free nonlinear method is proposed, which is called the kernel-free quadratic surface minimax probability machine (QSMPM). It was constructed on the basics of the MPM by using the kernel-free techniques of the QSSVM. Specifically, it tries to seek a quadratic hypersurface that separates two classes of samples with maximum probability. However, the optimization problem derived directly was too difficult to solve. Therefore, a nonlinear transformation was introduced to change the quadratic function involved into a linear function. Through such processing, our optimization problem finally became a second-order cone programming problem, which was solved efficiently by an alternate iteration method. It is important to point out that our QSMPM addresses the following key issues. First, our method directly generates a nonlinear (quadratic) hypersurface without the kernel function, so there is no need to select the appropriate kernel. Second, our method does not need to choose any parameters. Third, the quadratic hypersurface obtained by our method has better interpretability than the one by the methods with the kernel function. Fourth, it is rather flexible because the quadratic hypersurface obtained by our method can be any general form of the quadratic hypersurface. In our experiment, the results of five artificial datasets showed that the proposed method can find the general form of the quadratic surface and has also the ability to obtain the linear separating hyperplane. Numerical experiments on 14 benchmark datasets verified that the proposed method was superior to corresponding methods in both accuracy and CPU time. What is more gratifying is that when the number of samples or the dimension is relatively large, our method can obtain good classification performance quickly. In addition, the results of the Friedman test and Nemenyi post-hoc test indicated that our QSMPM was statistically the best one compared to other methods.
The rest of this paper is organized as follows. Section 2 briefly reviews the related works, the QSSVM, and the MPM. Section 3 presents our method QSMPM, gives its algorithm, and analyzes the computational complexity of the QSMPM. In Section 4, we show the interpretability of our method. In Section 5, the results of the numerical experiments on the artificial datasets and benchmark datasets are presented, and a further statistical analysis is performed. Finally, Section 6 gives the conclusion and future work of this paper.
Throughout this paper, we use lower case letters to represent scalars, lower case bold letters to represent vectors, and upper case bold letters to represent matrices. $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}^{d}$ the space of $d$-dimensional vectors, and $\mathbb{R}^{d\times d}$ the space of $d \times d$ matrices. $\mathbb{S}^{d}$ denotes the set of $d \times d$ symmetric matrices, $\mathbb{S}^{d}_{+}$ the set of $d \times d$ symmetric positive semidefinite matrices, and $I_{d}$ the $d \times d$ identity matrix. $\lVert x \rVert_{2}$ denotes the two-norm of the vector $x$.
Related Work
In this section, we briefly introduce the QSSVM and the MPM. For a binary classification problem, the training set (1) is given as $T = \{(x_i, y_i) : i = 1, 2, \ldots, m_+ + m_-\}$, where $x_i \in \mathbb{R}^{d}$ is the $i$-th sample and $y_i \in \{+1, -1\}$ is the corresponding class label. The numbers of samples in class +1 and class −1 are $m_+$ and $m_-$, respectively. For the training set (1), we want to find a separating hyperplane or quadratic hypersurface (2) and then use a decision function (3) to determine whether a new sample $x \in \mathbb{R}^{d}$ is assigned to class +1 or class −1.
Quadratic Surface Support Vector Machine
We first briefly outline the quadratic surface support vector machine (QSSVM) [26]. For the given training set (1), the goal of the QSSVM is to seek a quadratic separating hypersurface (4), $\tfrac{1}{2}x^{T}Ax + b^{T}x + c = 0$, where $A \in \mathbb{S}^{d}$, $b \in \mathbb{R}^{d}$, $c \in \mathbb{R}$, which separates the samples into two classes with the largest margin. In order to obtain the quadratic hypersurface (4), the QSSVM establishes the optimization problem (5), which is a convex quadratic programming problem. After obtaining the optimal solution $A^{*}$, $b^{*}$, and $c^{*}$ of the optimization problem (5), for a given new sample $x \in \mathbb{R}^{d}$, its label is assigned to either class +1 or class −1 by the decision function (6). To allow some samples in the training set (1) to be misclassified, Luo et al. further proposed the soft-margin quadratic surface support vector machine (SQSSVM); please refer to [27].
Minimax Probability Machine
Now, we briefly review the minimax probability machine (MPM) [8,9]. Let us leave the training set (1) aside for a moment and suppose that the samples have some distribution. Specifically, assume that the samples in class +1 are drawn from a distribution with mean vector $\mu_{+} \in \mathbb{R}^{d}$ and covariance matrix $\Sigma_{+} \in \mathbb{S}^{d}_{+}$, without making other specific distributional assumptions. A similar assumption is made for the samples in class −1, with mean vector $\mu_{-} \in \mathbb{R}^{d}$ and covariance matrix $\Sigma_{-} \in \mathbb{S}^{d}_{+}$. Denote the two distributions as $x_{+} \sim (\mu_{+}, \Sigma_{+})$ and $x_{-} \sim (\mu_{-}, \Sigma_{-})$, respectively. Based on the above assumptions, the MPM attempts to obtain a separating hyperplane (7), $w^{T}x = b$ with $w \in \mathbb{R}^{d}$ and $b \in \mathbb{R}$, which separates the two classes of samples with maximal probability with respect to all distributions having these mean vectors and covariance matrices. This is expressed as the optimization problem (8), where $\alpha \in (0, 1)$ represents the lower bound of the accuracy for future data, namely the worst-case accuracy. The infimum "inf" is taken over all distributions having these mean vectors $\mu_{\pm} \in \mathbb{R}^{d}$ and covariance matrices $\Sigma_{\pm} \in \mathbb{S}^{d}_{+}$. The constraint of the optimization problem (8) is a probabilistic constraint, which is difficult to handle directly. In order to convert the probabilistic constraints into tractable ones, the following lemma [9] is given. Lemma 1. Let $x$ be a $d$-dimensional random vector with mean vector $\mu$ and covariance matrix $\Sigma$, where $\Sigma \in \mathbb{S}^{d}_{+}$. Given $w \in \mathbb{R}^{d}$, $b \in \mathbb{R}$ such that $w^{T}x \le b$, and $\alpha \in (0, 1)$, the condition (9), $\inf_{x \sim (\mu, \Sigma)} \Pr\{w^{T}x \le b\} \ge \alpha$, holds if and only if (10), $b - w^{T}\mu \ge \kappa(\alpha)\sqrt{w^{T}\Sigma w}$, where $\kappa(\alpha) = \sqrt{\alpha/(1-\alpha)}$.
Using the above Lemma 1, the optimization problem (8) is shown to be equivalent to (11). Then, through a series of algebraic operations (see Theorem 2 in [9] for the details), the optimization problem (11) leads to (12). When its optimal solution $w^{*}$ is obtained, the optimal solution of (11) with respect to $b$ is given by (13). Now, let us return to the training set (1). It is easy to see that the required mean vectors $\mu_{\pm} \in \mathbb{R}^{d}$ and covariance matrices $\Sigma_{\pm} \in \mathbb{S}^{d}_{+}$ can be estimated from the training set (1) as the sample means $\hat{\mu}_{\pm}$ and sample covariance matrices $\hat{\Sigma}_{\pm}$ of the two classes (15). Therefore, in practice, the mean vectors $\mu_{\pm}$ and covariance matrices $\Sigma_{\pm}$ in (12)-(14) should be replaced by $\hat{\mu}_{\pm}$ and $\hat{\Sigma}_{\pm}$, and the optimal solutions of $w$ and $b$ thus obtained are denoted as $\hat{w}^{*}$ and $\hat{b}^{*}$. Then, for a given new sample $x \in \mathbb{R}^{d}$, its label is assigned to either class +1 or class −1 by the decision function (16). In addition, for the nonlinear cases and more details, please refer to [8,9].
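For reference, the reduced problem (12) derived in [9] can be restated as follows; this is our transcription of the standard MPM result in the paper's notation, not a new derivation.

```latex
\min_{w \neq 0} \;\; \bigl\lVert \Sigma_{+}^{1/2} w \bigr\rVert_{2} + \bigl\lVert \Sigma_{-}^{1/2} w \bigr\rVert_{2}
\qquad \text{s.t.} \qquad w^{T}\bigl(\mu_{+} - \mu_{-}\bigr) = 1 ,
```

whose optimal value equals $1/\kappa(\alpha^{*})$, and the offset is then recovered as $b^{*} = (w^{*})^{T}\mu_{+} - \kappa(\alpha^{*})\sqrt{(w^{*})^{T}\Sigma_{+}w^{*}}$.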
Kernel-Free Quadratic Surface Minimax Probability Machine
In this section, we first formulate the kernel-free quadratic surface minimax probability machine (QSMPM). Then, its algorithm is given.
Optimization Problem
For the binary classification problem with the training set (1), we attempt to find a quadratic separating hypersurface (17), $\tfrac{1}{2}x^{T}Ax + b^{T}x + c = 0$, where $A \in \mathbb{S}^{d}$, $b \in \mathbb{R}^{d}$, $c \in \mathbb{R}$, which separates the two classes of samples. Inspired by the MPM, we construct the optimization problem (18), where $\alpha \in (0, 1)$ represents the lower bound of the accuracy for future data, namely the worst-case accuracy. The notation $x_{+} \sim (\mu_{+}, \Sigma_{+})$ refers to the class distribution that has the prescribed mean vector $\mu_{+} \in \mathbb{R}^{d}$ and covariance matrix $\Sigma_{+} \in \mathbb{S}^{d}_{+}$ but is otherwise arbitrary, and likewise for $x_{-}$.
The above optimization problem (18) corresponds to the optimization problem (8), which was used to derive the optimization problem (11). Analogously, the optimization problem (18) should be used to derive the required tractable problem. Unfortunately, there is no counterpart of Lemma 1 when the functions in the curly braces in (18) are quadratic. In order to overcome this difficulty, we turn the quadratic functions into linear functions by introducing a nonlinear transformation (19) from $\mathbb{R}^{d}$ to $\mathbb{R}^{(d^{2}+3d)/2}$. By representing the upper-triangular entries of the symmetric matrix (20) as a vector $a = (a_{11}, a_{12}, \ldots, a_{1d}, a_{22}, \ldots, a_{2d}, \ldots, a_{dd})^{T}$ and defining (22), the quadratic function (17) of $x$ in $d$-dimensional space becomes a linear function (23) of $z$ in $(d^{2}+3d)/2$-dimensional space. Following the transformation (19), the training set (1) in $d$-dimensional space correspondingly becomes (24). For the transformed training set (24), it is naturally assumed that the samples of the two classes are drawn from $z_{+} \sim (\mu_{z+}, \Sigma_{z+})$ and $z_{-} \sim (\mu_{z-}, \Sigma_{z-})$, respectively, where the mean vectors $\mu_{z\pm} \in \mathbb{R}^{(d^{2}+3d)/2}$ and covariance matrices $\Sigma_{z\pm} \in \mathbb{S}^{(d^{2}+3d)/2}_{+}$ can be estimated as in (25). Based on the transformation (19), the optimization problem (18) is replaced by (26). Now, Lemma 1 [9] is applicable to the optimization problem (26). Thus, we obtain (27), where $\kappa(\alpha) = \sqrt{\alpha/(1-\alpha)}$. Moreover, a series of algebraic operations shows that the above optimization problem (27) is equivalent to the second-order cone programming problem (28). When its optimal solution $w^{*}$ is obtained, the optimal solution of (27) with respect to $c$ is given by (29) or (30). In the next subsection, we show how to solve the optimization problem (28).
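To make the lifting concrete, the sketch below is an illustrative Python implementation of the transformation (19); the function name `quad_features` is ours, and it assumes the hypersurface is written as $\tfrac{1}{2}x^{T}Ax + b^{T}x + c = 0$ (if the paper's convention omits the one-half factor, drop the 0.5 scaling on the diagonal terms).

```python
import numpy as np

def quad_features(x):
    """Lift x in R^d to z in R^{(d^2+3d)/2}: upper-triangular quadratic
    monomials (0.5*x_i^2 on the diagonal, x_i*x_j for i < j) followed by
    the linear terms, so that 0.5*x'Ax + b'x = w'z with w = (a; b)."""
    x = np.asarray(x, dtype=float)
    d = x.size
    quad = []
    for i in range(d):
        for j in range(i, d):
            quad.append(0.5 * x[i] * x[i] if i == j else x[i] * x[j])
    return np.array(quad + list(x))   # length d*(d+3)//2

z = quad_features([1.0, 2.0, -1.0])
print(z.shape)                        # (9,) for d = 3
```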
Algorithm
Now, we present the solving process for the optimization problem (28), following [9]. By constructing an orthogonal matrix $F \in \mathbb{R}^{\frac{d^{2}+3d}{2} \times \frac{d^{2}+3d-2}{2}}$ whose columns span the subspace of vectors orthogonal to $\hat{\mu}_{z+} - \hat{\mu}_{z-} \in \mathbb{R}^{(d^{2}+3d)/2}$, the unknown $w$ can be written as $w = w_{0} + Fu$ with $w_{0} = (\hat{\mu}_{z+} - \hat{\mu}_{z-})/\lVert \hat{\mu}_{z+} - \hat{\mu}_{z-} \rVert_{2}^{2}$, so that the equality constraint is satisfied automatically; the optimization problem (28) is thus transferred to the unconstrained optimization problem (31). In order to solve (31), Lanckriet et al. [9] introduced two extra variables $\beta$ and $\eta$ and considered the optimization problem (32), which is solved by an alternating iteration. The variables are divided into two sets: one is $\beta$ and $\eta$, and the other is $u$. At the $t$-th iteration, first, by fixing $\beta$ and $\eta$ and taking the derivative of (32) with respect to $u$, we obtain the updated iteration formula (33) for $u_{t}$. To ensure stability, a regularization term $\delta I_{(d^{2}+3d-2)/2}$ ($\delta > 0$) is added, so Equation (33) is replaced by (34). Next, by fixing $u$ and taking the derivatives of (32) with respect to $\beta$ and $\eta$, respectively, we obtain the updated iteration formulas (35) for $\beta_{t}$ and $\eta_{t}$. When the optimal solution $u^{*}$ is obtained from the two updated iteration formulas (34) and (35), the optimal solution $w^{*}$ of the optimization problem (28) is $w^{*} = w_{0} + Fu^{*}$. Then, we summarize the process of finding the optimal solution $A^{*}$, $b^{*}$, $c^{*}$ of the optimization problem (18) in Algorithm 1.
After obtaining the optimal solution $A^{*}$, $b^{*}$, and $c^{*}$ of the optimization problem (18), for a given new sample $x \in \mathbb{R}^{d}$, its label is assigned to either class +1 or class −1 by the decision function (36). It should be pointed out that our QSMPM is kernel-free, which avoids the time-consuming task of selecting an appropriate kernel function and its corresponding parameters. What is more, it does not require the choice of any parameter, which makes it simpler and more convenient to use. Furthermore, from the geometric point of view, the quadratic hypersurface (17) determined by our method is allowed to be any general form of quadratic hypersurface, including hyperplanes, hyperparaboloids, hyperspheres, hyperellipsoids, hyperhyperboloids, and so on, which is shown clearly by the five artificial examples in Section 5.
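The sketch below is not the authors' Algorithm 1 (which relies on the alternating iteration of [9]); instead, it solves the same second-order cone program in the lifted space with the generic conic solver in CVXPY and reuses the `quad_features` helper sketched earlier. All function names are ours, and a small ridge term could be added to the lifted covariances if they are singular.

```python
import numpy as np
import cvxpy as cp

def sqrtm_psd(S):
    """Symmetric square root of a positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def fit_qsmpm(Xp, Xm):
    """Fit a QSMPM-style classifier: lift both classes, then solve the MPM
    second-order cone program in the lifted space."""
    Zp = np.array([quad_features(x) for x in Xp])
    Zm = np.array([quad_features(x) for x in Xm])
    mup, mum = Zp.mean(axis=0), Zm.mean(axis=0)
    Sp, Sm = np.cov(Zp, rowvar=False), np.cov(Zm, rowvar=False)
    w = cp.Variable(Zp.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(sqrtm_psd(Sp) @ w)
                                  + cp.norm(sqrtm_psd(Sm) @ w)),
                      [(mup - mum) @ w == 1])
    prob.solve()
    w_star = w.value
    kappa = 1.0 / prob.value                       # worst-case kappa(alpha*)
    c_star = -(w_star @ mup - kappa * np.sqrt(w_star @ Sp @ w_star))
    return w_star, c_star

def predict_qsmpm(w_star, c_star, X):
    Z = np.array([quad_features(x) for x in X])
    return np.sign(Z @ w_star + c_star)            # +1 or -1 per sample
```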
Computational Complexity
Here, we analyze the computational complexity of our QSMPM. Suppose that the number and the dimension of the samples are $N$ and $d$, respectively. Before reformulating the QSMPM as an SOCP problem, all $d$-dimensional samples need to be projected into the $(d^{2}+3d)/2$-dimensional space. Therefore, the total computational complexity of the QSMPM is $O\bigl(((d^{2}+3d)/2)^{3} + N((d^{2}+3d)/2)^{2} + Nd^{2}\bigr)$. In addition, we give the computational complexity of the MPM and the SVM: their complexities are $O(d^{3} + Nd^{2})$ [9] and $O(N^{3})$ [19], respectively. Then, by referencing the computational complexity of the SVM, we obtain that the computational complexity of the QSSVM is $O(N^{3} + Nd^{2})$. According to the above analysis, assuming that $N$ is much larger than $d$, we can see that the computational complexity of the QSMPM is higher than that of the MPM, but lower than that of the SVM and the QSSVM.
The Interpretability
In this section, we discuss the interpretability of our method QSMPM. Suppose we have obtained the optimal solution $A^{*}$, $b^{*}$, $c^{*}$ of the optimization problem (18); then, the quadratic hypersurface (17) has the component form (37), where $[x]_{i}$ is the $i$-th component of the vector $x \in \mathbb{R}^{d}$, $a^{*}_{ij}$ is the entry in the $i$-th row and $j$-th column of the matrix $A^{*} \in \mathbb{S}^{d}$, and $b^{*}_{i}$ is the $i$-th component of the vector $b^{*} \in \mathbb{R}^{d}$. Each component of $x$ contributes through a quadratic polynomial function. Specifically, $b^{*}_{i}$ is the linear effect coefficient of the $i$-th component, $a^{*}_{ii}$ is the quadratic effect coefficient of the $i$-th component, and $a^{*}_{ij}$ ($i \ne j$) is the interaction coefficient between the $i$-th and $j$-th components. Therefore, for the $i$-th component of $x$, the larger $|a^{*}_{ii}| + |a^{*}_{ij}| + |b^{*}_{i}|$ ($j = 1, 2, \ldots, d$, $j \ne i$) is, the greater the contribution of the $i$-th component. In particular, when $|a^{*}_{ii}| + |a^{*}_{ij}| + |b^{*}_{i}| = 0$ ($j = 1, 2, \ldots, d$, $j \ne i$), the $i$-th component of $x$ plays no role. Therefore, compared with methods based on kernel functions, the QSMPM has better interpretability.
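A tiny helper (our naming) that scores the features according to this criterion could look as follows.

```python
import numpy as np

def feature_contributions(A_star, b_star):
    """Score feature i by |a_ii| + sum_{j != i} |a_ij| + |b_i|; a zero score
    means the i-th component plays no role in the hypersurface."""
    A = np.abs(np.asarray(A_star, dtype=float))
    b = np.abs(np.asarray(b_star, dtype=float))
    off_diag = A.sum(axis=1) - np.diag(A)    # sum over j != i of |a_ij|
    return np.diag(A) + off_diag + b

print(feature_contributions([[2.0, 0.5], [0.5, 0.0]], [1.0, 0.0]))  # [3.5 0.5]
```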
Numerical Experiments
In this section, we provide some numerical experiments to verify the performance of our QSMPM. We compared it with the MPM with the linear, polynomial, and RBF kernels (MPM−L, MPM−P, and MPM−R) and with the hard-margin and soft-margin SVMs (H−SVM and S−SVM) using the same three kernels. In addition, we also compared it with the QSSVM and the SQSSVM. In all numerical experiments, the penalty parameter C in the S−SVM and the kernel parameter σ of the RBF kernel were selected from $\{2^{-7}, 2^{-6}, \ldots, 2^{7}\}$ by the 10-fold cross-validation method. All numerical experiments were conducted using MATLAB R2016b on a computer equipped with a 2.50 GHz (i5-4210U) CPU and 4 GB of available memory. In the artificial examples, the noise terms were generated as $\xi_{i} \sim N(0, 1)$. Figure 1 illustrates the classification results of the MPM−L, the MPM−P, the MPM−R, and the QSMPM on Example 1, respectively. We can see from Figure 1 that our QSMPM can obtain classification results as good as the other three methods. In addition, the quadratic hypersurface found by our QSMPM is a straight line, that is, a linear separating hyperplane.
Example 4.
Figure 4 shows the classification results on Example 4. We can observe in Figure 4 that the QSMPM can obtain the same classification performance as the MPM−P and the MPM−R and is better than the MPM−L. Our QSMPM can find an ellipse. For Example 5, it can be seen from Figure 5 that the classification performance of the QSMPM is better than that of the MPM−L and similar to that of the MPM−P and the MPM−R. In addition, our QSMPM finds a hyperbola.
In summary, from Figure 1 to Figure 5, we can see that our QSMPM can find any general form of the quadratic hypersurface, such as the line, parabola, circle, ellipse, and hyperbola found in sequence in the above numerical experiments. Moreover, our method can achieve as good classification performance as the MPM−P and the MPM−R. In addition, it can be seen from Figure 1d that our method can obtain the linear separating hyperplane.
Benchmark Datasets
To verify the classification performance and computational efficiency of our QSMPM, we performed the following numerical experiments on 14 benchmark datasets. Table 1 summarizes the basic information of the 14 benchmark datasets from the UCI Machine Learning Repository. It can be seen from Table 2 that, compared with the other methods, our QSMPM obtained better accuracy on the first group of benchmark datasets, among which the accuracy was the best on four benchmark datasets. More specifically, except for Haberman and Bupa, the accuracy of our method was the best compared to the QSSVM and the SQSSVM. The accuracy of our QSMPM was the best compared to the three original kernel versions of the MPM except for Bupa. Furthermore, the accuracy of our method was the best compared to the H−SVM and the S−SVM with the three kernel functions, except for Heart and Haberman. In addition, we can observe that the QSMPM had a short CPU time.
Then, the classification results on the second group are reported in Table 3. The symbol "−" indicates that the corresponding method could not obtain classification results, either because it could not choose the optimal parameter in a limited amount of time or because the dimension and the number of samples of the dataset are relatively large, resulting in insufficient memory. From Table 3, we can see that our QSMPM had good classification results on the second group of benchmark datasets. Especially on QSAR and Turkiye, the H−SVM−R, the three kernel versions of the S−SVM, the QSSVM, and the SQSSVM could not obtain the corresponding classification results, but our QSMPM could obtain good classification performance. Here, we mention the reason for this situation: according to the computational complexity of each method, we know that when the sample dimension and the number of samples are relatively large, the SVM and the QSSVM need a larger memory space. In addition, our QSMPM had the fastest running time except for the MPM−L, and it ran quite fast when the number of samples or the dimension was large.
Statistical Analysis
To further compare the performance of the above 12 methods, the Friedman test and a post-hoc test were performed. The ranks of the 12 methods on all benchmark datasets are shown in Table 4.
First, the Friedman test was used to compare the average ranks of the different methods. The null hypothesis states that all methods have the same performance, that is, their average ranks are the same. Based on the average ranks displayed in Table 4, we can calculate the Friedman statistic $\tau_{F}$ by the formula (38),
$$\chi_{F}^{2} = \frac{12N}{k(k+1)}\left(\sum_{i=1}^{k} r_{i}^{2} - \frac{k(k+1)^{2}}{4}\right), \qquad \tau_{F} = \frac{(N-1)\chi_{F}^{2}}{N(k-1)-\chi_{F}^{2}},$$
where $N$ and $k$ are the numbers of datasets and methods, respectively, and $r_{i}$ is the average rank of the $i$-th method. According to formula (38), $\tau_{F} = 4.1825$. For $\alpha = 0.05$, we can obtain $F_{\alpha} = 1.8526$. Since $\tau_{F} > F_{\alpha}$, we rejected the null hypothesis. Then, we proceeded with a post-hoc test (the Nemenyi test) to find out which methods differed significantly. To be more specific, the performance of two methods was considered to be significantly different if the difference of their average ranks was larger than the critical difference (CD), which can be calculated by (39),
$$CD = q_{\alpha}\sqrt{\frac{k(k+1)}{6N}}.$$
For $\alpha = 0.05$, we know $q_{\alpha} = 3.2680$. Thus, we obtained $CD = 4.4535$ by formula (39). Figure 6 visually displays the results of the Friedman test and the Nemenyi post-hoc test, where the average rank of each method is marked along an axis. The axis is oriented so that the lowest (best) ranks are to the right. Groups of methods that are not significantly different are linked by a red line. In Figure 6, we can see that our QSMPM was statistically the best one among the compared methods. Furthermore, there was no significant difference in performance between the QSMPM and the MPM−R.
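The two quantities used above can be reproduced with a few lines of Python; the formulas follow the usual Friedman/Nemenyi recipe (Demšar, 2006), and the variable names are ours.

```python
import numpy as np

def friedman_iman_davenport(avg_ranks, n_datasets):
    """Friedman chi-square with the Iman-Davenport correction (the tau_F statistic)."""
    r = np.asarray(avg_ranks, dtype=float)
    k, N = r.size, n_datasets
    chi2 = 12.0 * N / (k * (k + 1)) * (np.sum(r ** 2) - k * (k + 1) ** 2 / 4.0)
    return (N - 1) * chi2 / (N * (k - 1) - chi2)

def nemenyi_cd(k, n_datasets, q_alpha):
    """Critical difference for the Nemenyi post-hoc test."""
    return q_alpha * np.sqrt(k * (k + 1) / (6.0 * n_datasets))

print(round(nemenyi_cd(k=12, n_datasets=14, q_alpha=3.2680), 2))  # ~4.45, as in the text
```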
Conclusions
For the binary classification problem, a new classifier, called the kernel-free quadratic surface minimax probability machine (QSMPM), was proposed by using the kernel-free techniques of the QSSVM and the classification idea of the MPM. Specifically, our goal was to find a quadratic hypersurface that separates two classes of samples with maximum probability. However, the optimization problem derived directly was too difficult to solve. Therefore, a nonlinear transformation was introduced to change the quadratic function involved into a linear function. Through such processing, our optimization problem finally became a second-order cone programming problem, which was solved efficiently by an alternate iteration method. Here, we clarify the main contributions of this paper. Unlike the methods realizing nonlinear separation, our method was kernel-free and had better interpretability. Then, our method was easy to use because it did not have any parameters. Furthermore, numerical experiments on five artificial datasets showed that the quadratic hypersurfaces found by our method were rather general, including that it could obtain the linear separating hyperplane. In addition, numerical experiments on benchmark datasets confirmed that the proposed method was superior to some relevant methods in both accuracy and computational time. Especially when the number of samples or dimension was relatively large, our method could also quickly obtain good classification performance. Finally, the results of the statistical analysis showed that our QSMPM was statistically the best one compared with the corresponding methods. Our QSMPM focuses on the standard binary classification problem, which we will extend to the multiclassification problem.
In our future work, there will be some issues to be addressed to extend our QSMPM. For example, we need to investigate further how to add appropriate regularization terms to our method. Meanwhile, we need to consider the case in which the worst-case accuracies for the two classes are not the same, which will be interesting. Furthermore, we will pay attention to how the QSMPM can achieve the dual purpose of feature selection and classification simultaneously. In addition, we can apply our method to practical problems in many fields in the future, especially image recognition in the medical field. | 6,412 | 2021-07-28T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Maximum Obtainable Energy Harvesting Power from Galloping-Based Piezoelectrics
Department for Management of Science and Technology Development, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam; Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam; CORIA-UMR 6614, Normandie University, CNRS-University & INSA, 76000 Rouen, France; Dipartimento di Ingegneria Astronautica, Elettrica ed Energetica, Sapienza Università di Roma, Via Eudossiana 18, Roma 00184, Italy
Introduction
Piezoelectric energy harvesters use ambient energy and transfer it into electric charge [1][2][3][4][5][6][7]. The parametric study and design of piezoelectric energy harvesting from galloping motion were studied by Barrero-Gil et al. [1]. First, experimental results were obtained by Sirohi and Mahadik [2,3] using a cantilever beam exposed to air with constant velocity in a wind tunnel. Simulation of a galloping cantilever coupled with a piezoelectric transducer in an electric circuit was performed by Abdelkefi et al. [4]. An analytical solution of that system of equations is presented in the work of Tan and Yan [5]. As the values of the harvested power of Abdelkefi et al. [4] and Tan and Yan [5] in some figures (watts) are beyond the order of magnitude of the experimental data of Sirohi and Mahadik [2,3] and Jamalabadi et al. [6,7] (milliwatts), this paper attributes the problem to the linear assumption of the force-deflection relation for the Euler-Bernoulli beam.
This research proposes to consider the limitation imposed by the yield stress of the piezoelectric material as the maximum point of mechanical stability as well as of energy harvesting.
Mathematical Model
The schematic of the system is shown in Figure 1. A bluff body exposed to the free stream is mounted on an Euler-Bernoulli cantilever beam. Two piezoelectric wafers are attached to the free surfaces of the beam and are connected in an electric circuit with an electric impedance. The y-direction galloping of the bluff body in the first mode of the structure is modeled by Abdelkefi et al. [4] by equations (1) and (2). By assuming harmonic functions for the motion and the voltage, and integrating equations (1) and (2) term by term over half a period of motion, the onset of galloping, the maximum deflection of the beam, and the harvested power are obtained. Before going further to derive the optimal values of the system parameters, as in Tan and Yan [5], it should be noted that the maximum bending moment at the base of the beam is calculated by equation (7), where the yield stress of the piezoelectric material is about 31.2 MPa in the experiment.
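As a rough illustration of how a yield-stress limit translates into a deflection limit, the sketch below assumes a prismatic, tip-loaded Euler-Bernoulli cantilever; the paper's equation (7) applies the limit within its own modal model, so the numbers here are only indicative and all input values are hypothetical.

```python
# For a prismatic cantilever with a single transverse tip load:
#   delta = P*L^3/(3*E*I),  M_max = P*L,  sigma_max = M_max*c/I
#   =>  delta_max = sigma_y * L^2 / (3*E*c)
def max_tip_deflection(sigma_y, length, young_modulus, c_fiber):
    """sigma_y: yield stress [Pa], length: beam length [m],
    young_modulus: [Pa], c_fiber: distance to the outer fiber [m]."""
    return sigma_y * length**2 / (3.0 * young_modulus * c_fiber)

# Hypothetical values: 90 mm beam, E = 66 GPa, outer fiber at 0.3 mm,
# yield stress 31.2 MPa as quoted for the piezoelectric material.
print(max_tip_deflection(31.2e6, 0.09, 66e9, 0.3e-3))  # ~4e-3 m, i.e. millimeters
```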
Results
The numerical (solving equations (1) and (2)) and analytical (equations (5) and (6)) solutions are calculated based on the data provided in [4]. The change in the amplitude of the tip displacement of the cantilever beam with the free-stream velocity at different electrical impedances, from the numerical and analytical solutions, is shown in Figure 2. As shown in Figure 2, the analytical and numerical solutions are in good agreement. In addition, the limitation considered in equation (7) affects the results: soon after the onset of galloping, the system experiences tearing of the piezoelectric sheet. To better assess the consequences of the constraint of equation (7), the vertical axis of Figure 2 is plotted on a log scale. As shown, the maximum displacement that the beam can bear is less than the order of a centimeter, and for such stiffness the order of the deflection before failure is a millimeter. The variation in the harvested power with the electrical impedance at different free-stream velocities from the analytical and numerical solutions is shown in Figure 3. As shown in Figure 3, the results of the numerical and analytical methods are in good agreement. Additionally, the limitation considered in equation (7) affects the results: soon after the onset of galloping, the system experiences tearing of the piezoelectric wafers and the harvesting of wind energy stops. To see the significance of the constraint of equation (7) on the system more clearly, the vertical axis in Figure 3 is also plotted on a log scale. As shown, the maximum harvested power in the electric circuit is less than the order of 10^-1 watts, and for the other cases the order of the harvested power before failure is milliwatts. These results are in good agreement with the experimental results [2,3,6,7].
By differentiating equation (6) with respect to the parameter C, the optimal design of the electric circuit for the galloping system is obtained (equation (8)). Analytical solutions, numerical solutions, and corrected solutions for the amplitude of the harvested power versus the parameter C and the free-stream velocity are plotted in Figure 4. As shown, again only a limited range of the parameter C is allowed, and the maximum obtainable power should be searched for within those values.
When the value of the expression under the square root in equation (8) for R_o is negative (at velocities higher than the value given by equation (9)), as for the current parameters, this value of the C parameter is less than the allowed values of the C parameter; hence, for the harvested power, further discussion is not necessary.
Analytical solutions for galloping-based piezoelectric energy harvesters with various interfacing circuits are summarized in Table 1. By assuming V = V_m cos(ωt + φ) and u = u_m cos(ωt), where tan φ = 1/(R C_p ω), the maximum obtainable average power in a standard RC circuit is derived as a function of the deflection limit, and likewise for the synchronized charge extraction circuit. For the case of wind direction parallel to the cantilever beam (see Figure 5), the governing equations are modified accordingly. The various circuit interfaces are shown in Table 2. The maximum obtainable power of the various galloping piezoelectric energy harvesters for the case of wind direction parallel to the cantilever beam is plotted in Figure 6. The beam data are obtained from energy harvester 2 in Table 3 of Zhao and Yang [8].
Conclusion
In this study, the nonlinear model of the galloping cantilever beam used for piezoelectric energy harvesting is simulated numerically with the failure criterion imposed as a limit on the maximum obtainable power. The ideal case of such a system is compared with the case in which the maximum stress is limited by the yield stress of the piezoelectric material. The results show that the mechanical limits of the system do not allow the values anticipated in theory to be obtained, and the feasible values are 2-3 orders of magnitude lower than the predicted values. Hence, the fracture limitation should be considered in the design of galloping-based energy harvesters with piezoelectric materials. Furthermore, for engineering applications, the current research suggests that designing a control system for the galloping amplitude is also necessary. Finally, the maximum obtainable average power in a standard RC circuit as a function of the deflection limit and with synchronized charge extraction is obtained. In addition, four electrical interfaces in galloping-based energy harvesters are assessed. The results show that for weak coupling the SCE circuit is reasonable at higher wind speeds, while SSHI suits low wind speeds; the standard circuit is suggested for strong electromechanical coupling, and the SCE is the most robust against the wind and can produce the highest power.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 1,755.6 | 2020-09-07T00:00:00.000 | [
"Engineering",
"Physics"
] |
The Use of Artificial Neural Networks to Prioritize Impact Factors Affecting Thai Rural Village Development
This paper aims to prioritize the impact factors that affect Thai rural village development. The basic village-level information database (NRD-2C) of the Community Development Department (CDD), Ministry of Interior, Thailand, was analyzed with Artificial Neural Networks (ANN) to measure the amount of impact of each factor affecting Thai rural village development. According to the results, the top five impact factors are "Land Possession", "Electricity", "Communication", "Educational Level", and "Household Industry", with impact scores of 17.88, 15.35, 14.02, 12.06, and 10.57, respectively, at an estimated accuracy of 95.60 percent.
Introduction
Human beings have known and learned the development of livelihood since they first appeared on the earth. As the evidence discovered so far shows, there has been an evolution of tools, constructions, weapons, clothes, vehicles, etc. Furthermore, many beliefs and teachings of Roman, Greek, Arabic, Chinese, and Indian philosophers have been accepted and spread worldwide, so "development" has become a science and been adapted into many areas of study. Development has been a key to solving economic crises such as inflation, economic recession, and unemployment. The Community Development Department of Thailand was established in 1962 under the Ministry of Interior to coordinate with governors, CDD workers, community leaders, and people in collecting community data in rural areas, then analyzing the data, reporting useful information, and making policies for community development. Community planners and developers have been persistently concerned with building good communities (Grant, 2006).
There are two types of Thai rural community database: BMN (Basic Minimum Needs), household-level information, and NRD-2C (Basic Village Information), village-level information. BMN covers approximately 8 million rural households and is updated every year, and NRD-2C covers 69,763 villages in rural areas of Thailand and is updated every 2 years (Community Development Department, 2007). In this study, only NRD-2C was used to measure the amount of impact of each factor. The NRD-2C database contains 31 key performance indicators (factors/variables) classified into 6 categories, as shown in Figure 1: 1) Infrastructure, 2) Employment, 3) Public Health, 4) Education, 5) Community Strength, and 6) Natural Resources. So far, CDD has analyzed these databases with only a conventional quantitative analysis: a statistical distribution analysis is used to describe and classify the development level of villages in Thailand, and the factors are then prioritized by the number of villages that fail in each factor. The result is displayed in Table 1 below. However, in terms of impact evaluation, the factor in which the largest number of villages fail does not necessarily have the most influence on Thai rural village development. Conversely, the factor with the most influence on Thai rural village development might have a smaller number of failing villages. Moreover, all raw data in the NRD-2C database were recorded as nominal data (passed or failed), and the data on village development level were ordinal (levels 1, 2, and 3). Technically, correlation analysis and regression analysis therefore cannot be used to measure the amount of relationship or the weight of impact. Previous studies by West, Brockett, and Golden (1997); Thieme, Song, and Calantone (2000); and Song and Zhao (2004) showed that the Artificial Neural Network (ANN) approach is well suited to analyzing complex relationships and deals well with all data types. ANN has also been used to evaluate general relationship quality (Bejou, Wray, and Ingram, 1996). Therefore, in this study, ANN was applied to measure the weight of impact of each factor.
Artificial Neural Networks (ANN)
ANN was applied in two stages: learning and testing. In the learning stage, a set of connection weights was calculated for a given set of input and output values. For the back-propagation method, the output set contained previously known values. An algorithm called the delta-learning rule was used to adjust the weights at the end of each iteration until the output became closer to the pre-defined output. Learning was complete when the network had learned the relationship between the inputs and outputs. During learning, setting the number and size of the hidden neurons was difficult: hidden layers represent non-linearity or interactions between variables, and the more complex the interactions, the more hidden neurons are required. In the testing stage, the weights calculated during the learning stage were used to estimate a new set of outputs for a given set of input values (Yesilyaprak, 2004), in order to measure the estimated accuracy of the analysis.
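The two stages can be illustrated with a minimal NumPy sketch of a single-hidden-layer network trained by back-propagation with the delta rule; the architecture, learning rate, number of iterations and single-output encoding are illustrative assumptions rather than the settings used in the study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, n_hidden=8, lr=0.1, epochs=500, seed=0):
    """Learning stage: adjust weights with the delta rule until outputs approach targets."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, n_hidden))   # input -> hidden weights
    W2 = rng.normal(scale=0.1, size=(n_hidden, 1))   # hidden -> output weights
    for _ in range(epochs):
        H = sigmoid(X @ W1)                          # hidden activations
        out = sigmoid(H @ W2)                        # network output
        err = y.reshape(-1, 1) - out                 # output error
        delta2 = err * out * (1 - out)               # delta at the output layer
        delta1 = (delta2 @ W2.T) * H * (1 - H)       # back-propagated delta at the hidden layer
        W2 += lr * H.T @ delta2 / n
        W1 += lr * X.T @ delta1 / n
    return W1, W2

def predict(X, W1, W2):
    """Testing stage: reuse the learned weights on new inputs."""
    return sigmoid(sigmoid(X @ W1) @ W2).ravel()
```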
Methodology
In this paper, it was assumed that all factors were related to the level of village development. Therefore, all 31 factors were selected as the independent variables and the level of village development as the dependent variable, as shown in Figure 3. ANN was then applied to measure the weight/amount of impact, called "importance", of each independent variable affecting the level of rural village development.
Figure 3: Conceptual Framework
The back-propagation algorithm was selected as the main ANN approach, and it was set to run iteratively, re-processing in a loop until the expected accuracy was not less than 95%, at which point the analysis was stopped. Once the weights of impact for each independent variable were retrieved, all of the weights of impact were ranked in descending order. The highest weight/importance was ranked 1st, and the lowest weight/importance was ranked 31st.
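The study does not state the exact formula used to convert the learned connection weights into importance scores; one common weight-based choice, Garson's algorithm, is sketched below purely as an illustration, using the weight matrices returned by the training sketch above.

```python
import numpy as np

def garson_importance(W1, W2):
    """Relative importance (%) of each input variable from the connection weights."""
    contrib = np.abs(W1) * np.abs(W2).ravel()                 # |input->hidden| * |hidden->output|
    contrib = contrib / contrib.sum(axis=0, keepdims=True)    # share of each input per hidden node
    importance = contrib.sum(axis=1)
    return 100.0 * importance / importance.sum()

# importance = garson_importance(W1, W2)
# ranking = np.argsort(importance)[::-1]   # rank 1 = largest importance, rank 31 = smallest
```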
Conclusion and Recommendations
Referring to Table 2, the factor with the most impact on Thai rural village development is "Land Possession" (17.88%), the first runner-up is "Electricity" (15.35%), the second runner-up is "Communication" (14.02%), the third runner-up is "Educational Level" (12.06%), and the fourth runner-up is "Household Industry" (10.57%). "Land Possession" has played the most important role in Thai rural development. This suggests that the Thai government should keep providing title deeds or certificates of land ownership for agriculture to poor people in rural areas through the Agricultural Land Reform Office (ALRO), established in 1975; ALRO has continued this mission to the present.
Furthermore, when considering the "Natural Resources and Environment" category, "Soil Quality" has significantly more impact on Thai rural development than the others. This corresponds to "Land Possession", the 1st rank. It seems that the government should first make policies enabling people to possess their land, because the findings show that this is more important than soil-quality improvement and land utilization. The 2nd and 3rd impact factors, "Electricity Access" and "Communication Technology Access", which are related to information systems and technology, have slightly less impact. It seems likely that information and communication technology (ICT) nowadays plays a very important role in developing rural areas as well as urban areas. ICT helps provide useful information, news, and knowledge to people, which are key capital for community development. It therefore seems well worth the government's continued investment in ICT to cover all areas of Thailand.
The 5th, 6th, 8th, and 11th impact factors, "Household Industry", "People Assembly", "Community Participation", and "Getting a Job", fall into the "Employment" category mixed with the "Community Strength" category. This finding could guide the government to promote the grouping and participation of people and to support them in making household goods/products or taking jobs at the same time to earn their living. Since "Educational Level" is ranked 4th, the government should also pay more attention to educational policies to continually support and increase people's opportunities and access to all supported educational systems.
The findings show that the two prioritization methods referred to in this study, statistical frequency distribution analysis and ANN, give different results and different views. Statistical frequency distribution analysis shows the number of villages that passed and failed; ANN, on the other hand, shows how much impact each factor has. From now on, the government or a policy maker should consider these results in terms of both frequency and impact. Some problems might have been found in many villages (high frequency) but have caused only a small impact on those villages. In contrast, some problems might have been rarely found (low frequency) but have caused a large impact on those villages.
Suggestions for Future Studies:
Future researchers might change the database, the independent variables, or the dependent variable to extract meaningful knowledge, or try different ranking methods to compare the results with this method. | 1,947.4 | 2011-08-15T00:00:00.000 | [
"Economics"
] |
Students ’ perceived benefits, adoption intention and satisfaction to technology-enhanced learning: examining the relationships
Purpose – Providing quality education with the help of technologies in order to create global competitiveness among students is the current trend in the education field. This research attempts to investigate the following objectives: (1) the effect of students' perceived benefits and adoption intention of technology-enhanced learning (TEL) on their satisfaction; (2) the effect of students' perceived benefits of TEL on their adoption intention of TEL; (3) the mediating and moderating effect of students' perceived benefits of TEL in the link between students' adoption intention and satisfaction to TEL. Design/methodology/approach – The primary data were collected from 600 undergraduate and postgraduate students, particularly those who have been using TEL for at least one year. The authors used a purposive sampling technique with a "criterion variable". Findings – Results indicated that students' perceived benefits and adoption intention of TEL have a significant and positive influence on their satisfaction. A direct effect was also found between perceived benefits and adoption intention of students. The authors also concluded that the mediating and moderating effect of students' perceived benefits of TEL in the link between students' adoption intention and satisfaction for TEL was significant and positive. Originality/value – There is a huge lack of empirical studies available in the knowledge domain explaining the significance and implication of TEL in higher education in the state of Chhattisgarh, India.
Introduction
COVID-19 has impacted all spheres of life, including educational institutions. Globally, all educational institutions are striving to reach into students' lives in order to provide all the necessary learning services and to be first on the online learning platforms. The World Health Organization (WHO) has announced that COVID-19 will be with us for a long time (Jagannath, 2020), like other diseases such as polio and cancer, and this has created the biggest challenge for all educational institutions: transforming themselves from traditional learning to technology-based learning. However, it is not new that learners study or learn something using technology, as new-generation learners have a high adoption intention of technologies. Digital devices and several other applications such as YouTube, Facebook, other chatting apps and so on have dynamically transformed people's way of living, including social activities, communications and the learning environment (Tiyar and Khoshsima, 2015). The increased tendency towards online learning materials with different technologies clearly indicates that various learning systems assist students in their learning process, which eventually improves their academic performance (Chunwijitra et al., 2013). However, higher education is now focusing more on providing higher-order skills and experiences, which requires a major change in the communication and learning environment (Thomas, 2011).
Technology-enhanced learning (TEL) is the need of the hour, and in this regard, higher education institutions (HEIs) have started incorporating technology-based learning in their curricula considering its importance in the academic performance (Bhuasiri et al., 2012) and developing higher order skills such as analysing, critical thinking and problem-solving ability.In today's competitive environment, it becomes imperative for learners to complete their higher education with latest knowledge and technologies in their respective field which they will need in their professional career (King and Boyatt, 2014).However, introducing information and communication technology (ICT) into HEIs ensures neither adoption intention among students nor usage of such technologies.In addition, learners will not be able to take benefits from these technologies, unless they intend to adopt and use them as the previous studies suggest that effectiveness and efficiency of technology-enhanced learning lie on the learners' adoption intention towards modern technologies (Chang and Tung, 2008;Park, 2009;Tarhini et al., 2014).Bhuasiri et al. (2012) investigated critical success factors of technology-based learning with experts such as faculty, researchers and ICT experts in developing countries.The top factors were found, that is, perceived benefits, adoption intention, programme flexibility and clear direction.The researchers concluded that people in developing countries have less familiarity with technologies which makes it critical for technology-based learning for such learners.
However, looking at the trend, the usage of technology has been increasing continuously. According to Wadhwani and Gankar (2020), the technology-based learning market size was 200bn in 2019 and is expected to grow at over 8% CAGR (compound annual growth rate) between 2020 and 2026. Due to the impact of COVID-19, however, the usage of online learning technologies is likely to increase faster than this projected figure.
Undoubtedly, technology has proven significant in enhancing the learning process and productivity of students (Al-Hariri and Al-Hattami, 2017). The present study attempts to investigate the effect of students' perceived benefits and adoption intention of TEL on the satisfaction of students enrolled in higher education in Chhattisgarh state. It also examines the mediating and moderating effect of students' perceived benefits of TEL in the link between students' adoption intention and satisfaction in the higher education of Chhattisgarh.
Literature review
A growing trend can be seen in educational institutions in the usage of ICTs to enhance the knowledge and skills of students as demanded in the 21st century. Understanding the effect of ICT in the workplace and everyday life, educational institutions are restructuring their curricula and other facilities in order to cope with current technologies in the teaching and learning process. Effective adoption of technologies is required in this restructuring process in order to provide knowledge of specific subject areas, to foster meaningful learning and to increase professional productivity (Tomei, 2005). TEL supports the teaching-learning process with the usage of different technologies. Prior studies (Sife et al., 2007; Demiray, 2011) have highlighted the significance of technology-based learning in improving teaching and learning in higher education. Sandars (2012) also discussed the importance and usage of TEL in today's educational environment and concluded that technology has the potential to develop an international viewpoint on teaching and learning.
Technology-enhanced learning
According to Alfraih and Alanezi (2016), traditional learning is being transformed into electronic learning as it reinforces the teaching-learning process and helps to understand different concepts easily.TEL can be defined as the delivery of learning materials and methods using information technologies to teach, learn or acquire knowledge anytime from anywhere (Turban et al., 2015).It offers benefits to learners to have flexibility and convenience in their learning process irrespective of their time and location.Even learners have the opportunity to acquire and disseminate knowledge digitally (Tetteh, 2016).Garrison (2011) also revealed that TEL provides learners to learn from their home or workplaces, and this reportedly reduces the time and cost of teaching and learning by 50-70%.Learners are free to adjust their time and location including the learning materials; meanwhile the learners can find the best instructors to deliver the quality lectures.The lectures can be attended by numerous learners at the same time with an opportunity to ask queries to experts.It also allows experts to check progress of each learners.However, technology-based learning is also helpful to them who do not want to attend face-to-face classes or do not have time to attend such classes.Despite these benefits and others that are not discussed above, if the learners do not adopt technology-based learning, they would be deprived from such a beneficial tool (Tarhini et al., 2017).Therefore, it can be said that the success of technology-based learning lies on the students' adoption intention towards such tools (Al-Qirim et al., 2018).
Evidently, due to the benefits and flexibility, technology-based learning helped in reducing learners' dropout rates (Turban et al., 2015), while some studies suggest that it records high dropout rates than the face-to-face programmes (Dodge et al., 2009;Patterson and McFadden, 2009).Many learners stop using technology-based learning courses after preliminary experiences (Dutton and Perry, 2002;Sun et al., 2008;Aixia and Wang, 2011).TEL success is clearly dependent on the students' adoption intention, from which they would be benefitted.Many universities have established technology-based learning environment in their higher education institutes, but could not get success due to some challenges (Baloyi, 2014;Kisanga, 2016;Queiros and de Villiers, 2016;Makokha and Mutisya, 2016;Chawinga and Zozie, 2016;Al-Azawei et al., 2016).However, several factors such as students' perceived benefits and adoption intention of electronic learning systems create a positive effect on learners' satisfaction, and it is also measurable even after the learning activity (Sun et al., 2008;Hui et al., 2008;Lee and Lehto, 2013;Del Barrio et al., 2013).
Previous researchers stated that TEL benefitted the educational community (Beetham and Sharpe, 2013).For instance, TEL facilitates learners to explore online educational content in their own space and time (De Jong and Van Joolingen, 1998) which makes the learners in charge of their own learning, instead of being completely dependent on teachers (Saye and Brush, 2007), and learning gap can also be fulfilled using TEL (Becker et al., 2017).In terms of whether TEL encourages the improved learning outcomes to learners, researchers found certain positive results with academic benefits (Henderson et al., 2015;Heflin et al., 2017).Further, students' perceived benefits were found to be significant with adoption intention of technology-based learning (Park, 2009;Cheng, 2011;Hair et al., 2013;Tarhini et al., 2014;Lee and Hsiao, 2014).Perceived benefits reflect that using technology for learning will benefit them in future.Thus, TEL is relatively more beneficial to the learners (Fonseca et al., 2014), but there are studies that showed negative effects of using TEL (Jacobsen and Forste, 2011) such as deterioration of interests related to reading and writing among students, distorted relations between teachers and students, dehumanizing learning environment and isolation issues when using technologies (Alhumaid, 2019).
What is technology-enhanced learning (TEL)?
The term technology-enhanced learning can be described as the application of technology in the field of teaching and learning. It is a broad category which is not defined specifically, but, in short, TEL can be defined as the combination of educators' practices regarding teaching and learning with the appropriate usage of technologies to maximize students' outcomes and experiences (Cullen, 2018). In general, TEL is understood as learning which occurs through the application of ICT and Internet-based educational technology. TEL is also termed technology-assisted learning, technology-based learning, e-learning and mobile learning.
2.2 Significance of technology-enhanced learning in higher education
TEL is significant for many reasons (Cullen, 2018). Firstly, TEL can facilitate numerous benefits for universities as well as students (Bhuasiri et al., 2012). It helps universities in reducing the significant costs invested in physical teaching and learning infrastructures (Arbaugh, 2005). Secondly, TEL also helps universities in becoming more digitized and contributing to building a digital learning society which offers knowledge and learning in a very simple and fast way to learners at any time and anywhere using Internet technologies (Taylor, 2007). And thirdly, TEL facilitates universities to integrate their services into a global-level educational learning environment (Lee, 2010). Specifically, international cooperation and links in the field of education provide numerous opportunities for online learning beyond the boundaries of one country. For instance, in a joint training programme with a foreign university, domestic students are not required to go abroad; instead they can avail themselves of the full training and services offered by the foreign university with the help of Internet-enabled technologies. In reality, it is now impossible to survive in the world without the presence of technologies. Therefore, it becomes important that everyone learns how to use technologies effectively. Arguably, being computer literate is now more significant than some traditional skills taught earlier in educational institutes (Cullen, 2018).
Operational definitions
(1) Students' perceived benefits of TEL. Perceived benefits refer to the degree to which a learner thinks that using TEL will be beneficial for his/her study in terms of time, effort and cost. Bennett and Bennett (2003) stated that students' perceived benefits are the degree to which the teachers compare a new innovation with the existing one and also consider the benefits and costs of an adopted new technology (Rogers, 1995).
(2) Students' adoption intention of TEL. Adoption intention can be defined as an individual's approach to engaging in certain behaviour (Institute of Medicine, 2002). According to Fishbein and Ajzen (1975), intention is the subjective likelihood that an individual will perform a certain task.
(3) Student satisfaction. According to Sweeney and Ingram (2001), student satisfaction is defined as the pleasure and success which students receive from the learning environment. Several other factors also influence student satisfaction, such as teachers' knowledge and performance, a positive learning environment, effective communication, interaction in the teaching-learning process, and the prestige and value of the institution (Wu et al., 2010).
Research questions
The research questions are: RQ1. What effect do students' perceived benefits and adoption intention have on satisfaction to TEL in HEIs of Chhattisgarh state?
RQ2. What effect do students' perceived benefits of TEL have on adoption intention of TEL in HEIs of Chhattisgarh state?
RQ3. Do students' perceived benefits of TEL play a moderating and mediating role between students' adoption intention and satisfaction to TEL in HEIs in Chhattisgarh state?
Methodology
3.1 Conceptual frameworks of the study
There are a few studies that explained the relationship of students' perceived benefits and adoption intention on satisfaction to technology-based learning (Sun et al., 2008;Hui et al., 2008;Lee and Lehto, 2013;Del Barrio et al., 2013) which is congruent with the objectives of the present study as shown in the following conceptual framework (see Figure 1).
Research hypotheses
The hypotheses of the study are as follows: H1. Students' perceived benefits of TEL would positively influence their satisfaction.
H2. Students' adoption intention of TEL would positively influence their satisfaction.
H3. Students' perceived benefits of TEL would positively influence students' adoption intention of TEL.
H4. Students' perceived benefits of TEL would positively mediate the link between students' adoption intention and satisfaction to TEL.
H5. Students' perceived benefits of TEL would positively moderate the link between students' adoption intention and satisfaction to TEL.
Sampling and data collection
The primary data were collected using a purposive sampling technique with "criterion sampling" (Palys, 2008). Respondents were selected according to certain specifications: undergraduate or postgraduate students using TEL, enrolled in non-technical courses in any government or private university/college in Chhattisgarh state, for at least one year. The sample size for the study was set at 600. The primary data were collected during April-November 2019 (see Table 1).
Research instrument and scale validation
Adoption of the right tool is a prime necessity for obtaining the correct form of data from the respondents. Development, selection and validation of a scale is a systematic process in research which leads to the formulation of standard tools that are considered appropriate for data collection. The present study followed the same process, with the authors adapting the constructs from previous studies. After identification and development of the measurement items, they were sent to four subject experts for scrutiny and assessment of content validity. After getting a positive response from the subject experts, the authors conducted a pilot study with a sample of 50 respondents to check the content. The content was found to be adequate and suitable for the participants to respond to (see Table 2). The present study employed partial least squares confirmatory factor analysis for scale validation with the help of SmartPLS 3 (trial version). It is a structural equation-based methodology that deploys a component-based approach for estimating the parameters. The entire process of scale validation is done in two steps, that is, reliability measures and validity measures, including convergent validity and discriminant validity.
Table 3 shows the measures with item loadings. The value of the t statistic for all items was above 1.96 and significant; thus each item made a significant contribution to its construct. From the results of the factor analysis of the measurement items, it can be observed that the factor loading for each item of each construct was found to be > 0.5 (Hulland, 1999; Truong and McColl, 2011), confirming that each item had a significant loading value and thus contributed to the formation of its respective construct.
3.4.1 Reliability measures. Internal consistency refers to the extent to which the items in a test measure the same construct and can be assessed through Cronbach's alpha (Nunnally, 1978). The Cronbach's alpha for each individual construct was found to be above 0.7. A value of α ≥ 0.7 suggests that the construct is internally consistent and fairly reliable (Nunnally, 1978). Table 3 depicts the value of Cronbach's alpha for students' perceived benefits (α = 0.707), students' adoption intention (α = 0.727) and student satisfaction (α = 0.811).
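For reference, Cronbach's alpha can be computed directly from the raw item scores as in the following minimal sketch; the array layout is an assumption (the study used standard statistical software).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```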
Students' perceived benefits
Adapted from Tarhini et al. (2017): 1. Using the free resources such as e-libraries helped me to save money and effort. 2. Using emails to communicate with other student groups helped me to save my expense and effort. 3. Use of Internet is reasonably priced. 4. Use of Internet is a good value for the money
Students' adoption intention
Adapted from Ajjan and Hartshorne (2009) and Roca et al. (2006): 1. I will use the e-learning platform on a regular basis in the future. 2. I will continue using the e-learning platform in order to fulfil my future needs. 3. I will strongly recommend others to use the e-learning platform
Student satisfaction
Adapted from Lin et al. (2018) and Tarhini et al. (2017): 1. Use of e-resources improves my ability to integrate information. 2. I am satisfied with the learning flexibility of the e-learning system. 3. I am satisfied with the online learning environment. 4. E-learning systems allow me to accomplish learning tasks more quickly. 5. Using the e-learning system increases my productivity
The reliability measure can also be assessed through the value of Rho A. A value of Rho A ≥ 0.7 is also considered a fair measure of reliability. Table 3 depicts the value of Rho A for students' perceived benefits (0.724), students' adoption intention (0.731) and student satisfaction (0.821). Thus, the constructs confirm the reliability of the data for the study.
3.4.2 Validity measures. 3.4.2.1 Convergent validity. Convergent validity is the degree to which multiple items measuring the same concept are in agreement (Fornell and Bookstein, 1982; Barclay et al., 1995). A value of composite reliability (CR) ≥ 0.7 suggests internal consistency reliability of the measures used in the study (Bagozzi and Yi, 1988; Hair et al., 2010). Table 3 depicts the value of CR for students' perceived benefits (0.769), students' adoption intention (0.846) and student satisfaction (0.869), indicating a high degree of composite reliability of the scale.
The average variance extracted (AVE) is a determinant of the convergent validity of the scale. It signifies the amount of variance captured by a construct from its scale. A value of AVE ≥ 0.5 provides fair evidence of convergent validity for the construct (Hu et al., 2004; Henseler et al., 2009). Table 3 depicts the value of AVE for students' perceived benefits (0.573), students' adoption intention (0.648) and student satisfaction (0.570). Thus, all constructs are fairly good in terms of convergent validity.
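The CR and AVE figures reported above follow the usual formulas based on standardized loadings; a minimal sketch is given below for reference, assuming the loadings of one construct are supplied as a simple array.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2                 # assumes standardized loadings
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)
```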
3.4.2.2 Discriminant validity. Discriminant validity signifies that the constructs are independent of each other; it requires a low correlation between the intended construct's measurement and the other constructs in the study (Cheung and Lee, 2010; Hair et al., 2010), meaning that the measures belong to their own constructs (Fornell and Larcker, 1981). In partial least squares measurement, it involves comparing the squared correlation between constructs with the variance extracted for a construct (Komiak et al., 2004; Henseler and Chin, 2010). The discriminant validity value shown for student satisfaction (0.755) is higher than the corresponding construct correlations, and the measurement model can therefore be said to be satisfactory (Henseler and Chin, 2010).
3.5 Data analysis
AMOS v25 (licensed), SPSS v25 (licensed) and Smart-PLS 3 (trial version) have been used for analysing the primary data for the present study.
Testing of H1 and H2
Hierarchical multiple regression (step-wise) was run to determine whether students' adoption intention and perceived benefits of TEL have an effect on student satisfaction. Table 5 gives the details of the regression model with student satisfaction to TEL as the criterion variable. To meet the assumptions of multiple regression, partial regression plots and a plot of studentized residuals were assessed to check linearity. The value of the Durbin-Watson statistic was 1.944, which indicated independence of residuals. There was no multicollinearity in the data, as all tolerance values were greater than 0.1 and the variance inflation factors (VIF) ranged from 1.000 to 1.295, well below the level that may indicate a multicollinearity concern (O'Brien, 2007); that is, there was no problematic correlation among the predictor variables. There were no values of Cook's distance above 1, and the data were approximately normal as assessed by a Q-Q plot. The results of the hierarchical multiple regression analysis for the composite scores of the independent variables are presented in Table 5 and Figure 2.
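The same step-wise procedure and assumption checks can be reproduced outside SPSS; the following sketch, with hypothetical column names, uses statsmodels to fit the two models and report the change in R², the Durbin-Watson statistic and the VIFs.

```python
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

# df is assumed to hold composite scores in columns:
# 'satisfaction', 'adoption_intention', 'perceived_benefits'
def hierarchical_regression(df):
    y = df["satisfaction"]
    # Step 1: adoption intention only
    X1 = sm.add_constant(df[["adoption_intention"]])
    m1 = sm.OLS(y, X1).fit()
    # Step 2: add perceived benefits and inspect the change in R^2
    X2 = sm.add_constant(df[["adoption_intention", "perceived_benefits"]])
    m2 = sm.OLS(y, X2).fit()
    delta_r2 = m2.rsquared - m1.rsquared
    dw = durbin_watson(m2.resid)                         # independence of residuals
    vifs = [variance_inflation_factor(X2.values, i)      # multicollinearity check
            for i in range(1, X2.shape[1])]
    return m1, m2, delta_r2, dw, vifs
```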
In model 2, students' perceived benefits of TEL made a significant contribution to the variation in student satisfaction (ΔF(1,597) = 34.604, p < 0.01). The introduction of the factor students' perceived benefits of TEL explained an additional 2.4% of the variance in student satisfaction, with 57.8% explained overall (R = 0.76, ΔR² = 0.024). The predictor students' perceived benefits of TEL was found to have a significant positive association (β = 0.178, t = 5.883, p < 0.01) with student satisfaction.
The results indicate that all predictors together explained 57.8% of the variance; this total comprised 55.4% for students' adoption intention of TEL and 2.4% for students' perceived benefits of TEL (see Figure 2).
Testing of H3
A linear regression was run to find the effect of students' perceived benefits of TEL on students' adoption intention of TEL. To assess linearity, a scatter plot of students' adoption intention against perceived benefits of TEL with a superimposed regression line was plotted. Visual inspection of the plot indicated a linear relationship between the variables under study. There were homoscedasticity and normality of residuals. Students' perceived benefits of TEL significantly predicted students' adoption intention of TEL, F(1,598) = 176.308, p < 0.001, accounting for 47.7% of the variation in adoption intention of students. The adjusted R² = 22.6% was a medium-sized effect (Cohen, 1988). The predicted regression equation was: students' adoption intention of TEL = 8.175 + (0.254 × students' perceived benefits of TEL) (see Tables 6, 7 and 8, Figure 3).
Testing of H4
In order to find the mediating effect of students' perceived benefits of TEL as a link between students' adoption intention and satisfaction to TEL, the PROCESS macro in SPSS developed by Andrew Hayes was used (Hayes, 2013) (see Table 9).
Regression analysis was used to investigate whether students' perceived benefits (SPB) of TEL mediate the effect of students' adoption intention (SAI) on their satisfaction with TEL. Results indicated that students' adoption intention of TEL was a significant positive predictor of students' perceived benefits, β = 0.897, SE = 0.0676, t = 13.2781, p < 0.05, and that students' perceived benefits of TEL were also a significant predictor of student satisfaction (SS), β = 0.1537, SE = 0.0261, t = 5.8825, p < 0.05. This result supports the mediational hypothesis. After controlling for the mediator, students' adoption intention of TEL remained a significant predictor of student satisfaction (β = 1.07), consistent with partial mediation. Approximately 57.81% of the variation in student satisfaction was accounted for by the predictors (R² = 0.5781). The indirect effect was tested using a percentile bootstrap estimation approach (Shrout and Bolger, 2002), implemented with the PROCESS macro (Hayes, 2013). These results indicated that the indirect coefficient was significant, β = 0.1378, SE = 0.093, 95% CI = 0.0607-0.3604. Thus, students' adoption intention of TEL is associated with student satisfaction, and this association is partially mediated (0.13) by students' perceived benefits of TEL.
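The percentile-bootstrap test of the indirect effect performed by the PROCESS macro can be approximated as in the sketch below; the variable names and the number of bootstrap resamples are assumptions, not the study's settings.

```python
import numpy as np
import statsmodels.api as sm

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b of x on y through mediator m."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        # path a: x -> m
        a = sm.OLS(m[idx], sm.add_constant(x[idx])).fit().params[1]
        # path b: m -> y, controlling for x
        Xb = sm.add_constant(np.column_stack((x[idx], m[idx])))
        b = sm.OLS(y[idx], Xb).fit().params[2]
        boots[i] = a * b
    return boots.mean(), np.percentile(boots, [2.5, 97.5])
```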
Testing of H5
In order to find the moderating (interaction) effect of students' perceived benefits and adoption intention of TEL on student satisfaction, step-wise hierarchical multiple regression analysis was employed. All the variables in the study (namely the dependent, independent and moderator variables) were transformed into their respective standard z-scores.
Then, the interaction component between the obtained z-scores of students' perceived benefits and adoption intention of TEL was calculated by creating a new interaction variable. Hierarchical multiple regression (step-wise) was run to find the individual and interaction effects of students' perceived benefits and adoption intention of TEL on the dependent variable, student satisfaction. The model was first assessed without the interaction effect (F(2,597) = 408.996, p < 0.01) and explained 57.8% of the variance in student satisfaction. Then the interaction component was taken into consideration (F(1,596) = 311.170, p < 0.01), explaining 61.0% of the variance in student satisfaction. Both models were significant. The value of ΔR² = 0.032 means that the interaction (moderation) effect of the variables explained an additional 3.2% of the variability in the model. Thus, combining the perceived benefits component with students' adoption intention to use online learning can yield higher satisfaction among the students (see Tables 10, 11, 12 and 13).
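The moderation test on standardized scores can be expressed compactly as follows; the column names are hypothetical and the sketch simply mirrors the two-step comparison of R² described above.

```python
import statsmodels.formula.api as smf

# df is assumed to be a numeric DataFrame with columns:
# 'satisfaction', 'adoption_intention', 'perceived_benefits'
def moderation_test(df):
    z = (df - df.mean()) / df.std(ddof=1)                       # z-scores of all study variables
    z["interaction"] = z["adoption_intention"] * z["perceived_benefits"]
    base = smf.ols("satisfaction ~ adoption_intention + perceived_benefits", data=z).fit()
    full = smf.ols("satisfaction ~ adoption_intention + perceived_benefits + interaction",
                   data=z).fit()
    delta_r2 = full.rsquared - base.rsquared                    # variance added by the interaction
    return base.rsquared, full.rsquared, delta_r2, full.params["interaction"]
```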
Findings and discussion
(1) The result of the first hypothesis test indicated that students' perceived benefits of TEL positively and significantly influenced student satisfaction, and the hypothesis is accepted. The outcome is consistent with previous research studies (Al-Hawari and Mouakket, 2010; Ifinedo, 2016). Thus, it can be concluded that, in comparison with traditional learning, the perceived benefits of TEL make it a more productive and performance-based medium of learning among students. Students feel more connected with TEL, which further enhances their academic performance and satisfaction.
(2) The outcome of the second hypothesis test revealed that students' adoption intention of TEL positively and significantly influenced student satisfaction, and the hypothesis is accepted. The outcome is consistent with previous studies (Davis et al., 1989; Seddon, 1997; Limayem and Cheung, 2008; Stone and Baker-Eveleth, 2013; Cheng, 2014; Tang et al., 2014; Liao et al., 2015; Chen et al., 2015; Ifinedo, 2016). Thus, it can be concluded that a higher intention of students to take up the online medium of learning results in a better understanding of learning materials and, in turn, higher satisfaction.
(3) The outcome of the third hypothesis test revealed that students' perceived benefits of TEL positively and significantly influenced their adoption intention of TEL, and the hypothesis is accepted. Similar outcomes were found in previous studies (Ong and Lai, 2006; Lee et al., 2011; Chu and Chen, 2016; Salloum and Shaalan, 2018). Thus, it can be said that students' perceived benefits of online learning generate a higher intention to adopt online learning. Considering the various benefits, such as access to quality learning materials at lower cost, at their own time and place and at their own learning pace, these benefits lead students to adopt technology-based learning for their study.
(4) The result of the fourth hypothesis test concluded that students' adoption intention of TEL was associated with student satisfaction, partially mediated by students' perceived benefits of TEL, and the hypothesis is accepted. Thus, it can be inferred that the various benefits offered by TEL create a positive intention to adopt technology for academic improvement, which in turn creates a higher level of satisfaction among learners.
(5) The outcome of the fifth hypothesis test revealed that the interaction (moderation) effect of students' perceived benefits on the effect of students' adoption intention on satisfaction was significant, and the hypothesis is accepted. Thus, it can be concluded that combining the component of students' perceived benefits with the adoption intention to use online learning can yield higher satisfaction among students. It is, therefore, necessary to make students aware of the advantages of online learning practices and to create adoption intention of TEL in order to achieve a higher level of student satisfaction.
Contributions of the study
The present study contributes to theory and practice, specifically in Chhattisgarh higher education, where people are less aware of technologies or fewer people are using technologies for learning purposes. At a time of COVID-19, when educational institutions are struggling to know more about their potential learners, this study provides knowledge about students' adoption intention and perceived benefits and their relation to learners' satisfaction with TEL. It also shows the relationship of students' perceived benefits to adoption intention, demonstrating that the various benefits can attract and retain learners for a long time if they are satisfied with the institutional learning services.
In order to remain competitive in the educational sector, institutions must change their strategies for reaching out to potential learners, and currently all the educational institutions in the world are taking their services online to provide the best services to learners. Over time, many new challenges will emerge before them, such as how to provide better services, what benefits students seek, how to create adoption intention among students and what services make learners satisfied. These questions keep arising once educational institutions start working on this, and this study will help them to understand learners' perspectives and expectations: what benefits learners seek, how their adoption intention is created and what is needed for them to be satisfied.
The findings of the study will help not only Chhattisgarh higher education but also all educational institutions that are striving to understand which benefits actually turn into students' adoption intention and, further, into student satisfaction. The COVID-19 pandemic has forced all educational institutions to take their services online or perish over time, as the WHO has also announced that COVID-19 will be among us for a long time (Jagannath, 2020).
Conclusion
Over the last decade, the use of TEL has recorded an exponential increase, especially among higher education students, with the aim of improving academic results (Walker et al., 2016), and in future the usage of technologies will be much higher in order to enhance productivity and performance in the teaching-learning process. TEL allows students to explore educational content in their own space and time, and it also helps them to take charge of their own learning instead of learning entirely through the teacher. This article examined the relationships among students' perceived benefits, adoption intention and satisfaction with TEL for students enrolled in different higher education institutions in Chhattisgarh state. The outcome clearly indicates that TEL is affecting today's educational environment enormously by improving the academic results of students, as previous studies have evidenced that TEL helps students in closing the education/learning gap (Van Der Schaik, 2018).
Figure 2 .
Figure 2. Model specification for students' adoption intention and perceived benefits of TEL on student satisfaction
Figure 3 .
Figure 3. Model specification for students' perceived benefits on students' adoption intention of TEL
Table 2 .
Theoretical construct and measurement scale Table 4 elucidates values of students' adoption intention 0.805, students' perceived benefits 0.687 and
Table 5 .
Result of hierarchical multiple regression analysis
Table 6 .
Model summary
Table 13 .
Outcomes of the proposed hypotheses | 6,984.6 | 2021-08-17T00:00:00.000 | [
"Education",
"Computer Science",
"Business"
] |
Modeling the role of environmental variables on the population dynamics of the malaria vector Anopheles gambiae sensu stricto
Background The impact of weather and climate on malaria transmission has attracted considerable attention in recent years, yet uncertainties around future disease trends under climate change remain. Mathematical models provide powerful tools for addressing such questions and understanding the implications for interventions and eradication strategies, but these require realistic modeling of the vector population dynamics and its response to environmental variables. Methods Published and unpublished field and experimental data are used to develop new formulations for modeling the relationships between key aspects of vector ecology and environmental variables. These relationships are integrated within a validated deterministic model of Anopheles gambiae s.s. population dynamics to provide a valuable tool for understanding vector response to biotic and abiotic variables. Results A novel, parsimonious framework for assessing the effects of rainfall, cloudiness, wind speed, desiccation, temperature, relative humidity and density-dependence on vector abundance is developed, allowing ease of construction, analysis, and integration into malaria transmission models. Model validation shows good agreement with longitudinal vector abundance data from Tanzania, suggesting that recent malaria reductions in certain areas of Africa could be due to changing environmental conditions affecting vector populations. Conclusions Mathematical models provide a powerful, explanatory means of understanding the role of environmental variables on mosquito populations and hence for predicting future malaria transmission under global change. The framework developed provides a valuable advance in this respect, but also highlights key research gaps that need to be resolved if we are to better understand future malaria risk in vulnerable communities.
Background
Among the potential effects of climate change on human health, the impact on infectious diseases has attracted increasing attention in recent years [1]. Vector-borne diseases (VBDs) are likely to be particularly vulnerable given the poikilothermic nature of vector survival and development, as well as the effects of temperature on pathogen development. Although the link between climatic variables and transmission has attracted interest for VBDs such as dengue and schistosomiasis, the combined global mortality of these diseases is less than 7% of that due to malaria [2], and this, combined with the significant effects of climatic variables on multiple stages of the transmission cycle, has led to malaria remaining an important focus of ongoing debate regarding climate change and VBDs [3,4].
In the context of better understanding the role of weather and climate on transmission, two modeling approaches are possible. Statistical models use empirical relationships between climatic variables and past (or current) disease incidence (or prevalence) to predict future disease trends [5,6]. Mechanistic models, on the other hand, adopt a process-based approach, incorporating known biological, epidemiological and entomological relationships affecting vector and pathogen vital rates and formulating mathematically how these combine [7][8][9]. Both types of model have important roles to play in improving our understanding of climate-driven transmission changes, but the focus here is on exploiting the explanatory power of the latter.
A vital component in developing reliable VBD transmission models is establishing a realistic model of the vector population dynamics, yet only a few studies have explicitly modeled and parameterized the impact of climatic drivers on vector vital rates [8,[10][11][12]. While these studies have greatly improved our understanding of the relative importance of temperature, rainfall and relative humidity (RH) on vector populations, they also highlight the need to develop a comprehensive mathematical framework for analysing how a range of environmental factors, arising at different spatial scales, combine at the level of breeding sites to affect stage-specific vector abundance in malaria-affected regions.
This work aims to provide such a framework by formulating and parameterizing environment-vector relationships through surveying and modeling relevant experimental and field data, and incorporating these relationships within a low-dimensional, deterministic mathematical framework. Model simplicity permits ease of integration into malaria transmission models and the model is calibrated and validated against longitudinal Anopheles gambiae abundance data from Tanzania [13]. The model also highlights where further experimental and modeling work is required to improve parameterization, in addition to developing a framework readily generalized to different Anopheles species and other disease vectors.
Methods
Given that An. gambiae s.s. development and mortality depends on the life cycle stage and that field data available to parameterize mathematical models is often collected daily, a stage-structured, discrete-time model (with a daily time-step) is motivated. An alternative framework is based on physiological, rather than chronological, age and this has been adopted elsewhere [7,8,10]. In physiological age-structured models, progression through the life cycle is dependent on temperature conditions within a time-step and the minimum temperature for physiological development. However, while processes such as age-dependent mortality, heterogeneities in larval instars, and oviposition differences between gonotrophic cycles are more naturally incorporated within such approaches, there are several drawbacks of relevance to this article.
For a general physiological age-structured model of the form n(t + 1) = M n(t), where n = (n1, n2, n3, n4)^T and M is the projection matrix, the high-dimensional nature of M increases by an order of magnitude as temperature measurements become more precise. The dependence of development on other factors (such as RH for adults) also increases the complexity of M, as well as making an implicit assumption about the linearity of development with temperature that is often violated. Thus, a low-dimensional approach is instead adopted here, providing a simple, structurally-parsimonious, deterministic model that more transparently illustrates the basic structure that may be built upon in future model development, is considerably easier to construct, analyse and interpret, and may be readily appended to malaria transmission models.
Immature An. gambiae s.s. pass through three distinct aquatic stages (eggs, larvae (instars L1 to L4) and pupae) prior to adult development. Let n_i(t) represent the number of vectors in state i (where i = 1, 2, 3 and 4 refers to eggs, larvae, pupae, and adults respectively). The exposed nature of breeding sites results in considerable vulnerability to environmental influences, and the impacts of rainfall, temperature, and biotic effects on immature survival and development are considered here. For immature stages, the daily survival probability p_i of stage i is assumed to be determined by (independent) factors attributable to the mean daily water temperature T_W (°C), cumulative daily rainfall R_t (mm), prolonged periods of desiccation D (days), and density-dependence DD, so that p_i = p_i(T_W) p_i(R_t) p_i(D) p_i(DD) (where i = 1, 2, 3), while, for adults, p_4 = p_4(T_A, RH), where T_A is the mean daily air temperature (°C) and RH the relative humidity (%). If n_i(t) represents the number of (female) An. gambiae s.s. in stage i at the breeding site at time t, then the stage abundances are updated daily according to the population model (3), where F_4 is the average number of eggs laid per day per female adult, P_i is the proportion of vectors surviving and remaining in stage i from t to t + 1, and G_i the proportion surviving and progressing from stage i from t to t + 1. To calculate P_i and G_i, the expressions from [14] are used together with the survival probabilities in (2). The resultant population model (3) is then calibrated and validated against vector abundance data from [15].
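Because the explicit update equations (2)-(3) are not reproduced above, the following sketch shows one plausible daily bookkeeping implied by the definitions of F_4, P_i and G_i; the expressions for P_i and G_i from [14] are treated as given inputs, so this is an illustrative assumption rather than the paper's exact model.

```python
import numpy as np

def step_population(n, P, G, F4):
    """One daily update of the stage-structured model (eggs, larvae, pupae, adults).

    n  : current abundances (n1..n4)
    P  : proportions surviving and remaining in each stage over the day (P1..P4)
    G  : proportions surviving and progressing from stages 1-3 over the day (G1..G3)
    F4 : average number of eggs laid per female adult per day
    """
    n1, n2, n3, n4 = n
    P1, P2, P3, P4 = P
    G1, G2, G3 = G
    return np.array([
        P1 * n1 + F4 * n4,   # eggs: those remaining as eggs plus new oviposition
        G1 * n1 + P2 * n2,   # larvae: hatched eggs plus remaining larvae
        G2 * n2 + P3 * n3,   # pupae: pupated larvae plus remaining pupae
        G3 * n3 + P4 * n4,   # adults: newly emerged plus surviving adults
    ])
```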
Results and discussion
Modelling breeding site hydrodynamics
To capture the dependence of vector breeding site characteristics on environmental conditions, sites are modeled as right-centered cones to account for the increasing surface area of water available for oviposition as rainfall increases [16]. Let V_t be the volume of water (ml) within the site at time t given a fixed site opening of surface area A_T (mm²), A′ the exposed surface area of water within the site after rainfall (mm²) (where A′ ≤ A_T) (which is then used to calculate the evaporation E_t from the site at the end of day t), and h′ the water depth after all daily rainfall (mm) (see Figure 1).
Here E_t is the evaporation from the site on day t (mm), and the total volume of water on day t, V′, is the existing volume plus the volume of new rainfall entering the site. To determine h′, consideration of the geometry of the cone before and after rainfall on day t gives, using similar triangles, V′/V_0 = (h′/h_0)³ (where V_0 and h_0 are the initial volume and depth of water respectively). Rearranging for h′, using the expression above for V′, and substituting into (5) gives (6). To calculate E_t, the standard FAO Penman-Monteith method is used to first calculate the daily reference crop evapotranspiration ET_0 (mm/day) [17], in which Δ is the slope of the vapour pressure curve (kPa °C⁻¹) (which depends on T_A), R_n the daily net radiation transferred to the breeding site (MJ m⁻² day⁻¹) (which, for a given location and day number, depends on the daily cloud fraction CF (through its relationship with the number of sunshine hours per day), dewpoint temperature T_DP (°C), minimum daily temperature T_min (°C) and maximum daily temperature T_max (°C)), G the soil heat flux (MJ m⁻² day⁻¹), γ the psychrometric constant (kPa °C⁻¹) (constant for a given site), U_2 the wind speed at 2 m (m s⁻¹), e_s the saturation vapour pressure (kPa) (dependent on T_min and T_max), and e_a the actual vapour pressure (kPa) (dependent on T_DP). The climatic variables R_t, T_A, T_DP and CF are readily available from the ECMWF ERA-40 re-analysis dataset [18], while U_2 may be approximated from U_10 (the wind speed at 10 m, available from ERA-40) using the conversion U_2 = 0.748 U_10 [17]. The outgoing heat conduction between the water body and surrounding soil, G, is typically negligible compared to R_n [17] and, as in [17], is neglected here. Daily evaporation from an exposed breeding site is likely to differ from ET_0, however, due to differences in the reflectivity, heat capacity and typical microclimatic conditions of water bodies compared to crops. Pan evaporation E_pan, the evaporation rate from pans filled with water and sunken into the ground, is more akin to breeding site conditions, and hence E_t can be estimated from ET_0 using K_p, an empirically derived pan coefficient (dimensionless) that depends on the type of pan, the breeding site surroundings, RH (obtained from RH = 100 exp(17.27 T_DP/(237.3 + T_DP) − 17.27 T_A/(237.3 + T_A))) and U_2. Although immature An. gambiae s.s. typically prefer clear water, examples of breeding within turbid waters also exist [19]; however, turbidity does not typically affect ET_0 (and hence E_t) by more than 5% [17], so it is ignored here. Daily values of K_p are estimated using the empirical tables for Colorado sunken pans (with 1 m radius dry fetch) in [17], based on daily values of RH and U_2. A summary of model parameters is given in Table 1.
Figure 1. Geometry assumed for modeling breeding site hydrodynamics.
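The following sketch wires these hydrodynamic pieces together for a single day. The paper's equations for the water balance and for ET_0 are not reproduced above, so ET_0 is taken as an input, the rainfall and overflow handling are simplified assumptions, and the pan step uses the usual FAO convention E_t = ET_0/K_p; the cone relations V′/V_full = (h′/h_max)³ and A′ ∝ h′² follow from the geometry described in the text.

```python
def site_water_balance(V, A_T, h_max, R_t, ET0, Kp):
    """One day of water balance for a conical breeding site.

    V     : current water volume (ml); A_T : rim area (mm^2);
    h_max : full-cone depth (mm); R_t : daily rainfall (mm);
    ET0   : reference evapotranspiration (mm/day); Kp : pan coefficient.

    Assumed steps: rainfall over the rim adds R_t * A_T of water, any excess
    over the full cone spills, the post-rain depth follows the similar-triangle
    relation V'/V_full = (h'/h_max)**3, the exposed water surface scales as
    (h'/h_max)**2, and evaporation follows the FAO pan convention E_t = ET0/Kp.
    """
    ml = 1e-3                                   # mm^3 -> ml
    V_full = A_T * h_max / 3.0 * ml             # volume of the full cone (ml)
    V_rain = min(V_full, V + R_t * A_T * ml)    # add rainfall, spill excess
    h = h_max * (V_rain / V_full) ** (1.0 / 3.0)
    A_exposed = A_T * (h / h_max) ** 2          # exposed water surface (mm^2)
    E_t = ET0 / Kp                              # daily evaporation (mm)
    V_next = max(0.0, V_rain - E_t * A_exposed * ml)
    return V_next, h

# Illustrative run with the breeding-site dimensions used later in calibration.
V = 1000.0                                      # initial volume (ml)
for rain in [0.0, 12.0, 3.0, 0.0]:
    V, h = site_water_balance(V, A_T=1.79e6, h_max=97.0, R_t=rain, ET0=5.0, Kp=0.7)
    print(round(V, 1), round(h, 1))
```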
Environmental influences on immature development
Rainfall
Rainfall typically correlates strongly with vector abundance and malaria prevalence [20]. Anopheline species often differ in their habitat preference: An. gambiae s.s. prefer to breed in small, shallow, temporary rain pools or stagnant bodies of water fully exposed to the sun (such as hoof marks, tyre tracks or other pools created during land use changes) [21], while other species within the An. gambiae complex differ in their preference for freshwater, brackish and saline water [19]. To capture the dependence of oviposition behaviour on environmental conditions, let N_EP and N_EO be the number of eggs per female per oviposition produced and laid (respectively), so that N_EO = f_t N_EP, where 0 ≤ f_t ≤ 1 is the proportion of eggs laid given the environmental conditions on day t. An. gambiae s.s. oviposition may be influenced by two signals: a chemical cue indicating the suitability of habitat water for oviposition, and the existing density of juveniles present [22][23][24][25]. Dependence on the latter is quantified using the oviposition index OI introduced in [26]. Using [24] and refitting to find OI as a function of the number of immatures per ml, ρ_t (using data on L1 and L2 instars), gives the relation (10). It is shown in [24] that this does not depend on the number of eggs present, while [22] demonstrates that pupae presence also has no significant influence on oviposition choice. Thus, for the model here, where n_i(t) represents the number of vectors in stage i, the relevant density is the larval density ρ_t = n_2(t)/V_t. OI is defined in terms of N_T and N_S, the number of eggs laid in the test substrate (pool water with larvae) and control substrate (pool water without larvae) respectively, whereupon substituting from (10), and assuming that L3 and L4 presence has the same effect on site-attractiveness, gives f_t. In addition to creating breeding sites and influencing the characteristics of existing pools, high levels of rainfall have been associated with significant immature mortality, either due to flushing from habitats or from secondary effects [27]. These are aggregated here into total rainfall-induced mortality, modeling the decrease in survivorship by letting p_i(R_t) represent the daily survival probability of immatures in stage i given rainfall R_t. It is assumed that p_i(R_t) = exp(−σ_i R_t), where σ_i quantifies the decrease in survival of stage i. Given the focus on L1 and L4 larvae in [27] and the absence of data elsewhere on egg and pupal mortality due to rainfall, eggs and pupae are assumed to respond similarly to L1 and L4 larvae respectively (although pupal response may differ from L4 larvae in reality due to their ventral air space that aids buoyancy, yet significantly increases mortality if this hydrostatic balance is disrupted [28]). Assuming average L1 and L4 losses of 17.5% and 4.8% per night respectively over the study period, with 207 mm rainfall across 26 rainfall nights (K. P. Paaijmans, pers. comm.), gives σ_1 = 0.0242 mm⁻¹ and σ_3 = 0.00618 mm⁻¹. Given that the model here does not distinguish between larval instars, the average duration spent in each instar (as a function of T_W) is accounted for by interpolating between L1 and L4 mortalities in [27] to determine L2 and L3 survival, whereupon averaging over all temperatures gives σ_2 = 0.0127 mm⁻¹ (Figure 2a). The prolonged absence of water also affects immature longevity; anopheline egg survival in desiccating conditions is two to three weeks [29], while An. gambiae s.l. eggs are viable for up to 12 days without water [30].
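A small numerical check of the rainfall-mortality coefficients quoted above: with the exponential survival form p_i(R_t) = exp(−σ_i R_t) and the mean nightly rainfall implied by 207 mm over 26 rainfall nights, the fitted σ values reproduce the quoted nightly losses of 17.5% (L1-like, applied to eggs) and 4.8% (L4-like, applied to pupae).

```python
import math

# Fitted rainfall-mortality coefficients (per mm of nightly rainfall).
sigma = {"eggs": 0.0242, "larvae": 0.0127, "pupae": 0.00618}

def rain_survival(stage, R_t):
    """Daily survival factor due to rainfall, p_i(R_t) = exp(-sigma_i * R_t)."""
    return math.exp(-sigma[stage] * R_t)

# Consistency check: 207 mm of rain over 26 rainfall nights is ~8 mm per night.
mean_rain = 207.0 / 26.0
print(1.0 - rain_survival("eggs", mean_rain))    # ~0.175 (17.5% nightly loss)
print(1.0 - rain_survival("pupae", mean_rain))   # ~0.048 (4.8% nightly loss)
```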
To model the decrease in egg viability in dry habitats, the findings of [31] are used, which demonstrate that the duration of exposure to desiccating conditions is a better measure of egg viability than soil moisture content. If p_i(D) is the daily survival probability of stage i given D days without water, a functional form is adopted in which ω_i quantifies the sensitivity of stage i to desiccation and which ensures that survival is near unity when D is small and approaches zero as desiccation increases. Least-squares estimation using field populations under medium-moisture conditions gives ω_1 = 0.405 days⁻¹ (R² > 0.99). Survival of larvae and pupae may be similarly parameterized using [29], which demonstrates that L4 larvae survive significantly better than L1, L2 and L3 instars in such conditions; weighting by the average duration in each instar stage gives ω_2 = 0.855 days⁻¹ (R² = 0.97). In the absence of data on pupal survival, pupae are assumed to demonstrate a similar response to L4 larvae, whereupon using [29] gives ω_3 = 0.602 days⁻¹ (R² = 0.94) (Figure 2b).
Temperature
Despite the strong influence of water temperature on immature populations, few detailed experimental studies have been undertaken. The model here requires the daily survival probability p_i(T_W) and stage duration d_i(T_W) for each i. For all three stages, age-independent mortality is assumed and hence p_i(T_W) = exp(−1/d_i(T_W)) (Figure 3a and 3b).
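As a worked example of this relation, the sketch below converts a mean stage duration into the implied age-independent daily survival; the duration values used are purely illustrative, since the actual d_i(T_W) curves come from the corrected fits of Bayoh and Lindsay.

```python
import math

def daily_survival_from_duration(d_i):
    """Age-independent daily survival implied by a mean stage duration d_i(T_W):
    p_i(T_W) = exp(-1 / d_i(T_W))."""
    return math.exp(-1.0 / d_i)

# Hypothetical stage durations (days) at some water temperature.
for d in [2.0, 10.0, 1.5]:
    print(round(daily_survival_from_duration(d), 3))
```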
Egg survival is poor outside 10-40°C and [32] find that no An. gambiae s.s. eggs survive more than five hours at or above 41°C, with survival decreasing exponentially beyond 40°C. For egg development time d_1(T_W), the functional form of [33], with the corrected coefficients of Bayoh and Lindsay (unpublished data) (Table 2), is adopted. Of the juvenile stages, larval survival demonstrates the strongest dependence on temperature, and the effect of competition between An. gambiae s.s. and Anopheles arabiensis on temperature-dependent survival has been examined [34]. The relationship between survival, development and water temperature, and age-dependent mortality, for An. gambiae s.s. is considered in [35]. Larval duration is parameterized as a function of T_W in [5], but this is for An. gambiae s.l. rather than An. gambiae s.s. Moreover, this parameterization is based only on temperatures between 23.0 and 32.8°C, and extrapolating to temperature extremes gives results inconsistent with the experimental findings in [33] (such as development times around 30 days at 18°C in the former compared to 15 days in the latter). While [10] provides a literature survey of larval development times as a function of T_W, eight of the twelve data points for An. gambiae s.s. are calculated from [33] on the assumption of eggs and pupae developing within one day, which is inconsistent with the experimental data in the latter. The revised coefficients from Bayoh and Lindsay (unpublished data) are therefore used to determine d_2(T_W).
Aside from the work of [36] on the effects of temperatures from 21.2 to 29.5°C on An. gambiae s.l. pupal mortality, and [33], there is little experimental data with which to parameterize pupal development and survival. The latter, with the corrected values of Bayoh and Lindsay (unpublished data), is therefore used to parameterize d_3(T_W).
Finally, it is important to use water temperature, rather than air temperature, to calculate juvenile survival and development. The difference between mean daily water and air temperatures is typically around 3-6°C, depending on factors such as breeding site dimensions, microclimate and weather conditions [32,37]. To account for this, it is assumed that T_W = T_A + ΔT, where ΔT > 0 is assumed to capture all thermodynamic processes taking place at breeding sites leading to a difference between mean water and air temperatures. Lower and upper temperature thresholds for juveniles are taken from [33].
Predation and density-dependence
Density-dependent juvenile mortality arises from several sources. Body size and intra-species competition for resources, together with inter-species competition, significantly affect the population dynamics of many mosquito species, and recent work has demonstrated the importance of larval density on juvenile Anopheles development and ecology [38]. Here, only within-stage density-dependent mortality is assumed and the potential effects of juvenile density on adult longevity or fertility are not considered. Figure 4 demonstrates the dependence of larval survival on existing larval density (H Tsila, unpublished data), while field populations of Anopheles larvae typically demonstrate low densities (for species that do not breed in tree holes or containers): some field estimates suggest densities of less than 0.3/ml in rice fields, pools and small ponds [36,39], while others suggest densities around 0.02-0.06 larvae/ml and 1.5 larvae/ml [40,41] (respectively). Comparing these estimates with Figure 4 suggests that larval densities in field populations occur in regimes where intra-species competition for resources is minimal, so that density-dependent mortality is most likely due to predation, although cannibalism may also occur [42].
Field observations also suggest the spatial aggregation of juvenile An. gambiae s.l., with larvae typically distributed negative binomially [43]. To model the effects of density-dependence, these observations are incorporated within the framework of [44] for developing first-principles population models given knowledge of intra-species competition and spatial distribution. For larval populations following a negative binomial distribution and demonstrating predominantly contest competition (given the dominance of predation, also consistent with findings elsewhere such as [40]), [44] demonstrates that, if X_t is the population size at time t, X_{t+1} follows a contest-competition map in which m is the number of resource sites across which the population is distributed, λ the aggregation parameter of the negative binomial distribution and b a positive constant. The value of λ is calculated by averaging the aggregation parameters from the five experiments in [43] for which the negative binomial provides the best fit, to obtain λ = 1.5. To determine b, consider, without loss of generality, an arbitrary one litre volume of water within a breeding site and divide this into 1 ml blocks (so m = 1000). The observed difference in juvenile mortality between field data and the contribution from temperature and rainfall is attributed to density-dependence. Given that the datasets used consider survival from L1 instars and that the duration of field studies is often longer than the development time from L1 to pupae, it is assumed that predation reduces larval and pupal survival and acts identically on both stages. Since the population affected is n_2(t) + n_3(t), the number of larvae and pupae per litre follows from (n_2(t) + n_3(t))/V_t and, setting p_i(DD) = X_{t+1}/X_t, the daily larval and pupal survival probability due to density-dependence, p_2(DD) and p_3(DD), is obtained for i = 2 and 3. If n_2(t) + n_3(t) = 0 or V_t = 0, p_i(DD) is assumed to be unity. Predation on eggs is assumed to be negligible by comparison, and anopheline rice-field survival data from [39], [41] and [45] is used to provide seven independent datasets to fit b at ΔT = 3°C, 4°C, 5°C and 6°C. For each dataset, air temperature and rainfall data from the nearest meteorological station (using [46] and with missing values interpolated) are used to calculate the daily survival and development of larvae and pupae due to climatic influences (assuming fixed vector density and no desiccation effects for rice fields) and to estimate the additional mortality required to agree with the study data (attributed to p_i(DD)). Two approaches are adopted, namely to (a) calculate the number of juveniles remaining after a fixed number of days (determined by the study design), and (b) track the number of cohort larvae and pupae until less than 0.05% of the original population remain. For method (a), where experimental dates are not specified, b is calculated for a range of plausible start dates and the average computed. No significant difference in calculating b using these methods is found, and b = 0.89 for ΔT = 3°C and 0.88 for ΔT = 4°C, 5°C and 6°C.
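The sketch below shows how the density-dependent survival factor would be wired into the model: the larval-plus-pupal density per litre is formed from n_2, n_3 and V_t, passed through the contest-competition map of [44] (not reproduced in the text, so it is injected here as a stub with made-up parameters), and converted into p_i(DD) = X_{t+1}/X_t, defaulting to unity for dry or empty sites.

```python
def density_survival(n2, n3, V_t, step_fn):
    """Daily survival factor p_i(DD) = X_{t+1} / X_t for larvae and pupae.

    n2, n3  : current larval and pupal numbers; V_t : water volume (ml);
    step_fn : the contest-competition map X -> X_{t+1} (the fitted form of [44]
              is not reproduced here, so it is passed in as a stub).
    Returns 1.0 when the site is dry or empty, as assumed in the text.
    """
    if V_t <= 0 or (n2 + n3) <= 0:
        return 1.0
    X_t = 1000.0 * (n2 + n3) / V_t        # larvae and pupae per litre (assumed scaling)
    X_next = step_fn(X_t)
    return min(1.0, X_next / X_t)

# Purely illustrative stub with a Beverton-Holt-like contest shape; the fitted
# parameters b, lambda and m of the actual model are not used here.
example_step = lambda X: X / (1.0 + 0.01 * X)
print(density_survival(n2=500, n3=100, V_t=2000.0, step_fn=example_step))
```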
Environmental influences on adult development
The survival of adult Anopheles is sensitive to temperature and RH, although few experimental studies have examined this in detail, and [11] have recently undertaken a review of parameterization work to date. Although the fitting of [47] has been used in work examining the effects of climatic variables on malaria transmission (such as [8]), this parameterization is inconsistent with [48], who demonstrate that An. gambiae s.s. cannot survive longer than one day at 40°C. Thus, the majority of modeling studies to date investigating malaria transmission under changing environmental conditions [5,7,9,49] use the temperature-only survival relationship (18), despite its basis on fitting a three-parameter function to three data points in the range 9-40°C [9] (with the 40°C point inconsistent with [48]). This relationship assumes no adverse effects of RH on mortality, which is unlikely given that RH < 50% leads to significantly reduced survival [50]. Field observations of An. gambiae adults are only approximately consistent with (18), but reflect the relatively high survival at 22-30°C [36].
Figure 4. The daily survival of larvae as a function of initial density.
To obtain a more systematic fitting, experimental data from Bayoh and Lindsay (unpublished data), who estimate survival thresholds of 5°C and 40°C and, within this range, examine the effects of temperature and RH on mortality, are used. Age-independent survival is assumed and p_4(T_A, RH) is fitted given mean female survival times at 5-40°C inclusive (in 5°C intervals) and 40-100% RH (at 20% intervals), with fitted coefficients β_2 = 4.00 × 10⁻⁶ RH² − 1.09 × 10⁻³ RH − 0.0255, β_1 = −2.32 × 10⁻⁴ RH² + 0.0515 RH + 1.06 and β_0 = 0.00113 RH² − 0.158 RH − 6.61 (Figure 5a). Survival outside this temperature range is assumed to be zero, but no RH thresholds are assumed. The duration of the gonotrophic cycle G_c is also temperature-dependent, and [9] parameterizes this as G_c(T_A) = D_M/(T_A − T_M), where D_M = 36.5°C days and T_M = 9.9°C. An alternative functional form (20) is given by [11], with D_E = 37.1°C days and T_E = 7.7°C. Comparing these formulations, the latter gives longer gonotrophic cycles at temperatures above 17.6°C, the regime generally of interest. On the basis of better agreement with [51], (20) is adopted (Figure 5b). To calculate F_4, note that d_4/G_c(T_A) gives the average number of ovipositions per adult across her lifetime. If the average number of eggs per oviposition is N_EO = f_t N_EP, the average number of eggs laid over her lifetime is d_4 N_EO/G_c(T_A), so that F_4, the average number laid per day, is F_4 = N_EO/G_c(T_A) = f_t N_EP/G_c(T_A). Given the absence of age-structure in this model, each gonotrophic cycle is assumed to be of equal duration for all adults and to produce the same number of eggs, although studies have shown variation in both [52].
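For illustration, the sketch below computes the gonotrophic cycle length and the daily fecundity F_4 = f_t N_EP/G_c(T_A). It uses the degree-day form of [9] quoted above (the alternative form (20) actually adopted in the paper is not reproduced in the text), and N_EP = 120 as assumed later in the calibration.

```python
def gonotrophic_cycle(T_A, D=36.5, T_thresh=9.9):
    """Gonotrophic cycle length (days) using the degree-day form of [9]:
    G_c(T_A) = D_M / (T_A - T_M)."""
    return float("inf") if T_A <= T_thresh else D / (T_A - T_thresh)

def daily_fecundity(T_A, f_t, N_EP=120.0):
    """Average eggs laid per female per day, F_4 = f_t * N_EP / G_c(T_A)."""
    return f_t * N_EP / gonotrophic_cycle(T_A)

print(round(gonotrophic_cycle(25.0), 2))        # ~2.42 days at 25 degC
print(round(daily_fecundity(25.0, f_t=0.8), 1))
```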
No direct influences of rainfall on adult survival are assumed (with indirect effects through changes in RH captured by (19)), and adult survival is assumed to be density-independent, following [53] and the weak, but statistically significant, relationship between adult density and survivorship in [15]. There is some evidence of predation on adult An. gambiae s.l. at oviposition sites, with the severity potentially depending on the type of site [39], but there are few quantitative studies in this respect.
Model calibration and validation
To assess performance, the model is calibrated and validated against longitudinal An. gambiae s.l. abundance data from [13] collected in an environment free of vector controls. Data on T_A, T_DP (for calculation of RH), (low) cloud fraction CF, and the horizontal and vertical components of 10 m wind speed (to calculate U_2) are taken from the ERA-40 re-analysis dataset [18] for the rural community in Masaika, Tanzania (5° 16′ 0″ S, 38° 49′ 60″ E) (with the nearest ERA-40 point at 5° 0′ 0″ S, 37° 30′ 0″ E). Rainfall data from the Maji Depot Tanga Rainfall station (at 5° 4′ 58″ S, 39° 5′ 21″ E), approximately 35 km from Masaika, is used when available (see [13]), with missing data taken from [18]. Since the daily values of T_min and T_max are not available from [18], we derive empirical relationships between T_A and these variables using data from the nearest meteorological station (Tanga, at 5° 4′ 48″ S, 39° 4′ 12″ E), approximately 34 km from the study site, and apply these relationships (T_min = 0.724 T_A + 14.4, with R² = 0.53, and T_max = 0.728 T_A + 28.3, with R² = 0.61) to ERA-40 data on T_A to estimate the associated values of T_min and T_max.
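Since RH enters both the evaporation and adult-survival components, the following small helper reproduces the dewpoint-based relation quoted in the hydrodynamics section; the example temperatures are arbitrary.

```python
import math

def relative_humidity(T_DP, T_A):
    """RH (%) from dewpoint and mean air temperature, using
    RH = 100*exp(17.27*T_DP/(237.3 + T_DP) - 17.27*T_A/(237.3 + T_A))."""
    return 100.0 * math.exp(17.27 * T_DP / (237.3 + T_DP)
                            - 17.27 * T_A / (237.3 + T_A))

print(round(relative_humidity(22.0, 26.0), 1))   # ~78.7% for these example values
```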
Daily abundance data is available from 06/07/1998 to 30/11/2001 (approximately 41 months), consisting of the number of adult An. gambiae s.l. caught in CDC light traps; further details on mosquito collection and experimental procedures are given in [13]. The model is calibrated using the first twenty months of complete monthly data (August 1998 to March 2000 inclusive) and validated over the subsequent twenty months (April 2000 to November 2001 inclusive). The variation in T A and RH over the calibration and validation periods is shown in Figure 6. Data on the average number of An. gambiae s.l. caught per trap is available for each weekday in the period (aside from short breaks for public holidays), but not at weekends. Given the daily time-step nature of the model, weekend abundance is estimated using linear interpolation and these values appended to the weekday values. This data is then aggregated by month and the model fitted at this scale. A minimum 365 day burn-in period is applied to remove early model transients.
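A minimal sketch of the preprocessing just described (weekend gap-filling by linear interpolation followed by monthly aggregation) is given below; the dates and counts are invented for the example.

```python
import numpy as np
import pandas as pd

# Two illustrative weeks of weekday trap counts starting Monday 1998-08-03,
# with weekends (Sat/Sun) missing.
dates = pd.date_range("1998-08-03", periods=16, freq="D")
counts = pd.Series(
    [5, 7, 6, 8, 9, np.nan, np.nan, 11, 10, 12, 9, 8, np.nan, np.nan, 10, 11],
    index=dates,
)
filled = counts.interpolate(method="linear")   # fill weekend gaps linearly
monthly = filled.resample("MS").mean()         # aggregate to monthly means
print(monthly)
```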
For model calibration, the average number of adult An. gambiae s.l. per light trap is fitted to model output after the burn-in period. To account for the difference in scale between data and the model, a scaled fecundity α_1 F_4 and a scaled adult An. gambiae s.s. abundance α_2 n_4 are defined, and just three parameters are fitted over the calibration period: the scale parameters α_1 and α_2, and ΔT. All other parameters are derived from parameterizations in this paper and local breeding site properties (altitude and latitude). It is assumed that N_EP = 120 (based on model calibration in [7]) and breeding site dimensions are consistent with the characteristics of typical An. gambiae s.s. habitats (in the presence of multiple An. gambiae s.l. species, given the collection of multiple Anopheles species in the data collection of [13]) reported in [21], namely A_T = 1.79 × 10⁶ mm² and h_0 = 97 mm. An initial water volume of 1 litre is assumed (V_0 = 1000 ml). Model fit to data is found to be independent of the initial conditions, so 100 mosquitoes are arbitrarily initially assumed to be in each lifecycle stage.
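The calibration step can be sketched as a three-parameter least-squares fit. The function standing in for the population model below is a toy seasonal curve (so that the snippet runs end to end), which makes α_1 and α_2 only jointly identifiable here; in the full model, α_1 rescales fecundity inside the nonlinear dynamics and the two scale parameters are fitted separately.

```python
import numpy as np
from scipy.optimize import least_squares

months = np.arange(20)

def monthly_abundance(alpha1, deltaT):
    # Toy stand-in for the monthly adult output n4 of the full model;
    # illustrative only (a seasonal curve shifted by deltaT, scaled by alpha1).
    return alpha1 * (1.2 + np.sin(2.0 * np.pi * (months + deltaT) / 12.0))

# Synthetic "observed" monthly trap counts.
rng = np.random.default_rng(0)
observed = 0.03 * monthly_abundance(140.0, 6.9) + rng.normal(0.0, 0.1, months.size)

def residuals(params):
    alpha1, alpha2, deltaT = params
    return alpha2 * monthly_abundance(alpha1, deltaT) - observed

fit = least_squares(residuals, x0=[100.0, 0.05, 5.0])
print(fit.x)   # note: only the product alpha1*alpha2 and deltaT are identified here
```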
Fitting the model using least-squares to the 20-month calibration data gives the best-fit parameters α_1 = 141.612, α_2 = 0.030 and ΔT = 6.9°C (R² = 0.84). Running the model for a further 20 months with these parameters and assessing the goodness-of-fit gives R² = 0.50 across the validation period (Figure 7). The model is encouragingly able to capture the overall decline in An. gambiae s.l. abundance in Masaika reported in [13] across the calibration and validation periods, as well as the general seasonal trend (although the timing of the two abundance peaks in the validation period is underestimated by one month in both cases, as is the magnitude of the peaks). The water volume within the breeding site over time (with these best-fit parameters) is shown in Figure 8, while the immature population dynamics, and estimated daily water temperature, are plotted in Figure 9. Alternatively, fitting the model across the entire 40 months of data (Figure 10) gives R² = 0.70 (with α_1 = 280.486, α_2 = 0.026 and ΔT = 6.1°C) and, in this case, the timing of two of the three seasonal peaks is correctly predicted, as is the approximate severity of these peaks. The fitted values of ΔT = 6.9°C and 6.1°C are slightly greater than typical ΔT values observed in the field (values in [37], for example, lie in the range 4.0-6.1°C on clear days for three different sized pools), and this reflects the simplified nature of the T_W = T_A + ΔT formulation and the fitting of a single mean value of ΔT across annual timescales. Future refinements will improve this component of the model by calculating ΔT from thermodynamical principles based on daily weather conditions, and this is expected to further improve model fit. Nonetheless, the model offers the potential for mechanistic insight into vector response to temperature, rainfall, RH, wind speed and cloudiness, and hence into how future changes in these variables may affect mosquito dynamics. The results suggest that the observed decline in vector numbers (and malaria) reported in [13] could, in turn, be due to long-term changes in environmental conditions. Further model analysis (such as application of the methods of matrix population modeling [54]) will provide valuable insight into the dominant environmental variables influencing the observed changes in vector numbers, as well as furthering our understanding of the dominant drivers on short and long-term timescales.
While such analysis is beyond the scope of this paper and will follow in a forthcoming article, these results highlight the explanatory power of validated mathematical models and their role in evaluating the effects of temporal changes in weather and climate on vector dynamics and, ultimately, disease transmission.
Conclusions
Along with An. arabiensis and Anopheles funestus, An. gambiae s.s. is one of the principal malaria vectors in Africa [19] and understanding its ecology and dynamics is vital in better understanding the associated impact on malaria transmission and the prospects for eradication [55], as well as the effectiveness of vector controls in different communities and settings. Vector population dynamics are driven by a range of biotic and abiotic factors and clarifying the role of both is key, particularly in the context of how climate change may influence the future spread and distribution of VBDs. Here, a useful framework for understanding how changes in rainfall, temperature, RH, wind speed and cloudiness (both mean values and temporal variability), and density-dependence, at breeding sites may influence vector abundance is presented. By calibrating and validating the model against longitudinal abundance data, this framework is shown to be capable of reproducing the observations in [13] on long-term timescales, suggesting a mechanistic underpinning of mosquito dynamics in terms of environmental variables, an important result given the ongoing debate regarding the link between malaria transmission and climatic changes in Africa [3,4]. This work also highlights the power of mathematical models in addressing key questions surrounding the role of environmental variables, compared to the multitude of other ecological, epidemiological, socioeconomic and demographic factors, on disease transmission [1]. An important advance of this work is the construction of a modeling framework enabling the linkage of climatic events at large spatial scales to processes at the localized scale of vector breeding sites, enabling assessments of how climatic phenomena at different scales may affect disease transmission in host communities.
Model reliability may be enhanced with improved parameterization, and future experimental and modeling research will lead to further understanding of species-specific Anopheles population dynamics and their response to environmental variables. These include (i) improving our understanding of Anopheles oviposition behaviour, (ii) better quantifying the role of rainfall and temperature on egg, larval and pupal survival, as well as the role of heterogeneities, such as body size, that might influence response, (iii) improved modeling of the relationship between air and water temperatures at breeding sites, (iv) improving our understanding of density-dependent effects on juvenile and adult development and survival (including intra-specific competition, inter-specific interactions between species, cannibalistic tendencies, and predation, as well as their dependence on climatic variables), (v) assessing evidence for age-dependent mortality in juveniles and adults, and (vi) better understanding variability in gonotrophic cycles.
New longitudinal vector studies that simultaneously measure changes in environmental variables are also required to improve the validity and reliability of vector models, which will not only further our understanding of dominant factors driving mosquito dynamics, but will also improve our understanding of the implications for VBD transmission. Nonetheless, the approach here not only provides a useful framework for An. gambiae s.s. modeling, but its structure may be readily applied to other Anopheles species with suitable parameterization, as well as other vectors (such as Aedes or Culex). This will ultimately enable a better understanding of the response of a variety of VBDs to environmental change, an important question given the likely influences of weather and climate on many regions of VBD risk over the coming decades. | 8,379 | 2012-08-09T00:00:00.000 | [
"Environmental Science",
"Mathematics",
"Medicine"
] |
Analyzing the Symmetry Properties of a Distribution in the Focal Plane for a Focusing Element with Periodic Angle Dependence of Phase
We analyze the symmetry properties of the focal plane distribution when light is focused with an element characterized by a periodic angular dependent phase, sin(mφ) or cos(mφ). The majority of wave aberrations can be described using this phase function. The focal distribution is analytically shown to be a real function at odd values of m, which provides a simple technique for generating designed wave aberrations by means of binary diffractive optical elements. Such a possibility may prove useful in tight focusing, as the presence of definite wave aberrations allows the focal spot size to be decreased. The analytical computations are illustrated by numerical simulation, which shows that by varying the radial parameters the focal spot configuration can be varied, whereas the central part symmetry is mainly determined by the parity of m: for even m the symmetry order is 2m and for odd m it is m.
Introduction
Various aberrations in the focusing system are known to result in a wider, distorted focal spot with disturbed axial symmetry [1]. Such an effect is normally considered to be a negative factor. However, it has been shown [2,3] that some types of wave aberrations enable the central focal spot size to be decreased, providing tight focusing. Note that while only primary (axisymmetric) aberrations were dealt with in [2], aberrations associated with vortex phase components on the basis of Zernike polynomials were also discussed in [3].
In Zernike polynomials, the radius dependence is polynomial and the angle dependence is trigonometric (periodic). Optical elements characterized by periodic angular changes were considered in [4,5]. In [4], such an element was shown to form the zero central intensity, whereas [5] also looked into diffraction-free properties of the generated light beams.
Based on the decomposition of a cosine angular dependent phase function in terms of angular harmonics, the transmission function exp{ia cos(mϕ)} was shown [5] to produce a diffraction pattern composed of 2m light spots arranged on a circumference. The coaxial interference of two vortex beams with identical topological charges and opposite signs was shown to produce a similar result [6,7].
At the same time, the odd-order aberrations, such as distortion and coma, have been known to appear in distribution patterns with odd symmetry [1,8,9]. In particular, the presence of coma (m = 3) results in distributions with the third-order symmetry [10], similar to the 2D Airy beams [11].
It was also shown that the product of three 1D Airy functions, rotated by the angle of 120° relative to each other and characterized by the third-order symmetry, was transformed in the spectral plane into a function proportional to exp{iar³ sin(3ϕ)} [12].
Note that a phase mask with a sinusoidal but nonperiodic angular dependence [13], which breaks the symmetry, can be used for selective edge enhancement.
In this paper, we analyze symmetry properties of the distribution formed in the focal plane by an optical element characterized by a periodic angular dependent phase function in the form of sin(mϕ) or cos(mϕ). Based on this phase relation, it is possible to describe the majority of wave aberrations, which then can be represented [14] as the decomposition in terms of Zernike functions [1].
The focal distribution is analytically shown to be described by a real function at odd values of m. Analytical estimates of the central part of the focal distribution are derived for an input circular aperture.
The analytical computations are illustrated by the numerical simulation, which shows that by varying the radial parameters the focal spot configuration can be varied, whereas the central part symmetry is mainly determined by the parity of the angular parameter m.
Fourier Transform of a Complex Distribution with Periodic Angular Dependence
Assume a complex distribution with periodic angular dependence of the general form (1), where ψ(r) is an axisymmetric function, m is an integer, q is a positive real number, and α, β are real parameters (with α having the dimension of mm⁻¹). Note that various combinations of the functions sin(mϕ) and cos(mϕ) in the phase function of (1) can be represented as a product of functions given by (1).
The complex distribution in (1) can describe the majority of wave aberrations, which, then, can be represented [14] by the decomposition in terms of Zernike functions [1].
For the sake of specificity, choose sin(mϕ) in (1). Note that for the cos(mϕ) function the results will be analogous.
The spatial Fourier spectrum of the function (1) is given by (2), where k = 2π/λ is the wave number, λ is the wavelength of the laser light, f is the focal length, and (r, ϕ) and (ρ, θ) are the polar coordinates in the input and output planes, respectively.
Let us analyze the integral over the angle in braces in (2), denoted (3). The axial distribution of light (ρ = 0) is described by a relation independent of m. Introducing suitable designations and following the transformation of (3), one arrives at (7). Replacing sin ϕ in (1) by cos ϕ (at m = 1) yields a relation analogous to (7), following the relevant substitutions. Let us consider the more general case of integer m. The first term in (3) can be decomposed as in (8), which employs relations from [15]. To calculate the integral in (3), we can employ well-known relations, whereupon we obtain (11). It should be noted that, for odd values of m, (11) is a real function, because of the form of the coefficients in its third sum. Taking cos(mϕ) in (1), we obtain the analogous relation (14), which utilizes relations from [15]. For odd values of m, the function in (14) is also real.
Substituting (11) into (2) gives (15), which can be rearranged as (16) with the integrals denoted by (17). Let us obtain approximate estimates of the integrals in (17), putting ψ(r) equal to the circular aperture function (18) and assuming J_0(z) ≈ 1 − z²/4, J_n(z) ≈ (z/2)ⁿ/n!. This yields (19a) and (19b), from which it can be seen that the major contribution to the central part of the focal plane (at ρ → 0) is provided by the terms with lower-order p. Truncating (16) at the first two terms, the intensity in the central part of the focal plane can be estimated as in (20a) and (20b), with the first term being independent of the angle and the second term producing 2m uniformly distributed intensity peaks of equal height (at sin(mθ) = ±1). The third term is present only for odd m and introduces corrections into the peaks: depending on the sign of the term containing sin(mθ), half of the peaks are enhanced while the other half are suppressed, which results in m clearly expressed maxima. Thus, (20a) and (20b) suggest that the symmetry of the central part of the focal distribution is determined by the parity of m: for even m, the symmetry order is 2m; for odd m, the symmetry order is m.
By varying the parameters β, α, q, and m, the values of the coefficients in (20a) and (20b) can be varied, thus changing the symmetry of the intensity pattern in the focus.
Numerical Simulation
The numerical simulation based on (2) was conducted for the following parameters: wavelength of incident light λ = 633 nm and focal length f = 100 mm. Table 1 gives the simulation results based on (1), for sin(mϕ), for different values of m and different values of the parameters α, β, and q, with the ψ(r) function given by (18), describing a circular diaphragm of radius R = 1 mm, and a Gaussian beam exp(−r²/σ²) of waist radius σ = 0.5 mm.
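The simulation can be reproduced in outline with a discrete two-dimensional Fourier transform. The sketch below evaluates the focal field for the q = 0 case of Table 2 (phase exp(iβ sin(mϕ)) over a circular aperture with Gaussian illumination); grid size and window are arbitrary numerical choices, and the final line checks the analytical prediction that the focal field is (numerically) real for odd m.

```python
import numpy as np

# Focal field for a phase element exp(i*beta*sin(m*phi)) over a circular
# aperture with Gaussian illumination (q = 0, as in Table 2).  The Fourier
# transform (2) is approximated by a discrete 2-D FFT on an N x N grid.
lam, f = 633e-9, 100e-3           # wavelength (m), focal length (m)
R, sigma = 1e-3, 0.5e-3           # aperture radius, Gaussian waist (m)
beta, m = 5.0, 3                  # phase amplitude and angular order

N, L = 1024, 8e-3                 # samples and physical window size (m)
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

field_in = (r <= R) * np.exp(-r**2 / sigma**2) * np.exp(1j * beta * np.sin(m * phi))
field_focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field_in)))

du = lam * f / L                  # focal-plane sample spacing (m)
centre = field_focal[N // 2 - 20:N // 2 + 20, N // 2 - 20:N // 2 + 20]
ratio = np.abs(centre.imag).max() / np.abs(centre.real).max()
print(f"focal-plane pixel: {du * 1e6:.1f} um")
print(f"max |Im|/|Re| near the centre for m = {m}: {ratio:.1e}")   # small for odd m
```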
Notice that the input phase distributions (shown in the second column of the table) take the same values at diametrically opposite points for even m and opposite values for odd m. This is easily confirmed from sin(2(x + π)) = sin(2x) and sin(x + π) = −sin(x). Both cases differ from the situation with a spiral phase exp(imϕ), where the phase difference between symmetric positions is mπ [13]; this is why the spiral phase is used for the radial Hilbert transform of mth order. In our case, the phase difference between symmetric points explains the focal distribution. Since, at odd m, contributions with identical amplitude and opposite phase meet in the focus, the field is real-valued (exp(iψ) + exp(−iψ) ∼ cos(ψ)). For even m, identical values are summed, and therefore the phase structure is reproduced in the focal domain. Table 2 presents the focusing results for q = 0 and β = 5, with the values of m and ψ(r) being varied.
The effect of replacing the axisymmetric function ψ(r) with a more general function ψ(r, ϕ) with a harmonic angle dependence, exp(ilϕ), was also studied.
The top four rows are for the function ψ(r, ϕ) described by the Fourier-transform-invariant Laguerre-Gaussian modes [15] of different orders (n, l), including those with a vortex function exp(ilϕ). It is noteworthy that the vortex angular component does not exert as essential an influence on the focal pattern structure as the periodic function sin(mϕ) under study. The comparison of rows 1 and 3 shows that a phase vortex introduced in the input plane results in a similar pattern in the focal plane, without changing the major distribution structure (except for the zero intensity at the singular point).
When m = 1, the resulting distribution is similar to that associated with the coma-type aberration introduced in the focusing system; when m = 3, the distribution is described by the product of three 1D Airy functions, rotated relative to each other [12]. Note, however, that in the latter case the product is actually composed of somewhat different functions.
In the two bottom rows of Table 2, the function ψ(r, ϕ) is given by the Bessel modes J_l(γr) exp(ilϕ), which produce a Fourier spectrum in the form of a narrow ring of radius proportional to γ. The simulation results show that the annular focal structure produced by the Bessel modes undergoes substantial variation caused by the periodic angle change.
Conclusions
We have analyzed symmetry properties of the light distribution generated in the focal plane by focusing the light beam with a periodic angular dependent phase given by sin(mϕ) or cos(mϕ).
The focal distribution of light is analytically shown to be a real function when the value of m is odd. Thus, in a way similar to [12], phase distributions associated with the corresponding specified aberration types can be formed in the focal plane of a conventional spherical lens with the aid of binary diffractive optical elements. Such a possibility may prove useful in tight focusing, where the presence of certain wave aberrations enables the focal spot size to be reduced [2,3]. In particular, for a circularly polarized beam the spot size can be reduced by the presence of coma (m = 3), corresponding to the 2D superpositions of Airy beams [11,12].
The analytical computations have been illustrated by the numerical simulation, which has shown that by varying the radial parameters, the focal distribution configuration can be varied; meanwhile, the central part symmetry is mainly determined by the parity of the angle parameter m: when m is even, the central part has the 2m-order symmetry, and when m is odd-the symmetry is m order. Note that although the introduced vortex component exp(ilϕ) is preserved in the focal plane, it does not have an essential effect on the focal distribution structure when compared with the periodic function in question sin(mϕ). | 2,692.2 | 2012-10-10T00:00:00.000 | [
"Physics"
] |
Word Sense Disambiguation Using Prior Probability Estimation Based on the Korean WordNet
Supervised disambiguation using a large amount of corpus data delivers better performance than other word sense disambiguation methods. However, it is not easy to construct large-scale, sense-tagged corpora since this requires high cost and time. On the other hand, implementing unsupervised disambiguation is relatively easy, although most of the efforts have not been satisfactory. A primary reason for the performance degradation of unsupervised disambiguation is that the semantic occurrence probability of ambiguous words is not available. Hence, a data deficiency problem occurs while determining the dependency between words. This paper proposes an unsupervised disambiguation method using a prior probability estimation based on the Korean WordNet. This performs better than supervised disambiguation. In the Korean WordNet, all the words have similar semantic characteristics to their related words. Thus, it is assumed that the dependency between words is the same as the dependency between their related words. This resolves the data deficiency problem by determining the dependency between words by calculating the χ² statistic between related words. Moreover, in order to have the same effect as using the semantic occurrence probability as prior probability, which is used in supervised disambiguation, semantically related words of ambiguous vocabulary are obtained and utilized as prior probability data. An experiment was conducted with Korean, English, and Chinese to evaluate the performance of our proposed lexical disambiguation method. We found that our proposed method had better performance than supervised disambiguation methods even though our method is based on unsupervised disambiguation (using a knowledge-based approach).
Introduction
The present paper addresses lexical disambiguation occurring in the semantic analysis phase of the natural language analysis process that includes cases of ambiguity. In natural language processing, lexical disambiguation refers to the determination of the correct semantic meaning for a word that has multiple meanings (hereafter referred to as an ambiguous word) by evaluating the meaning in its context [1]. Lexical disambiguation, which is the same as morphological analysis and syntactic analysis, is essential in natural language processing and plays an important role in various application areas. In machine translation, lexical disambiguation is critical to select the correctly translated word for a given word. For example, the English verb 'build' can be translated into Korean as construct, build, produce, establish, or develop, and the word that is the most correct should be selected from among these. In information search systems, lexical disambiguation of a query word can provide the high-quality information that a user needs. For example, if a query word inputted by a user is court, the search engine should present the results by categorizing the information into courthouse-related and palace-related suggestions. In addition, it is important to resolve semantic ambiguity in text mining for documents in specialized fields such as medical documents [2,3].
Lexical disambiguation has been a primary interest since the 1950s when natural languages began to be processed by computers. Its study has been conducted based on the following two lexical disambiguation methods. The first is a method based on knowledge bases such as machine-readable dictionaries. The second is a method based on statistical information extracted from large amounts of corpus data. In particular, since the 1990s, studies based on large amounts of corpus data have been actively conducted. In this method, a problem of word sense ambiguity has been simplified as a statistical classification problem in machine learning so that traditional machine learning techniques (for example, case-based learning, decision tree, and Bayesian classifier) are applied to solve the problem. Lexical disambiguation through machine learning is divided into supervised and unsupervised disambiguation, depending on whether a corpus consisting of individual sense-tagged words (hereafter referred to as a sense-tagged corpus) is used for the learning [4].
In lexical disambiguation, supervised disambiguation using a large sense-tagged corpus has shown better performance than other lexical disambiguation methods. However, the construction of a large sense-tagged corpus requires substantial cost and time, which is a drawback. On the other hand, while implementing unsupervised disambiguation is relatively easy, its performance is usually not satisfactory. In particular, Korean lacks language resources such as machine-readable dictionaries and sense-tagged corpora, compared with English. Therefore, in order to overcome the limitations of these linguistic resources in minority languages such as Korean and Vietnamese, it is urgent to study methods for lexical disambiguation [5].
In this paper, a novel unsupervised disambiguation method is proposed that shows better performance than existing knowledge-based and unsupervised lexical disambiguation methods, without the need for a large sense-tagged corpus. Generally, the reasons for the low accuracy of knowledge-based and unsupervised lexical disambiguation are the lack of the semantic occurrence probability of ambiguous words and the data deficiency problem that occurs while the dependency between words is being determined. The unsupervised disambiguation method proposed in this paper uses prior probability estimation based on the Korean WordNet [6], which takes advantage of the semantic characteristic that all words share semantic characteristics with their related words (hypernyms, hyponyms, and coordinate terms). Thus, it is assumed that the dependency between words is the same as the dependency between their related words, so that the data deficiency problem is solved by determining the dependency between words through the chi-square statistic between related words. Moreover, in order to obtain the same effect as using the semantic occurrence probability as a prior probability, as in supervised disambiguation, semantically related words of the ambiguous vocabulary are obtained and utilized as prior probability data.
The present paper is organized as follows: In Section 2, existing studies on lexical disambiguation are summarized. The lexical disambiguation method using related words in the Korean WordNet, which is proposed in this paper, is explained in Section 3. In particular, a solution to the data deficiency problem using the expansion of related words and a method of using semantically related words as the prior probability is explained in detail. In Section 4, the experimental method and results are described. Finally, in Section 5, conclusions and future research are discussed.
Related Study
Lexical disambiguation has been a major concern since natural language began being processed with computers in the 1950s, but its research achievement has been insufficient compared with studies on morphological analysis. In morphological analysis, part-of-speech tagging accuracy is generally more than 95% with respect to all vocabularies in a total corpus. On the other hand, in lexical disambiguation, sense-tagged accuracy for frequently used specific words is only 80% to 90%.
In early research on lexical disambiguation, studies based on knowledge bases such as machine-readable dictionaries were conducted actively. The study of Lesk [7] is a representative example. Lesk identified the meaning of an ambiguous word according to the overlap between the words used in the dictionary definition of the ambiguous word and the words used in the dictionary definitions of its neighboring words. This method had the advantage that it did not require high-cost language resources, and implementation was relatively simple. However, it suffered from a severe data deficiency problem, which occurred due to its requirement of exact matches between words, and it showed a low accuracy of only 50% to 70%. To minimize this problem, Luk [8] proposed a method of extracting common words as a definition concept from the Longman modern English dictionary and then extracting statistical information regarding the definition concept from the Brown Corpus to remove ambiguity. However, this method did not provide a fundamental solution to the data deficiency problem.
Studies based on a knowledge base opened a new era by utilizing WordNet, a lexical-semantic network, for lexical disambiguation. Word sense disambiguation studies using a lexical-semantic network calculate the shortest path between senses in WordNet [6,9,10]. The similarity or the type of semantic relationship between senses is found using the distance in the hierarchy from the topmost sense. Resnik [10] proposed a method of measuring the semantic similarity of nouns in the IS-A hierarchy relationship in WordNet for use in lexical disambiguation. Agirre et al. [11] defined a conceptual density that calculates a distance between words using the semantic relationships of WordNet, in order to calculate the conceptual density between co-occurrence words within the context that includes an ambiguous word, thereby determining the meaning of the ambiguous word. Mihalcea et al. [12,13] proposed a technique to remove word sense ambiguity by obtaining co-occurrence statistics between two words, measuring the semantic density between the two words through WordNet, and removing the word sense ambiguity based on the rank. Other than similarity calculation between senses or concepts based on WordNet, additional studies on lexical disambiguation using WordNet can be found, such as Pederson et al. [14] and Ganesh et al. [15]. Pederson et al. proposed a method using a modified version of Lesk's dictionary-based algorithm that is applicable to WordNet. Ganesh et al. proposed a method that determines the word sense by choosing the synset (synonym set) of the gloss that has the highest similarity, once the similarity between the words in the context that includes the target word and the glosses in WordNet is calculated using cosine and Jaccard similarity. Such WordNet-based lexical disambiguation techniques have the advantage of mitigating the data deficiency problem by expanding an ambiguous word and the co-occurrence words used with it.
A graph-based word sense disambiguation method using WordNet is also one of the widely studied methods [13,[16][17][18]. Such methods convert an input sentence into a graph format that has a synset of WordNet as a basic unit and calculates the semantic similarity of the global context rather than the local context using lexical chains. A lexical chain is a sequence of related words in writing that is referred to as a unit that represents consistent meaning in context or a whole paragraph. That is, rather than calculating semantic similarity between words in a local context, the semantic similarity between lexicon chains, or a lexicon chain and a word, is calculated so that information that is more accurate can be obtained for lexical disambiguation. In graph-based lexical disambiguation, well-known graph-based technologies are used to structure the graph, thereby determining the optimal lexicon chain. The graph-based lexical disambiguation method showed the best performance among the methods of lexical disambiguation utilizing WordNet, but it had the drawback that it took a long time to determine the optimal lexicon chain when the graph structure was complicated.
In the case of the Korean language, a large-scale lexical semantic network such as WordNet did not exist in the early days of research on lexical disambiguation, and so studies based on statistical-based lexical disambiguation were conducted. Since 2000, several lexical-semantic networks have been developed, and thus studies based on lexicalsemantic networks have been conducted to overcome word sense ambiguity. Heo et al. [19] proposed a lexical disambiguation model utilizing mutual information extracted from the Korean Noun Concept Network (ETRINET), a compound noun sense-tagged dictionary and raw corpus.
Like other tasks in the field of natural language processing, deep learning-based supervised learning models show good performance in resolving word sense ambiguity [20][21][22][23]. However, building training data for these models is expensive because they require a large corpus containing semantic information for word senses. Therefore, knowledge-based word sense disambiguation using external resources such as WordNet is a good approach [24][25][26]. In this study, the relationship between ambiguous words and co-occurrence words within a context is determined using a Korean lexical-semantic network. Moreover, to obtain the effect of using prior information as in supervised disambiguation, semantically related words of the ambiguous vocabulary are obtained and utilized as prior probability data.
Lexical Disambiguation Using the Korean Lexical Semantic Network
This section explains the unsupervised lexical disambiguation method using the Korean Lexical Semantic Network (KorLex), which is proposed in this paper. Generally, supervised disambiguation shows better performance than unsupervised disambiguation but requires a large-scale, sense-tagged corpus. In this paper, a sense-tagged corpus, which involves a high development cost, is not used. In its place, a morph-tagged corpus of 5 M word phrases is used. Moreover, richer statistical information is exploited through the expansion of semantically related words by utilizing KorLex (Korean Lexical Semantic Network), and prior probability is estimated by calculating semantically related words of ambiguous words.
Analysis of Relationship between Words Using the Korean Lexical Semantic Network (KorLex)
The Korean Lexical Semantic Network (KorLex) was developed using WordNet as a reference model and includes approximately 130,000 synsets and about 150,000 word senses. A synset is a set of synonyms that share the same word sense. In this paper, a word that has two or more synsets in the Korean Lexical Semantic Network is considered an ambiguous word. For example, in Korean, sagwa is an ambiguous word that has two synsets, sagwa 1 meaning 'apology' and sagwa 2 meaning 'fruit of the apple tree (apple).' To distinguish such ambiguous words, semantic relation words are used. Semantic relation words are words in semantic relations in the hierarchy of the Korean Lexical Semantic Network. Semantic relation words are called hypernyms, hyponyms, and coordinate terms, depending on the relationship. Figure 1 shows the relation words of sagwa 2 in the Korean Lexical Semantic Network.
Relation words in the hierarchy of the Korean Lexical Semantic Network share the same semantic characteristics. In particular, coordinate terms among the relation words have the same co-occurrence words. For example, sagwa (apple) and boksunga (peach) are hyponyms of gwail (fruit), and so are related to mukda (eat) and masitda (delicious). Sagwa (apology) and gamsa (appreciation), however, are hyponyms of inji (recognition), which has no relationship to mukda (eat) or masitda (delicious). Thus, word sense ambiguity can be removed by identifying the relationship between the semantic relation words of the ambiguous word and the co-occurrence words in the local context. Figure 2 shows the relationship between the coordinate terms of sagwa and words in the local context in the Korean Lexical Semantic Network. The most basic method used to analyze the relationship between two words is to determine the frequency of co-occurrence of the two words. That is, finding how often two words are used together in a local context is one measure for determining the relationship between the two words. However, because some words are used frequently regardless of the meaning of the ambiguous word, the co-occurrence frequency alone cannot determine the relationship between two words. To overcome this, various kinds of statistical approaches are used, such as information-theoretic measures, likelihood measures, statistical hypothesis tests, and coefficients of association strength. Among them, we use the chi-square independence test, which is known to be easy to interpret and effective in finding collocations [27][28][29]. Figure 3 shows the relationship analysis between two words using the semantic relation words of an ambiguous word. Given a particular sense of an ambiguous word and a co-occurrence word, the chi-square statistic of the two words is calculated according to the relation words of that sense.
Here, if a relation word is itself an ambiguous word, it causes a problem when calculating the chi-square statistic. One solution is to exclude ambiguous words from the relation words. However, this is risky because, in the worst case, it could remove all relation words, and it also does not help reduce the data deficiency problem. In this study, therefore, assuming that the senses of an ambiguous relation word are equally frequent, the frequency of an ambiguous relation word with n meanings is counted as f/n for each meaning. Table 1 shows an analysis of the relationship between an ambiguous word and the co-occurrence words in the local context of the sentence 'Sagwa han gairul megeotda' ('I ate an apple'). Based on Table 1, several methods for distinguishing the meaning of the ambiguous word 'sagwa' can be devised. The simplest is to select the meaning with the largest number of related words in the local context according to the chi-square test of independence. The null hypothesis is that the co-occurrence of the two words is unrelated; if the independence test rejects it, the alternative hypothesis is accepted and the two words are concluded to be related. Null hypothesis: the two words are not related to each other (independent). Alternative hypothesis: the two words are related to each other (dependent). If the chi-square statistic exceeds the critical value, the null hypothesis is rejected and the two words are concluded to be related. In the chi-square distribution table (Table A1), the critical value is 7.88 for one degree of freedom at a significance level of 0.005. Table 2 shows the number of semantically related words of the ambiguous word according to the chi-square test of independence. In Table 2, the number of words related to 'Sagwa1' is three, while the number of words related to 'Sagwa2' is one. Thus, in the sentence 'I ate an apple', the ambiguous word 'Sagwa' is resolved to 'Sagwa1', which has more related words in the local context. However, this method has several problems. First, when the numbers of related words are equal, the lexical ambiguity cannot be resolved. Table 3 shows an analysis of the relationship between the ambiguous word and the co-occurrence words in the local context of the sentence 'Naneun sagwareul badatda' ('I received an apology'). As shown in Table 3, the two meanings have the same number of related words, namely one, so another method is needed beyond simply counting related words. Second, although each word's degree of semantic relatedness differs, all words whose chi-square statistic exceeds the critical value are treated identically. That is, some co-occurrence words in the local context should carry more weight in determining the meaning of the ambiguous word, but this method cannot reflect that. For example, in Table 3, both co-occurrence words, 'Na' and 'Batda', have chi-square statistics above 7.88, but 'Batda' has a closer relationship with 'Sagwa1'.
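A minimal sketch of this counting strategy is shown below: for each candidate sense, context words whose chi-square statistic with one of the sense's relation words exceeds the critical value 7.88 are counted, and the sense with the most related words wins. The data structures, word lists, and scores are illustrative assumptions, not the paper's actual tables.

```python
CRITICAL_VALUE = 7.88  # df = 1, significance level 0.005

def count_related(sense_relation_words, context_words, chi2_lookup):
    """Count context words significantly related to any relation word of a sense."""
    count = 0
    for ctx in context_words:
        for rel in sense_relation_words:
            if chi2_lookup.get((rel, ctx), 0.0) > CRITICAL_VALUE:
                count += 1
                break  # count each context word at most once
    return count

def pick_sense_by_count(senses, context_words, chi2_lookup):
    """Return the sense whose relation words match the most context words."""
    return max(senses, key=lambda s: count_related(senses[s], context_words, chi2_lookup))

# Hypothetical example for 'sagwa' in 'Sagwa han gairul megeotda' ('I ate an apple')
senses = {"apple": ["gwail", "boksunga"], "apology": ["gamsa", "inji"]}
chi2 = {("gwail", "mukda"): 25.3, ("boksunga", "mukda"): 12.1, ("gamsa", "mukda"): 1.2}
print(pick_sense_by_count(senses, ["han", "gai", "mukda"], chi2))
```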
Generally, the larger the chi-square statistic, the stronger the relationship between the two words. A method of applying the chi-square statistic is therefore to combine it as a sum, an average, or a multiplication of weights. The multiplication of weights uses a factor that expresses each co-occurrence word's influence on the meaning as its share of the chi-square statistic, under the assumption that the total influence of all the co-occurrence words on the ambiguous word is one.
As shown in Table 4, the sum, multiplication, average, and multiplication of weights of the chi-square statistic all indicate the correct answer for lexical disambiguation. Among them, the multiplication of weights showed the best performance in the experiments. Owing to the characteristics of the chi-square statistic, if the frequency of a specific co-occurrence word is far above a certain threshold, its chi-square statistic also becomes very large. Consequently, using the raw sum, multiplication, or average of the chi-square statistics can produce an incorrect result in which a single word decides the outcome. Thus, the chi-square statistic must be normalized between 0 and 1; the weight is used for this normalization in this study. The following formula expresses word sense disambiguation using the co-occurrence words of the semantically related words in the local context and the weight values.
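To make the idea concrete, the sketch below normalizes each co-occurrence word's chi-square statistic across the candidate senses so that it lies between 0 and 1 and then multiplies the normalized weights. This particular normalization scheme, the word names, and the values are assumptions for illustration; the paper's exact formula is not reproduced here.

```python
def sense_scores(chi2_table, context_words, senses):
    """Multiplicative weight score for each candidate sense.

    chi2_table[(sense, context_word)] -> chi-square statistic.
    Each context word's statistic is normalized across the candidate senses
    (an assumed reading of the 'multiplication of weight'), and the
    normalized weights are multiplied together.
    """
    scores = {}
    for sense in senses:
        score = 1.0
        for ctx in context_words:
            column_total = sum(chi2_table.get((s, ctx), 0.0) for s in senses)
            if column_total > 0:
                score *= chi2_table.get((sense, ctx), 0.0) / column_total
        scores[sense] = score
    return scores

# Hypothetical values for 'Naneun sagwareul badatda' ('I received an apology')
chi2 = {("apology", "na"): 9.1, ("apple", "na"): 8.3,
        ("apology", "batda"): 40.2, ("apple", "batda"): 10.5}
print(sense_scores(chi2, ["na", "batda"], ["apology", "apple"]))
```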
ŝ = argmax_s ∏_j w(s, c_j), where w(s, c_j) is the normalized weight of co-occurrence word c_j for sense s. To prevent the resulting value from becoming zero, or undefined because a factor was zero, the frequency of non-observed data was estimated using Good-Turing frequency estimation. In addition, performance differs depending on which relation words and which relationships in the Korean Lexical Semantic Network are used. In this study, the relationships usable for word sense disambiguation are divided into five types and weighted separately. The five relationships are: ① coordinate term (s), ② hyponym (c), ③ hypernym (p), ④ hyponym of coordinate term (sc), and ⑤ coordinate term of hypernym (ps). In Section 4, the weights of the relation words are determined through experiments. Furthermore, the data deficiency problem is mitigated by normalizing words using their part-of-speech information. Table 5 shows the normalized expressions and examples of such words.
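The following sketch shows where the relation-type weights enter when evidence is gathered for one sense. The weight values are placeholders to be tuned experimentally, as described in Section 4, and a simple additive aggregation is used here only for illustration; the paper itself combines normalized weights multiplicatively.

```python
# Assumed relation-type weights, in the order s, c, p, sc, ps (placeholders)
RELATION_WEIGHTS = {"s": 1.0, "c": 0.8, "p": 0.0, "sc": 0.2, "ps": 0.0}

def weighted_evidence(relation_words, context_words, chi2_lookup):
    """Aggregate chi-square evidence for one sense, scaled by relation type.

    relation_words: list of (word, relation_type) pairs for the sense.
    """
    evidence = 0.0
    for rel_word, rel_type in relation_words:
        weight = RELATION_WEIGHTS.get(rel_type, 0.0)
        for ctx in context_words:
            evidence += weight * chi2_lookup.get((rel_word, ctx), 0.0)
    return evidence

# Hypothetical relation words for the 'apple' sense of sagwa
apple_relations = [("boksunga", "s"), ("gwail", "p")]
print(weighted_evidence(apple_relations, ["mukda"], {("boksunga", "mukda"): 12.1}))
```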
Expansion of Semantically Related Words to the Ambiguous Word
In Section 3.1, the semantic relation words of an ambiguous word were expanded within the hierarchical structure of the Korean Lexical Semantic Network. However, the semantically related co-occurrence words often could not be found because of a persistent lack of statistical information, i.e., the data deficiency problem. One reason is an insufficient number of relation words. For example, 'shinjang', used in the sense of kidney, is a lowest-level hyponym in the Korean Lexical Semantic Network; it therefore has no hyponyms and only two coordinate terms, 'Kongpat (kidney)' and 'Bulggotsepo (flame cell)'. Even using all five relationships from Section 3.1, only 13 related words can be found. To solve this data deficiency problem, the words related to an ambiguous word must be expanded.
In this paper, a set of semantically related words of an ambiguous word is created using the chi-square statistic from Section 3.1. Related words here are collocations, i.e., pairs of words in a semantic co-occurrence relationship, which provide a significant clue for determining the correct meaning of the ambiguous word. First, the collocation words of the ambiguous word are found with the chi-square statistic from the Sejong morph-tagged corpus. Then, a set of semantically related words is created by applying the chi-square test of independence to determine which meaning of the ambiguous word stands in a collocation relationship with each of the collocation words found. Table 6 shows part of the set of semantically related words of the ambiguous word 'Noon'. Using these semantically related words, word sense ambiguity can be removed as described in Section 3.1: both the relationship between the semantically related words and the co-occurrence words in the local context, and the direct appearance of the semantically related words in the local context, can be used. Word sense ambiguity is removed using the semantic determination formula in Section 3.1. Figure 4 shows the relationship analysis between words using related words. Moreover, we attempt to overcome the data deficiency problem by also expanding the coordinate terms of the most highly related words among the related words of an ambiguous word; Section 4 examines experimentally how far this expansion should go. Word sense ambiguity is then removed using the coordinate terms of the semantically related words in the same two ways as above. Figure 5 shows the relationship analysis between words using the coordinate terms of the semantically related words of an ambiguous word.

In supervised disambiguation, a sense-tagged corpus, in which every occurrence of an ambiguous word is classified by its meaning in context, is used as the learning data. The naïve Bayesian classifier is a statistical method widely applied in natural language processing and lexical disambiguation. It identifies the meaning using words adjacent to the ambiguous word in a large-scale context; adjacent words provide useful information for identifying the meaning, so statistical inference can be applied using their co-occurrence frequencies. The naïve Bayesian classifier uses Bayesian decision rules to minimize the error probability when determining the class.
Assuming that c_1, ..., c_n are the words used as contextual features in a context where the ambiguous word appears in the corpus, the decision rule of the naïve Bayesian classifier that resolves the word sense ambiguity from these contextual features is ŝ = argmax_{s_i} P(c_1, ..., c_n | s_i) P(s_i). In this formula, P(c_1, ..., c_n | s_i) and P(s_i) are calculated by maximum-likelihood estimation from the sense-tagged learning corpus; P(c_1, ..., c_n | s_i) is the class-conditional probability of the context and P(s_i) is the prior probability of the sense. Generally, probability models such as the naïve Bayesian classifier perform well in supervised disambiguation largely because of the strong influence of the prior probability, i.e., the probability of each sense of the word. Most ambiguous words have two or more meanings, but only one or two of them are actually used frequently in daily life. Thus, if the prior probability of each sense is known in advance, lexical disambiguation performance increases significantly.
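A minimal sketch of such a classifier follows, using log probabilities and add-one smoothing for numerical stability. The smoothing choice, class structure, and toy data are assumptions for illustration and differ from the paper's Good-Turing estimation.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    """Naive Bayes word sense disambiguation from sense-tagged contexts."""

    def fit(self, tagged_contexts):
        # tagged_contexts: list of (sense, [context words]) pairs
        self.sense_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for sense, words in tagged_contexts:
            self.sense_counts[sense] += 1
            self.word_counts[sense].update(words)
            self.vocab.update(words)
        self.total = sum(self.sense_counts.values())
        return self

    def predict(self, context_words):
        best_sense, best_score = None, float("-inf")
        for sense, count in self.sense_counts.items():
            score = math.log(count / self.total)  # log prior P(s)
            denom = sum(self.word_counts[sense].values()) + len(self.vocab)
            for w in context_words:
                # add-one smoothed log likelihood log P(w | s)
                score += math.log((self.word_counts[sense][w] + 1) / denom)
            if score > best_score:
                best_sense, best_score = sense, score
        return best_sense

# Hypothetical toy data
model = NaiveBayesWSD().fit([
    ("apple", ["mukda", "gwail", "masitda"]),
    ("apology", ["batda", "hada", "mianhada"]),
])
print(model.predict(["mukda", "han"]))
```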
Moreover, in this paper, in order to obtain the same effect as using prior information in supervised disambiguation, the semantically related words of the ambiguous word are obtained and used as prior information. With this prior information, word sense ambiguity can be resolved even when no word strongly related to a specific meaning is found in the local context, or when semantically related words cannot be found because of the lack of statistical information caused by data deficiency.
In this paper, the semantic prior probability of an ambiguous word is calculated from the weights of its semantically related words, as in the following formula: the prior probability of a sense of an ambiguous word is assumed to be the ratio of the frequency of the related words of that sense to the total frequency of the related words of all senses. (9)
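Under this reading of Equation (9), the prior of each sense is its share of the total related-word weight. The sketch below illustrates it; the related words, weights, and the exact weighting are hypothetical assumptions for illustration.

```python
def sense_priors(related_word_weights):
    """Estimate P(sense) from the weights of its semantically related words.

    related_word_weights: {sense: {related_word: weight}}; the prior of a
    sense is its share of the total related-word weight over all senses.
    """
    totals = {s: sum(w.values()) for s, w in related_word_weights.items()}
    grand_total = sum(totals.values())
    if grand_total == 0:
        n = len(related_word_weights)
        return {s: 1.0 / n for s in related_word_weights}  # fall back to uniform
    return {s: t / grand_total for s, t in totals.items()}

# Hypothetical related-word weights for the two senses of 'sagwa'
print(sense_priors({"apple": {"gwail": 25.3, "mukda": 12.1},
                    "apology": {"batda": 8.0}}))
```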
Experiment Environment
In this paper, the 'Sejong morph-tagged corpus' (approximately 5 M word phrases), a deliverable of the 21st Century Sejong Project, was used to extract statistical information. Nouns, adjectives, and verbs were extracted from the Sejong morph-tagged corpus, and the co-occurrence frequencies of all the words were compiled into a dictionary.
To compare the lexical disambiguation method proposed in this paper with other studies, experiments were conducted using the Korean learning data of SENSEVAL-2. SENSEVAL is a contest on word sense disambiguation technology sponsored by ACL SIGLEX and EURALEX and has been held every three years since 1998; two Korean teams participated in the second contest. The target words in the SENSEVAL-2 Korean learning data were 'mal', 'noon', 'son', 'baram', 'geori', 'jari', 'euisa', 'mok', 'jeom', and 'bam'. The detailed data composition can be found in Appendix A.
The evaluation measure for lexical disambiguation in this paper is accuracy, obtained as follows:

Accuracy (%) = (number of ambiguous words whose meanings were correctly distinguished / total number of ambiguous words) × 100. (10)
Experiment Method
A window size for the context was considered when co-occurrence words in the local context of the ambiguous word were used for lexical disambiguation. The window size is the number of words considered on each side of the ambiguous word. As the window size grew, accuracy increased rapidly at first and then stabilized. In this paper, considering the size of the statistical dictionary, five was selected as the basic window size.
Performance also depends on which relation words of the Korean Lexical Semantic Network are used. In this study, the relationships usable for word sense disambiguation were divided into the five types shown in Section 3.1, and the weights were varied in the experiments. The weight of the coordinate terms was fixed at 1.0 while the weights of the other relation words were varied.
As shown in Figure 6, with the weight of the coordinate term fixed at one, the best accuracy was obtained when the weight of the hyponym was 0.8 and the weight of the hyponym of the coordinate term was 0.2. Furthermore, accuracy was higher when the hypernym and the coordinate term of the hypernym were not expanded. In this study, the weights of the relation words were set as follows.
(w_s, w_c, w_p, w_sc, w_ps) = (0.5, 0.4, 0, 0.1, 0) (11)

In addition, when the coordinate terms of the semantically related words of an ambiguous word were expanded, the range of related words to be expanded was varied in the experiment. Figure 7 shows the change in accuracy according to the expansion range of the coordinate terms of the semantically related words of an ambiguous word. As shown in Figure 7, it is more effective for word sense disambiguation to expand only the most highly related words rather than the collocation coordinate terms of all the semantically related words. In this study, only the coordinate terms of the collocation words in the top 25% of the most highly related words of an ambiguous word were expanded.
To evaluate the performance of the algorithm proposed in this paper, a method of determining the meaning by the most frequent class (MFC) was used as the baseline for comparison of performance. In addition, performance for a basic algorithm and the newly improved algorithm was compared.
The basic algorithm is the one previously used in the lexical disambiguation system at Busan University, which performed the calculation using the semantic coordinate terms of the ambiguous word. The improved lexical disambiguation method in this paper addresses the data deficiency problem as follows: ① a weight is adjusted according to the type of semantically related word of an ambiguous word, so that more relation-word information is used than in existing methods; ② the semantically related words of an ambiguous word and the coordinate terms of those related words are expanded, so that more information is used than in existing methods; ③ words such as numerals and proper nouns are normalized using their part-of-speech information. Table 7 compares the performance of the basic and improved algorithms. The average accuracy of MFC was 78.29%, while the accuracy of the proposed algorithm was 88.11%. The improvement method ① was first applied to the basic algorithm, then ① and ②, and finally ①~③; accuracy improved by 5.08%, 8.01%, and 9.90%, respectively. The proposed method showed better accuracy than MFC for most ambiguous words. However, MFC had markedly high accuracy for words whose meanings in the evaluation corpus were biased toward one sense, such as 'baram' and 'mok'. In particular, the accuracy for 'mal' was the lowest. This was because 'mal' expressing 'grain' or a 'unit of quantity of liquid' appeared more frequently than the 'mal' meaning 'means of expressing people's thoughts and feelings' that is widely used in general. Table 8 shows the ratios of the meanings of 'mal' in the Korean learning data of SENSEVAL-2.
Analysis of Effect of the Prior Probability Estimation
In previous studies, a method of using prior information was developed as follows: lexical disambiguation is first performed on a raw corpus using a basic model (the primary model) based on unsupervised disambiguation, and prior knowledge extracted from that result is then applied to the primary model (yielding the secondary model). Figure 8 shows this process. To compare with the proposed prior probability estimation, an experiment was conducted using the estimation method of Figure 8: ① a primary model was constructed using statistical information extracted from the learning corpus (using related words and relation words); ② lexical disambiguation was performed on the learning corpus with the primary model; ③ prior knowledge was extracted from the disambiguation result; ④ a secondary model was constructed from the primary model and the extracted prior knowledge; and ⑤ lexical disambiguation was performed on the evaluation corpus with the secondary model. Table 9 shows the lexical disambiguation performance when the prior probability estimated by the method of Figure 8 and the prior probability proposed in this study were used. As shown in Table 9, prior knowledge estimated from the semantically related words of the ambiguous word contributed more to lexical disambiguation than prior knowledge extracted from the tagging results on the learning corpus. This is because the accuracy of the primary model was only 83.49% on average, so the secondary model could not be constructed from accurate prior knowledge.
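The comparison pipeline of Figure 8 can be summarized as a sketch fixing only the order of the five numbered steps; the function names are placeholders for the components described in the text, not actual APIs.

```python
def bootstrap_prior_pipeline(learning_corpus, evaluation_corpus,
                             build_model, extract_priors, disambiguate):
    """Two-stage prior estimation used for comparison (Figure 8).

    build_model, extract_priors, and disambiguate are placeholder callables
    standing in for the components described in the text.
    """
    primary = build_model(learning_corpus, priors=None)          # step 1
    tagged = disambiguate(primary, learning_corpus)               # step 2
    priors = extract_priors(tagged)                                # step 3
    secondary = build_model(learning_corpus, priors=priors)        # step 4
    return disambiguate(secondary, evaluation_corpus)              # step 5
```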
To determine whether the proposed method shows the same performance in other languages, we conducted an experiment with English. For English, the English WordNet was used instead of the Korean Lexical Semantic Network. We evaluated our method using SemCor [30], an English corpus with semantically annotated texts. The semantic annotation was done manually with WordNet 1.6 senses (SemCor version 1.6) and later automatically mapped to WordNet 3.0 (SemCor version 3.0). The SemCor corpus consists of 352 texts from the Brown corpus. Table 10 shows the performance of our model and of existing models on SemCor. The existing models compared are fine-tuned deep-learning language models for word sense disambiguation. All three models were fine-tuned on 80% of SemCor from the base model and then evaluated on the remaining 20%. As can be seen in Table 10, our proposed method showed almost the same or slightly lower performance than the existing models even though it does not use supervised learning. In particular, on the SE13 evaluation our model showed the best performance; in the case of SE13, the amount of training data is very small compared with the other tasks. The deep-learning-based models perform better when the training data for the target word are sufficient, but our proposed method performs better when the training data are insufficient.
Practical Experiment with the Proposed System
As explained earlier, lexical disambiguation can be used as a preprocessing step in various natural language processing applications such as information retrieval and machine translation. For the system to be accepted as a preprocessing component, it is necessary to increase the performance of the lexical disambiguation while reducing the processing time and the required storage space. In particular, calculating the chi-square statistics and the semantic prior probabilities takes significant time. In this study, the dictionary of chi-square statistics between words and the prior probability dictionary were therefore constructed in advance, and the chi-square statistics and related words were retrieved through a search method, thereby minimizing the processing time of the lexical disambiguation.
To search the chi-square statistics, indexes were created in blocks of a certain size from the chi-square information. The target block is located using a word-pair key and loaded from the file into memory, and the chi-square statistic of the target word pair is then fetched by binary search within the block. The prior probability information is linked directly to the word index, so it is fetched as soon as a word-pair key is looked up. Figure 9 shows this search method for the chi-square statistics and the prior probability information.
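A simplified sketch of this lookup scheme follows: the chi-square dictionary is split into sorted blocks, an in-memory index maps a word-pair key to its block, and the statistic is found by binary search inside that block. The block size, key construction, and in-memory lists standing in for file blocks are assumptions for illustration.

```python
import bisect

class ChiSquareStore:
    """Block-indexed lookup of chi-square statistics for word pairs."""

    def __init__(self, entries, block_size=1024):
        # entries: list of ((word1, word2), chi_square) pairs
        entries = sorted(entries)
        self.blocks = [entries[i:i + block_size]
                       for i in range(0, len(entries), block_size)]
        # index kept in memory: first key of each block
        self.first_keys = [block[0][0] for block in self.blocks]

    def lookup(self, word1, word2):
        key = (word1, word2)
        # find the block whose key range contains the pair
        i = bisect.bisect_right(self.first_keys, key) - 1
        if i < 0:
            return None
        block = self.blocks[i]          # in a real system: load the block from file
        keys = [k for k, _ in block]
        j = bisect.bisect_left(keys, key)  # binary search inside the block
        if j < len(block) and block[j][0] == key:
            return block[j][1]
        return None

store = ChiSquareStore([(("sagwa", "mukda"), 25.3), (("sagwa", "batda"), 8.0)])
print(store.lookup("sagwa", "mukda"))
```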
For the practical experiment on lexical disambiguation based on the large-scale chi-square statistics and prior probabilities, the processing speed was measured. Based on the most frequently appearing words in the Sejong semantic-tagged corpus, 200 ambiguous words were extracted and tested over 10,000 sentences to analyze the processing speed and accuracy of the lexical disambiguation. The average number of meanings of the ambiguous words was 5.7.
As shown in Figure 10, the execution time was 350 s without the memory-based search method and 22 s with it; that is, an average of about 450 ambiguous words were resolved per second. The average accuracy of the lexical disambiguation was 86.3%, about 4% lower than on the SENSEVAL-2 data, because the average number of meanings per ambiguous word was larger than in the SENSEVAL-2 data. Figure 10 also shows the distribution of semantic analysis accuracy over the 200 ambiguous words: accuracy above 90% was achieved for 67 ambiguous words, 31% of the total, and accuracy of 85~90% was achieved for 89 ambiguous words.
Conclusions and Future Research
This paper proposed a novel unsupervised disambiguation method that showed better performance than existing knowledge-based or unsupervised lexical disambiguation methods without the need for a large sense-tagged corpus.
Since relation words in the Korean Lexical Semantic Network share characteristics, the meaning of an ambiguous word can be distinguished by determining the relationship between the semantic relation words of the ambiguous word and the co-occurrence words in the local context. The performance of the lexical disambiguation was further improved by using more relation-word information than existing methods: weights were adjusted according to the type of semantic relation word, and the semantically related words of the ambiguous word and the coordinate terms of those related words were expanded. Finally, numerals and proper nouns were normalized using part-of-speech information to alleviate the data deficiency problem, and the semantically related words of the ambiguous word were used as prior information in order to obtain the same effect as using prior information in supervised disambiguation.
The contributions of this study are as follows: First, lexical disambiguation was conducted using statistical information without a sense-tagged corpus by utilizing KorLex, which is a Korean Lexical Semantic Network. Second, better performance was achieved using only the minimum information (frequency of appearance of a single word, frequency of appearance of co-occurrence, and part-of-speech information) than the existing knowledge-based lexical disambiguation method.
Future research will first include evaluating additional ambiguous words with other evaluation data to further increase the reliability of the system. Second, a study on preprocessing, such as selectional constraints, will be conducted for cases that cannot be resolved with statistical information because of the data deficiency problem.
Conflicts of Interest:
The authors declare no conflict of interest.
Analysis of Riverbed Evolution of the Waigaoqiao Branch Channel of the Yangtze Estuary in Flood Period Under New Water and Sediment Conditions
With the change in sediment conditions in the Yangtze River Estuary, the evolution of the river regime has entered a new stage: the overall trend has changed from siltation to erosion, and the impact of this overall trend on the local river regime needs further study. Based on hydrological and topographic data from the 2020 flood season, this paper analyzes the riverbed change of the Waigaoqiao branch channel during the flood period under the new water and sediment conditions, and statistically analyzes the riverbed evolution and the causes of scouring and silting. The results show that, under the new water and sediment conditions, the water depth of the branch channel decreased under the influence of channel dredging and the change in the river regime of the South Port (Nangang). During the 2020 flood period, the Waigaoqiao branch channel was continuously silted, mainly in front of the Waigaoqiao Wharf (Phase IV). The increase in sediment concentration caused by the flood and the scouring of the shoal upstream of the branch channel are the main reasons for the silting. Based on the practical needs of waterway operation and maintenance, the paper puts forward countermeasures and suggestions according to the analysis results.
Introduction
The Shanghai Waigaoqiao Branch Channel is located on the south bank of the South Port (Nangang) of the Yangtze River Estuary, adjacent to the Yuanyuansha precautionary area. It is a vital part of the Waigaoqiao Channel and forms the connecting water area between the wharf front of the Waigaoqiao Port Area and the deep-water channel. It mainly serves the berthing and unberthing needs of ships in the Waigaoqiao port area and the navigation of small and medium-sized ships outside the main channel; ship traffic is relatively heavy, with frequent arrivals and departures at the wharf [1]. The variations in the riverbed and the reliability of the water depth are therefore essential for ensuring the navigation safety of the coastal port channel and the main channel of the South Port.
Owing to the changes in the Nangang river regime around 2005, a large amount of sand passed through the northern channel, which not only caused progressive silting from upstream to downstream but also intensified back-silting in the middle section [2][3][4][5]. As the incoming sediment from the Yangtze River Basin has decreased, the evolution of the Yangtze River Estuary has undergone a significant change [6]. The succession pattern has entered a new stage, shifting from overall silting over the years to overall erosion, although different regions of the estuary show different erosion and deposition characteristics: the river section above the sandbar at the estuary and the front edge of the underwater delta have been scouring, while the sandbar area continues to silt slowly [7]. Over the past few years, as river regime adjustments have gradually been transmitted downstream, the deep trough of the Nangang section above the sandbar has been scouring and cutting downward, and the areas on both sides of the deep trough have evolved differently; the response of the local river regime to the overall river regime evolution shows a certain spatial difference [8]. After a series of waterway and river regulation projects [9], the riverbed boundary of the Yangtze River Estuary and the overall river regime have largely stabilized. Nevertheless, the erosion and cutting of local sand bodies (e.g., the Xinliu River Sand and Shabao, and the Ruifeng Sand in the Nangang reach) still significantly affect the Nangang channel and the port area. In particular, the vicinity of the Waigaoqiao branch channel is strongly affected by the overall river regime changes in Nangang and by the sediment discharged from the upstream shoal, and some changes may irreversibly affect future riverbed evolution and channel maintenance [10,11]. The effect of basin floods on riverbed change is also of critical importance [12]. The 2020 flood season produced the largest flood peak since the impoundment of the Three Gorges Reservoir, with a Datong discharge of 84,800 m³/s, nearly twice the normal flood-season discharge. Given the recent changes in the Nangang river regime, this paper analyzes the changes in the water depth of the Waigaoqiao branch channel during the flood period under the new water and sediment conditions and the effect of the overall river regime of the Nangang reach. This study is therefore important for ensuring the navigation safety of the existing waterways and the stable operation of the Waigaoqiao port area, and for understanding the dynamics of water and sediment under the new river regime.
Incoming Water and Sand Conditions and the 2020 Flood Process
Since the 1950s, the annual average runoff at the Datong Station has been 895.9 billion m³, with relatively stable variation. During the major flood of 2020, the Datong discharge peaked in July, and the discharge after July was overall larger than in previous years. In May 2020, the average monthly Datong discharge was approximately 20,000 m³/s; the peak value was 84,800 m³/s in July 2020, with an average monthly discharge of 71,000 m³/s; and the discharge in September remained relatively high at nearly 52,000 m³/s compared with previous years.
From 1951 to 2019, the annual sediment transport at the Datong Station decreased [13]. Specifically, the average annual sediment transport was 470 million tons from 1951 to 1985, 340 million tons from 1986 to 2002, and 134 million tons from 2003 to 2019. Sediment transport in the Yangtze River Estuary also differs between the flood and dry seasons, accounting for nearly 78.5% of the annual total in the flood season and 21.5% in the dry season.
Analysis of Recent Evolution
1) Current status of the river
The waters north of the A54A-A54B light-buoy line in the branch channel of Waigaoqiao Phases IV~VI are connected with the deep-water channel, and the water depth there is large, basically 13~14 m. The Waigaoqiao coastal channel is the water area on the south side of the light-buoy line; its water depth is relatively shallow, generally less than 12.5 m, and is characterized by shallower depths upstream and downstream and greater depths in the middle section. The water depth in front of Waigaoqiao Phase IV decreases markedly from north to south.
The design bottom elevation of the branch channel of Waigaoqiao Phases IV~VI follows a stepped arrangement, increasing from -10.0 m to -11.5 m from top to bottom and then reducing to -10.5 m. For simplicity of presentation, the maintenance dredging areas are numbered 1~7 from top to bottom (Figure 1).
2) River regime changes over the years
According to the water depth data of the fixed Nangang sections over past years (the sections shown in Figure 1), since 2009 the branch channel in the Waigaoqiao Phases IV~VI port area has kept a basically unchanged cross-section shape, and the water depth has progressively increased; recently the rate of increase has declined and tended to stabilize [14]. Section A8 is the water area in front of the upper wharf of Waigaoqiao Phase IV; its water depth decreases from north to south, with relatively small depths in the coastal waters on the south side, while the depth patterns of the other sections are relatively stable.
From November 2009 to November 2010, the deep-water channel was significantly deepened by 1.5~2 m; since then, the water depth has basically remained below 12.5 m. From November 2010 to November 2014, the water depth of the branch channel increased significantly by 1.5~2 m. Afterward, the water depth was relatively stable on the north side of the light-buoy line, while the water depth on the south side increased slightly, by about 1 m.
Figure 6. Water depth variations of typical sections at Waigaoqiao Phases IV~VI.
3) River regime variations during the flood
According to the analysis of the topographic data for the major flood period from May to September 2020, the branch channel of Waigaoqiao Phases IV~VI was continuously silted (Figure 7), with an overall net siltation volume of 1.33 million m³. The cross-section morphology of each dredging area did not change significantly, but the water depth decreased to a certain extent; the average water depth within the light-buoy line in the dredging areas decreased by nearly 0.5 m. Figure 5 and Figure 8 present the layout of the dredging areas and sections and the typical sections, respectively.
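As an illustration of how such a net siltation volume can be derived from successive bathymetric surveys, the sketch below differences two gridded bed-elevation surfaces and multiplies by the cell area; the grid values, cell size, and function name are hypothetical and do not reproduce the survey processing actually used in this study.

```python
def net_siltation_volume(bed_before, bed_after, cell_area):
    """Net deposition (+) or erosion (-) volume between two gridded surveys.

    bed_before, bed_after: 2-D lists of bed elevation (m) on the same grid.
    cell_area: horizontal area of one grid cell (m^2).
    """
    volume = 0.0
    for row_before, row_after in zip(bed_before, bed_after):
        for z0, z1 in zip(row_before, row_after):
            volume += (z1 - z0) * cell_area
    return volume

# Hypothetical 2 x 3 grid with 50 m x 50 m cells
before = [[-11.0, -11.2, -10.8], [-10.9, -11.1, -11.0]]
after = [[-10.6, -10.8, -10.5], [-10.7, -10.9, -10.8]]
print(net_siltation_volume(before, after, cell_area=2500.0))  # positive = siltation
```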
The near-shore siltation was largely concentrated in front of the upper wharf of Waigaoqiao Phase IV, with a siltation amplitude of roughly 0.5~0.8 m, while the riverbed close to the deep-water channel north of the light-buoy line silted by about 0.5 m overall. From May to July, the downstream wharf of Waigaoqiao Phase IV and the vicinity of the A54B light buoy were alternately scoured and silted, with an amplitude of 0.2~0.5 m. Under the new water and sediment conditions in the Yangtze River Estuary, the overall erosion and deposition of the Waigaoqiao branch channel in previous flood seasons had been relatively balanced or slightly erosional: from April to September 2018 the total scour of the branch channel was 300,000 m³, and from April to August 2019 it was 20,000 m³. In contrast, the branch channel was significantly silted during the 2020 flood, with a siltation volume of 1.33 million m³.
4) Shallow area analysis
While ensuring the normal operation of the wharf, the stepped bottom elevation of the Waigaoqiao branch channel was designed according to the water depth characteristics in front of the wharf and years of dredging experience, in order to reduce the amount of dredging and save channel maintenance costs. However, some areas remain in which the water depth cannot satisfy the design requirements of the channel.
The Waigaoqiao branch channel was maintained and dredged at the end of February 2020, and in May 2020 the water depth in each area basically met the channel design requirements. As the channel was continuously silted during the 2020 flood season, the shallow-spot rates of areas 1~4 all exceeded 10% from July to September. In terms of distribution, the shallow areas of the channel were mainly located in front of Waigaoqiao Phase IV. In terms of the change in the shallow-spot rate, the shallow area in front of the Waigaoqiao Phase IV Wharf broadened significantly from May to July; from July to September, under the high upstream discharge, the shallow area was not eroded and the shallow-spot rate still increased slightly.
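The shallow-spot rate referred to here can be illustrated as the fraction of channel grid cells whose water depth is less than the design depth; in the sketch below the depth grid and the design depth are hypothetical placeholders, not the study's actual data.

```python
def shallow_spot_rate(depths, design_depth):
    """Fraction of grid cells whose water depth is less than the design depth."""
    cells = [d for row in depths for d in row]
    shallow = sum(1 for d in cells if d < design_depth)
    return shallow / len(cells) if cells else 0.0

# Hypothetical depth grid (m) checked against an assumed 10.0 m design depth
grid = [[10.4, 9.8, 10.1], [9.6, 10.2, 10.3]]
print(f"{shallow_spot_rate(grid, design_depth=10.0):.0%}")
```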
1) Changes in incoming water and sand conditions
According to statistics from the NG0 station on the fixed vertical line of Nangang (located on the north side of the lower section of the Nangang channel), the average sediment concentration in the flood season has decreased overall in recent years: from 2014 to 2020 it was about 0.2 kg/m³, roughly 60% less than before 2008. Since the amount of sediment coming from the upper reaches of the Yangtze River has declined, Nangang has been slightly scouring over the past few years, and the major changes in scouring and silting are associated with the development of channel gullies and the erosion and release of sand bodies. Under the new water and sediment conditions, the overall water depth of the branch channel has increased and become increasingly stable, which helps maintain the water depth of the channel. During the flood of July 2020, the large runoff and relatively high sediment concentration in the Yangtze River Estuary led to considerable sediment transport in the water body, which was one reason for the siltation of the branch channel in the 2020 flood season. In addition, the flow during the flood period is strong and readily causes erosion; in the absence of later sediment replenishment, local sand bodies become more unstable, and the sediment released from the upstream shoal destabilizes the riverbed of the branch channel.
2) Adjustment of the Nangang beach and trough shape
Under the new water and sediment conditions, the sediment concentration in the Nangang waters has decreased overall in recent years. The overall pattern of the Nangang beach and trough has remained stable since 2012, while the main trough has turned to an erosional trend and the deep trough has widened. On the whole, the Changxing Waterway has been scouring. The sand tail of the upper sand body of Ruifeng Sand was eroded, and the developing ebb channel cut the upper sand body of Ruifeng Sand, causing the south side of the upper sand body to scour along the beach surface. The released sediment formed a siltation zone downstream, silting the Wusongkou anchorage on the north side of the lower section of the Nangang main trough. Affected by the siltation of the shallow (10 m) sand bodies on the north side, the deep part of the lower section of the Nangang main trough shifted southward; the thalweg swung toward the waters off the Waigaoqiao Phase IV~VI wharves, and the Waigaoqiao branch channel was scoured.
On the whole, under the new water and sediment conditions, the overall river regime pattern of Nangang is stable: the volume of the river channel below 0 m has increased, the deep channel is in a state of erosion, the sand body on the north side of the lower section of the main trough is silting, and the deep part is moving southward. All of these conditions underpin the stability of the water depth of the Waigaoqiao branch channel on the south bank. According to the flow field during the rise and fall of the tide in the Nangang flood season, the overall ebb-tide flow velocity in Nangang is greater than the flood-tide velocity, and the flow direction during the ebb tide is relatively smooth and consistent. The upstream shoal of the Waigaoqiao Phase IV~VI branch channel is scoured by the ebb tide and then affected by the diverted current at high tide; the flow velocity decreases, the sediment settles, and siltation occurs. Thus, the area in front of the Waigaoqiao Phase IV Wharf has constantly been in a silting state.
From May to August 2020, the upstream shoal of the Waigaoqiao Phase IV~VI wharves was scoured and narrowed, and the 10 m isobath retreated slightly. The scouring of the upstream shoal released sediment, which settled in the maintenance dredging areas of the downstream branch channel, especially in front of the Waigaoqiao Phase IV Wharf, with a siltation range of 0.2~0.6 m.
3) Dredging maintenance
The deep-water channel is located on the north side of the branch channel. During its maintenance dredging, the bottom soil is disturbed and the sediment concentration in the nearby waters increases, causing the branch channel of Waigaoqiao Phases IV~VI to silt [15].
The Waigaoqiao branch channel has been maintained and dredged annually since the infrastructure dredging project began in 2010. Except for the emergency maintenance dredging in 2014, the amount of maintenance dredging has decreased: the two maintenance dredging campaigns in the four years from February 2016 to February 2020 removed a total of about 100,000 cubic meters. Owing to the new water and sediment conditions in the upper reaches and human intervention through maintenance dredging, the water depth in this area has been relatively stable over the past few years. No channel dredging was carried out during the major flood from May to September 2020, and the survey maps show that the channel tended to silt. In October 2020, maintenance dredging of the branch channel removed about 200,000 cubic meters. This demonstrates that, at this stage, and especially under particular water and sediment conditions, the area still requires regular terrain monitoring and maintenance dredging to meet berthing and navigation requirements.
1) Status of operation and maintenance of branch waterways
From the perspective of the Nangang shipping structure, the construction of the Waigaoqiao branch channel meets the berthing and unberthing requirements of large container ships at the Waigaoqiao Wharf and ensures the normal and efficient operation of the terminal; moreover, small ships can be diverted, and the pressure on the deep-water channel can be reduced. The operation of the Waigaoqiao branch channel thus boosts the development of shipping in the Yangtze River Estuary. From the perspective of the Nangang river regime, under the new water and sediment conditions and the implementation of a series of comprehensive regulation projects in recent years, the water depth of the Waigaoqiao branch channel has steadily stabilized as the river regime of the Nangang deep channel has progressively improved and stabilized, and the design water depth of the channel can basically be met. The construction of the Waigaoqiao branch channel is therefore reasonable.
However, the branch channel is affected by the diverted flow pattern at high tide; the hydrodynamic force is insufficient, and sedimentation has produced shallow water depths in front of the Waigaoqiao Phase IV Wharf and a shallow area that hinders navigation. These situations were aggravated during the flood.
2) Response measures and suggestions
(1) Maintain water depth monitoring and channel dredging
As indicated by the changes in the channel water depth and the comparison of dredging volumes over the years, the channel water depth is significantly affected by dredging, and the branch channel could be blocked by siltation in the absence of human intervention such as maintenance dredging. It is therefore necessary to continuously follow the changes in water depth and topography in this area, strengthen topographic monitoring, and carry out dredging maintenance promptly.
(2) Appropriately extend the maintenance cycle and strengthen monitoring under special water and sediment conditions
Under the new water and sediment conditions, and provided the existing Nangang river regime pattern is maintained, the maintenance period of this water area can be appropriately extended to reduce maintenance costs. In addition, water depth monitoring should be intensified during floods to avoid obstructing navigation and affecting port production.
(3) Continue to pay attention to changes in the Nangang river regime pattern
The Waigaoqiao branch channel is located on the south bank of the lower reaches of Nangang and is affected by changes in the upstream river regime. The south side of the upper sand body of Ruifeng Sand has recently been eroded by channel gullies, the sand tails have been scoured, and an independent sand body has been cut off on the south side of the upper sand body of Ruifeng Sand. Because of the activity of this independent sand body, the silt released under the current Nangang scouring environment may be further eroded, causing temporary siltation in the lower section of Nangang. In the long term, it is necessary to continuously follow the changes in the overall Nangang river regime, especially the effect of changes in the river-core sand bodies (e.g., Ruifeng Sand) on the Waigaoqiao branch channel. If necessary, it is suggested to work with the relevant departments to take regulation measures on Ruifeng Sand to stabilize the local Nangang river regime and the water depth of the Waigaoqiao branch channel.
Conclusions and Prospects
1) Under the new water and sediment conditions, the overall river regime pattern of Nangang has become stable, the overall volume of the river channel has increased, and the deep channel is in a state of scouring. The sand tails of the upper sand body of Ruifeng Sand are eroded, and the developing ebb channel cuts the upper sand body of Ruifeng Sand. The released silt is deposited on the north side of the lower section of the main trough, causing the local thalweg to shift southward. Under the overall scouring pattern of the Nangang main channel and the effect of channel dredging, the water depth of the branch channel of the Waigaoqiao Phase IV~VI port area has progressively increased since 2009, although the rate of increase has declined in recent years.
2) During the 2020 flood, the Yangtze River Estuary area had a large runoff and a high sediment concentration. The scouring of the upstream shoal of the Waigaoqiao branch channel released sediment, which was deposited in the maintenance dredging areas of the downstream branch channel, especially in front of the Waigaoqiao Phase IV Wharf. During this period, the Waigaoqiao Phase IV~VI branch channel was continuously silted, with a total net siltation volume of 1.33 million m³, and the average water depth of each section decreased to a certain extent.
3) Under the new water and sediment conditions, and provided the existing Nangang river regime pattern is maintained, the maintenance period of this water area can be appropriately extended to reduce maintenance costs. However, at this stage, and especially under particular water and sediment conditions, regular terrain monitoring and maintenance dredging are still necessary to meet berthing and navigation requirements. In the long run, it is recommended to follow the overall river regime changes in Nangang, especially the effect of changes in the river-core sand bodies (e.g., Ruifeng Sand) on the Waigaoqiao branch channel.
The analysis of the riverbed evolution of the Waigaoqiao branch channel in the Yangtze (Changjiang) Estuary can bring significant direct economic benefits as well as many comprehensive social benefits. It can promote water purification and create new habitats for aquatic animals, and it can reduce energy consumption in ship transportation, thereby promoting energy conservation, emission reduction, and green economic development. Therefore, observation and recording of the riverbed evolution of the Waigaoqiao area in the Yangtze Estuary should be continued, so that the Yangtze River can better drive the development of the surrounding economy.
"Engineering"
] |
The Shifting Boundaries of Sustainability Science: Are We Doomed Yet?
In this issue of PLoS Biology, Burger and colleagues make several important contributions to the discourse of sustainability science, recalling limits of human economic and population growth derived from macroecology and physical principles [1]. We agree with many of the points offered in their paper in this issue and with those in the paper by Brown and colleagues [2]. However, we also believe there is danger in a vision of sustainability that is overly deterministic and does not reflect the dynamic nature of the biosphere, its ecosystems, and economies. We are also concerned about the implications of framing sustainability in the language of physics rather than ecology.
Recent policy discussions in preparation for the Rio+20 Convention emphasize the concept of “green economies.” Perhaps most cogently described by microbiologist Lynn Margulis, the term refers to any theory of economics that views human economic activity as embedded within ecosystems. Green economics is often used with or in place of the more widely used term of “sustainability” or “sustainability science.” Both terms reflect a new, evolving, and diffuse discipline—or perhaps a goal approached through many disciplines, including ecology, economics, engineering, and sociology. Given the central role of ecosystems in current paradigms for sustainable development, the science of ecology is a seemingly natural home for sustainability science.
However, ecology may also present some operational limits to assessing or implementing sustainable strategies. Given how difficult it is to develop ecological experiments and test hypotheses, ecology has been described as having more in common with the earth sciences (such as geology) than other biological sciences (such as physiology or molecular biology), and much less with physical sciences such as chemistry and physics [3],[4]. Given the importance of observation and inference in ecology, making predictions about complex ecological interactions requires accepting their inherent uncertainty and thus a particular humility in drawing conclusions [5].
A reader of the Burger and colleagues paper [1], for instance, might assume that the logical endpoints for its arguments are either an imminent global economic collapse triggered by stringent natural resource scarcities or catastrophic human population decline in a forceful realignment with global carrying capacity. These are dire options, with no realistically actionable response, and a reader would be forced to either reject the initial assumptions or to despair, neither of which is a useful motivational force for positive change.
Moreover, while we believe that heightened concern is warranted and that these endpoints are possible, we also believe there is evidence that they can be avoided or mitigated. Predictions made on similar first principles have been put forward repeatedly in the past (e.g., [6]–[8]), and rigidly materialist approaches to social and economic change often underestimate the flexibility and resilience of human economies and societies [9]. To date, technological advances such as increases in agricultural productivity spurred by the prospect or reality of scarce primary inputs (land, water, nutrients, energy), shifts in economic valuation, and policy-based human behavioral change, such as the actions under the Montreal Protocol to reduce tropospheric concentrations of ozone-depleting gases, have avoided or delayed our transgression of perceived thresholds in the Earth system [10],[11]. While we cannot assume that there is an equivalent to Moore's Law of semiconductor capacity for natural resource management [12] or have faith that efficiency and innovation alone will save us, we can credibly assume that the existential imperative for human adjustment and adaptation will prompt us to correct our seemingly disastrous course.
As a result, we believe that sustainability itself must rest on a broader foundation, particularly if we posit that sustainability science encompasses socioeconomic development, which requires the mobilization of natural resources in new ways to sustain and improve human well-being. Here, we describe several potential gaps in sustainability science, as well as evidence for what we hope is useful optimism that emerging economic paradigms are becoming more ecologically sensitive.
Can Economies Achieve Ecological Stability?
The term green economy references a major point of difference with sustainability science by suggesting that economies are embedded in dynamic, evolving ecosystems rather than existing in steady-state conditions. The distinction is significant; ecosystems are not unchanging or fixed but dynamic, often cyclical, and capable of evolution, transformation, and reengineering by species other than humans [13].
Ecosystems are also not isolated or fully self-contained; the laws of thermodynamics may not be heuristic for assessing sustainability at ''all spatial and temporal scales'' [1], particularly local scales. Thermodynamic relationships are probably most revealing as global rather than local processes given that the Earth, all ecosystems, and socioeconomic networks are thermodynamically open rather than closed. Applications of physical laws to complex biological and social systems are often challenging (e.g., [14]).
Management and Manipulation of Ecosystems: The Consolation of History
Global economic forces and high population density characterize the current period of natural resource exploitation, but we have long influenced ecosystems in significant ways, even when we had little in the way of global trade or population pressure. For instance, a preponderance of evidence suggests that humans contributed to the extinction of many large mammals in North and South America following the Bering land bridge migration beginning about 12,000 years before present (BP), as well as of large fauna across the Pacific islands, Madagascar, and New Zealand [15]. Hydrologists have recently posited that Native American land management practices altered the dominant geomorphological features of eastern North America's mid-Atlantic rivers in the pre-Columbian era [16,17]. Even many aspects of global trade considered new are primarily a matter of the extent and speed of change rather than novelty per se. Chinese consumption of American ginseng in the 17th and 18th centuries, for instance, almost drove the species to extinction in the Appalachian Mountains [18]. Iberian forests have yet to recover from the overproduction of wool during the 16th century, while the legacy of unsustainable farming practices in ancient Greece persists as degraded topsoils today [19]. With few exceptions, current human behavior differs from the past primarily as a matter of degree, one that merits concern at global aggregate levels, but does not present novel scenarios of local overconsumption per se.
Certainly not all long-term human impacts have been negative. Intensive rice agriculture began in the Yangtze basin about 8,000 years BP, a sustainable model for agriculture by any reasonable standard [20]. The extensive water infrastructure network around Chengdu, China, has diverted part of the Min River through the Dujiangyan for both flood control and irrigation without restricting fish connectivity since 256 BC [21], while some forests in India have been actively man-aged by surrounding communities for even longer periods [22].
Sustainability and Shifting Cycles: Macro-, Meso-, and Microecology
While organismal behavior (especially by humans) has profoundly altered many, if not most, ecosystems, most significant shifts in biogeochemical cycles and ecosystem qualities occur for abiotic reasons. The amount of water on Earth, for instance, has declined in absolute terms by about 26% since the beginning of life on Earth 3.5 billion years BP [23], but the relative balance between fresh and salt water evolves much more rapidly, normally in response to glacial-interglacial cycling. During the last glacial maximum about 20,000 years BP, glacial area extent was about 40 million km², compared to about 17.5 million km² today, representing many times more fresh water than now present, with sea levels over 100 m lower than currently extant [24]. Most of these transitions occurred relatively rapidly, in decades to centuries but occasionally over sub-decadal periods, and are thus quite relevant to human lifespans [25][26][27]. Even the Holocene (approximately the past 12,000 years) has seen dramatic shifts in lake levels (tens to hundreds of m) and river discharges (across several orders of magnitude) unrelated to human water management, reflecting changes in precipitation regime [28]. Fire frequency and severity for forest and savannah ecosystems are often connected to precipitation patterns [29]. These shifts have had important implications for human water management regimes, agricultural patterns, and urban densities, and pre-Columbian civilizations in the Americas excelled at developing innovative engineering approaches to manage such shifts in variability [30]. Sustainability over decadal to century timescales must be grounded in adaptive, flexible management that reflects many non-stationary aspects of human, climate, and biogeochemical conditions [31].
Innovation, Reorganization, and Efficiency
Humans have long caused irreparable harm to ecosystems, driven species to extinction, and have in turn endured major shifts in biogeochemical cycling. We agree that such incidents are avoidable and unacceptable and that the magnitude of current trends must not be dismissed. Humans have also developed ingenious and novel ways of making resource use far more efficient or exploiting new types of resources. Obvious developments here include the invention of agriculture and the domestication of wild plant and animal species, of course, but humans have also been innovative in energy development (wood, wind, coal, petroleum, hydropower, biofuels, geothermal, biogas, nuclear, solar, and wave power), the development of synthetic chemical fertilizers in the 19th century, and the discovery of modern antibiotics in the 20th century. Other innovations have been organizational, such as the development of cities in the Levant and East and South Asia, the birth of modern experimental science, and the transition from family-tribal-moiety structures to multiple scales of governance (including corporate, national, international, and global government structures and institutions).
Some responses to economic and environmental change defy the longstanding predictions of overpopulation concerns, such as the widespread trend towards declining birthrates as living standards increase [32], though the relationship between per capita energy consumption and population growth is complex [33]. While Burger and colleagues point to increasing energy consumption over the past few centuries, they disregard important shifts in the sources of energy in progressive economies [1]; the expansion of low-carbon energy sources in China, Brazil, the European Union, and other regions in recent decades marks a critical transition, and shifts from coal-fired power to hydropower or wind mark very significant transformations, with important implications for ecological footprints. For example, over 98% of Norway's electricity is derived from hydropower [34], about 20% of Brazil's transport fuel consumption is derived from renewable biofuels [35], while China has to date installed about 61 GW of windpower, or roughly three times the generation potential of the Three Gorges Dam [36]. The development of a global environmental movement is also notable in this context, as signified by both the 1992 Rio Earth Summit (attended by over 100 heads of state and 172 governments) and its planned 2012 successor conference, the Rio+20 Summit, in addition to important milestones achieved under the UN biodiversity and climate conventions (i.e., the United Nations Convention on Biological Diversity [UNCBD] and the United Nations Framework Convention on Climate Change [UNFCCC]).
While these and other innovations in organization, efficiency, and technology have had unintended side effects, they also resulted in major transitions in human survivorship, resource extraction efficiency, and social and cultural organization. They were also largely unanticipated or very difficult to predict for most observers prior to their invention. Taken together, humans have demonstrated great creativity in how we use technological, social, and cultural ''tools'' to solve resource limitations.
Not Doomed (Yet)
Our ''adjustments'' to the view of sustainability science presented by Burger and colleagues [1] are not meant to obscure or downplay absolute declines in resources such as economically valuable metals and agriculturally productive land, our heedless approach to anticipated tipping points in greenhouse gas accumulation, and ecosystem transformation and species extinction. The availability of natural resources is less of a problem than absolute limits in the Earth's ability to absorb the different outputs of economic activities, while maintaining conditions necessary for human productivity, much less the survival of humans and other species. Anthropogenic climate change is perhaps the most prominent example of these new scarcities and emerging ''limits to growth.'' Indeed, we attribute great merit to these cautionary appeals and to the evidence of Earth system thresholds. We argue for positive responses in behavior, technological progress, and economic realignments commensurate with the challenge of fulfilling human needs while maintaining an Earth system suitable for the long-term survival of humans and other species.
The authors ask, Can the Earth support even current levels of human resource use and waste production, let alone provide for projected population growth and economic development? They answer their question with little doubt: ''There is increasing evidence that modern humans have already exceeded global limits on population and socioeconomic development, because essential resources are being consumed at unsustainable rates'' [1]. We agree that our present consumptive trajectory risks surpassing perceived planetary boundaries in the safe operating space for humanity (cf. [11]). We argue that these risks merit a paradigm shift, a global transformation, and that this paradigm shift is underway. We believe that the transition from relatively static approaches to sustainability to flexible green economies embedded in dynamic, variable ecosystems will prove to be a critical intellectual shift for humans this century.
There are reasons for cautious optimism. It is no accident that the modern synthesis of payments for ecosystem services crystallized in the developing world in Costa Rica when the scarcity of ecosystem goods and services from forest conversion was recognized as a social and economic threat [37]. Revolutionary approaches to water management such as dynamic environmental flows have evolved to address both climate variability and absolute shifts in Tanzania's precipitation regime (http://www.iucn.org/about/union/secretariat/offices/esaro/what_we_do/water_and_wetlands/prbmp_esaro/). A global policy and economic transformation attributing value to standing forest has emerged with the development of ''REDD+'' incentives to reduce greenhouse gas emissions from deforestation, particularly in tropical forests (cf. [38]). Many developing countries understand that Western models of development are inappropriate if not impossible to achieve. We believe that these and other positive trends are both accelerating and permeating local, national, and global economies quickly and permanently.
Blending Conservation and Development into Green Economies
Perhaps the most significant shifts in resource management consciousness have emerged through climate change adaptation and the recognition that institutions, infrastructure, and ecosystems have been managed on the basis of climate ''stationarity,'' which is the assumption that the past is an effective guide to the future [30,39].
We suggest that ecosystems and economies should be managed flexibly for at least three non-stationary processes, including demographics, economics, and climate. A fourth non-stationarity should target research and investments that lead to increased efficiency and smaller resource footprints. Taken together, these non-stationarities fit social-ecological resilience theory quite closely. Complex and shifting human interactions with ecosystems and biogeochemical cycles can be translated into decision-making processes [40].
With increasing scientific knowledge and global awareness of emerging environmental risks, scarcities, and potential tipping points in social and ecological systems, measures are being taken to correct our flawed economic models: internalizing externalities in accounting and decision making, integrating planetary boundaries in policy discussions, and committing to reverse trends in environmental and social decline. We agree with our respected colleagues that this change is not happening at the scale or pace necessary to resolve the problem [1], and exceeding tipping points is a genuine risk. Such signal failures of resource management as the collapse of the Atlantic cod fishery in the 20th century [41] or the lack of a global carbon emissions agreement at the UNFCCC CoP15 in Copenhagen in 2009 highlight our difficulty in negotiating science, institutional change, and governance. However, we also highlight that the adaptive capacity of humanity to overcome seemingly insurmountable constraints on human development within a productive and resilient biosphere has been demonstrated at more modest scales and that this capacity for transformation exists in our interconnected global community at a scale previously unimaginable.
Science-based resource management has seen dramatic growth in sophistication in recent decades, as conservation and economic development have blended together and flexible, non-stationary management approaches have become increasingly mainstream in development banks, governments and aid agencies, and corporations. These shifts represent real advances in linking ecology to practical challenges in managing resources across multiple spatial and temporal scales.
For science to maintain a useful role with policymakers and resource managers, we must communicate in ways that can be translated into policy and practical action. Our intuition is that fear has proven to be a far less helpful means of communicating the need for positive change than hope. | 4,674.6 | 2012-06-01T00:00:00.000 | [
"Economics"
] |
VESPUCCI: Exploring Patterns of Gene Expression in Grapevine
Large-scale transcriptional studies aim to decipher the dynamic cellular responses to a stimulus, like different environmental conditions. In the era of high-throughput omics biology, the most used technologies for these purposes are microarray and RNA-Seq, whose data are usually required to be deposited in public repositories upon publication. Such repositories have the enormous potential to provide a comprehensive view of how different experimental conditions lead to expression changes, by comparing gene expression across all possible measured conditions. Unfortunately, this task is greatly impaired by differences among experimental platforms that make direct comparisons difficult. In this paper, we present the Vitis Expression Studies Platform Using COLOMBOS Compendia Instances (VESPUCCI), a gene expression compendium for grapevine which was built by adapting an approach originally developed for bacteria, and show how it can be used to investigate complex gene expression patterns. We integrated nearly all publicly available microarray and RNA-Seq expression data: 1608 gene expression samples from 10 different technological platforms. Each sample has been manually annotated using a controlled vocabulary developed ad hoc to ensure both human readability and computational tractability. Expression data in the compendium can be visually explored using several tools provided by the web interface or can be programmatically accessed using the REST interface. VESPUCCI is freely accessible at http://vespucci.colombos.fmach.it.
INTRODUCTION
Grapevine (Vitis spp.) is an economically important fruit crop and one of the most cultivated crops worldwide (Vivier and Pretorius, 2002). Grape berries are consumed as fresh fruit or used for high-value commodities such as wine or spirits. Grapevine transcriptomics studies started over a decade ago, initially using microarrays but later, exploiting the sequenced genomes (Jaillon et al., 2007;Velasco et al., 2007) and the availability of high-throughput sequencing, also using RNA-Seq approaches. As systems biology becomes more prevalent in everyday analysis, one of the pressing questions is how to integrate different sources of information into one coherent framework that can be interrogated in order to gain knowledge about the system as a whole (Rhee et al., 2006). Prior to biological information integration across several levels (such as proteomics, transcriptomics, and metabolomics), it is important to acquire and combine all the possible available information within each specific field. Together with the methodological problem of combining different sources of information, there's the more practical issue of having sufficient data to justify data integration in the first place, because a large amount of data is needed to draw general and valid conclusions. While for model species this is hardly an issue, for non-model crop species the number of performed experiments might be limited, the technological platforms less established, and heterogeneous data a further complicating factor. Nevertheless, as biology is turning into a data-driven science the prospect of large dataset availability becomes more and more feasible even for non-model species, and in terms of gene expression and functional analysis there have been several efforts to fulfill data integration in different organisms including grapevine (Wong et al., 2013;Pulvirenti et al., 2015), strawberry (Yue et al., 2015), and citrus (Wong et al., 2014).
In this paper, we present an expansive grapevine gene expression compendium that can be used to analyze grapevine gene expression at a broad level. It was created based on an approach for dealing with the large heterogeneity of data formats present in public databases, and to integrate cross-platform gene expression experiments in one dedicated, coherent database. The proof-of-concept of this approach was presented in Engelen et al. (2011) as a web-application for exploring and analyzing specific expression data of several bacterial species. This original technology platform has already been used as a basic framework for creating a gene expression compendium for a more complex case, the multicellular higher eukaryote Zea mays. Here, we used the most updated version of the COLOMBOS technology (Moretto et al., 2016) to show how this approach can be further extended for the creation of gene expression compendia on other important crop species, focusing our attention on grapevine gene expression studies. Regardless of the available tools, most of the steps toward the creation of such a compendium require a massive amount of manual curation, from defining a controlled vocabulary for description of experimental conditions to the interpretation of experiment designs and annotation of the included samples. The benefits of Vitis expression studies platform using COLOMBOS compendia instances (VESPUCCI) lie in the availability of the whole known measured transcriptome activity of grapevine in a single programmatically accessible repository and the possibility of extensively exploring gene expression patterns through the visual tools made available by the web interface.
Data Sources
The experiments included in VESPUCCI have been collected from the Gene Expression Omnibus (GEO; Barrett et al., 2013), ArrayExpress (Kolesnikov et al., 2015), and the Sequence Read Archive (SRA). The majority is made up of microarray experiments (91% of samples), with the 'NimbleGen 090918 Vitus HX12 array' and 'Illumina HiSeq 1000' being the most used platforms among microarray and RNA-Seq experiments, respectively. Table 1 shows the summary of samples imported per platform. The complete overview of imported experiments and platforms is available in Supplementary Table S1.
Sample Annotation
Samples in VESPUCCI have been manually curated using a controlled vocabulary to precisely describe which parameters have changed across different experimental conditions. The creation of the controlled vocabulary is an ongoing adaptive manual process, in which curators add or modify new terms as needed during the acquisition of new experiment samples, keeping the vocabulary as concise and organized as possible. Terms in the vocabulary have largely been introduced ex novo following the original experimental designs, but on occasion have also been borrowed from other annotation systems like the Plant Ontology (Cooper et al., 2013) for describing the plant anatomical structures or the modified Eichhorn-Lorenz scale (Dry and Coombe, 2004) for describing grapevine-specific developmental stages. The complete vocabulary, along with its hierarchical structure, is available in Supplementary Table S2.
Compendium Creation
The compendium creation process can be divided into three major steps: data collection and parsing, sample annotation, and data homogenization. To facilitate these three steps and to deal with the complexity of maintaining large amounts of data and metadata, we have relied mostly on the COLOMBOS v2.0 and v3.0 (Moretto et al., 2016) backend management applications.
For this V. vinifera expression compendium, new tools were added to the COLOMBOS backend software, mainly related to the probe-to-gene (re)mapping. Specifically, microarray probes are now aligned by a two-step filtering procedure using the BLAST+ program (Camacho et al., 2009). The two filtering steps ensure that probes not only map to genes with high similarity (restrictive alignment threshold), but also that they map uniquely (unambiguously) to a single location and are less prone to cross-hybridization (less restrictive alignment threshold). Probes of different microarray platforms generally vary in terms of length, species/cultivar of origin, and sequence quality. To obtain the best possible alignment given each platform's specific characteristics, parameters and cutoff thresholds were employed on a platform-specific basis.
TABLE 1 | Overview of all samples imported in VESPUCCI ordered by number of samples. The first column contains the name of the transcriptomics platform, the second column is the type of platform, either microarray or RNA-Seq. The third column contains the number of samples measured with the respective platform and imported in VESPUCCI.
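To make the two-step filtering idea concrete, the sketch below shows one way such a procedure could be implemented on BLAST+ tabular output (-outfmt 6). The identity thresholds and file layout are illustrative assumptions only; as noted above, the actual VESPUCCI pipeline tuned its parameters and cutoffs per platform.

```python
# A minimal sketch (not the exact VESPUCCI pipeline) of a two-step probe filter
# applied to BLAST+ tabular output (-outfmt 6). The two identity thresholds are
# hypothetical; the paper states that parameters were tuned per platform.
import csv
from collections import defaultdict

STRICT_IDENTITY = 96.0  # "restrictive" threshold: accept a probe-to-gene match
LOOSE_IDENTITY = 80.0   # "less restrictive" threshold: flag possible cross-hybridization

def filter_probe_mappings(blast_tsv_path):
    strict_hits = defaultdict(set)  # probe -> genes passing the strict threshold
    loose_hits = defaultdict(set)   # probe -> genes passing the loose threshold
    with open(blast_tsv_path) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            probe, gene, identity = row[0], row[1], float(row[2])
            if identity >= STRICT_IDENTITY:
                strict_hits[probe].add(gene)
            if identity >= LOOSE_IDENTITY:
                loose_hits[probe].add(gene)
    mapping = {}
    for probe, genes in strict_hits.items():
        # keep a probe only if it matches exactly one gene with high similarity and
        # shows no additional weaker matches that could cross-hybridize
        if len(genes) == 1 and len(loose_hits[probe]) == 1:
            mapping[probe] = next(iter(genes))
    return mapping
```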
Vitis vinifera Gene Expression Compendium
At the core of the VESPUCCI V. vinifera compendium is a gene expression matrix that combines publicly available transcriptome experiments from the most common microarray and RNA-Seq platforms (an overview is given in Table 1 and Supplementary Table S1). VESPUCCI's distinctive characteristics are its data integration strategy and the way in which it handles information coming from different platforms and technologies, which is based on COLOMBOS technology. Data and meta-data are gathered and curated starting from raw intensities or sequence reads for microarrays and RNA-Seq, respectively. A robust normalization and quality control procedure is performed to permit direct comparison of gene expression values across different experiments and platforms. This results in a single expression matrix in which each row represents a gene and each column represents a 'sample contrast.' Sample contrasts measure the difference between a 'test' and a 'reference' sample from the same experiment. The decision as to which samples are paired to form contrasts is made in part based on technical considerations as explained in Engelen et al. (2011), and in part on the desire to deviate as little as possible from the original intent of the experiment. Both samples, and the differences between them, are then extensively annotated with various sorts of metadata. The expression data themselves are log-ratios (in base 2), so that positive values represent up-regulation, and negative values represent down-regulation of a gene in the test sample compared to the reference sample. VESPUCCI's compendium was built with specific modifications and additions for V. vinifera to the COLOMBOS technology, and these are described in the following sections.
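As an illustration of the contrast representation described above, the following minimal sketch computes log2(test/reference) values from an already-normalized genes-by-samples matrix. The sample names and the pandas-based layout are assumptions for the example; the actual normalization and contrast pairing follow the COLOMBOS procedure (Engelen et al., 2011).

```python
# Illustrative sketch only: building log2(test/reference) sample contrasts from an
# already-normalized genes-by-samples matrix. Sample names are hypothetical.
import numpy as np
import pandas as pd

def build_contrast_matrix(expr, contrasts):
    """expr: genes x samples DataFrame of normalized intensities (linear scale).
    contrasts: iterable of (contrast_name, test_sample, reference_sample)."""
    out = {}
    for name, test, ref in contrasts:
        # positive values = up-regulation in the test sample, negative = down-regulation
        out[name] = np.log2(expr[test] / expr[ref])
    return pd.DataFrame(out, index=expr.index)

# Hypothetical usage:
# contrast_matrix = build_contrast_matrix(expr, [("berry_ripe_vs_green", "sample_T1", "sample_R1")])
```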
Defining Measurable Gene Transcripts
The list of measurable gene transcripts, representing the rows of the expression matrix, is based on the CRIBI V1 gene annotation, with some modifications to optimize probe-to-gene remapping (see next section) and read alignment. An important consideration for this remapping is that the CRIBI V1 gene predictions can show (regions of) high similarity, which is not uncommon for plant crop species. As a result, probes can end up matching perfectly, or near perfectly, to more than one gene. Given the way in which we built the compendium, such shared, ambiguous probes would usually be discarded because of their inability to reliably measure one single gene. Instead of removing these probes, with consequent loss of information, we decided to keep them as a measurement of a whole cluster of genes, implying that those genes' expression changes can only be assessed as a whole but not individually. The decision is a trade-off between losing probes (measurements) and losing the possibility of measuring each gene distinctly as a single entity. We used the Nimblegen platform to investigate both ambiguous probe behavior and gene prediction structure, and decided on 466 cases in which genes can be "clustered" together according to their sequence similarity and the probes they share. One clear-cut case to present the complexity of the issue is depicted in Figure 1. From this example it is clear that each gene is actually measured on average by four probes (as expected) but, except for three probes (VitusP00165181, VitusP00165231, and VitusP00165171), all the other probes align perfectly (or near perfectly) to other genes, making it impossible to distinguish one gene from another. In particular, these four genes, besides differing from one another, are all annotated as Myb-related, a well-known transcription factor gene family composed of hundreds of genes (Matus et al., 2008), and are positioned one after the other across chromosome 2 in a region of approximately 130 kb. This target cross-talk is corroborated by the actual probe-level intensities, which are highly correlated across all sample contrasts included in the compendium (Figure 2).
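The grouping of genes that share ambiguous probes can be illustrated with a small union-find sketch: any two genes linked by at least one shared probe end up in the same cluster. This is only a toy illustration of the idea, not the exact procedure used to define the 466 VESPUCCI clusters, which also considered alignment quality and probe-level expression.

```python
# Toy illustration of grouping genes that share ambiguous probes: a union-find over
# genes, where every probe links all the genes it maps to.
from collections import defaultdict

def gene_clusters(probe_to_genes):
    """probe_to_genes: dict mapping each probe to the set of genes it aligns to."""
    parent = {}

    def find(gene):
        parent.setdefault(gene, gene)
        while parent[gene] != gene:
            parent[gene] = parent[parent[gene]]  # path halving
            gene = parent[gene]
        return gene

    def union(a, b):
        parent[find(a)] = find(b)

    for genes in probe_to_genes.values():
        genes = list(genes)
        for other in genes[1:]:
            union(genes[0], other)

    clusters = defaultdict(set)
    for gene in list(parent):
        clusters[find(gene)].add(gene)
    # report only genuine clusters, i.e., two or more genes linked by shared probes
    return [members for members in clusters.values() if len(members) > 1]
```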
To better understand the nature of gene-probe clusters, we carried out a survey of each of the 466 clusters. In total they comprise 1366 genes and 3472 probes, distributed across clusters as depicted in Figure 3. We inspected the clusters based on the probe-to-gene alignment quality and probe-level expression values across all Nimblegen experiments imported in VESPUCCI (38% of sample contrasts). The great majority of clusters consist of only a few genes that behave consistently (according to probe expression patterns), belong to gene families, and are positioned one after the other along the same chromosome (or predicted on un-anchored loci). Other clusters are extremely dense and highly connected (e.g., clusters 1, 15, 176, and 177). Another set of clusters is composed of weakly connected genes (few probes) positioned on different chromosomes. For example, cluster 283 is composed of five putative kinase proteins spanning four chromosomes, for which the probes might have been designed on a conserved catalytic domain. Some clusters present a 'perfect ambiguity' structure (e.g., clusters 47, 65) for which each probe aligns perfectly to each gene, making it impossible to distinguish among the measured genes. Interestingly, clusters with a non-perfect alignment (e.g., clusters 134, 220) instead show how probe-level expression values reflect alignment mismatches, exposing the issue of measuring genomic variability instead of expression changes. Cases such as clusters 185, 213, and others suggest that the measured genes could be allelic variants of the same gene, as they are 99% similar with similar structure and predicted on contiguous or un-anchored loci. Finally, a few other clusters appear to be problematic due to bad expression data and ambiguous probe-to-gene alignment (e.g., clusters 20, 21, and 42). All of the gene cluster related information (probe-to-gene alignment graphs, probe-level expression, and correlation heatmaps) is available as Supplementary Materials.
Probe-to-Gene Remapping
To take full advantage of an updated gene annotation and for a more coherent integration of different platforms, we remapped probes for each microarray platform to the CRIBI V1 gene prediction. Such remapping of probes to transcripts has advantages over original annotations.
Different microarray platforms have different probe-to-gene alignment qualities. Given the disparateness in terms of number of samples, number of measured transcripts, and probe-to-gene mapping quality, not all the available microarray platforms have been imported. The top-performing platform is the Nimblegen microarray, whose original mapping shows a nearly perfect correspondence to the one in VESPUCCI. This is easily explained by the fact that it contains 118015 probes of 60 nucleotides, with an average of four probes per gene, and was specifically designed to match the CRIBI V1 gene prediction. It measures the expression of 29549 (out of 29971) gene predictions, representing ∼98.6% of the genes of the CRIBI V1 gene prediction, and includes 19091 random probes as negative controls (Fasoli et al., 2012;Cookson and Ollat, 2013). On the other hand, platforms like the 'University of Arizona Vitis buds spotted DNA/cDNA array' exhibit quite poor performance in terms of number of measured transcripts, probe-to-gene mapping, and probe signal (data not shown), which led us to exclude it from the compendium. The low quality can be ascribed to the fact that its 10369 probes have been designed from ESTs of two V. vinifera cultivars (Perlette and Superior) as well as the V. riparia species, and have an average length of nearly 1 kb.
We compared our probe-to-gene mapping results to the original mappings for the microarray platforms using the complete gene annotation (Grimplet et al., 2012). The results are reported in Table 2. Our mapping is quite consistent with the original mappings, the notable exception being the 'Combimatrix GrapeArray 1.2' platform, for which nearly 40% of the mapped genes do not correspond. The higher numbers for our mapping can be attributed to the different mapping program and strategy used, while the differences in overlapping gene mappings in the INRA and Combimatrix arrays could be due to the need to map the probe set twice, first to the corresponding tentative consensus (TC) and then to the CRIBI V1 gene prediction in the gene annotation file. This could lead to two different gene ids if the genes are similar to each other or if the TC has been wrongly annotated.
TABLE 2 | Total number of genes measured per platform. The first column contains the microarray platform name. The second column holds the number of measured genes according to the platform's original probe-to-gene mapping. The third column contains the number of measured genes according to the VESPUCCI probe-to-gene mapping. The fourth column contains the number of overlapping genes between the two mappings. The last column contains the percentage of genes for which there is no measurement.
Sample Annotation
The V. vinifera gene expression compendium in VESPUCCI comes with an expansive and curated annotation of the biological conditions for all the included samples. Each sample in the compendium has been manually annotated using qualitative and quantitative terms from a controlled vocabulary specifically created for V. vinifera (more information can be found in the Section "Materials and Methods" and Supplementary Table S2). Annotating test and reference samples to conveniently show the differences and similarities between these samples provides a useful way to assess the potential driving properties responsible for the observed changes in expression. The condition annotation system, with its hierarchical vocabulary, provides a broad view of publicly available grapevine gene expression studies and the nature of the experiments that have been carried out (Figure 4). Nearly half of the VESPUCCI compendium is composed of sample contrasts measuring changes in developmental stages, particularly in the berry around véraison (Eichhorn-Lorenz stage 33-38), which is by far the most investigated topic. Together with development-related traits, biotic and abiotic treatments also make up a large share of the available experiments. They include a variety of infections with several grapevine pathogens, together with temperature, water, and salinity stresses among others, while the preferred sampled tissue is fruit, as a whole or as separated parts, e.g., skin and flesh.
Vitis Expression Studies Platform Using COLOMBOS Compendia Instances (VESPUCCI)
The VESPUCCI web application is a specifically designed interface for interacting with the expression data, without the need for external tools or programming skills. It is built around the idea of expression modules. A module is a subset of the whole gene expression matrix composed of rows and columns that represent genes and sample contrasts, respectively. A set of built-in tools serves for the creation and modification of modules by querying the database for genes and sample contrasts in several ways. Users can look for expression patterns starting from specific genes, conditions, or whole experiments they are most interested in and extend or reduce expression modules with more genes or sample contrasts, either manually or automatically relying on VESPUCCI's clustering algorithm. Similar to a BLAST search, VESPUCCI tries to retrieve expression values for a given set of conditions, but using expression correlation instead of sequence similarity to score the best matches. Alongside tools for building and modifying modules, the web interface comprises several tools to convey information, like annotation term enrichment, the correlation network, and the complete contrast annotation, that display the link between changes in biological condition and gene expression. The VESPUCCI compendium is also accessible through a set of REST API calls, or from within the statistical software environment R (R Development Core Team, 2013) via the R package Rcolombos.
FIGURE 4 | Categories of annotated sample contrasts. Number of sample contrasts annotated as measuring a change in one of five major categories. The differences between test and reference sample for some contrasts are related to more than one category; the proportion of these is indicated as 'shared' versus 'unique.'
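The correlation-based, "BLAST-like" module search described above can be illustrated with a short sketch that ranks genes by the Pearson correlation of their contrast profiles with a query gene, computed over the contrasts where both have values. This is a simplified stand-in, not VESPUCCI's actual scoring algorithm, and the genes-by-contrasts pandas layout is an assumption for the example.

```python
# Simplified stand-in for a correlation-based co-expression search: rank genes by the
# Pearson correlation of their contrast profiles with a query gene.
import pandas as pd

def rank_coexpressed(compendium, query_gene, top_n=20):
    """compendium: genes x sample-contrasts DataFrame of log2 ratios (may contain NaNs)."""
    query_profile = compendium.loc[query_gene]
    # correlation of every gene's profile with the query, over shared non-missing contrasts
    scores = compendium.T.corrwith(query_profile)
    return scores.drop(query_gene).sort_values(ascending=False).head(top_n)
```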
The web application of VESPUCCI is very much an exploratory tool to help researchers explore patterns of gene expression behavior for genes of interest. A prototype of VESPUCCI (dubbed MARCOPAOLO) has already been used to identify candidate genes involved in the fine regulation of anthocyanin and flavonol biosynthesis. In particular, co-expression with genes involved in the regulation of flavonoid biosynthesis was one of the criteria adopted to refine the list of genes identified in the genomic regions deduced by a QTL analysis for anthocyanin and flavonol content in ripe berries (Malacarne et al., 2015).
A co-expression analysis against VESPUCCI was also carried out to find putative interacting partners and target genes of VvibZIPC22, one of the candidate genes specifically associated with flavonol biosynthesis, which is being proposed as a new regulator of flavonoid biosynthesis in grapevine (Malacarne et al., 2016). While both these cases represent a 'guilt-by-association' co-expression analysis, VESPUCCI's tools are not limited to that and are designed to encourage users to play around with data in the compendium given the biological process they are interested in. One could also query for experiments of interest instead of genes, or simply study the behavior of (a set of) genes of interest across the different biological conditions without necessarily looking for other co-expressed genes. For instance, the top part of Figure 5 shows the results of a default Quicksearch for the 11 genes of the carotenoid cleavage dioxygenase (CCD/NCED) gene family, part of the grape carotenoid pathway (Young et al., 2012). The results of such a default search do not show all condition contrasts in the compendium, but only those most relevant for the query genes, and can already provide insights into their behavior. First and foremost, it appears that the genes of this small gene family are not at all expressed in the same manner, and that for this particular family similarities in expression profiles are correlated to a certain extent with the phylogenetic relationships between its genes [the superimposed tree in the bottom part of the figure is adapted from the phylogeny presented in Figure 6 of Grimplet et al. (2014)].
A deeper inspection of that behavior not only confirms previously reported results, such as up-regulation at berry ripening of CCD4a and CCD4b, but not CCD4c (Lashbrooke et al., 2013), but also provides some novel, potentially interesting leads for further exploration. For instance, there is a prominent, but not consistent, anti-correlation of NCED2 and NCED3 with CCD4a and CCD4b. There are also strong changes in expression of some gene family members in response to Eutypa lata infection. These sorts of observations generally represent only the starting point of further VESPUCCI analyses, such as investigating these genes' behavior in other infection processes contained in the compendium, or looking for genes co-expressed with NCED2/NCED3 or CCD4a/CCD4b.
For an in-depth illustration of these concepts, we have included another case study on the website as well, which is presented there as a detailed step-by-step tutorial with the ability to load associated data directly in the interface. This particular case study is meant to show off VESPUCCI's most common features and capabilities in a hands-on manner. It focuses on a set of genes found to be modulated by the phytohormone abscisic acid (ABA) in pre-véraison berries (Stefania Pilati, personal communication); this list of genes was used as input to query the database. After performing any database query, VESPUCCI creates an expression 'module,' a subset of the whole expression compendium determined by a set of genes and a set of sample contrasts and the corresponding expression values. The returned gene expression module indicated that the 55 ABA genes appear highly modulated in 353 experimental conditions in the VESPUCCI compendium. The default visualization of this module ('by expression'; Figure 6) emphasizes the interesting patterns of condition-dependent (anti-)co-expression behavior among this set of ABA genes. The gene annotation enrichment in turn reports their involvement in the response to stress and ABA, as well as in galactose metabolism. The main biological processes represented in our module correspond to different biological contexts in which ABA affects gene expression: fruit and berry development, bud development, and water and salinity stress. The explorative purpose of the web interface is strengthened by tools used to modify the module by extending (or shrinking) it with new genes and/or contrasts. Continuing the analysis, the module was split according to these three biological processes, and these sub-modules then formed the basis for new queries to include additional genes with highly similar (or opposite) expression profiles in these three specific biological contexts. The final lists of (anti-)co-expressed genes are candidates for being involved in the pathways regulated by ABA, and/or for sharing similar, but currently unannotated, mechanisms of regulation with the genes in the module.
DISCUSSION
In this paper, we present VESPUCCI, a gene expression compendium for grapevine that integrates publicly available transcriptomics data from several microarray and RNA-Seq platforms into one coherent database, queryable via a web or REST interface. The web interface is meant to be intuitive and flexible for non-expert users, and is designed to encourage them to 'play around' with the data in the compendium, centering on the biological processes and/or genes they are interested in. In that sense, it is very much an exploratory tool, meant to assist more dedicated research in grapevine genomics, biology, and physiology, even if the integration of over 1500 transcriptomics samples into a single data set can be quite powerful in and of itself. The case studies presented in the results are examples of the type of analyses that can be done with VESPUCCI, and the sort of insights that can be gained from the combined data in the compendium. They all represent cases where VESPUCCI shows interesting modular gene expression responses that were not known previously, whether from the individual experiments included in the compendium, from published papers, or from other, independent (even non-transcriptomics) experiments or sources of information.
In contrast to model organisms, for which a considerable number of -omics experiments are available, crop species usually lack a substantial amount of data. Nevertheless, there is an increasing interest in a more systemic view of crop species (Yin and Struik, 2010;Sheth and Thaker, 2014), driven by the ever-decreasing cost of high-throughput technologies and the development of new analysis tools. The availability of transcriptomics technology has increased substantially during recent years. Nowadays, RNA-Seq experiments enable scientists to reliably measure the majority of expressed genes. However, during the early days of transcriptomics, microarray measurements often comprised only a part of the complete transcriptome. The end result is that across the entire VESPUCCI gene expression compendium, the proportion of missing values is substantial (36%). Even though the great majority of samples have been measured using the Nimblegen or RNA-Seq technologies, which both cover the near-complete transcriptome, the probes of the other microarray platforms are not able to provide measurements for as many genes. This is irremediable and intrinsic to the source data. We dealt with it by attempting to provide optimal, as reliable as possible expression measurements across the compendium, both at the level of the actual probe-to-gene mapping, as well as at the level of defining the list of measurable gene transcripts.
These measurable transcripts (representing the rows of the gene expression matrix) incur some limitations in and of themselves, as they are entirely based on the gene predictions for V. vinifera cv. Pinot Noir, with implications for experiments done on other cultivars. When microarray experiments are performed to measure expression for a specific cultivar with platforms containing probes designed from different cultivars, this generally leads to poorer signals, given the impossibility of distinguishing expression variability from genomic differences among those cultivars. The reason is the lack of available high-quality gene predictions for each cultivar. While RNA-Seq has the advantage of enhancing its value over time with better genomes and gene annotations by re-doing the transcriptome mapping on the appropriate cultivar, the situation is more complicated for microarray data. The solution is never ideal, as nothing can be done to increase the quality of intensity signals if there is a mismatch between the cultivar used to design the probes and the one used to do the experiment. Nevertheless, remapping the microarray probes on the cultivar-specific genes of the experiment would improve the gene annotation of the array platform and ensure that only the reliable probes are considered to generate the final expression values. A novelty in the latest release of COLOMBOS is the option to explicitly recognize genomic differences between strains or cultivars instead of using a single reference genome to represent a species. This improves read alignment (RNA-Seq) or probe-to-gene mapping (microarrays) and generates higher-quality expression data. In the long term, as more grapevine cultivar genomes become available, we can rely on these COLOMBOS innovations to build compendia for different cultivars and integrate them at the species level using homolog mappings, creating a proper 'meta-compendium' for grapevine varieties.
FIGURE 6 | Case study of ABA modulated genes. The default 'by expression' visualization of VESPUCCI orders both genes and contrasts in this heatmap (resp. rows and columns) in such a way as to highlight the different patterns of condition-dependent gene expression behavior.
Currently VESPUCCI is limited to our knowledge of the V. vinifera cv. Pinot Noir genome, and despite the existence of a more recent version of the CRIBI gene prediction (Vitulo et al., 2014), we decided to keep V1 as the basis for this first release. From a practical perspective, by the time V2 was made publicly available, most of the compendium was already built and the switch to the newer version didn't show a significant increase quality-wise. The great majority of genes does not change in terms of gene structure, and as such for our purposes the end result was largely unaffected by the enhancements of the newer version over V1. Nevertheless, as the number of experiments (especially RNA-Seq) increases, the benefits of relying on V2 will become more prominent; for future VESPUCCI releases, we will most likely shift toward V2 (or more recent versions) to take advantage of the extended UTR regions for which NGS technologies provide better measurements.
The measurable gene transcripts that we defined do not correspond one-to-one to the CRIBI gene predictions, but instead contain some 'gene clusters.' Expression data for these gene clusters are a compromise between our ability to measure each and every single gene individually, and how many genes can be reliably measured in total. While not absolute proof that these probes are unable to adequately distinguish the intended target genes, our results (Figures 1 and 2, and Supplementary Materials) showed that it is almost impossible to measure differences between each single gene in the clusters. This supports our decision to group them together: even if these probes were capable of capturing different transcripts, the results do not indicate that this was the case for the more than 500 Nimblegen sample contrasts in the compendium. Therefore, instead of discarding the shared probes and losing potentially valuable information, we accepted the impossibility of unambiguously discriminating each and every single gene, gaining the opportunity to have a single measurement for those gene transcripts as a whole. Note that while the issue itself is (microarray) platform-specific, the proposed 'gene clusters' are not. We chose to define them based on the platform with the highest data representation: the Nimblegen platform holds the largest number of samples as well as the highest quality of probe-to-gene mapping. This has no detrimental effect on data from the other microarray platforms, but RNA-Seq technology can provide individual gene measurements for at least some of our defined clusters (given that the corresponding gene sequences show enough dissimilarity). Due to the current low number of RNA-Seq experiments compared to the Nimblegen ones, we decided on clustering genes together in measurable sets to get the best out of all the data as a whole. As RNA-Seq experiments become more prevalent, we will revise the gene clusters to gain the ability to measure more genes separately for RNA-Seq, at the expense of losing the corresponding probes on the Nimblegen platform.
VESPUCCI includes nearly all of the gene expression data that is publicly available for grapevine at the moment; it provides a snapshot of the transcriptomics experiments performed to date. We plan to keep it up to date by releasing yearly content updates. In the current release, berry development studies are the most represented experiments (especially during véraison), and this comes as no surprise given the importance of fruit quality in wine and spirits production. This will be all the more obvious when mining for genes related to fruit ripening. Given the complexity of this developmental process, in which the fruit undergoes radical phenotypic and biochemical modifications (related to shape, size, color, sugar and aroma content, etc.), the number of modulated genes is quite large. VESPUCCI is meant as an exploratory tool to help researchers not only find patterns of gene expression for genes of interest, but also design new experiments, by providing the most complete transcriptomics information currently available.
AUTHOR CONTRIBUTIONS
MM and KE conceived the work, implemented the procedures, analyzed the data, and wrote the manuscript; KE supervised the work. MM, PS, and KE collected and processed the data. MM, PS, SP, GM, LC, LG, GB, SG, CM, and KE built the controlled vocabulary, did the meta-data annotation, beta-tested the application, provided the case studies, and revised and edited the manuscript.
ACKNOWLEDGMENTS
The authors would like to thank Pieter Meysman for his valuable input, Alessandro Cestaro, Emanuela Coller, Michele Perazzolli, and Lorena Leonardelli for their suggestions on the sample annotation controlled vocabulary. | 7,525 | 2016-05-10T00:00:00.000 | [
"Biology",
"Computer Science",
"Environmental Science"
] |
The impact of particulate electron paramagnetic resonance oxygen sensors on fluorodeoxyglucose imaging characteristics detected via positron emission tomography
During a first-in-humans clinical trial investigating electron paramagnetic resonance tumor oximetry, a patient injected with the particulate oxygen sensor Printex ink was found to have unexpected fluorodeoxyglucose (FDG) uptake in a dermal nodule via positron emission tomography (PET). This nodule co-localized with the Printex ink injection; biopsy of the area, due to concern for malignancy, revealed findings consistent with ink and an associated inflammatory reaction. Investigations were subsequently performed to assess the impact of oxygen sensors on FDG-PET/CT imaging. A retrospective analysis of three clinical tumor oximetry trials involving two oxygen sensors (charcoal particulates and LiNc-BuO microcrystals) in 22 patients was performed to evaluate FDG imaging characteristics. The impact of clinically used oxygen sensors (carbon black, charcoal particulates, LiNc-BuO microcrystals) on FDG-PET/CT imaging after implantation in rat muscle (n = 12) was investigated. The retrospective review revealed no other patients with FDG avidity associated with particulate sensors. The preclinical investigation found no injected oxygen sensor whose mean standard uptake values differed significantly from sham injections. The risk of a false-positive FDG-PET/CT scan due to oxygen sensors appears low. However, in the right clinical context the potential exists that an associated inflammatory reaction may confound interpretation.
EPR oximetry, with their potential for providing prognostic information and improving outcomes by directing therapy 5 .
The clinical experience with injectable/implantable particulate EPR oximetry sensors in humans is still in its early stages. In total, 71 patients have been injected with carbon particulate sensors. Initial studies with India ink in volunteers totaled fewer than twenty patients each, and established the feasibility of the technique 6 . Tumor oximetry studies using ink particulates at Geisel School of Medicine at Dartmouth/Dartmouth Hitchcock Medical Center (Printex or Carlo Erba ink) have enrolled 27 patients 3 ; studies investigating oximetry in human breast tissue using Carlo Erba ink at Emory University reported a total of 9 patients 7 ; 19 additional patients received Printex ink injections, most in other types of tumors. A first-in-humans EPR oximetry study using the OxyChip at Geisel School of Medicine at Dartmouth/Dartmouth Hitchcock Medical Center has enrolled over twenty patients (PS, personal communication). As future plans include expansion to larger clinical trials investigating directed hypoxia modification using EPR tumor oximetry, it is expected that more patients undergoing diagnostic imaging in the context of their cancer will have particulate oxygen sensors present.
In July 2017, the investigators involved in a set of clinical EPR oximetry studies that used all three types of particulate materials were alerted to the possible role of an EPR oxygen sensor resulting in a false-positive fluorodeoxyglucose (FDG) positron-emission tomography/computed tomography (PET/CT) examination. The combined metabolic and morphologic PET/CT examination is widely recognized as useful in the management of many types of malignancies 8,9 . Also well recognized is the potential for non-specific metabolic uptake of FDG due to non-malignant factors, which can lead to false-positive and false-negative results when evaluating malignancies 10 . These imaging findings can result in unnecessary biopsies, additional costs, potential morbidity, and increased worry for patients about disease recurrence. Although our particular interest is in evaluating the oxygen sensors for our clinical studies, there are many medical uses of carbon particulates in common practice including the care of malignancies, e.g., for marking suspicious colon polyps for follow-up, or for marking the exact location of a breast biopsy for future surgery 11 . Additionally, the widespread use of tattoos for cosmetic purposes raises the importance of determining whether carbon particulates may lead to artifactual findings on PET/CT imaging.
For these reasons this case prompted further analysis of the potential impact of the injection of EPR oxygen sensors on FDG imaging characteristics in tissues. The present report describes the case, involving a patient on a first-in-humans tumor EPR oximetry trial, in which a false positive FDG-PET/CT scan resulted in an unnecessary biopsy. A retrospective study was then performed investigating all patients enrolled in clinical trials of ink-based oxygen sensors injected into human tumors who also received FDG-PET/CT imaging. A preclinical study investigating the impact of oxygen sensors on FDG-PET imaging is also reported.
Materials and methods
Clinical studies. All procedures performed in studies involving human participants were done in accordance with the ethical standards of the institutional and national research committees and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all subjects. FDG-PET/CT scans were performed on a GE Discovery ST scanner. The dose of FDG was 0.15 mCi/kg, all patients fasted for at least 6 h, serum glucose prior to scanning was less than 250 mg/dl, and scans were performed approximately 60 min after dose administration. The standardized uptake value maximum (SUVmax) was obtained by defining a region of interest (ROI) that encompassed the lesion, calculating the standardized uptake value (SUV) = (activity concentration in tissue)/(injected activity/body weight), and obtaining the maximum SUV in the ROI.
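The SUV definition quoted above can be written out as a small numerical sketch. Units are assumptions for the example (activity concentration in kBq/ml, injected activity in kBq, body weight in g, with the usual 1 g ≈ 1 ml tissue convention), and decay correction to a common time point is assumed to have been applied.

```python
# Minimal numerical sketch of the SUV definition given above; units must be consistent.
import numpy as np

def suv(activity_concentration, injected_activity, body_weight):
    return activity_concentration / (injected_activity / body_weight)

def suv_max(roi_voxel_concentrations, injected_activity, body_weight):
    """roi_voxel_concentrations: activity concentrations of all voxels inside the ROI."""
    voxels = np.asarray(roi_voxel_concentrations, dtype=float)
    return float(np.max(suv(voxels, injected_activity, body_weight)))
```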
Clinical event. The patient was enrolled in two in vivo oximetry protocols at Dartmouth-Hitchcock Medical Center (DHMC), termed the OxyChip trial (Study 00028499) and the Printex ink trial (Study 00012459), both approved by the Dartmouth Institutional Review Board. The patient had had a prior history of melanoma on the back, but the nodular tumor in the neck being investigated by the studies was determined by pathological examination to be a squamous cell carcinoma. Briefly, following enrollment on the OxyChip trial (National Clinical Trial number NCT02706197), a paramagnetic OxyChip composed of LiNc-BuO was implanted into the patient's malignant nodule. The pO2 surrounding the OxyChip was subsequently measured repeatedly (four measurements, each lasting 10-30 min, approximately once weekly over four weeks) via non-invasive EPR oximetry, and the OxyChip was then necessarily removed during standard-of-care surgery on the nodule. This protocol was carried out under an investigational device exemption (IDE) from the Food and Drug Administration for the use of OxyChips. Following surgical removal of the nodule, and after subsequent enrollment in the Printex ink protocol, a paramagnetic sensor material composed of Printex Ink was injected into the postoperative bed prior to therapeutic radiotherapy; non-invasive EPR oximetry was performed repeatedly (nine measurements, each lasting 10-30 min, approximately weekly over the course of two months) to assess oxygen levels. An IDE is not needed for ink injections (or for EPR oximetry). Per protocol, it is not necessary to remove injected ink. This clinical case prompted the investigations described below.
Retrospective review. A retrospective review was performed on patients enrolled in two studies investigating EPR oximetry using India ink at Dartmouth. The retrospective review was approved by the Institutional Review Board at Dartmouth (Study 00031637). The studies reviewed used different inks as paramagnetic sensors; one used Carlo Erba and the other, Printex ink. These studies cover all patients enrolled for tumor oximetry and injected with either ink at Dartmouth. Other studies involving EPR oximetry were not evaluated, due to the required removal of the sensor (OxyChip) or to a low expectation of PET/CT scans at the injection site (foot). Patients enrolled in the two included studies were reviewed for the use of PET/CT imaging performed for reevaluation of their cancer after ink had been injected. Available images were reviewed for FDG avidity and/or CT findings co-localizing with available photographic images of injected ink (taken on protocol at the time of injection). Co-localizing imaging findings were scored as present or absent.
Preclinical study. Sensor injections. The study was carried out in compliance with the ARRIVE guidelines. Twelve female Sprague-Dawley rats (Janvier, Le Genest-Saint-Isle, France) were used for this study. Animal studies were conducted under an approved protocol (2014/UCL/MD/026) at Catholic University of Louvain, Brussels, Belgium. All applicable institutional and/or national guidelines for the care and use of animals were followed. Animals were divided into 4 groups: (1) CARBO-REP (n = 3): 50 µl of CARBO-REP suspension (40-mg charcoal/ml, Sterylab, Milan, Italy) was administered in the right gastrocnemius muscle using a 0.5-ml insulin syringe with a 29G needle; the contralateral muscles were assigned for sham injection with needle insertion but no suspension. (2) Carlo Erba (n = 3): 50 µl of charcoal suspension [Carlo Erba (Carlo Erba Reagents, Milano, Italy), 100 mg/ml in saline containing 3% Arabic gum] was administered in the right gastrocnemius muscle using a 0.5-ml insulin syringe with a 29G needle; the contralateral muscles were assigned for sham injection with needle insertion but no suspension. (3) Printex (n = 3): 50 µl of carbon-black suspension [Printex U (Degussa AG, Frankfurt, Germany), 100 mg/ml in saline containing 3% carboxymethylcellulose] was administered in the right gastrocnemius muscle using a 0.5-ml insulin syringe with a 29G needle; the contralateral muscles were assigned for sham injection with needle insertion but no suspension. (4) OxyChip (n = 3): an OxyChip was implanted in the right gastrocnemius muscle using an 18G brachytherapy needle; the contralateral muscles were assigned for sham implantation with needle insertion but no OxyChip. The suspensions used (Carlo Erba, Printex, and CARBO-REP) and the implanted OxyChip are identical to those used in humans, and the injection protocol for all four particulates used the same standard methods as for humans. All implantations were performed under aseptic conditions while rats were anesthetized with a mixed solution of ketamine and xylazine (doses of 80 and 10 mg/kg, respectively).
Micro-PET imaging of rats. Micro-PET imaging was carried out at 4, 47, 97, and 181 days after oxygen-sensor implantation. Before every PET scan, animals were fasted overnight. FDG (Betaplus Pharma, Brussels, Belgium) was injected intraperitoneally at a dose of 400-600 µCi. PET acquisitions were performed 1 h after tracer injection on a dedicated small-animal PET scanner (MOSAIC, Philips) with a spatial resolution of 2.5 mm (FWHM). Rats anesthetized with 2% isoflurane first underwent a 10-min emission scan followed by a 10-min transmission scan using a 370-MBq 137Cs source12. After correction with attenuation factors obtained from the transmission scan, images were reconstructed using a fully 3D iterative algorithm (3D-RAMLA) in a 128 × 128 × 120 matrix, with a voxel size of 1 mm3. Regions of interest (ROI) were delineated using PMOD software (PMOD Technologies Ltd, Zurich, Switzerland). The 2D ROIs were established on consecutive transversal slices using a 30% iso-contour tool that semi-automatically created a 3D volume of interest (VOI) encircling the tissue of interest. Tracer uptake was expressed as the SUV mean, calculated as the FDG uptake normalized to injected dose and body weight of the animals.
Statistics and data analysis. A two-way analysis of variance (ANOVA), with injection type (sensor vs. sham) and imaging time point as factors, was performed for each sensor, as shown in Fig. 3.
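A minimal sketch of how such a two-way ANOVA could be run with statsmodels is shown below; the factor names, the synthetic SUV values, and the use of an ordinary-least-squares formula interface are illustrative assumptions rather than the authors' analysis code.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format table: one SUVmean per rat, injection side and time point (n = 3 per cell).
rng = np.random.default_rng(0)
rows = []
for day, injection, rat in itertools.product([4, 47, 97, 181], ["sensor", "sham"], range(3)):
    rows.append({"day": day, "injection": injection,
                 "suv_mean": 0.6 + rng.normal(scale=0.05)})  # placeholder SUVmean values
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: injection type x imaging time point.
model = smf.ols("suv_mean ~ C(injection) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))
```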
Results
Clinical event. The patient, a 62-year-old male, presented in 2016 with irregular pigmentation on his right mid back. A wide local excision and sentinel lymph node biopsy demonstrated a pT3bN3 (Stage IIIC) melanoma with three positive nodes. A restaging PET/CT scan (prior to participating in the oximetry studies) following the surgery revealed a new hypermetabolic right lower cervical lymph node. He was scheduled for surgical removal of this lymph node and consented to participate in the OxyChip clinical study. He was implanted with an OxyChip, underwent successful measurements, and had his scheduled surgery (which simultaneously removed his OxyChip). The pathology report unexpectedly revealed that the lymph node was a squamous cell carcinoma (SCC); no primary SCC malignancy was found. Adjuvant radiation therapy to the post-operative bed, for a total of 66 Gy in 33 fractions, was recommended. Immediately prior to beginning his radiotherapy the patient consented to participate in a second oximetry study, and Printex ink was injected into the post-operative bed (Fig. 1a). Nine successful EPR oximetry measurements were subsequently made5. In keeping with the study protocol, the Printex ink was not removed. Approximately 25 weeks after the Printex injection and 16 weeks after completion of radiation therapy, the patient returned for a routine follow-up visit. On physical examination, the physician reported that "the lesion where the Printex ink was placed appears to be more prominent and elevated". A reading of a PET/CT scan from the same day reported "A 6 mm FDG avid cutaneous lesion in the medial right shoulder, corresponding to a lesion noted on recent dermatology examination, suspicious for malignancy. An inflammatory etiology is not excluded. No evidence of residual active disease in the neck and no evidence for distant metastatic disease." The SUVmax was calculated at 1.57. See Fig. 1b. An excisional biopsy was performed on the same day. Pathology revealed "dense nodular deposition of black-to-brown pigment in the dermis associated with pigment-laden macrophages and focal dermal scar. The findings are consistent with tattoo in the appropriate clinical setting." The case was discussed with the patient's oncologist and no further treatment was recommended at that time. As of the time of this report, the patient has had no evidence of a local recurrence in the head and neck region, although he did develop a subsequent lung metastasis.
The study investigators reported this incident to the Dartmouth Institutional Review Board immediately. Six types of actions were taken in response to this case: (1) All participants in the ink studies were notified that, if they were to have an FDG-PET/CT scan involving the region with the ink, they may experience a false-positive reading and therefore should bring their 'tattoo' to the attention of their treating physician, and (2) this was added as a potential risk to the written informed consent. (3) The investigators on all studies involving oximetry at DHMC or other institutions were notified about this incident and the possibility of a false-positive reading on an FDG-PET/CT and asked whether they had been aware of any other cases (they had not). (4) A review of the literature was conducted regarding the circumstances surrounding positive and false-positive rates for FDG-PET/CT scans and whether India ink 'tattoos', carbon particulates, or EPR oxygen-sensing devices may be related to an increased risk of false positives. (5) A retrospective chart review of appropriate ink patients in Dartmouth's clinical studies was performed to search for other instances of an FDG-PET/CT in patients injected with ink. (6) The investigators in Brussels initiated a preclinical investigation to learn more about whether the hypermetabolic activity on the scan appeared to be time-dependent and/or sensor-dependent. Results from the last two efforts are reported below.
Retrospective chart review. All patients participating in EPR tumor oximetry studies using Printex
or Carlo Erba ink at DHMC were included in the retrospective chart review. In addition to the case report described above, a total of 22 patients were eligible for review (19 Printex patients and 3 Carlo Erba patients). No Carlo Erba patient had a follow-up FDG-PET/CT with ink present. Six Printex patients had FDG-PET/CT exams following injection. Of these, two were not relevant to this review (one had had the ink removed in a cancer surgery conducted prior to the FDG-PET/CT and the other's FDG-PET/CT did not include the area of the ink). Of the remaining four, three had no suspicious nodules on the CT, or FDG avidity above background, corresponding to the areas of ink injection. These three patients had ink injected into different areas: the first was into the skin overlying the right groin, the second was into the skin overlying the stump of his remaining leg after an amputation, and the third was into the skin overlying the scapula. A total of seven FDG-PET/CTs were performed in follow-up for these three patients. One patient with a squamous cell carcinoma of the pyriform sinus had FDG avidity in the area of ink injection (injected into a cervical lymph node). The FDG-PET/CT scan was performed 5 months after injection (and 3 months after definitive chemoradiotherapy) and revealed persistent nodal disease as well as distant metastases, including areas of FDG avidity in the injected lymph node. In the context of active cancer in the area of ink injection, no definitive conclusions can be drawn regarding whether FDG avidity was associated with the ink. However, due to the extent and depth of FDG avidity in the node, it was felt by the physicians performing the review (Dr. Siegel and Dr. Schaner) to be unlikely to be related to the superficially injected ink.
Preclinical study. Figure 2 shows typical micro-PET images recorded at 4, 47, 97, and 181 days after the administration of Printex, the oxygen sensor that was at the origin of the clinical case report. The accumulation of FDG was similar in both muscles (the muscle injected with the sensor and the sham-injected muscle); data are presented in Supplementary Table 1. The analysis of FDG SUV mean was similar for all groups of rats. As shown in Fig. 3, there were no significant differences in FDG SUV means between sensor-injected muscles and sham-injected muscles, whatever the oxygen sensor used (CARBO-REP p = 0.98, OxyChip p = 0.94, Printex p = 0.98, Carlo Erba p = 0.81). This again suggests that there is no evidence that false positives would be commonly expected for the oximetry sensors or carbon particulates in general. Of note, there appeared to be a trend for the SUV mean to increase at 6 weeks compared to the mean at the beginning of imaging, and then to decrease over the remaining several months. This is consistent with the aging of the animals14. Of particular note for this study, these trends occurred consistently in all sensors and in all sham injections, i.e., they were not linked to the presence of the oxygen sensor in the muscle.
Discussion
FDG-PET/CT imaging has been in use clinically for more than 25 years and is one of the fastest growing technologies in nuclear medicine. It is used to diagnose, stage, and restage many types of malignancies including melanoma and head and neck cancer 9,10,15-17 . The current investigation stemmed from one case, in which a patient with both melanoma and head and neck SCC participating in clinical oximetry studies had an FDG-avid lesion felt to be concerning for recurrence. It was biopsied but was determined in fact to be a false-positive PET finding, presumably related to his ink injection. This prompted an investigation into the impact of clinical EPR oximetry sensors on FDG-PET/CT imaging outcomes.
The preclinical data presented here demonstrated no evidence that any of the four clinically used EPR oximetry sensors investigated increase FDG avidity when injected into rat muscle. Importantly, the oxygen sensors and placement techniques used in these experiments were identical to those approved for human use. There was no difference in the FDG reading between any sensor and its sham injection at any of the four time points measured, up to six months. These data suggest that other factors, beyond the sensor in and of itself, contributed to the false-positive FDG-PET/CT in this patient.
A retrospective analysis of 22 patients enrolled at Dartmouth in two tumor EPR oximetry studies injected with Printex or Carlo Erba ink yielded only four patients who met the inclusion criteria for analysis, all of whom were injected with Printex ink. Of these, none had compelling evidence of FDG avidity associated with the ink injection.
Although clinical oximetry studies use three different types of particulates (carbon black, charcoal, and LiNc-BuO), this case involved carbon black, which is the same ingredient commonly used in black tattooing and black medical markings, particularly in the US. Therefore, in addition to investigating clinical oximetry sensors, we wanted to investigate whether there was any evidence in the literature that black tattooing or medical markings were implicated in any false-positive readings for cancer patients. FDG is not tumor-specific in its identification of glucose consumption and therefore there is a significant potential for avid areas being associated with other factors, which can result in false-positive identification of malignancies [18][19][20][21][22][23][24]. False positives have been widely reported secondary to factors including the injection of imaging or contrast agents, differences in healthy and diseased tissues that cause variations in FDG uptake, artifacts related to the presence of metal and other objects in tissues, and inflammatory and other responses to injury (particularly when the test is used for restaging following surgery, chemotherapy, and/or radiotherapy, where up to 40% of the agent may be taken up by non-tumor tissue). For these reasons an interval of 12-16 weeks after completion of therapy before imaging with PET/CT is usually recommended10. Concurrent diseases can also impact the uptake of FDG, including diabetes, tuberculosis, sarcoidosis and autoimmune diseases. Age, body mass and factors such as variation in the amount of the imaging reagent can also impact the images, and some advocate use of standardized uptake values in order to normalize and minimize any impact from these variations24. Patient activities at the time of imaging can cause some variations in uptake of the agents, with recommendations regarding patient care and history taking prior to imaging to minimize these artifacts25. We found no articles addressing India ink tattoos or medical markers and FDG uptake; one false-positive reading in a lung that was apparently related to carbon particulates was reported in a woman who had chronic exposure to wood burning26. However, there is typically a mild inflammatory-like response in the body's reaction to tattoos and markers (which generally involve a typical reaction to inert foreign-body materials with macrophage cells surrounding the particulates), and macrophages typically are permanently present at most or all sites of India ink markings27. This finding is consistent with the histopathology found in this patient. Inflammatory processes are well known to be potential causes of false-positive readings, and it is likely that this reaction, rather than the ink in and of itself, resulted in the FDG avidity associated with the lesion [20][21][22].
It is important to note that the injection of particulate inks for clinical EPR oximetry results in variable ink deposition, ranging from a focused concentration of ink to a much more diffuse pattern of distribution. A constellation of findings in the patient likely led to a biopsy and subsequent confirmation of a false positive. Firstly, the ink coalesced into a firm bleb (perhaps in the context of ink injection into a post-operative bed) that was subsequently evident on the CT component of the FDG-PET/CT imaging and led to concerns regarding a dermal tumor deposit. Secondly, the patient had had a previous diagnosis of melanoma, raising concern for skin metastases, and the dark color of the ink confounded the physicians' ability to clinically distinguish a recurrence from ink. Thirdly, the injected ink appears to have generated an inflammatory response sufficient to result in clinically identifiable FDG avidity. Lastly, the radiologist reading the scan was aware of the clinical skin finding and melanoma diagnosis, calling attention to a lesion that may have otherwise been interpreted as more likely benign given its size and relatively low SUV max of 1.57.
Conclusions
In the patient event reported here, the confluence of an ink injection associated with an inflammatory response, FDG avidity associated with an area of increased density on CT imaging, and clinical concerns about a dark lesion in the context of an advanced melanoma on the back, appears to have led to an unnecessary biopsy. The data presented support a low risk for a false positive finding using FDG-PET/CT imaging due to the oximetry sensors evaluated. However, as ink particulates are increasingly being used in medical applications, it will be important to be aware that, although unlikely, in certain circumstances they could contribute to a false-positive finding on an FDG-PET/CT scan. This FDG avidity is likely secondary to an associated inflammatory response. Awareness of the clinical context (including involvement in research studies) and discussion with providers is critical in order to make appropriate clinical decisions.
Data availability
Authors will make materials, de-identified data and associated protocols promptly available to readers without undue qualifications in material transfer agreements.
| 5,579 | 2021-02-24T00:00:00.000 | ["Medicine", "Engineering"] |
DeepPrecip: A deep neural network for precipitation retrievals
Remotely-sensed precipitation retrievals are critical for advancing our understanding of global energy and hydrologic cycles in remote regions. Radar reflectivity profiles of the lower atmosphere are commonly linked to precipitation through empirical power laws, but these relationships are tightly coupled to particle microphysical assumptions that do not generalize well to different regional climates. Here, we develop a robust, highly generalized precipitation retrieval algorithm from a deep convolutional neural network (DeepPrecip) to estimate 20-minute average surface precipitation accumulation using near-surface radar data inputs. DeepPrecip displays high retrieval skill and can accurately model total precipitation accumulation, with a mean square error (MSE) 160% lower, on average, than current methods. DeepPrecip also outperforms a less complex machine learning retrieval algorithm, demonstrating the value of deep learning when applied to precipitation retrievals. Predictor importance analyses suggest that a combination of both near-surface (below 1 km) and higher-altitude (1.5-2 km) radar measurements are the primary features contributing to retrieval accuracy. Further, DeepPrecip closely captures total precipitation accumulation magnitudes and variability across nine distinct locations without requiring any explicit descriptions of particle microphysics or geospatial covariates. This research reveals the important role for deep learning in extracting relevant information about precipitation from atmospheric radar retrievals.
Additionally, the size and availability of both vertically pointing and space-borne remote sensing datasets have expanded greatly in recent decades as a result of technological instrument improvements and new satellite missions (Quirita et al., 2017).
These radar-based retrievals are powerful tools for filling current observational gaps and have been applied to great effect in previous literature (Levizzani et al., 2011; Hiley et al., 2010). However, these relationships demonstrate an inability to generalize well to unseen validation data as a consequence of the microphysical particle assumptions (e.g. shape, diameter, particle size distribution (PSD), terminal fall velocity and mass) used in each relationship's unique derivation (Jameson and Kostinski, 2002).
Recent machine learning (ML) approaches have demonstrated improvements in estimating surface precipitation from remotely-sensed data compared to traditional nowcasting methods (Shi et al., 2017; Kim and Bae, 2017). Deep learning models have benefited greatly from the increased observational sample provided by remote sensing missions and have shown skill in learning complex spatiotemporal characteristics of the underlying datasets (Chen et al., 2020b). However, a deep learning convolutional surface precipitation retrieval using vertical column radar data with no spatiotemporal covariates has yet to be developed to our knowledge. Previous ML studies have typically focused on passive microwave and infrared datasets which lack a detailed analysis of the vertical column structure, or suffer from a limited sample for model training across multiple, distinct regional climates (Xiao et al., 1998; Adhikari et al., 2020; Ehsani et al., 2021).
In this work, we evaluate the abilities of a novel deep learning precipitation retrieval algorithm trained on vertically pointing radar (up to 3 km above the surface). The regression model we present (DeepPrecip) is a hybrid deep learning neural network consisting of a feature extraction convolutional neural network (CNN) front-end and a regression feedforward multilayer perceptron (MLP) back-end. The combination of these two architectures allows DeepPrecip to recognize and learn the nonlinear relationships between different layers in the vertical column of radar observations and produce an accurate surface precipitation estimate. Through an analysis of feature input combinations, DeepPrecip performance is examined to identify regions within the vertical column that contain the most important contributions to retrieval accuracy (Lundberg and Lee, 2017). The relationships that exist between different layers of the vertical profile (and each atmospheric covariate) can be used to help inform current and future active radar retrievals of surface precipitation.
(Figure 1 caption fragment: panel b indicates periods where non-zero surface precipitation was recorded.) Study sites were selected based on the required presence of a micro rain radar (MRR) and collocated Pluvio2 weighing precipitation gauge. Rain, snow and mixed-phase precipitation were recorded, with each site's precipitation phase and intensity distribution of observations differing based on the regional climate. For instance, Marquette experienced strong lake-effect snowfall while Cold Lake received mostly light, shallow snowfall. Further, due to the warmer temperatures recorded at OLYMPEx, these sites were classified as primarily experiencing liquid precipitation, while ICE-POP received only solid precipitation.
Pluvio2 precipitation weighing gauge
Reference surface precipitation observations were collected by OTT Pluvio2 weighing gauges at each site. The Pluvio2 gauge records the precipitation accumulation from falling hydrometeors with a minimum time resolution of 1 minute (Colli et al., 2014). It includes a 200 cm2 heated surface orifice (400 cm2 at Ny-Ålesund) to prevent snow and ice buildup, along with site-specific wind shielding implemented as described in Table 1. These fence setups include the Double Fence Intercomparison Reference (DFIR) shield, a large, double-fenced wooden structure that significantly reduces the impact of wind on surface precipitation measurements (Rasmussen et al., 2012; Kochendorfer et al., 2022). The Alter shield system consists of multiple freely hanging, spaced metal slats around the gauge top opening, which also helps mitigate undercatch issues during strong winds (Colli et al., 2014). Sensitivity analyses of different rolling temporal windows indicated an optimal temporal resolution of 20-minute non-real-time accumulation (measurement results 5 minutes after precipitation accumulation), with a minimum observational threshold of at least 0.2 mm over the course of an hour from the Pluvio2 gauge.
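A minimal sketch of this kind of temporal aggregation — resampling 1-minute gauge records to 20-minute totals and enforcing an hourly minimum accumulation threshold — is shown below with pandas; the synthetic record, column handling and threshold logic are illustrative assumptions, not the authors' processing code.

```python
import numpy as np
import pandas as pd

# Hypothetical 1-minute Pluvio2 accumulation record (mm per minute) over six hours.
idx = pd.date_range("2020-01-01", periods=6 * 60, freq="1min")
pluvio = pd.Series(np.random.default_rng(1).gamma(0.2, 0.02, size=idx.size), index=idx)

# 20-minute accumulation totals used as the retrieval target.
accum_20min = pluvio.resample("20min").sum()

# Keep only windows inside hours that accumulated at least 0.2 mm.
hourly = pluvio.resample("1h").sum()
valid_hours = hourly[hourly >= 0.2].index
mask = accum_20min.index.floor("1h").isin(valid_hours)
accum_20min = accum_20min[mask]
print(accum_20min.head())
```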
Micro rain radar
Vertically pointing MRRs (developed by METEK) were located near the Pluvio2 gauges at each site to record complementary atmospheric observations. The MRR is a K-band (24 GHz) continuous-wave Doppler radar which provides information related to hydrometeor particle activity up to 3.1 km above the surface (or 1 km for Ny-Ålesund) as a function of spectral power backscatter intensity. The MRR provides 29 vertical bins (of size 100 m) spanning 300 m to 3100 m above the surface, as shown for each site in Fig. 2.a. Raw radar measurements were preprocessed using Maahn's improved MRR processing tool (IMProToo) for noise removal, dealiasing and for extending the minimum detectable dBZ to -14, which allows for improved measurements of solid precipitation. This data was then temporally averaged to align to the same 20-minute windows generated for the Pluvio2 observations and used as a model input (Maahn and Kollias, 2012).
ERA5
European Centre for Medium-Range Weather Forecasts Reanalysis version 5 (ERA5) hourly temperature (TMP) and vertical wind velocity (WVL) on pressure levels from 0 to 3 km were also included as additional input covariates to DeepPrecip (Hersbach et al., 2020). These inputs allow the model to more accurately recognize different precipitation event structures, large-scale atmospheric dynamics and hydrometeor phases during training. Note that WVL units (Pa/s) are defined using the ECMWF Integrated Forecasting System (IFS), which adopts a pressure-based vertical coordinate system (i.e. negative values indicate upward air motion, since pressure decreases with height). Each of these variables was linearly interpolated to align with the MRR data over 20-minute intervals and at 100 m vertical resolution.
Surface meteorology
Collocated surface temperature (degrees Celsius (°C)) and 10-meter wind speed (m/s) meteorologic observations were also collected from instruments installed at each site and temporally aligned to the Pluvio2 and MRR datasets. Surface wind data act as an additional observational constraint for mitigating the effects of undercatch on unshielded measurement gauges (Rasmussen et al., 2012). Undercatch occurs when precipitation falling in the presence of wind causes hydrometeors to pass over the gauge top orifice. This effect has been shown to bias reported precipitation quantities by up to 10% (Ehsani and Behrangi, 2022). We therefore limit the available training dataset to periods when surface wind speeds are < 5 m/s, as this restricts the analysis to low-to-medium wind speed events at each location to maintain a high gauge-catch efficiency (Yang, 2014). This preprocessing step reduces the average size of our total observational pool by 16% across all stations; however, we note that maximum-intensity precipitation events are not removed using this technique.
Surface meteorologic station temperature data is used for precipitation-phase partitioning at 5 °C to allow for Ze-S/R comparisons with DeepPrecip. Additional dry surface air temperature thresholds of 0°, 1° and 2 °C were also examined, but Ze-S/R performance for both rain and snow appeared optimal when classified using a 5 °C threshold (where temperatures < 5 °C are considered solid precipitation and temperatures >= 5 °C are considered rainfall). This simple temperature threshold is an additional source of uncertainty in our comparisons with the Ze-S/R relationships due to the influence of mixed-phase precipitation on power law accuracy, along with uncertainties in the location of the active melting layer (Jennings et al., 2018). A more sophisticated phase partitioning system (e.g. using wet-bulb temperature as described in Sims and Liu (2015)) could also be linked to DeepPrecip as an additional predictor to further improve classification of mixed-phase precipitation in future work.
Radar-precipitation power laws
Relating radar reflectivity observations to surface accumulation has been done extensively in past surface and spaceborne radar missions through Ze-S/R power law relationships (Skofronick-Jackson et al., 2017; Liu, 2008). These power law relationships are empirically defined by relating reflectivity values in a near-surface bin to observed surface accumulation under a set of assumed particle microphysics (e.g. size, shape, density and fallspeed) (Matrosov et al., 2008). While these techniques have been used with great success in previous studies from Schoger et al. (2021) and Levizzani et al. (2011), the assumptions about snowfall and rainfall particle microphysics make the generalization of these power laws less robust, which contributes to high uncertainty when applied across large areas with unique regional climates (Jameson and Kostinski, 2002).
We examine an ensemble of 12 Ka- and K-band Ze-S/R relationships in this work to compare with model output from DeepPrecip (Table 2). As a consequence of the short temporal period (20 minutes) used in this analysis, MSE values are typically small (< 0.1 mm2). Each Ze-S/R relationship was applied to a near-surface bin in the reflectivity profile (bin 5 for DPfull and DPnear, and bin 11 for DPfar) to derive a corresponding surface precipitation estimate. These bins were selected based on a sensitivity analysis where we examined the performance of multiple near-surface high-importance regions of the vertical column (not shown). The best performing regions were identified as the above bins (5 and 11) based on the respective region of the vertical column being considered (near or far). More information regarding the derivation of each Ze-S/R relationship can be found in Table 2.
To further evaluate the performance of DeepPrecip, we also include model comparisons to a set of six site-derived Ze-P (reflectivity-precipitation) power law relations. Each Ze-P relationship is empirically derived from the collocated MRR and Pluvio data at each observational site examined in this work (excluding Cold Lake and Ny-Ålesund due to the limited available sample and vertical extent of each site, respectively). Each Ze-P relation is fit via a non-linear least-squares approach for finding optimal a and b coefficients in Eq. 1 using SciPy's curve_fit optimization algorithm (Virtanen et al., 2020). Each Ze-P relationship was then applied to bin 5 reflectivities at each site (i.e. the same process as is used for the Ze-S/R relationships) and compared with in situ observations to assess their general accuracy.
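A minimal sketch of this kind of power-law fit with SciPy's curve_fit is shown below; the synthetic collocated data, the dBZ-to-linear conversion, and the starting coefficients are illustrative assumptions — the actual site-derived a and b values are those reported in the paper's tables.

```python
import numpy as np
from scipy.optimize import curve_fit

def ze_p(ze_linear, a, b):
    """Eq. 1-style power law: precipitation P = a * Ze**b, with Ze in linear reflectivity units."""
    return a * np.power(ze_linear, b)

# Hypothetical collocated observations: bin-5 reflectivity (dBZ) and 20-min Pluvio2 accumulation (mm).
dbz = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
ze_linear = 10.0 ** (dbz / 10.0)                          # convert dBZ to linear reflectivity
precip = np.array([0.02, 0.05, 0.11, 0.24, 0.55, 1.20])   # synthetic accumulations

(a_fit, b_fit), _ = curve_fit(ze_p, ze_linear, precip, p0=(0.01, 0.6))
print(f"a = {a_fit:.4f}, b = {b_fit:.3f}")
```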
Neural network architecture
DeepPrecip is a feedforward convolutional neural network that takes as input a vector of 115 atmospheric covariates (Table 3), performs a feature extraction of the vertical column and outputs a single surface precipitation estimate using a fully connected multilayer perceptron. While the structure of this final version of DeepPrecip is complex, the retrieval evolved from a much simpler initial state based on a multiple linear regression (MLR) model. Due to clear nonlinearities between observed reflectivity data and surface precipitation accumulation, the MLR model was unable to capture in situ variability and provided estimates near the mean accumulation value. Similar radar-based precipitation retrieval studies by Chen et al. (2020a) and Choubin et al. (2016) have demonstrated much better performance using an ML-based approach, which led to the development of a random forest (RF) model, an MLP and finally the CNN.
The 1D convolutional layers perform a feature extraction of the vertical column of inputs to reduce the total number of parameters being fed into DeepPrecip's fully connected dense layers. This 1D-CNN structure can identify relationships within the vertical column, save on memory and lower computational training time requirements. To perform a 1D feature extraction, the forward propagation step from the previous convolutional layer (l − 1) to the input neurons of the current layer (l) is expressed in Eq. 2 (Abdeljaber et al., 2017), where k and l refer to the kth neuron of layer l, with x as the resulting input and b as the scalar bias. The s and w terms represent the neuron output and kernel weight matrix, respectively, from the ith neuron of layer l − 1 (and to the kth neuron of layer l for w). The function f() represents the activation function used to transform the weighted sum into an output to be used in the following network layer.
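The kind of forward step that Eq. 2 describes can be sketched with NumPy as follows; this is a schematic illustration of a 1D convolutional layer's input computation (scalar bias plus summed 1D convolutions of the previous layer's outputs with the kernel weights), under assumed toy dimensions, and not the authors' implementation.

```python
import numpy as np

def conv1d_forward(prev_outputs, kernels, bias):
    """Input to one neuron k of layer l: x_k = b_k + sum_i conv1D(w_ik, s_i),
    where s_i are the outputs of the ith neuron of layer l-1 and w_ik the corresponding kernels."""
    length = prev_outputs.shape[1] - kernels.shape[1] + 1   # 'valid' convolution length
    x_k = np.full(length, bias)
    for s_i, w_ik in zip(prev_outputs, kernels):
        x_k += np.convolve(s_i, w_ik[::-1], mode="valid")   # cross-correlation via flipped kernel
    return x_k

# Hypothetical example: 3 input channels of length 10, kernel size 3.
rng = np.random.default_rng(0)
prev = rng.normal(size=(3, 10))
w = rng.normal(size=(3, 3))
print(conv1d_forward(prev, w, bias=0.1))
```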
The RF model tested in this study was based on previous work from King et al. (2022), where an RF was used to retrieve surface snow accumulation from a collocated X-band radar and Pluvio2 instrument at a single experiment site (GCPEx). The RF developed in that study demonstrated good skill in estimating surface accumulation, and so we incorporate the same model here (retrained on the MRR and ERA5 data from this study) as a baseline comparison to other ML retrieval methods (i.e. DeepPrecip).
The final DeepPrecip model structure is outlined in Fig. 2.b. It includes two 1D convolutional layers, a 1D max pooling layer, a dropout layer and a flattening layer, and concludes in a dense MLP regressor with 3 hidden layers. The total number of trainable model parameters in DeepPrecip is 3,937,793. Model training and testing was performed using a 90/10 (non-shuffled) split on each site to generate training and testing datasets for each location. As an additional preprocessing step, we standardize all input covariates by removing the mean and scaling inputs to unit variance. The non-shuffled nature of this splitting process allows DeepPrecip estimates to be validated against unseen data and prevents overfitting from training on temporally autocorrelated vertical column inputs. Additionally, this stratified selection process guarantees that an equal percentage of data is included from each site during training.
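A minimal Keras sketch of an architecture with this layer sequence is shown below; the filter counts, kernel sizes, dense-layer widths and dropout rate are placeholder assumptions (the paper's tuned hyperparameter values are given in its Table 5, not reproduced here), so this sketch will not reproduce the reported 3,937,793 trainable parameters.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 115  # the vector of atmospheric covariates described in Table 3

model = keras.Sequential([
    layers.Input(shape=(N_FEATURES, 1)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),   # feature extraction of the vertical column
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Dropout(0.2),                                    # also reused at inference for uncertainty
    layers.Flatten(),
    layers.Dense(256, activation="relu"),                   # dense MLP regressor with 3 hidden layers
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                                        # single 20-min accumulation estimate
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```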
Retrieval accuracy is primarily assessed using a mean squared error (MSE) skill metric calculated between each model's estimated surface accumulation values and the total Pluvio2 non-real-time reference accumulation observations over 20 minutes. Performance statistics are reported from the average skill of the test portion of a non-shuffled 90/10 train/test CV split (i.e. DeepPrecip trained and tested 10 times on different contiguous portions of the full available sample). Note that each split is stratified to include 10% of each station's sample in every test split. Uncertainty estimates are calculated from running each CV split 50 times using dropout to gain additional insight into model variability (resulting in 500 total model instances).
The dropout layers simulate training a large number of models with differing architectures in a highly parallelized manner by randomly deactivating (or dropping) a certain fraction of nodes within the network to provide a distribution of retrieval estimates.
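Drawing an uncertainty range from dropout at inference time can be sketched as below (a Monte Carlo dropout-style loop); the 50 realizations match the number quoted above, while the `model` and input array refer to the hypothetical architecture sketched earlier rather than the actual DeepPrecip code.

```python
import numpy as np

def mc_dropout_predict(model, x, n_realizations=50):
    """Run a Keras network with dropout active to obtain a distribution of retrieval estimates."""
    preds = np.stack([
        model(x, training=True).numpy().squeeze()   # training=True keeps dropout layers active
        for _ in range(n_realizations)
    ])
    return preds.mean(axis=0), preds.std(axis=0)    # point estimate and 1-sigma spread

# Example usage (hypothetical test inputs): mean_est, sigma = mc_dropout_predict(model, x_test)
```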
Hyperparameter optimization
DeepPrecip was developed, trained and optimized on Graphcore intelligence processing units (IPUs), an MK2 Classic IPU-POD4 (Louw and McIntosh-Smith, 2021), which significantly sped up the training time by a factor of 6.5 compared to a state-of-the-art NVIDIA Tesla V100 GPU. Additional training throughput comparisons are also included. Hyperparameters do not change value during training (in contrast to model parameters like internal node weights), but they play a critical role in the neural network learning process to map input features to an output. Selecting optimal hyperparameter values is an important part of constructing a model which minimizes loss, improves model efficiency and quality, and mitigates overfitting. Multiple steps were taken to address concerns of model overfitting. In addition to the use of non-shuffled training, we employ multiple regularization methods including early stopping, dropout, the application of layer weight constraints and L2 regularization (details in Table 5). L2 regularization (or ridge regression) adds an additional penalty term to the MSE loss function, which helps to create less complex models when dealing with many input features to improve model generalization. To select the optimal values for the aforementioned hyperparameters, and to optimize DeepPrecip's general structure, we use a form of hyperparameter search known as hyperband optimization (Li et al., 2017). Hyperband is a variation of Bayesian optimization which intelligently samples the parameter space to find hyperparameter values that minimize loss while learning from previous selections. Hyperband adds an additional component to the analysis by slowly increasing the number of epochs run during each phase of the optimization process to sample in a more efficient manner. DeepPrecip hyperparameters were derived by running a 10-fold CV hyperband optimization continuously on a single Graphcore IPU for approximately two weeks. The final hyperparameter values (and their respective parameter search spaces) can be found in Table 5.
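A minimal sketch of how a Hyperband search could be set up with the keras-tuner package is shown below; the search space, epoch budget, layer structure and data variables are illustrative assumptions — the paper's actual search spaces and selected values are those listed in its Table 5.

```python
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    # Hypothetical search space: filter count, dense width, dropout rate and learning rate.
    model = keras.Sequential([
        layers.Input(shape=(115, 1)),
        layers.Conv1D(hp.Int("filters", 32, 128, step=32), kernel_size=3, activation="relu"),
        layers.Flatten(),
        layers.Dense(hp.Int("dense_units", 64, 512, step=64), activation="relu"),
        layers.Dropout(hp.Float("dropout", 0.0, 0.5, step=0.1)),
        layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])), loss="mse")
    return model

# Hyperband grows the epoch budget across brackets while discarding poor configurations.
tuner = kt.Hyperband(build_model, objective="val_loss", max_epochs=50, factor=3)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val))  # hypothetical training arrays
# best_model = tuner.get_best_models(1)[0]
```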
Unsupervised classification layer
An unsupervised k-means clustering preprocessing step is also applied using MRR reflectivity profiles as input to provide DeepPrecip with insights into distinct profile group (PG) vertical column structures (Fig. 2.b). Minimizing the within-cluster sum of squares between each vertical column radar estimate results in k = 4 PGs being selected using the within-cluster sum of squared errors elbow criterion method (Fig. 3). The elbow method is a clustering heuristic which allows for an optimal number of clusters to be selected as a function of diminishing returns of explained variation (i.e. finding the elbow or "knee of the curve"). K-means clustering was applied using Python's scikit-learn package on all input reflectivity data to generate four profile clusters, which were included as additional input parameters to DeepPrecip. These clusters are useful for partitioning the precipitation data into groups based on different precipitation intensity classes (trace, low, medium and high intensity) to identify where DeepPrecip finds the most important contributors to high retrieval accuracy for each category of storm intensity.
Derived cluster groups are useful for interpreting feature importances from model output (Section 4.2).
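A minimal scikit-learn sketch of this clustering and elbow selection is shown below; the synthetic profiles stand in for the 29-bin MRR reflectivity data, and the loop over candidate k values simply exposes the inertia curve on which the elbow is judged.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.normal(size=(1000, 29))   # hypothetical stand-in for 29-bin reflectivity profiles

# Elbow criterion: within-cluster sum of squares (inertia) as a function of k.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles).inertia_
            for k in range(1, 9)}
print(inertias)   # inspect for the "knee of the curve" (k = 4 in the paper)

# Final clustering; the labels are appended to the model inputs as profile-group identifiers.
profile_groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)
```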
DeepPrecip retrieval performance
We first examine the differences in performance between DeepPrecip and an RF that has demonstrated good performance in our previous work (not shown) to assess the capabilities of a less sophisticated ML-based approach relative to a CNN. DeepPrecip demonstrates improved skill in capturing most of the peaks and troughs in observed precipitation variability (Fig. 4.a). These differences are most clearly demonstrated in Fig. 4.a at OLYMPEx and JOYCE, where DP more accurately predicts Pluvio2 precipitation extremes compared to the RF. Both models appear to struggle in capturing accumulation intensities during periods of mixed-phase precipitation when temperatures are near zero degrees C (i.e. Marquette, JOYCE and the tail end of OLYMPEx 1) due to a lack of training data with similar climate conditions and the complex nature of such events. DP does demonstrate improved skill at capturing light-intensity precipitation at the beginning of the JOYCE period (compared to the RF), however this is with some uncertainty as noted by the wider shaded region (1 standard deviation). Performance statistics are summarized in Fig. 4. A temperature threshold is imposed where retrievals recorded during periods with temperatures below five degrees C are classified as snow and periods equal to or warmer than five degrees C as rain. DeepPrecip more accurately captures surface precipitation quantities when compared to the Ze-S/R estimates, with a total accumulation curve similar in shape to that of the in situ observations, indicating that DeepPrecip more closely captures the observed precipitation variability and magnitude. Log-scale MSE statistics are calculated between each model and in situ records in Fig. 4.d and indicate that DeepPrecip consistently outperforms traditional Ze-S/R power-law methods by 200% on average. As a general precipitation retrieval algorithm, we do not explicitly train a DeepPrecip snow and a DeepPrecip rain model for different precipitation phases with unique regional atmospheric microphysical conditions. While the Ze-S/R models shown in Fig. 4.c/d are bespoke for rain or snow, DeepPrecip is trained on all data with no a priori knowledge of the underlying physical precipitating particle state. DeepPrecip estimates of accumulated rain display a lower MSE than those of snow (Fig. 4.d). We believe these differences to be twofold: 1) the larger sample of rainfall events in the training data (3 times that of snowfall); and 2) the more complex nature of snow particle microphysics. Unlike the uniform properties of a rain droplet, the shape, size and fallspeed of solid precipitation are much more dynamic and challenging to model (Wood et al., 2013). Continued issues with interference from wind may have also impacted the accuracy of in situ measurements of snow accumulation, leading to higher uncertainty and error (further discussion of these uncertainties in Sect. 5) (Kochendorfer et al., 2017). To visualize the range in uncertainty from the CNN model estimates, we display confidence intervals showing 1 standard deviation in Fig. 4.b/d from 50 DeepPrecip model realizations using dropout. Both ML-based models exhibit the highest uncertainty during periods of mixed-phase precipitation at GCPEx and Marquette along with high-intensity precipitation at OLYMPEx.
To further evaluate DeepPrecip's retrieval skill over traditional methods, we compare model performance to a set of six custom Ze-P site-derived power laws (derivation details in Sect. 3). While Ze-P relationships typically perform well in the regional climate under which they were derived, they do not generalize well outside of said climate. This lack of robustness is visible in the differences between in situ and Ze-P estimates of accumulation in Fig. 5.a, where each Ze-P (light gray line) displays consistent positive or negative biases and no single power law captures the high variability in accumulation across multiple sites. For instance, OLYMPEx 1 and OLYMPEx 3-derived relationships produce a strong positive bias at JOYCE, and the JOYCE-derived Ze-P power law is quite negatively biased when applied at OLYMPEx. The mean of all six custom power laws is shown in bold gray, and while it closely captures total mean accumulation across all sites, it is unable to model the high variability in precipitation intensity.
The resulting MSE from the application of each custom Ze-P relationship to each site (along with DeepPrecip) further demonstrates DeepPrecip's improved robustness (Fig. 5.b). In all other cases, DeepPrecip either outperforms all Ze-P power laws or is only slightly worse than the power law derived for the site in which it is being tested. On average, DeepPrecip retrievals result in 160% lower MSE values than all Ze-P site-derived power law estimates when applied to the testing data across the full spatiotemporal domain (Table 6). Figure 5.b also displays a model intercomparison of each Ze-P relation, where we can see how Ze-P relations like those derived at OLYMPEx 1 and 3 are clearly unable to capture the vastly different snowfall regimes at sites like ICE-POP, GCPEx and JOYCE, with much larger MSE values for these sites.
The robustness of DeepPrecip was further evaluated using a leave-one-out cross validation (CV) for each site of training observations. This approach tests the skill of DeepPrecip at predicting precipitation for a location that was not included in the training data, which is a strong indicator of the generalizability of the model. Log-scale MSE results of this test for each site are shown in Fig. 6 for each precipitation-phase subset, along with the corresponding average Ze-P/S/R estimate when applied at that site. These findings demonstrate similar performance to the baseline DeepPrecip model skill, which continues to outperform all traditional power law techniques on average. The large range in skill of the power law relationships at most sites (wide error bars) further demonstrates the relative lack of generalizability of Ze-P/S/R relationships to different regional climates. Further, the site-derived power law fits (gray dots) perform worse on average than DeepPrecip for locations that are close in proximity (i.e. the OLYMPEx sites).
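A minimal sketch of such a leave-one-site-out evaluation using scikit-learn's group-aware splitter is shown below; the synthetic arrays, site labels and the use of a simple random forest as a stand-in regressor are illustrative assumptions, not the DeepPrecip training code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 115))                                  # stand-in covariate vectors
y = rng.gamma(2.0, 0.1, size=600)                                # stand-in 20-min accumulations (mm)
sites = rng.choice(["GCPEx", "JOYCE", "Marquette"], size=600)    # stand-in site labels

site_mse = {}
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sites):
    held_out = sites[test_idx][0]
    model = RandomForestRegressor(random_state=0).fit(X[train_idx], y[train_idx])  # stand-in model
    site_mse[held_out] = mean_squared_error(y[test_idx], model.predict(X[test_idx]))
print(site_mse)
```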
Predictably, DeepPrecip performance degrades compared to the baseline model when the testing site is left out, since the model is no longer trained using data representing the regional climate of the site being tested. This difference in performance is most notable at the set of OLYMPEx sites, and while DeepPrecip performance is still improved over the Ze-S/R relationships, we note a substantial percentage increase in MSE (375% on average) at these locations. OLYMPEx measurements were the only observational datasets without any gauge shielding, which is a likely source of uncertainty further contributing to this increase in error when the site is removed from the training set (Kochendorfer et al., 2022).
Quantifying sources of retrieval accuracy
Identifying regions within the vertical column that are the most important contributors towards retrieval accuracy is critical for informing future satellite-based radar precipitation retrievals. The ground-based radar instruments used in this work do not suffer from the same ground clutter contamination issues typical of satellite-based radar observations and we are therefore able to quantify the contributions to model skill arising from the included boundary layer reflectivity measurements in DeepPrecip.
Separating the training data into three subsets based on vertical extent and generating new models with this data allows us to examine changes in performance as a function of information availability. These subsets include: DPfull (all 29 vertical bins, i.e. the baseline model), DPnear (the lowest 1 km; 8 bins), and DPfar (1-3 km; 21 bins). DeepPrecip MSE results (Table 6) for each subset suggest that the information provided by a combination of both near-surface and far-profile data results in the highest accuracy.
Since Ny-Ålesund MRR observations were recorded with a maximal vertical extent of 1 km, they are only included in DPnear. Model skill when including/excluding Ny-Ålesund training data (19,000 samples) was examined to determine whether it was confounding comparisons between the aforementioned vertical profile subset models. The results of these tests suggested that the impact on overall performance is negligible across both precipitation phases when Ny-Ålesund is included or excluded in the training set.
Distributions of surface precipitation anomalies appear distinct for rain and snow (Fig. 7), with the full-column model more closely capturing accumulation recorded by in situ gauges. Anomaly frequencies are derived by removing the mean accumulation estimate for each phase at each site. We attribute the structural differences between the anomaly distributions of snow and rain to the more complex particle size distributions (PSDs) of snowfall coupled with the more variable particle water content of snow compared to that of rain (Yu et al., 2020). Additional uncertainties in the surface Pluvio2 measurement gauge observational records of snowfall due to gauge undercatch are another likely contributor to increased error (Kochendorfer et al., 2022). In Fig. 7.a, both DPfar and DPnear exhibit higher anomaly values with a flattened curve top and heavy tails.
Using a combination of information from both near and far bins reduces these biases and tightens each accumulation anomaly distribution around zero. A similar trend is also present for rain in Fig. 7.b, where we again most closely capture the in situ anomaly distribution using DPfull. A major challenge in deep learning is interpreting model output. SHapley Additive exPlanations (SHAP) (Lundberg and Lee, 2017) is a game theory approach to artificial intelligence model interpretability based on Shapley values that has previously been used to great effect in the Geosciences (Maxwell and Shobe, 2022; Li et al., 2022). Shapley values quantify the contributions from all permutations of input features on retrieval accuracy to identify which are the most meaningful. While computationally expensive (with exponential time complexity), this process provides local interpretability within the model by examining how each possible combination of all input features impacts model accuracy (Jia et al., 2020). Here, the calculated
Shapley values give insight into the regions of the vertical column that are contributing the most useful radar information in the precipitation retrieval.
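A minimal sketch of how Shapley values could be computed for a Keras regression network with the shap package is shown below; the background-sample size, the use of DeepExplainer, and the `model`, `x_train` and `x_test` arrays are assumptions carried over from the earlier hypothetical sketches — the paper cites Lundberg and Lee (2017) but its exact tooling is not detailed in this excerpt.

```python
import numpy as np
import shap

# model, x_train, x_test: the (hypothetical) trained Keras network and standardized input arrays.
rng = np.random.default_rng(0)
background = x_train[rng.choice(len(x_train), size=100, replace=False)]

explainer = shap.DeepExplainer(model, background)     # Shapley-value estimator for deep networks
sv = explainer.shap_values(x_test[:256])
sv = sv[0] if isinstance(sv, list) else sv            # single-output regression: one array

# Mean absolute Shapley value per input feature = average contribution to the retrieval estimate.
mean_importance = np.abs(sv).mean(axis=0).squeeze()
print(mean_importance.shape)                          # one score per covariate in the 115-element input
```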
Shapley values for the entire dataset used in DPfull indicate that the most important model predictors comprise a combination of both near-surface and far-profile bins (Fig. 8). Reanalysis variable model inputs are generally the least influential, except for the trace precipitation case, where low-to-mid-level TMP and WVL bins appear highly important (Fig. 8). (Figure 8 caption fragment: feature importances for different subsets of vertical column reflectivities separated into all profiles, trace intensity, low intensity, medium intensity, and high intensity precipitation events based on a k-means clustering of input data (more in Sect. 3.2); areas of dark color indicate a high feature importance at that location within the vertical column.)
Discussion and Conclusions
DeepPrecip not only demonstrates considerable retrieval accuracy without the need for physical assumptions about hydrometeors or spatio-temporal information, but also provides insight into the regions of the vertical column which are most important for improving predictive accuracy. The results from Sect. 4.2 suggest that while the exact altitudes providing predictive information from the vertical column may shift up or down under different precipitation intensities, there exists a consistent combination of both near-surface and far-profile bins that always appear as highly important contributors to model skill. Furthermore, while RFL is typically considered the most important predictor in radar-based precipitation retrievals (Stephens et al., 2008; Skofronick-Jackson et al., 2015), we find that contributions from RFL, DOV and SPW provide a near-equal level of importance, with respective average percent contributions to model output of 30%, 31% and 28%, while ERA5 TMP and WVL variables have a total combined importance of 10%.
The combined insights from DeepPrecip's multi-model vertical extent evaluations and feature importance analyses demonstrate a potential to influence current and future remote sensing precipitation retrievals using deep learning. Instruments like CloudSat's Cloud Profiling Radar (CPR), or the Global Precipitation Measurement (GPM) mission's Dual-frequency Precipitation Radar (DPR), also use active radar systems to perform similar, radar-based precipitation retrievals based on data from vertical column reflectivities (Stephens et al., 2008). While CPR and GPM-derived products use a more sophisticated Bayesian retrieval than the Ze-S/R relationships evaluated here, the resulting precipitation estimates are still tightly coupled to a priori physical assumptions of particle shape, size and fallspeed, which is a substantial source of uncertainty (Hiley et al., 2010; Wood et al., 2013). Additionally, the results of this study further support prior inference regarding the existence of regions of high importance in the < 1 km (near-surface) region of the vertical column relating to shallow-cumuliform precipitation strongly influencing retrieval accuracy. This is an area that is typically masked in satellite-based products (i.e. the radar "blind zone") due to surface clutter contamination, and it has been shown in previous work to likely be a major source of underestimation from missing shallow cumuliform precipitation (Maahn et al., 2014; Bennartz et al., 2019). This work motivates the importance of continued research towards obtaining high-quality, non-cluttered near-surface radar data to use as additional model inputs in future space-based retrievals of precipitation.
DeepPrecip is not without uncertainty and error, which will reduce its accuracy when tested against new data. Uncertainties present in the training data (stemming from the MRR, ERA5 or Pluvio2 observations) will propagate through the model and bias the output estimates (Kochendorfer et al., 2022; Jakubovitz et al., 2019). We have taken steps to mitigate the impact of these uncertainties through multiple data alignment and preprocessing decisions (details in Sect. 3); however, precipitation gauge undercatch, wind shielding configurations, MRR attenuation and differences in site-specific vertical extent cannot be eliminated as contributors of retrieval error. While 60% of the power laws examined in this work were MRR-derived K-band relationships, the remaining 40% were either Ka-band or the Marshall-Palmer (MP) Rayleigh relationship. While K and Ka are similar radar frequencies, the differences between the two can bias the resulting precipitation estimate when a Ka-derived power law is applied to K-band data (especially during periods of intense precipitation). Furthermore, while the collection of data from multiple sites provides us with a robust training set under multiple regional climates, due to the unique experimental setups at each site, calibration biases between study locations may further reduce DeepPrecip's skill when applied to new data.
As the MRR instrument has a limited 3 km maximum vertical range, we also miss possible precipitation events occurring outside of this region, which may contribute to further surface precipitation underestimation. Internal CNN model uncertainty is likely driven, in part, by a combination of the high variability that is typical of precipitation and the limited sample from nine measurement sites over 8 years, which does not fully capture all different forms of possible precipitation structure and occurrence.
Code and data availability.
| 7,800.8 | 2022-10-21T00:00:00.000 | ["Computer Science", "Environmental Science"] |
PIMGAVir and Vir-MinION: Two Viral Metagenomic Pipelines for Complete Baseline Analysis of 2nd and 3rd Generation Data
The taxonomic classification of viral sequences is frequently used for the rapid identification of pathogens, which is a key point when a viral outbreak occurs. Both the Oxford Nanopore Technologies (ONT) MinION and the Illumina (NGS) technology provide efficient methods to detect viral pathogens. Despite the availability of many strategies and software, matching them can be a very tedious and time-consuming task. As a result, we developed PIMGAVir and Vir-MinION, two metagenomics pipelines that automatically provide the user with a complete baseline analysis. The PIMGAVir and Vir-MinION pipelines work on 2nd and 3rd generation data, respectively, and provide the user with a taxonomic classification of the reads through three strategies: assembly-based, read-based, and clustering-based. The pipelines supply the scientist with comprehensive results in graphical and textual format for future analyses. Finally, the pipelines equip the user with a stand-alone platform with dedicated and various viral databases, which is a requirement for working in field conditions without an internet connection.
Introduction
Recent advances in next-generation sequencing (NGS) technologies and computational methods are revolutionizing scientific research in public health [1]. One such application of NGS is metagenomics. Metagenomic sequencing (mNGS) is an unbiased, culture-independent approach that analyzes the nucleic acid content of any clinical or environmental sample [2][3][4][5]. Clinical metagenomics (CMg) is a method of choice for detecting and identifying infectious etiologies [6,7]. In addition to pathogen detection and identification, infectious disease surveillance also provides information on disease transmission, strain type, virulence profile, antimicrobial susceptibility, and other information relevant to outbreak investigation and treatment guidelines [3,[7][8][9][10].
Considering that most emerging infectious diseases (EIDs) in humans originate from wildlife known to harbor many zoonotic pathogens, such as bats, NGS technologies are now considered essential tools for the molecular characterization of viral communities that could help determine the origin of outbreaks and discover new pathogens [10,11]. Due to the incessant emergence of zoonotic diseases, a constant search for emerging infectious etiological agents is deemed necessary. Two main approaches are currently available for the search of these infectious agents by NGS, using either short (e.g., with the 2nd generation technology with sequencers marketed by Illumina) or long (e.g., with the 3rd generation technology with sequencers marketed by Oxford Nanopore Technologies) reads. Beyond the difference in read size, the use of 2G (Illumina) allows for a much higher coverage and sequencing depth but can take 20 to 60 h, whereas sequencing based on nanopore technology allows direct and real-time availability of sequencing data and a reduction of sequencing time from several days to only a few hours. Although the sequencing depth is less than that of 2G technology, 3G technology has tremendous potential in clinical sequencing applications at the point of care, whether at the bedside or in the field, due to its portability, speed, flexibility and the relatively low cost of the device [12][13][14]. However, whatever sequencing technology is used for virological investigation, it requires the development and use of dedicated workflows for processing these data. Several workflows for the analyses of viral metagenomic data obtained from 2nd and 3rd generation sequencers exist in the literature [15][16][17]. These can be distinguished according to whether they target time-constrained diagnostics, surveillance and monitoring of epidemics, remote homology detection (discovery), or biodiversity studies [18].
In the field of metagenomics for pathogen research, workflows are divided into three main areas that make up the methodological approaches of workflows: read-based approaches, assembly-based approaches, and clustering-based approaches. The first, read-based strategy analyzes unassembled short reads to identify the overall taxonomic/functional composition of samples. The usual main steps of this approach are: read quality control (QC) [19], merging of the reads [20], mapping to the NR database for taxonomic data [21,22], and analysis of the summaries of the taxonomic and functional distributions. The MG-RAST server is one of the most representative examples of this approach based on short read analysis [23]. The second strategy, which is based on assembly, attempts to assemble reads from one or more samples, and "classify" the contigs from these samples into genomes to analyze genes and contigs. It identifies the functional and metabolic capabilities of specific microbes in the samples. As before, the workflow includes classical steps such as quality control of reads [19] and read merging, but there are additional assembly steps [24,25], such as mapping of reads from each sample to contigs for quantification and clustering [26]. The genome clustering, contig composition, and mapping data are used to group contigs into "genomic bins" [27,28], eventually moving to de novo gene annotation [29,30] and performing gene annotation as in read-centric approaches. The IMG from JGI is an example of a workflow that is based on a short read assembly-centric strategy [31]. The last approach, which is based on clustering, includes the same steps of quality control, merging, and filtering as previously described, but there is an additional clustering step [32,33]. This last step results in "centroid" sequences, i.e., sequences representative of each group, which are transmitted downstream of the pipeline. The VirIdAl box is an example of a workflow based on a clustering approach [34].
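As a minimal, hedged illustration of the read-based strategy described above, the sketch below wraps a single classification call to Kraken2 (one of the read classifiers widely used for this step and listed later in Table 1). The database path, file names and thread count are placeholders; the command options shown are standard Kraken2 flags, and the snippet is not a description of any particular published workflow.

import subprocess
from pathlib import Path

def read_based_classification(r1, r2, db, out_dir, threads=8):
    # Classify quality-controlled paired reads directly against a (viral)
    # database with Kraken2, without assembling them first.
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    cmd = [
        "kraken2",
        "--db", str(db),
        "--threads", str(threads),
        "--paired", str(r1), str(r2),
        "--report", str(out_dir / "kraken2_report.txt"),
        "--output", str(out_dir / "kraken2_output.txt"),
    ]
    subprocess.run(cmd, check=True)

# Example call with placeholder paths:
# read_based_classification("sample_R1.fastq", "sample_R2.fastq",
#                           "viral_refseq_db", "results/read_based")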
Although there are differences that are required for the analyses of long reads versus short reads, the analytical approaches are divided into the same three clusters. Regardless of the chosen analytical approach, the preprocessing steps always start with base calling [35], demultiplexing [35,36], and filtering and quality control [37]. The read-based strategy relies on the identification of reads by taxon using an algorithm such as the Centrifuge software [38] and the NCBI RefSeq sequence database, as is the case with the Metrichor/EPI2ME cloud platform (Metrichor Ltd., Oxford, UK). Strategies based on assembly and clustering share, with the previous strategy, the pre-processing steps, to which the assembly [39,40], polishing [41][42][43], or clustering [32,44] steps must be added, as described in the MicroPIPE [45], NanoCLUST [46], and mothur [47] workflows.
Nowadays, it is very common in a research laboratory to combine the different sequencing technologies available to perform metagenomic studies. Therefore, it is normal to use different pipelines, based on a specific strategy, to perform taxonomic classifications of large amounts of sequencing data depending on the strategy adopted. As we are not aware of any bioinformatics pipeline that can combine the three analysis strategies in a single workflow, the establishment of a connection between the different existing workflows may require time and qualified human resources due to the problems that may be encountered stemming from the lack of compatibility between various workflows. In this technical note, we present PIMGAVir (PIpeline for MetaGenomic Analysis of Viruses) and Vir-MinION (Viral MinION pipeline), which are two viral metagenomic pipelines designed to provide scientists with a complete baseline analysis of viral sequences from 2G and 3G sequencers. Both pipelines are freely downloadable (Supplementary Materials) and allow the user to run one or more of the three approaches independently.
Objectives of PIMGAVir and Vir-MinION Pipelines
The main objectives of the PIMGAVir and Vir-MinION pipelines are to provide a complete taxonomic classification basis for reads from either 2G or 3G sequencing, respectively, to hide the complexity of the process from the end user, to save runtime and to be usable in offline conditions. They have been designed to automatically perform both the pre-processing and contaminant removal tasks for a defined host and bacterial genomes, as well as to execute one or more strategies in parallel to perform taxonomic classification, thus obtaining a wide range of results. Finally, both PIMGAVir and Vir-MinION present all results as text/tabular output and graphical plots.
Description of the Two Pipelines
Given that the objectives of these two pipelines are similar, they share the use of identical computer packages, for example the use of megahit for the assembly step (Table 1). However, they also differ in the use of dedicated packages (Table 1), such as the filtering or demultiplexing tools that are specific to 3G data processing. On the other hand, both pipelines use the same databases for the filtering steps of the contaminants, e.g., Silva to remove bacterial sequences or NR refseq for the identification of viral sequences (Table 2).

Table 1. Software packages used by the two pipelines (tool [reference], version, pipeline, step):
(tool name truncated) [52], 1.3, PIMGAVir, filtering
megahit [24], v1.2.9, PIMGAVir/Vir-MinION, assembly
flye [40], v2.9, Vir-MinION, assembly
quast [53], v5.0.2, PIMGAVir, assembly
spades [25], 3.13.1, PIMGAVir, assembly
bowtie2 [26], 2.4.4, PIMGAVir, assembly
samtools [54], 1.10-3, PIMGAVir, assembly
pilon [42], 1.23, PIMGAVir, assembly
Prokka [30], 1.14.6, PIMGAVir, assembly
kraken2 [55], 2.1.2, PIMGAVir/Vir-MinION, taxonomy
kaiju [56], 1.8.2, PIMGAVir/Vir-MinION, taxonomy
blastn [21], 2.9.0+, PIMGAVir/Vir-MinION, taxonomy
seqkit [57], 2.0.0, PIMGAVir, clustering
vsearch [32], v2.18.0, PIMGAVir, clustering
guppy_basecaller [35], 5.0.13, Vir-MinION, basecalling
NanoFilt [58], 2.3.0, Vir-MinION, filtering
guppy_barcoder [35], 5.0.13, Vir-MinION, demultiplexing
NGSpeciesID [44], 0.1.2.1, Vir-MinION, clustering
medaka [43], 0.11.5, Vir-MinION, clustering

As shown in Figure 1A, the pipeline executes the pre-processing task to trim the raw data and remove contaminants. Then, according to the user option, the reads_filtering script will filter out the reads not belonging to the desired taxa. At this point, the pipeline will execute one or more strategies (namely, read_based, ass_based, and clust_based) in parallel to proceed with the taxonomic classification. The clustering and assembly methods are each performed with two applications, presenting the user with a pool of comparable results. The pipeline builds a specific data structure following the logical schema "strategy-application" so that it can be easily navigated. For example, Figure 1B depicts the data structure created during the analysis step. The PIMGAVir pipeline uses a set of local viral databases to perform both the filtering and taxonomic tasks (Figure 1C). The pipeline runs under the Ubuntu 20.04 operating system, and a set of bash scripts performs the workflow. Each strategy, once called, executes a few scripts and produces a collection of results (text, HTML, and pdf) and log files (Figure 2A). Most of the scripts lean on a group of applications and databases to accomplish their task. Figure 2B shows the databases and applications used by every script. Finally, the user has the freedom to run every one of the mentioned scripts as an autonomous process as long as the input format is respected. The three strategies have been designed to run independently, allowing the user to run them on parallel computing systems.
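The sketch below is a hypothetical illustration of how the three strategies (read_based, ass_based and clust_based) could be launched in parallel while writing results into a "strategy-application" folder layout; the function names and directory structure are assumptions for illustration only, since the actual layout is the one defined by the pipeline itself (Figure 1B).

from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

# Strategy names come from the text above; everything else is illustrative.
STRATEGIES = ("read_based", "ass_based", "clust_based")

def run_strategy(strategy, sample_dir):
    out = Path(sample_dir) / strategy
    out.mkdir(parents=True, exist_ok=True)
    # ...invoke the corresponding bash script / applications here...
    return f"{strategy} finished"

def run_all(sample_dir):
    # Run the three strategies concurrently, mirroring the parallel design.
    with ProcessPoolExecutor(max_workers=len(STRATEGIES)) as pool:
        futures = [pool.submit(run_strategy, s, sample_dir) for s in STRATEGIES]
        return [f.result() for f in futures]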
The PIMGAVir pipeline has been tested on a cluster configured on the SLURM workload manager, running on multiple samples at once. The following is an example of the SLURM script to run the PIMGAVir pipeline ( Figure 2C).
Regarding the Vir-MinION pipeline, after the pre-processing step, which is executed as the default step, the pipeline runs one or more methods in parallel, according to the user choice. The read_based strategy carries out the taxonomic classification using the demultiplexing results as input to generate an overall view of what the sample contains. The clust_based approach, as the name suggests, identifies the clusters obtained from the metabarcoding step and executes the taxonomic classification on them. In the ass_based mode, the pipeline performs the assembly step from the shotgun reads and produces their taxonomic classification. As in the case of PIMGAVir, the Vir-MinION pipeline relies on local viral DBs to guarantee its capability in connection-less conditions and to save runtime. The use of both taxonomic classifiers (Kraken2 and Kaiju) gives the user the possibility to compare the results. The outcomes are presented in graphical and text/tabular layouts for further analysis. The Vir-MinION pipeline runs under Ubuntu 20.04 and uses NVIDIA GPU technology for its processing. Vir-MinION utilizes a collection of bash scripts to perform the workflow. For their part, the bash scripts invoke a group of applications and databases to accomplish their task. Figure 3B shows the databases and applications used by every script. As shown in Figure 3C, the pipeline builds a specific data structure following the logical schema "strategy-application" so that it can be easily navigated.
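For orientation, the following sketch strings together the standard ONT pre-processing chain referred to above (basecalling with guppy_basecaller, demultiplexing with guppy_barcoder, filtering with NanoFilt); the flow-cell configuration, barcode paths and quality/length thresholds are illustrative assumptions rather than Vir-MinION's actual defaults.

import subprocess

def ont_preprocess(fast5_dir, out_dir):
    # Basecall raw fast5 signal; the config name is an assumed example.
    subprocess.run(["guppy_basecaller",
                    "-i", fast5_dir, "-s", f"{out_dir}/basecalled",
                    "-c", "dna_r9.4.1_450bps_fast.cfg"], check=True)
    # Demultiplex the basecalled reads into barcodeXX directories.
    subprocess.run(["guppy_barcoder",
                    "-i", f"{out_dir}/basecalled", "-s", f"{out_dir}/demux"],
                   check=True)
    # Quality/length filter one barcode; NanoFilt reads FASTQ on stdin.
    # The file path below is illustrative (guppy writes run-specific names).
    with open(f"{out_dir}/demux/barcode01/reads.fastq") as fin, \
         open(f"{out_dir}/barcode01.filtered.fastq", "w") as fout:
        subprocess.run(["NanoFilt", "-q", "8", "-l", "200"],
                       stdin=fin, stdout=fout, check=True)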
Test Pipelines
Validation of both pipelines was performed using simulated data generated with CAMISIM [59] to produce unique communities for 2G and 3G data. DeepSignal [60] was used to simulate the MinION signal from the already available community. The community consists of a bacterial genome, Helicobacter hepaticus ATCC 51449, and two viral RNA genomes, Hepatitis A virus (HAV) (NC_007905.1) and Ippy virus, which consists of two S and L segments (NC_007906.1). The distribution of the reads is described in Table 3. Table 4 presents the data flow followed by PIMGAVir, starting from the simulated data to the data treated with the different approaches, ready to be classified. The percentages associated with the ribosomal removal and with the filtration step of the non-viral reads of PIMGAVir showed that a low ratio of ribosomal contaminants was removed, while the reads corresponding to Helicobacter hepaticus were correctly discarded in large part and most of the reads corresponding to the viral genomes were used as input for the classification step with different tools (Kraken, Kaiju and BLASTN) (Table 5). At the same time, a high percentage of reads was discarded during the assembly/clustering step, showing how sensitive the pipeline is to the automatic improvement of the draft assemblies (performed by PILON) and to the de-replication and chimera removal (performed by vsearch), during the assembly and clustering steps, respectively. An average percentage of reads of 0.30% (0.07% to 0.55%) and 0.33% (0.11% to 0.55%), respectively, for HAV and Ippy virus, were correctly classified. Indeed, the percentage of coverage (mapping analysis based on reference sequences) reaches an average value of 92% for HAV and values of 97.25% and 47.3% for Ippy S and L, respectively. The high accuracy score, calculated as the percentage of alignment with the reference genome, supports the correctness of the reads' classification. A retrospective analysis of the unclassified reads shows that they corresponded to 96.5%. Both after the clustering steps and with the two different assembly approaches, the number of clusters and contigs is lower than the number of initial reads (4 to 247, for contigs and clusters, respectively). However, the coverage of both genomes exceeds 72% and 90%, respectively, for the Ippy and HAV genomes.
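For clarity, the percentages reported above are simple ratios; the sketch below shows, with made-up counts, how the fraction of correctly classified reads and the reference genome coverage are computed (the actual values are those reported in Tables 3 to 5).

def percent(part, whole):
    return 100.0 * part / whole

# Illustrative counts only; not the values obtained in the validation runs.
total_reads = 1_000_000
hav_reads_classified = 3_000                 # reads assigned to HAV
covered_bases, genome_length = 6_880, 7_478  # hypothetical HAV mapping result

print(f"correctly classified: {percent(hav_reads_classified, total_reads):.2f}% of reads")
print(f"reference coverage:   {percent(covered_bases, genome_length):.1f}%")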
Discussion and Conclusions
PIMGAVir and Vir-MinION are free, connection-less, and modular automated metagenomics pipelines that provide the user with a complete baseline analysis for the taxonomic classification of the reads. The PIMGAVir pipeline works on data from the 2nd generation technology, while Vir-MinION works on 3rd generation technology. We designed the applications to be easily used by biologists and generally by users without particular computer skills. Although the pipelines do not have a graphical or web interface, both of them need only a few command line parameters. The required parameters are easy to understand, such as the input files to be analyzed, the strategy to carry out the analysis, or the number of cores to allocate. We tested the pipelines on a desktop equipped with an i9-12900KF CPU, 64 GB of DDR5 RAM, and an NVIDIA GeForce 3080Ti with 12 GB of RAM. The PIMGAVir pipeline required about 14 h of execution to generate the results from all three approaches with an input of paired fastq files of six million reads per file, while the Vir-MinION pipeline took two hours to complete the three strategies, using an input of 94 GB of fast5 files from 12 barcodes with a total of four million long reads. The short run time of the Vir-MinION pipeline emphasizes its utility as a valuable support for field applications, such as "quasi-real-time" pandemic monitoring.
The PIMGAVir and Vir-MinION pipelines, which approach metagenomic analysis from these three different angles, will provide the user with a potentially complementary set of information, as each approach will answer specific questions. Indeed, metagenomics based on unassembled reads is valuable for quantitative analysis, while assembly-based workflows will be used to identify the different organisms residing within the samples. The assembly-based strategy groups metagenomic contigs into potential genomes to study the functional roles of microbial populations. Thus, the combined analysis of these results can help to better define the most plausible viral metagenomic composition of samples. In addition, the adoption of multi-solution software specific to viral genome analysis has increased the reliability and computational efficiency of these pipelines where possible. For example, the choice of the assembler is fundamental before executing the taxonomic classification, and many software programs/algorithms exist to perform this task. Moreover, when working with a new dataset, it is common to generate a few assemblies testing different programs with different parameters, to compare the results and thus be more confident that we are doing the best with the data. From this perspective, SPAdes and MEGAHIT are the two most commonly used assemblers today. SPAdes uses much more memory than MEGAHIT, so it is often more suitable for working with one or a few genomes (such as from an isolate or enrichment culture). However, if working with high-diversity metagenomic samples, sometimes the memory requirements for SPAdes are too high, and MEGAHIT (which uses much less memory) can handle the task instead. While the PIMGAVir pipeline uses both assemblers to produce the assembled genomes, the Vir-MinION pipeline (following the same philosophy) accomplishes the assembly steps using either the MEGAHIT or the Flye assembler. Of course, the consistency of the databases is also a crucial point during the taxonomic classification, and we have chosen to classify every "object" (whether reads, clusters, or assembled genome) with different software (kraken2, kaiju, or blastn) querying several viral databases.
Each pipeline will have to continue to evolve through further comparison studies with other current or new pipelines or other new tools that will be developed in the future, as exemplified by VIBRANT [61] and VirSorter [62]. Another point is to address concerns about the running time needed by the PIMGAVir pipeline. As mentioned before, the PIMGAVir pipeline has been tested on a small cluster of seven worker nodes, communicating over Ethernet and equipped with shared remote storage. The cluster was configured with the SLURM workload manager, with shared user homes and password-less access. Because the DBs are instantiated on the remote storage, queries over NFS required a considerable amount of running time. Further investigation can be performed to optimize the communication between the processes and the DBs' performance.
In conclusion, these two pipelines, PIMGAVir and Vir-MinION, have already been used in our laboratory for the search and identification of known or new pathogens from meta-transcriptomic data obtained from a wide variety of hosts such as bats, arthropods, ectoparasites or wild and domestic rodents. However, they can be used by many other researchers, whose applications require metagenomic classification of their 2nd or 3rd generation data. | 5,119.8 | 2022-06-01T00:00:00.000 | [
"Computer Science"
] |
Co-doped p-type ZnO:Al-N Thin Films Grown by RF-Magnetron Sputtering at Room Temperature
This study reports the structural properties of zinc oxide thin films co-doped with aluminum and nitrogen (ZnO:Al-N) grown by RF magnetron sputtering from an AZO (ZnO with 2 wt% Al2O3) target under a nitrogen (N2) atmosphere at room temperature (RT). Nitrogen partial pressures of 0.00, 0.10, 0.25 and 1.00 mTorr were used. The film thickness was around 270 nm. Ultraviolet-Vis-NIR transmittance (T) spectra of the films revealed T values of 80 to 85% in the 400 to 700 nm wavelength range. XRD results indicated that the films had a hexagonal wurtzite structure and were preferentially oriented in the (002) plane. Analyses by EDS indicated that the N atoms tend to be incorporated into the ZnO matrix at the expense of oxygen atoms. The ideal [N]/[Al] was obtained at a N2 partial pressure of 0.25 mTorr, producing a p-type film. For a [N]/[Al] of 1.53, the film also exhibited p-type conduction with an electrical resistivity of 31.92 Ω cm, mobility of 18.65 cm^2/V s and carrier density of 1.22 x 10^16 cm^-3. The low carrier density is attributed to the energetically favorable formation of inactive nitrogen phases instead of acceptor-donor-acceptor complexes, even at the ideal [N]/[Al].
Introduction
Zinc oxide (ZnO) is biodegradable, non-toxic, and composed of elements abundant in the Earth's crust (Zn: 132 ppm in the Earth's crust; O: 49.4%), making it important for large-scale applications. Indeed, ZnO is widely used industrially as an additive for rubber, paints, cosmetics and medicines, amongst others 1 . In addition, since it is a semiconductor with a wide direct bandgap of 3.3 eV and excitons with a high binding energy (60 meV), it is a strong candidate for new generations of optoelectronic devices, including semiconductors, light-emitting diodes (LEDs) and lasers 1,2 .
As confirmed by electrical measurements, ZnO is an intrinsic n-type semiconductor 3 . Following the success of GaN as a blue light emitter, however, efforts have been renewed to obtain p-type doped ZnO, which would permit the fabrication of LEDs, for example. The following strategies have been used to produce p-type doping of ZnO: (i) group VA element atoms substitute oxygen atoms; (ii) group IA element atoms substitute Zn atoms; (iii) co-doping with donors and acceptors 4 .
In the case of group VA elements, nitrogen is most adequate considering the atomic radius and valence energy of the 2p states, which are closer to those of oxygen in comparison with other elements whose difference in radius is greater than 50% 5,6 . Some theoretical studies 7 , however, suggest that nitrogen is a deep acceptor with a high ionization energy (1.3 eV), producing a reduced concentration of holes.
Moreover, Yan et al. 8 predicted that the N atom acts in place of O as an acceptor, but N 2 acts as a donor. In addition, the theoretical study by Lee et al. 9 indicated that the mechanism of compensation of N acceptors is energetically favorable in the ZnO matrix. This implies that even at low doping levels, the N acceptors are compensated by oxygen vacancies. Based on this reasoning, we conclude that the effects of self-compensation and the low solubility of acceptors in the ZnO matrix are the prime factors responsible for the instability observed experimentally in p-type ZnO films, which convert to n-type upon ageing 5,10-12 .
To obtain a greater level of incorporation of N into the crystalline structure of ZnO, Yamamoto suggested co-doping with elements that donate charge, acting as an activator of the acceptor element. More specifically, Yamamoto and Katayama-Yoshida suggested that co-doping of ZnO:N with Al or Ga produces energetically favorable acceptor-donor-acceptor complexes, leading to a reduction in the Madelung energy of delocalized nitrogen atoms and increasing the density of acceptor sites. Thus, the ideal concentration relation between acceptor and donor atoms would be 2:1 to obtain p-type ZnO 13,14 .
Experimentally, studies have used different nitrogen sources 15 and codopants such as Ga 16 , B 17 , P 18 , Ag 19,20 , In 21 and Al 22 . For the co-doping with Al using magnetron sputtering, Cho 23 deposited ZnO:Al-N films at 300 °C, varying the N 2 flow rate. Reductions in the carrier density and electronic mobility were observed when the N 2 flow rate was increased. As a consequence, the electrical resistivity increased from 1.2 x 10 -3 Ω cm to 0.13 Ω cm for 30% of N 2 in the N 2 +Ar mixture. Chou et al. 24 used N 2 O as a reactive gas and the deposition temperature was held at 500 °C. In this case, p-type conduction was observed with a carrier density of 2.5 x 10 17 cm -3 and electrical resistivity of 2.6 Ω cm for a p N2O /(p N2O + p Ar ) value of 20%. Raising the partial pressure of N 2 O, however, changed the electrical conduction to n-type.
Zeng et al. 25 employed Zn-Al targets containing different concentrations of Al (0 at.%, 0.08 at.%, 0.4 at.%, 1 at.% and 4 at.%). The N 2 O pressure was fixed at 3 Pa and the deposition temperature was 500 °C. As a result, films with different types of conduction (p-type and n-type) were obtained, the best result being 28.3 Ω cm at a carrier concentration of 2.52 x 10 17 cm -3 .
Hence, depending on the concentrations of Al and N, p-type or n-type doping may be produced. In none of these studies, however, were the electrical properties of the ZnO:Al-N films correlated with the ratio [N]/[Al]. Therefore, in this work the correlation between [N]/[Al] in ZnO:Al-N films and their structural and electrical properties were investigated. For this, thin films were synthesized by RF magnetron sputtering using N 2 as the reactive gas and a ZnO:Al 2 O 3 (2 wt.%) target. Specifically, the partial pressure of N 2 was varied to obtain films with the ideal ratio of 2:1. At this proportion, the formation energy of the co-doping N-Al-N system is smaller than that of N mono-doping, facilitating the increase in the number of holes 26 . In addition, the Fermi level shifts to positions closer to the top of the valence band, allowing a more stable p-type ZnO film to be obtained 27 .
Experimental Methods
Glass substrates were used for the depositions. Each substrate was cleaned for 480 s in each of distilled water, acetone and isopropanol in an ultrasonic bath. Film deposition was performed by RF magnetron sputtering using a 3 inch diameter ceramic AZO (ZnO with 2 wt.% of Al 2 O 3 ) target in an Ar and N 2 atmosphere. The partial pressure of Ar was held at 1.00 mTorr, and the partial pressure of N 2 was varied (0.00, 0.10, 0.25, and 1.00 mTorr). An applied power of 60 W was used for 30 min per deposition. The target-substrate separation was 3 cm and the substrate holder was neither intentionally heated nor polarized.
Film thicknesses, measured using a DEKTAK 150 profilometer, were ~270 nm. An Energy-Dispersive X-ray Spectrometer (EDS) attached to a JSM-6010LA Scanning Electron Microscope was used to estimate film chemical composition. Film structural properties were investigated with a Panalytical X'Pert Powder Diffractometer at grazing incidence (2 o ), using the K α emission of Cu (1.5406 Å).
Crystallite size, D, was estimated using the Scherrer equation. For the calibration of the instrumental FWHM, a ceramic ZnO sample produced by sintering was used, containing sufficiently large grains such that the measured FWHM could be attributed to the X-ray beam divergence.
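As a worked sketch of the X-ray analyses described in this section, the snippet below applies Bragg's law to the (002) reflection and the Scherrer equation with a quadratic instrumental-broadening correction; the shape factor K = 0.9, the quadratic correction and the example peak values are common assumptions, not parameters quoted in the paper.

import numpy as np

WAVELENGTH_NM = 0.15406          # Cu K-alpha, as used for the measurements

def c_from_002(two_theta_deg):
    # For the hexagonal (002) reflection d_002 = c/2, so Bragg's law
    # (lambda = 2 d sin(theta)) gives c = lambda / sin(theta).
    theta = np.radians(two_theta_deg / 2.0)
    return WAVELENGTH_NM / np.sin(theta)

def scherrer_size(two_theta_deg, fwhm_meas_deg, fwhm_instr_deg, k=0.9):
    # D = K * lambda / (beta * cos(theta)), with the instrumental broadening
    # removed in quadrature; K = 0.9 and the quadratic correction are
    # assumed conventions, not values stated in the paper.
    beta = np.sqrt(np.radians(fwhm_meas_deg) ** 2 - np.radians(fwhm_instr_deg) ** 2)
    theta = np.radians(two_theta_deg / 2.0)
    return k * WAVELENGTH_NM / (beta * np.cos(theta))

# Illustrative (002) peak near 2-theta = 34.4 degrees:
print(c_from_002(34.4))                 # ~0.52 nm lattice parameter c
print(scherrer_size(34.4, 0.40, 0.10))  # crystallite size in nm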
Electrical properties such as resistivity, mobility and carrier density were measured using the Hall Effect with an ECOPIA 3000 equipment employing the Van der Pauw method.
The optical transmittance and reflectance measurements were made using a Perkin Elmer UV-Vis-NIR model Lambda 750 spectrometer over the 190 to 2400 nm wavelength range. Room temperature photoluminescence (PL) measurements were carried out by exciting the samples with the 325 nm line of a He-Cd laser.

Composition and structural properties

Table 1 shows the elemental composition of the AZO target and of the ZnO:Al and ZnO:Al-N films. Comparing the composition of the AZO target with that of the ZnO:Al film, there is a reduction in Zn concentration while the concentration of Al does not change significantly. This may be attributed to the greater rate of re-sputtering of Zn atoms compared to that of Al atoms. This difference is related to the greater binding energy of Al-O compared to that of Zn-O, as well as the enthalpy of formation of Al 2 O 3 (~ -1676 kJ/mol) being greater than that of ZnO (~ -348 kJ/mol). In addition, the ZnO:Al film is richer in oxygen than the target material.
The concentration (at.%) of Zn and Al did not change with the addition of N 2 . On the other hand, [O] fell as [N] increased with the increase in the partial pressure of N 2 . This may indicate that the incorporation of nitrogen occurs preferentially by the substitution of O atoms in the ZnO matrix. Table 1 shows the ratio [N]:[Al], which allows the evaluation of the formation of acceptor-donor-acceptor complexes of the type N-Al-N 14,27 . The samples produced at partial pressures of nitrogen of 0.10 and 0.25 mTorr present ratios close to the ideal of 2:1. The sample deposited at 1.00 mTorr has a value much greater than the theoretical value. These data alone, however, do not reveal the form of the incorporation of N atoms in ZnO. Figure 1a presents X-ray diffractograms of the ZnO:Al and ZnO:Al-N samples described in Table 1. For comparison, a diffractogram of bulk ZnO produced by sintering is also shown. All the diffractograms show peaks related to the (002) and (103) planes. With the incorporation of nitrogen into the matrix, the peak shows a tendency to shift to greater angles. This indicates a compensation effect in the distortion of the crystalline lattice when N:Al proportions are close to 2:1, since other defects like Al and N interstitials tend to increase the lattice parameters. This behavior is supported by the almost constant value of the lattice parameter a, except for the sample obtained at 1.00 mTorr, while the lattice parameter c increases from ZnO powder to the ZnO:Al film, decreasing again upon co-doping with N. On the other hand, the films present slightly greater crystallite sizes when the partial pressure of N 2 increases.

Electrical measurements

Table 3 shows the electrical resistivity, mobility, carrier density and conduction type for the ZnO:Al and ZnO:Al-N films. According to the Hall Effect, the ZnO:Al film presented an n-type electrical conductivity, as expected [31][32][33]. The films grown in a N 2 atmosphere, with a [N]/[Al] close to 2, showed a p-type behavior. There was, however, an increase by up to four orders of magnitude in the electrical resistance compared to the ZnO:Al film, owing mainly to the reduction in carrier density. Finally, the ZnO:Al-N film, grown in excess N 2 (partial pressure of 1.00 mTorr), demonstrated n-type behavior and a carrier density of 5.34 x 10 15 cm -3 . Figure 2 depicts the values of Table 3.
The concentration of Al Zn + is 3.59%. Consequently, the expected electron density is 1.49 x 10 21 cm -3 . The carrier density, however, measured by the Hall Effect is 9.72 x 10 19 cm -3 . These data imply that the ionization efficiency is only 6.53% (i.e. 6.53% of the Al atoms effectively act as donors of charge in n-type ZnO:Al). A priori, this result may be caused by the compensation mechanism or by the inactivation of Al caused by the formation of inert complexes, such as the homologous phase ZnO/Al 2 O 3 34 . Such formations are strongly dependent on the quantity of O available during deposition.
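The ionization-efficiency estimate quoted above can be reproduced with a few lines; the Zn-site density of wurtzite ZnO (~4.15 x 10^22 cm^-3) is a standard literature value assumed here, while the Al fraction and the Hall carrier density are the values reported in the text.

# Reproduces the ionization-efficiency arithmetic discussed above.
zn_site_density = 4.15e22          # cm^-3, assumed literature value for ZnO
al_fraction = 0.0359               # Al_Zn concentration (3.59 %)
n_expected = al_fraction * zn_site_density   # ~1.49e21 cm^-3
n_hall = 9.72e19                   # measured carrier density, cm^-3

efficiency = 100.0 * n_hall / n_expected     # ~6.5 %
print(f"expected n = {n_expected:.2e} cm^-3, ionization efficiency = {efficiency:.2f} %")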
For the ZnO:Al-N film with [N]/[Al] = 1.53, considering the N-Al-N complex as an electron acceptor, the hole density would be about 1.46 x 10 21 cm -3 . Supposing a compensation mechanism between substitutional N O and Al Zn , the hole density would be 1.21 x 10 21 cm -3 . However, the density measured using the Hall Effect (1.22 x 10 16 cm -3 ) is 5 orders of magnitude lower than expected. This difference is explicable if not all the Al and N in the ZnO matrix act as donors or acceptors, respectively, as proposed by Yamamoto 13 . Instead of forming acceptor-donor-acceptor complexes, the N O and Al Zn substitutional atoms bind between themselves as N O -Al Zn 35 . Another possible mechanism is the formation of secondary structures (such as Zn 2 N 3 ) 36 , which do not act as acceptors. No such structures, however, were apparent in the X-ray diffractograms. Figure 3a shows the optical transmittance and reflectance spectra of the samples studied here. Interference fringes and optical transparencies of around 80% in the visible range are observed. For wavelengths above 1500 nm, the spectrum of the ZnO:Al film shows an absorption associated with plasmon oscillations 37 , characteristic of materials with a high carrier density. The value measured for this film was 9.72 x 10 19 cm -3 . This absorption is not observed in the spectra of the films co-doped with N, confirming the low carrier densities (of about 10 16 cm -3 ) observed in the Hall Effect measurements. Figure 3b shows the absorption spectra of the ZnO:Al and ZnO:Al-N films calculated from the transmittance and reflection measurements. The optical absorption edge of the ZnO:Al film is at about 3.4 eV. This value is slightly above the value observed (3.3 eV) for undoped ZnO films and is due to filling of the bottom of the conduction band by charge carriers, known as the Burstein-Moss effect 38 . The presence of a step between 2.6 and 3 eV for the ZnO:Al-N samples indicates the creation of localized states within the bandgap for the samples containing N. Localized states in ZnO films are normally related to the presence of defects that present deep excitation levels, such as those of oxygen vacancies. In our case, the localized states may be related to the formation of N O -Al Zn complexes, which also reduce the carrier density as discussed previously. In Figure 3b, the step increases with the concentration of N incorporated into the ZnO:Al-N films. Figure 4 shows photoluminescence (PL) spectra in the 1.5 to 3.7 eV range of the ZnO:Al and ZnO:Al-N (1.00 mTorr) films. A spectrum of bulk ZnO, also included, exhibits a peak centered at 3.31 eV, related to transitions between the bottom of the conduction band and the top of the valence band (NBE). The spectra of the ZnO:Al and ZnO:Al-N films show a peak at 3.06 eV and another, wider peak superimposed in the 2 to 3 eV range. The first is related to the NBE transition and its shift to lower energies is caused by the influence on the energy levels of the atomic orbitals of the impurities Al Zn or N O in the ZnO matrix. The band between 2 and 3 eV derives from localized levels between the conduction and valence bands, which are V O or N O -Al Zn complexes in the ZnO:Al and ZnO:Al-N films, respectively. For the ZnO:Al-N film there is an extra peak centered at 1.75 eV. This transition may indicate the presence of a secondary phase such as Zn 2 N 3 , which is not detected by the X-ray analyses.
However, the formation of this phase could explain the reduction in the alignment of ZnO polycrystals in the [002] direction.
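For reference, absorption spectra such as those in Figure 3b can be estimated from transmittance and reflectance with the common single-pass approximation T ≈ (1 − R)^2 exp(−αd); this expression and the example values below are assumptions, since the paper does not state which relation was used.

import numpy as np

def absorption_coefficient(T, R, thickness_nm=270.0):
    # Single-pass approximation T ~ (1 - R)^2 * exp(-alpha * d); assumed
    # relation, valid only where absorption dominates (near/above the edge).
    d_cm = thickness_nm * 1e-7
    return np.log((1.0 - R) ** 2 / T) / d_cm   # alpha in cm^-1

# Example values near the absorption edge (illustrative, not measured data)
print(absorption_coefficient(T=0.05, R=0.15))  # on the order of 1e5 cm^-1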
Conclusions
Films of ZnO:Al-N were obtained using RF magnetron sputtering at different N 2 partial pressures. All of the films exhibited a wurtzite structure with a preferential orientation along the (002) plane. For a N 2 partial pressure of 1.00 mTorr the film presented a more random orientation of the crystals. From EDS measurements, it was concluded that the N atoms tend to be incorporated into the ZnO matrix by the substitution of oxygen atoms even at low N 2 pressures. The ideal value of [N]/[Al] was obtained at a N 2 partial pressure of 0.25 mTorr, producing a film with p-type conduction. For the partial pressure of 0.10 mTorr (equivalent to a [N]/[Al] of 1.53), the film also exhibited p-type conduction with an electrical resistivity of 31.92 Ω cm, a mobility of 18.65 cm 2 /V s and a carrier density of 1.22 x 10 16 cm -3 . PL measurements indicate the formation of defects such as N O -Al Zn complexes and Zn 2 N 3 , since the formation of nitrogen phases is energetically more favorable than the formation of acceptor-donor-acceptor complexes, even at the ideal [N]/[Al]. | 3,915.8 | 2020-01-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Differential arousal regulation by prokineticin 2 signaling in the nocturnal mouse and the diurnal monkey
The temporal organization of activity/rest or sleep/wake rhythms for mammals is regulated by the interaction of light/dark cycle and circadian clocks. The neural and molecular mechanisms that confine the active phase to either day or night period for the diurnal and the nocturnal mammals are unclear. Here we report that prokineticin 2, previously shown as a circadian clock output molecule, is expressed in the intrinsically photosensitive retinal ganglion cells, and the expression of prokineticin 2 in the intrinsically photosensitive retinal ganglion cells is oscillatory in a clock-dependent manner. We further show that the prokineticin 2 signaling is required for the activity and arousal suppression by light in the mouse. Between the nocturnal mouse and the diurnal monkey, a signaling receptor for prokineticin 2 is differentially expressed in the retinorecipient suprachiasmatic nucleus and the superior colliculus, brain projection targets of the intrinsically photosensitive retinal ganglion cells. Blockade with a selective antagonist reveals the respectively inhibitory and stimulatory effect of prokineticin 2 signaling on the arousal levels for the nocturnal mouse and the diurnal monkey. Thus, the mammalian diurnality or nocturnality is likely determined by the differential signaling of prokineticin 2 from the intrinsically photosensitive retinal ganglion cells onto their retinorecipient brain targets.
Introduction
The temporal organization of activity/rest or sleep/wake rhythms for mammals is regulated by the interaction of the light/dark cycle and circadian clocks. Several lines of evidence indicate that the known master circadian clock, the suprachiasmatic nucleus (SCN), operates quite similarly in nocturnal and diurnal mammals. The oscillations of clockwork genes, such as Bmal1, Per1, and Per2, are in the same phase in the SCN, regardless of whether the mammals are diurnal or nocturnal [1,2]. The firing rate and the glucose utilization of SCN neurons are also in the same phase for both the nocturnal and the diurnal mammals [3,4]. The same phase oscillation has also been shown for the two SCN output molecules, vasopressin and prokineticin 2 (PK2) [5][6][7]. Therefore, the divergent mechanisms that confine the active phase to either the day or the night period for the diurnal and the nocturnal mammals have been postulated to lie downstream of the SCN clock [4,8,9]. However, no such divergent mechanism has been identified.
Besides modulating the activity/rest or the sleep/wake rhythms indirectly via its ability to phase shift and entrain the SCN circadian clock to the ambient light-dark cycle, light also exerts a direct effect on the activity or arousal levels. In the nocturnal animals, light strongly suppresses activity and induces sleep (photosomnolence) [10,11]. In the diurnal mammals, such as monkeys and humans, light produces the opposite effects of inducing arousal or increasing the activity levels [12][13][14]. For the nocturnal animals, the direct light effect of activity suppression is commonly referred to as masking [10,11,15]. Light-induced activity suppression and circadian clock entrainment appear to utilize identical photic input pathways from the retina. Both classical (rod/cone) photoreceptors and intrinsically photosensitive retinal ganglion cells (ipRGC), the retinal ganglion cells that express melanopsin (OPN4), participate in the masking and the circadian clock entrainment, as well as other non-image-forming visual responses such as the pupillary reflex [11,[16][17][18][19]. Masking is attenuated in Opn4-deficient mice [18], and is essentially abolished in the mice deficient in both Opn4 and rod photoreceptors [20]. Masking and circadian clock entrainment to the light/dark cycle are completely abolished in mice in which the ipRGC have been eliminated by genetic or chemical ablation [21][22][23][24][25]. Thus, the ipRGC are the only channels that relay the photic information to the brain for the masking and the circadian clock entrainment. The neural pathway mediating the light masking downstream of the ipRGC is thought to act through the retinohypothalamic tract (RHT) projection to the SCN [9,11], the same neural pathway that mediates the phase shifting and the entrainment of the SCN clock. The light masking was abolished with complete transection of the RHT [26]. SCN lesion together with the loss of the RHT eliminates the circadian rhythmicity as well as the light masking [27]. Transplantation of embryonic SCN to arrhythmic adults restores some locomotor rhythmicity without restoring the masking effect [28]. Thus, the ipRGC-SCN neural pathway appears to be critical for the light masking in the nocturnal animals.
In the current study, we show the oscillatory expression of PK2, previously demonstrated as a critical SCN output signal, in the ipRGC in a clock-dependent manner. We further show that PK2 signaling is required for the sustained light-induced activity suppression and sleep induction in mice. Blockade with a PK2 antagonist demonstrated the opposite effects of the PK2 signaling on the arousal levels in the nocturnal mouse and the diurnal monkey. Together with the observed differential expression of a PK2 signaling receptor in the retinorecipient brain targets of the ipRGC between the nocturnal mouse and the diurnal monkey, these findings indicate that the mammalian diurnal/nocturnal determination lies in the ipRGC-brain pathways, upstream of the SCN clock.
Results
PK2 signaling is required for the sustained light-induced suppression of the locomotor activity and the arousal in mice

Mice deficient in genes of either PK2 (PK2−/−) or its receptor (PKR2) had reduced circadian rhythms of locomotor activity under constant darkness condition [29,30]. We observed that the PK2−/− mice also displayed increased daytime locomotor activity under the light/dark (LD) cycle (data not shown), which is consistent with the prior observation of increased wakefulness in the PK2−/− mice during the light period [31]. These observations suggested that the light suppression effect is abnormal in the absence of PK2 signaling. We thus examined the suppression effect of light pulses on the locomotor activities and the arousal levels in the PK2−/− mice. When a light pulse (150 lux) was administered to the wild type mice during the middle of the dark period (ZT16-ZT18.5), the locomotor activities were significantly suppressed (Fig. 1a). In contrast, only marginal suppression of the locomotor activities by light pulses was observed for the PK2−/− mice (Fig. 1a and Insert). As expected, EEG/EMG recording revealed that light pulses suppressed wakefulness in the wild type mice (Fig. 1b and Insert). For the PK2−/− mice, the light suppression on the wakefulness was only significant for the first 30 min (Fig. 1b). Thus, light could still suppress the wakefulness in the PK2−/− mice, but the suppression effect was not maintained.
We further investigated the light suppression effect on the locomotor activities and the arousal levels with dim light. Just before the ending of the regular ~150 lux illumination condition at ZT12, the light intensity was dimmed to ~30 lux during the four-hour period corresponding to ZT12-ZT16. This reduced illumination continuously suppressed the locomotor activities of the wild type mice (Fig. 2a). In contrast, the PK2−/− mice displayed quite robust locomotor activities in response to the light intensity reduction, achieving activity levels quite comparable to those under darkness (Fig. 2a). This observation indicated that, at the high homeostatic drive for the activities during the late day, dim light was no longer able to markedly suppress the locomotor activities in the PK2−/− mice. EEG/EMG recording confirmed the corresponding arousal levels affected by this dim light treatment. For the wild type mice, the time staying awake during these four hours of dim light was much lower than under darkness, revealing a strong sleep induction effect of the dim light (Fig. 2b). For the PK2−/− mice, the wakefulness time under dim light and under darkness were quite similar, indicating only marginal arousal inhibition by the dim light in the absence of PK2 signaling (Fig. 2b). Together, these results indicate that PK2 signaling is required for the sustained light-induced suppression of the locomotor activity and the arousal in mice.
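As an illustration of how the suppression effect and its significance can be quantified, the sketch below computes per-animal percent suppression and a paired t-test (the test used throughout the figure legends) on synthetic activity counts; the numbers are invented for demonstration and are not the study's data.

import numpy as np
from scipy import stats

# Illustrative (synthetic) per-mouse activity counts under darkness versus
# during a light treatment; one value per animal, paired by mouse.
dark  = np.array([420, 380, 455, 401, 390, 440])
light = np.array([110,  95, 150, 120, 101, 133])

suppression = 100.0 * (dark - light) / dark      # percent suppression per mouse
t_stat, p_value = stats.ttest_rel(dark, light)   # paired t-test across animals

print(f"mean suppression = {suppression.mean():.1f} %, p = {p_value:.4f}")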
Clock-dependent oscillatory expression of PK2 in the intrinsically photosensitive retinal ganglion cells (ipRGC)
The diminished suppression effect of light on the arousal levels and the locomotor activities in the PK2−/− mice implicates that PK2 signaling is likely involved in ipRGC-brain neural pathways, as the ipRGC have been shown as the only photic channels for the central transmission of the non-visional functions of light, including the locomotor activity suppression and sleep induction [22-25, 32, 33]. We thus examined the likely expression of PK2 in the ipRGC. As shown in Figs. 3a and 4, PK2 is quite robustly expressed in some retinal ganglion cells. Further co-immunostaining studies indicated that all OPN4-positive retinal ganglion cells express PK2 (28/28 cells, Fig. 3). The ~100% coexpression of PK2 and OPN4 in the ipRGC indicated that PK2 of the ipRGC projects to the SCN and other non-visional light functional areas of the brain, such as the superior colliculus (SC) [34][35][36].
The PK2 expression in the ipRGC was shown to oscillate in a Bmal1-dependent manner under light and dark conditions (Fig. 5). In wild type mice, the peak and trough PK2 levels in the ipRGC were around ZT4 and ZT20, respectively. In contrast, PK2 levels in the ipRGC of Bmal1-deficient mice were constantly low, and no apparent PK2 oscillation was observed. Further, the oscillatory phase of PK2 levels in the ipRGC, including the peak and trough timing at ~ZT4 and ~ZT20, respectively, is quite similar to the PK2 oscillation observed in the SCN clock [6], indicating the regulation by the same molecular oscillators. Consistent with the essential role of PK2 for the activity suppression by light, the PK2 expression levels in the ipRGC correlated with the extent of the light suppression effect (Fig. 6). About 75% of locomotor activity was suppressed by light pulses (300 lux) delivered at ZT14-16.5, with the suppression effect of light pulses decreasing to about 33% when delivered at ZT19-ZT21.5, during the trough PK2 expression period in the ipRGC. Taken together, these results indicate that the oscillatory expression of PK2 in the ipRGC is a clock-dependent process.

Fig. 1 The effects of light pulses on the locomotor activities and the arousal levels. The white bar shows the light pulses (150 lux) administered to WT (N = 6) and PK2−/− mice (N = 7) during ZT16-ZT18.5 (2.5 h). The analysis bin sizes were 30 min, with values being means ± sem. a. Light pulses significantly inhibited the locomotor activity in the WT mice (P < 0.01, Two-way ANOVA, *, P < 0.05, **, P < 0.01 by Bonferroni's post hoc test). The effect of light pulses on the locomotor activity of the PK2−/− mice was not significant (P > 0.05, Two-way ANOVA). The insert shows the locomotor activity in the entire 2.5 h (*, P < 0.05, paired t-test). b. Light pulses significantly decreased the wake time in the WT mice (P < 0.0001, Two-way ANOVA, *, P < 0.05, **, P < 0.01, ***, P < 0.001 by Bonferroni's post hoc test). The inhibitory effect of light pulses on the arousal levels of the PK2−/− mice was only significant for the first 30 min (P < 0.05, Two-way ANOVA, **, P < 0.01, by Bonferroni's post hoc test). The insert shows the wake minutes of the entire 2.5 h (***, P < 0.001, paired t-test). The inhibitory effect of light pulse on the arousal levels in the entire 2.5 h was not significant for the PK2−/− mice (P > 0.05, paired t-test).
Fig. 2 The effects of dim light on the locomotor activities and the arousal levels. The grey bar shows the four hours of dim light (30 lux) that was administered during ZT12-ZT16 to the WT (N = 6) and the PK2−/− mice (N = 7). a. Compared to darkness, dim light significantly decreased the locomotor activity in the WT mice (P < 0.0001, Two-way ANOVA, *, P < 0.05, **, P < 0.01, ***, P < 0.001 by Bonferroni's post hoc test). The inhibitory effect of dim light on the locomotor activity was not significant for the PK2−/− mice (P > 0.05, Two-way ANOVA). The inserts show the locomotor activities in the entire four hours (**, P < 0.01, paired t-test). b. Compared to darkness, dim light significantly decreased the arousal levels in the WT mice (P < 0.0001, Two-way ANOVA, *, P < 0.05, **, P < 0.01, ***, P < 0.001, by Bonferroni's post hoc test). The inhibitory effect of dim light on the arousal levels of the PK2−/− mice was only significant for the first 30 min (P < 0.05, Two-way ANOVA, **, P < 0.01, by Bonferroni's post hoc test).

Fig. 5 The immunofluorescence intensity of PK2 was quantified and shown as mean ± SEM. The fluorescence intensity of PK2 levels in the ipRGC were oscillatory in wild type mice (a), but not in Bmal-deficient mice (b) (two-way ANOVA, P < 0.001 genotype effect, P < 0.05 time effect). Peak and trough levels in the ipRGC of wild type mice were around ZT4 and ZT20, respectively. The PK2 levels in the ipRGC of Bmal1-deficient mice (BMAL KO) were consistently low, and displayed no apparent oscillation (b). Inserts above the columns show representative images of PK2 immunostaining of the ipRGC (green). The nuclear counterstaining is shown as blue. Bar size 10 μm.

Differential expression of PK2 receptor in the brain targets of the intrinsically photosensitive retinal ganglion cells between the nocturnal mouse and the diurnal monkey

As with the nocturnal mouse, the PK2 expression was also detected in the retinal ganglion cells of the diurnal monkey (Fig. 7a). Also identical to mice, PK2 is coexpressed with OPN4 in the ipRGC of monkeys (Fig. 7b/7d). Differential expression of PKR2, the brain PK2 receptor, in the SCN compartments was observed for the nocturnal mouse and the diurnal monkey. In the mouse brain, PKR2 is expressed in the entire SCN, covering both the ventral and dorsal compartments of the SCN (Fig. 7b) [6,37]. It has been shown that the ventral SCN is retinorecipient, i.e., receiving the retinal inputs for the light masking and circadian clock entrainment [9,38]. The expression of PKR2 in the retinorecipient SCN in the mouse indicates that the mouse ventral SCN likely responds to the PK2 signal from the ipRGC. Our previous electrophysiological studies have shown that PK2 increases the electric activities of neurons that express PKR2 [39,40]. In the monkey brain, PKR2 is only expressed in the dorsal SCN, but is not detected in the ventral SCN (Fig. 8a). As with the nocturnal animals, the ventral SCN has been shown as the retinorecipient compartment of the SCN clock [38,41]. The absence of the PKR2 expression in the ventral SCN indicates an inability of this branch of the monkey SCN to respond to the PK2 signal from the ipRGC. Importantly, distinct expression of PKR2 between the mouse and monkey brains was also observed in the superior colliculus (SC), another critical non-vision brain target of the ipRGC [34][35][36]. As shown in Fig. 8c, PKR2 was robustly expressed in the superficial layer of the SC in the monkey brain.
The superficial layer of the SC is known to receive inputs from the retinal ganglion cells, including the ipRGC [34][35][36]. In contrast, PKR2 expression was not detected in the SC of the mouse brain (Fig. 8d), indicating the absence of PK2 signaling via the ipRGC-SC in the mouse.
Opposite effects of PK2 blockade on the arousal levels in the nocturnal mouse and the diurnal monkey

We next examined the effect of a synthetic PK2 antagonist, PKR#7, on the arousal levels in the mice and the monkeys. As shown in Fig. 9a and b, administration of PKR#7 significantly increased the locomotor activity and the arousal levels in the mice [31]. Thus, the PK2 signal is overall inhibitory for the arousal levels of the nocturnal mice. In contrast, administration of the PK2 antagonist in the monkeys resulted in a significant increase (>70 min) of the sleep time (Fig. 9c), indicating that the PK2 signal is stimulatory for the arousal level in the diurnal monkeys.
Discussions
In the current study, we have shown that PK2 is expressed in the centrally projecting ipRGC. As the ipRGC are the only photic channels for the central non-vision functions of light, and as shown by the antagonist blockade, the PK2 signaling of the ipRGC is inhibitory for the arousal levels of the nocturnal mouse and stimulatory for those of the diurnal monkey. Between the nocturnal mouse and the diurnal monkey, the PK2 receptor is differentially expressed in the retinorecipient SCN and SC, brain projection targets of the ipRGC. Taken together, these results indicate that a likely mechanism of the nocturnal and diurnal divergence lies in the differential PK2 signaling of the ipRGC onto their brain targets (Fig. 10). The PK2 signaling of the ipRGC likely regulates the arousal levels via impinging on the brain targets of the ipRGC, particularly the SCN and the SC. The differential expression of the PK2 receptor (PKR2) in the retinorecipient compartment of the SCN and the retinorecipient superficial layer of the SC may then underlie the opposite effects of light on the arousal levels between the nocturnal mouse and the diurnal monkey. In the mouse brain, PKR2 is robustly expressed in the retinorecipient SCN, but absent in the SC, and the PK2 signaling of the ipRGC thus funnels through the arousal-inhibitory ipRGC-SCN pathway in the nocturnal mouse. In contrast, PKR2 is not expressed in the retinorecipient ventral SCN of the monkey brain, but strongly expressed in the retinorecipient superficial layer of the SC, and thus the ipRGC-SC pathway dominates in the diurnal monkey.
The PK2 signaling of the ipRGC-SC pathway is likely to be stimulatory for the arousal levels. The SC has previously been indicated as a critical nucleus for light-induced arousal and other higher brain functions, such as attention, that are closely tied with increased arousal [42][43][44][45]. In the monkeys, bilateral lesions of the SC have been found to drastically affect the arousal levels, including the response to light [42]. Lesion studies in rats have revealed that the SC is required for EEG desynchronization (arousal) in response to light flashes [43]. This rat lesion study suggests that the ipRGC-SC pathway may be stimulatory for the arousal levels of the nocturnal animals, at least briefly in response to light flashes. Over a longer duration, light is inhibitory for the arousal levels of the nocturnal animals as the inhibitory ipRGC-SCN pathway dominates. In diurnal mammals such as monkeys, the SC may mediate the light-driven arousal via the ascending projections to cortices that are routed through the lateral posterior/Pulvinar complex of the thalamus [46,47]. The lateral posterior/Pulvinar complex of the thalamus is known to play a critical role in higher functions such as attention [48]. In this regard, wakefulness may be viewed as low-level attention. It is well known that, compared to the nocturnal animals, the overall size of the SC, the lateral posterior/Pulvinar complex of the thalamus, and the associated cortices are all significantly enlarged and expanded in the diurnal mammalian species, such as the primates [49,50]. As the mammalian species are believed to have started out nocturnal [51], diurnality of the mammals may have evolved via the expansion of the arousal-stimulatory ipRGC-SC pathway and the simultaneous diminishment of the arousal-inhibitory ipRGC-SCN pathway. Our model (Fig. 10) indicates that the nocturnal/diurnal determination of arousal levels lies upstream of the SCN circadian clock, and divergent signaling downstream of the SCN clock may not be necessary. The melatonin rhythms, known to lie downstream of the SCN clock, will operate in the same phase as the SCN clock, and thus no difference will be exhibited between the nocturnal and diurnal mammals. Our model also argues against the previously presumed central role of the SCN as the master clock for diurnal mammals, such as monkeys, at least for the regulation of arousal levels under light and dark conditions. Although the supporting evidence for the SCN as the master circadian clock is overwhelmingly strong for the nocturnal mammals, such a claim has actually limited supporting evidence in the case of the diurnal animals [8]. The well-cited finding of increased sleep after SCN lesion in squirrel monkeys [52], interpreted as evidence that the SCN is arousal-promoting in the diurnal animals, could be due to concurrent lesions of the retinohypothalamic tract, which would damage projections to both the SCN and the SC. Under light and dark conditions, it is likely that the ipRGC play central roles in the regulation of arousal levels regardless of whether the mammals are diurnal or nocturnal. In nocturnal mammals, the ipRGC and SCN act in sequential neural projections (and with a similar oscillatory phase) to regulate arousal levels, and thus nocturnal activity patterns are displayed. In diurnal mammals, the arousal-stimulatory ipRGC-SC projections overcome the diminished arousal-inhibitory ipRGC-SCN projections, and thus diurnal activity patterns are exhibited.
Fig. 10 Diagram showing the differential arousal regulation by the PK2 signaling of the ipRGC onto brain targets. Clock-controlled PK2 is expressed in the ipRGC, the only photic channels that transmit the central non-vision functions of light. Overall, the PK2 signaling is stimulatory for the diurnal monkey and inhibitory for the nocturnal mouse, as shown by the antagonist blockade and PK2 deficiency. The differential expression of PKR2 in the retinorecipient ventral SCN and the superficial layer of the SC indicates that the PK2 signaling of the ipRGC dominantly funnels through ipRGC-SCN and ipRGC-SC for the mouse and the monkey, respectively. For the nocturnal animals, the arousal stimulation via the ipRGC-SC pathway by light is minor (transient, not sustained), consistent with the absence of PKR2 in the SC of the mouse brain. The PK2 signaling of the ipRGC-SCN pathway is clearly inhibitory for the nocturnal mouse, although it is unclear whether it is inhibitory or stimulatory for the diurnal animals. The SC may mediate the light-driven arousal via the ascending projections to cortices that are routed through the lateral posterior/Pulvinar complex of the thalamus. Alternatively, the SC may promote arousal via the descending projections to the mesencephalic reticular formation, an important component of the ascending activation system (ref [42][43][44][45]). Our model indicates that the mammalian diurnal/nocturnal determination is mediated by the differential signaling of the ipRGC onto their brain targets, and thus divergent signaling mechanisms downstream of the SCN may not be necessary. In diurnal animals, upstream clocks such as the ones in the ipRGC likely play more dominant roles than the SCN, at least for arousal regulation under light and dark conditions.
Methods
Animals
PK2−/− mice and their littermate wild type controls on a mixed genetic background were generated as described [29,31]. Bmal1−/− mice were produced by crossing heterozygous mice that were procured from the Jackson Laboratory. Mice were fed ad libitum and housed on a regular light/dark cycle, with lights (~150 lux white light) on at 7:00 a.m. (Zeitgeber Time ZT0, light period ZT0-ZT12) and lights off at 7:00 p.m. (ZT12, dark period ZT12-ZT0). All animal procedures were approved by the appropriate institutional animal use committee.
Measurement and analysis of the locomotor activity in mice
Monitoring of the locomotor activity was carried out as described [29]. Briefly, mice were individually housed in cages equipped with infrared beams for the monitoring of the locomotor activity (AccuScan Instrument Inc., Columbus, OH). Mice were housed on a regular 12 h light (~150 lux white light): 12 h dark cycle. The locomotor activities were recorded as counts per 10-min interval and were analyzed in 30 or 60 min bins. Light pulses or dim light at the indicated intensities were administered.
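As a concrete illustration of the binning step described above, the snippet below re-bins hypothetical 10-min beam-break counts into 30- and 60-min bins with pandas; the values, timestamps, and column name are invented for the example and are not data from the study.

```python
# Hypothetical example: re-binning 10-min locomotor counts into 30- or 60-min bins.
import pandas as pd

# toy beam-break counts recorded every 10 minutes for one mouse, starting at lights-off
counts = pd.Series(
    [12, 8, 15, 20, 5, 9, 30, 22, 18, 11, 7, 25],
    index=pd.date_range("2016-01-01 19:00", periods=12, freq="10min"),
    name="beam_breaks",
)

# sum the 10-min counts into 30-min and 60-min bins
bins_30 = counts.resample("30min").sum()
bins_60 = counts.resample("60min").sum()
print(bins_30)
print(bins_60)
```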
Measurement and analysis of the arousal level in mice
Electrodes for recording the electroencephalographic (EEG) and electromyogram (EMG) signals were implanted as described [29,31]. The mice were connected to a tether/commutator swivel system (Plastics One, Roanoke, VA) for the collection of the EEG/EMG signals.
The EEG/EMG signals were amplified using a Grass Model 78 (Grass Instruments, West Warwick, RI) and filtered (EEG: 0.3-100 Hz, EMG: 30-300 Hz) before being digitized at a sampling rate of 128 Hz and stored on a computer. After the sleep data were collected, the EEG/EMG records were scored with the SleepSign sleep scoring software (Kissei Comtec America, Irvine, CA) as described [31]. Mice were housed on a regular 12 h light/12 h dark cycle. Light pulses or dim light at the indicated intensities were administered.
In situ hybridization
Procedures for in situ hybridization were carried out as described previously [6,7]. Tissue sections were cut at −20°C, fixed with 4 % paraformaldehyde, washed three times with 0.1 M phosphate buffer, air-dried, and stored at −20°C until use. For in situ hybridization, sections were dried at room temperature, followed by pretreatment with proteinase K (1 μg/ml). Sections were then air-dried and hybridized with 35S-labelled riboprobes by incubation at 60°C for 18 h. After hybridization, tissue sections were treated with RNase (20 μg/ml) (Sigma-Aldrich, St. Louis, MO), followed by washes of decreasing salinity and a high-stringency (68°C) wash. After dehydration and air-drying, tissue sections were exposed to Kodak Biomax film. Images were captured with an image analysis system (MCID, Imaging Research, Ontario, Canada).
Immunohistochemistry
Immunohistochemistry was performed according to previous publications [53,54]. Retinal sections were mounted onto coated glass slides. Sections were rehydrated in PBS for 20 min and then immersed in a blocking buffer containing 2 % BSA, 0.5 % Tween-20 and 0.05 % Triton-X 100 for 1 h. Primary antibody for PK2 (hamster monoclonal, 1:200, Roche Inc.) or OPN4 (affinity-purified rabbit polyclonal, 1:200, Millipore Inc.) was added to the sections overnight at 4°C. Slides were washed with PBS containing 0.5 % Tween-20 five times for 5 min each. Anti-rabbit or anti-hamster secondary antibodies (Alexa Fluor 488 or 555, 1:2000; Invitrogen Inc.) were then applied, followed by incubation with 10 μg/ml Hoechst 33342 (Invitrogen Inc.) for 5 min at room temperature to stain the nuclei. Sections were viewed under a Nikon inverted fluorescence microscope (Model TE-2000U; Nikon Inc., Tokyo, Japan). Images were captured with a SPOT digital camera (Diagnostic Instruments, Inc., Sterling Heights, MI). Immunofluorescence intensity was quantified with ImageJ. For DAB (3,3′-diaminobenzidine) immunostaining, sections were incubated with anti-PK2 antibody (hamster monoclonal, 1:500 dilution), followed by incubation with a biotinylated anti-hamster secondary antibody. Color development of the DAB immunostaining was carried out with the standard ABC method [52].
Pharmacological experiments of examining the effect of a PK2 antagonist on the activity or arousal levels in the mice and the monkeys A PK2 antagonist (PKR#7) was prepared similarly as described [55]. PKR#7 (40 mg/kg) was administered to the mice intraperitoneally at ZT6. PKR#7 (10 mg/kg) was administered to the monkeys intramuscularly at ZT10. For the pharmacological experiments, animals were treated with either the vehicle or antagonist and then crossed over with the opposite treatments 1 week later to form paired controls.
Sleep and activity data of the PK2 antagonist- or control-treated mice were acquired and analyzed as described for the PK2−/− mice. For the sleep studies of the monkeys, young adult monkeys (Macaca fascicularis) were housed under a standard light (white light, ~250 lux) and dark cycle. The measurement and analysis of the arousal levels in the monkeys were carried out as follows. A wearable wireless sleep tracker, similar to those described previously for human subjects [56][57][58][59] and for nonhuman primates [60], was used. This wireless system enabled remote monitoring of the sleep/wake status of the monkeys in an ambulatory setting over long periods with minimal disturbance to the monkeys. The sleep data obtained from the wireless sleep tracker were verified with concurrent recording by an infrared video camera. The sleep data from the sleep trackers were retrieved daily with mobile phones that were placed about ten meters away from the animal cages, without physical contact with the monkeys. Previous studies have shown excellent agreement between sleep data obtained by the sleep tracker, the video camera, and classical sleep/wake data obtained by the EEG/EMG method [56,59,61].
Statistics
To reduce the impact of data variations due to ultradian rhythms, the measurements of mouse locomotor activity and EEG/EMG were performed twice, separated by 3 or 4 days, and the average values of these two measurements were used in the statistical analyses. Statistical analyses were performed with one- or two-way ANOVA using GraphPad Prism Software Version 5.0 (San Diego, CA), followed by appropriate post hoc tests. | 6,402.2 | 2016-08-18T00:00:00.000 | [
"Biology",
"Medicine"
] |
Magnification and evolution bias of transient sources: GWs and SNIa
Third-generation gravitational wave (GW) observatories such as the Einstein Telescope and Cosmic Explorer, together with the LSST survey at the Vera Rubin Observatory, will yield an abundance of extra-galactic transient objects. This opens the exciting possibility of using GW sources and Supernovae Type Ia (SNIa) as luminosity distance tracers of large-scale structure for the first time. The large volumes accessible to these surveys imply that we may need to include relativistic corrections, such as lensing and Doppler magnification. However, the amplitude of these effects depends on the magnification and evolution biases of the transient sources, which are not yet understood. In this paper we develop comprehensive frameworks to address and model these biases for both populations of transient objects; in particular, we define how to compute these biases for GW sources. We then analyse the impact of magnification and evolution biases on the relativistic corrections and on the angular power spectrum of these sources. We show that correct modelling and implementation of these biases is crucial for measuring the cross-correlations of transient sources at higher redshifts.
Along with the sheer number of objects seen, this new generation of observatories will push the horizon of detections further: whilst LSST will observe SNIa events at z < 4 [21,22,28], ET and CE are predicted to access virtually all binary black hole (BBH) mergers up to z ∼ 10 [28, [30][31][32]35]. Greatly increasing the source distances also increases the relevance of relativistic correction effects. This is due both to the impact of lensing over the larger redshift range covered and to tracers evolving with cosmic time, and thus not being distributed equally across different redshift bins [18,36].
These future prospects indicate that the kinds of clustering analyses carried out with galaxies and IM will soon be applicable to transient events such as SNIa and GWs too. A key subtlety is that the former are tracers living in redshift space, whilst the latter carry distance information only in the form of a luminosity distance. In a previous work [37] we explored the difference between clustering analysis in redshift space and in luminosity distance space for a generic tracer. We found that the two can be significantly different, leading to large discrepancies in the corresponding angular power spectrum, up to 50% at large scales. Therefore, any analysis utilising tracers such as GWs or SNIa should be carried out in luminosity distance.
Primarily, one is required to build an expression for the observed number counts in luminosity distance space, which not only traces the underlying density of matter, but also includes distortion effects along our past light-cone. One can write the generic expression for the density contrast in luminosity distance space as [37,38]: where γ ≡ rH/(1 + rH), with comoving distance r and comoving Hubble parameter H, δ_n is the density contrast in the Newtonian gauge, D_L the luminosity distance, and V the volume.
The distortion is then found to depend on two extra functions: the evolution bias [39], b_e ≡ ∂ ln n / ∂ ln a |_{d_th}, where d_th is the detector's observation threshold (taken as the luminosity cut L_c in the standard galaxy case and the signal-to-noise ratio threshold ρ_th for GWs), and the magnification bias s, defined as the change in the comoving number density n at fixed redshift/distance with respect to the luminosity cut. Instead, b_e is the change in the comoving number density with respect to the scale factor, while keeping the detector's threshold fixed [39]. Physically, the magnification bias accounts for objects being magnified in or out of the detector's flux limit as a result of a perturbation, and the evolution bias describes the impact on clustering of a (possibly) non-conserved comoving number density through cosmic time. A null evolution bias would imply a constant observed number density through redshift, whilst a non-zero one corrects each redshift bin for the effect of an evolving population. In other words, it traces how the observed number of objects per unit volume changes as the universe expands. We define the magnification bias for GWs and SNIa differently in order to preserve the general expression in eq. (1.1): the former in terms of a signal-to-noise ratio (SNR) threshold and the latter in terms of a magnitude cut. In the following sections we will provide the expression for each tracer, as well as specify the magnification bias for different surveys and thresholds.
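To make the two definitions concrete, the sketch below estimates d ln n/d ln a (the evolution bias) and the logarithmic response of n to the detection threshold (which the magnification bias then rescales) by central finite differences. The toy number density n_comoving and the absence of any survey-specific normalisation factor are assumptions for illustration only.

```python
# Minimal sketch: finite-difference estimates of the evolution bias and of the
# threshold response from a user-supplied comoving number density n(a, d_th).
import numpy as np

def n_comoving(a, d_th):
    """Toy comoving number density: grows with scale factor, falls with threshold."""
    return 1e-3 * a**2.0 * np.exp(-d_th / 10.0)

def evolution_bias(a, d_th, eps=1e-4):
    """b_e = d ln n / d ln a at fixed detection threshold."""
    up, dn = n_comoving(a * (1 + eps), d_th), n_comoving(a * (1 - eps), d_th)
    return (np.log(up) - np.log(dn)) / (2 * eps)

def threshold_slope(a, d_th, eps=1e-4):
    """d ln n / d ln d_th at fixed scale factor; the paper's s rescales this quantity."""
    up, dn = n_comoving(a, d_th * (1 + eps)), n_comoving(a, d_th * (1 - eps))
    return (np.log(up) - np.log(dn)) / (2 * eps)

print(evolution_bias(0.5, 8.0))   # ~2.0 for the toy model
print(threshold_slope(0.5, 8.0))  # ~-0.8 for the toy model
```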
Here, we propose a theoretical framework to model both GWs and SNIa, thus aiming at consistently translating terminology and modelling previously used in galaxy clustering into equivalent expressions applicable to these new transient tracers. One should note that some of these biases, at least for GWs, were recently explored for the first time in [24][25][26]40], particularly in the context of cross-correlations, as these have become increasingly exciting with the prospect of future detectors.
The paper is structured as follows. In section 2 we describe the modelling of the biases for GWs, justifying our theoretical formalism and producing the required event rate and chirp mass distributions. Further, we model the biases for SNIa in section 3. Finally, we apply these biases to the kernels of the number count fluctuation and the angular power spectrum in section 4, and study their impact. Section 5 is then devoted to a summary and conclusions.
GW Biases
Here we illustrate the modelling of the magnification and evolution biases for GWs. We first explain the reasoning behind choosing the signal-to-noise ratio instead of luminosity as the discriminant for the magnification bias; we then describe the modelling of both the event rate and the chirp mass distribution, which we derive using the primary mass distribution from [41][42][43]. An interested reader can find the distributions of primary and secondary masses in Appendix A, and the derivation of the chirp mass probability distribution function in Appendix B. We then evaluate the GW biases for the different mass models used.
For galaxy surveys the magnification bias is generally defined from [39] as: where L_c is the luminosity threshold of our survey at each redshift and n_g(a, L_c) is the comoving number density of sources above the threshold. The latter is defined by integrating the (comoving) luminosity function Φ over luminosity: where L_* is a characteristic luminosity in the luminosity function. The number of sources that are above the corresponding luminosity cut is the same as the number of sources that are observed. This implies that, regardless of the choice of luminosity cut, n_g can be zero at a certain redshift, as there will be no sources emitting at a sufficiently high luminosity. Therefore, the value of the magnification bias is expected to increase with redshift up until its validity limit, i.e. where sources are not detectable anymore, meaning that observing fewer and fewer sources leads to a (positive) larger value of s.
We can now transport this concept from the galaxy case to the GW scenario. However, for GWs we cannot produce a detector-independent quantity analogous to EM luminosity, as the quality of a detection (the SNR, ρ) is inherently tied to the one-sided power spectral density of the detector, i.e. its sensitivity, PSD(f) [44]: In galaxy surveys the samples are also telescope dependent, as different telescopes target different parts of the spectrum or optimize for different types of galaxies. As an example, Euclid will target Hα-emitting galaxies, while DESI will target a broader range of emission-line galaxies. The response function of a GW detector is equivalent to the instrumental design of optical and near-infrared telescopes. Using the characteristic strain h(f) of a GW event (which depends on the chirp mass M and redshift) is not possible either, as it still has to be compared to the detector's sensitivity curve, and their ratio integrated to determine whether the event is detected and at which level of confidence. The two scenarios would be similar if we assumed the sensitivity curve of a GW detector to be flat across all frequencies, and the GW signal to be a single-frequency burst. All signals would then see the same response from the detector, as they would not move across frequency space. We therefore define the magnification bias for GW merger events in terms of the SNR as: The factor of 1/5 comes from implicitly fixing the expression in eq. (1.1) and setting s accordingly; this was done to keep eq. (1.1) general, particularly for coding purposes. We therefore need to model the number density of observed events as a function of the detector signal-to-noise ratio ρ. Eq. (2.4) defines the magnification bias for any GW source. However, for the purpose of this paper we will only focus on GWs from binary black hole mergers. The same method can be applied to binary neutron stars and neutron star-black hole pairs, if the appropriate number density is provided.
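For orientation, the following sketch evaluates an optimal matched-filter SNR of the standard form ρ² = 4 ∫ |h̃(f)|² / PSD(f) df, which is the quantity that the (not reproduced) SNR expression above evaluates; the toy strain amplitude and analytic PSD below are placeholders, not a real detector model.

```python
# Sketch of an optimal (matched-filter) SNR integral with toy inputs.
import numpy as np

f = np.linspace(10.0, 1000.0, 5000)                           # Hz
psd = 1e-47 * (1 + (50.0 / f) ** 4 + (f / 300.0) ** 2)        # toy one-sided PSD [1/Hz]
h_f = 1e-24 * (f / 100.0) ** (-7.0 / 6.0)                     # toy 0PN-like amplitude scaling

rho_opt = np.sqrt(4.0 * np.trapz(np.abs(h_f) ** 2 / psd, f))
print(f"optimal SNR ~ {rho_opt:.1f}")
```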
Event rate for GWs
The number density of observed sources is the number above a certain SNR threshold, which is usually assumed to be ρ_th = 8 for a single detector. In the case of multiple detectors, the individual SNRs are added in quadrature.
From [45], we can model the number density of BBH mergers in comoving volume as: where τ is the observation time of the detector, R_GW(z) is the intrinsic merger rate, and ϕ(M) is the chirp mass (M) distribution. Selection effects of the signal-to-noise threshold are included through the implementation of a survival function S(ρ_th; M, z), which is defined as the fraction of sources that are above the SNR threshold, ρ_th, in any given mass and redshift bin. We note that for physical experiments, the sensitivity curve should be defined on the detected SNR, rather than the optimal one. The former has a non-central χ² distribution [46] which depends on the optimal SNR, which is itself a function of the PSD. Although our simplified assumption is not expected to qualitatively change the result of this study, a rigorous analysis with real data would have to take this into account for a more realistic experiment. To evaluate S(ρ_th; M, z), we start from an expression for the SNR ρ of a compact binary merger [44,45]: where θ is the orientation of the binary with respect to the detector, and ρ_0(M, z) encapsulates the signal and the detector's response. For simplicity, here we only use a 0PN approximation for the signal to compute the SNR. A more rigorous analysis would include higher-order corrections to describe the merger and ringdown parts beyond this approximation, thus including a boost to the SNR. Assuming random orientations, the PDF of θ can be well approximated by [44]:
for 0 < θ < 4 and P(θ) = 0 otherwise. Note that this is valid for a single L-shaped detector. In the case of three detectors such as LVK, the SNR can be added in quadrature, although for a triangular configuration such as ET this would change. However, for simplicity we assume the same function applies to an ET-like experiment. Further, we assume that f_max corresponds to the frequency at the innermost stable circular orbit: From eq. (2.7) we have θ = ρ/ρ_0, thus by fixing ρ = ρ_th we can obtain a PDF for the sources that produce a sufficiently high SNR to be detected. The corresponding cumulative distribution function is the integral of eq. (2.9), evaluated from θ_c to 4. Thus, the survival function S(ρ_th; M, z) from eq. (2.5) is defined as: where T(θ) is the integral of the angular orientation PDF P(θ). Figure 1 shows plots of these functions. Combining these results, the expression for the number density of sources becomes: assuming an observation time τ and a redshift bin z. While the survival function is detector dependent, the chirp mass distribution depends on the mass distribution of black holes. To fully compute the biases we need to specify how likely chirp masses of value M are.
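A minimal numerical sketch of the survival function follows. Since eq. (2.9) is not reproduced above, we assume the commonly used single-detector approximation P(θ) = 5θ(4 − θ)³/256 on 0 < θ < 4, which is consistent with the stated support; S(ρ_th; M, z) is then the integral of P(θ) from θ_c = ρ_th/ρ_0(M, z) to 4.

```python
import numpy as np
from scipy.integrate import quad

def p_theta(theta):
    """Commonly used single-detector approximation to the orientation PDF (assumed)."""
    if 0.0 < theta < 4.0:
        return 5.0 * theta * (4.0 - theta) ** 3 / 256.0
    return 0.0

def survival(rho_th, rho_0):
    """Fraction of randomly oriented sources with rho = theta * rho_0 >= rho_th."""
    theta_c = rho_th / rho_0
    if theta_c >= 4.0:
        return 0.0
    val, _ = quad(p_theta, theta_c, 4.0)
    return val

# loud sources (large rho_0) are almost always detected, quiet ones rarely
for rho_0 in (4.0, 8.0, 40.0):
    print(rho_0, survival(rho_th=8.0, rho_0=rho_0))
```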
The full derivation of the chirp mass PDF can be found in Appendix B. Here we report only the final result for the PDF ϕ(M): where f and g are the distributions of secondary and primary masses, respectively. x_1 is a real solution for the secondary mass m_2 in terms of the chirp mass M and the primary mass m_1, given by the expression: Note that this expression is obtained by solving the cubic equation (B.1). We then employ current phenomenological distributions of primary and secondary masses from the catalogues of the LIGO-Virgo-KAGRA (LVK) collaboration [42,43] to describe f and g in eq. (2.13). Their full prescriptions can be found in Appendix A.
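As a cross-check of the analytic change of variables in eq. (2.13), one can also obtain the chirp-mass distribution by direct Monte Carlo sampling of the component masses. The toy truncated power-law primary and uniform secondary distributions below are placeholders for the LVK Power Law + Peak and Broken Power Law models described in Appendix A.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_masses(n, m_min=5.0, m_max=80.0, alpha=2.3):
    """Placeholder mass models (NOT the LVK fits): power-law primary, uniform secondary."""
    u = rng.random(n)
    a = 1.0 - alpha
    # inverse-CDF sampling of p(m1) proportional to m1^-alpha on [m_min, m_max]
    m1 = (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)
    m2 = m_min + rng.random(n) * (m1 - m_min)
    return m1, m2

m1, m2 = sample_masses(200_000)
mchirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# histogram approximation to the chirp-mass PDF phi(M)
pdf, edges = np.histogram(mchirp, bins=100, density=True)
print(edges[np.argmax(pdf)], "Msun (left edge of the modal bin of the toy chirp-mass PDF)")
```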
Magnification and evolution biases for GW events
Using the definition of magnification bias described in eq. (2.1), we can write: Initially, we plug in the sensitivity curve for a network of aLIGO-like interferometers to mimic the LVK detectors, adding the SNR in quadrature for three detectors. We find that the values of the magnification bias in this case are strongly dependent on the low number of sources detectable. In fact, figure 2 shows very large values of s for all models considered. Clearly, assuming a higher threshold of detection drastically decreases the number of observed sources even at lower redshift, and, consequently, increases the magnification bias. Conversely, a lower value of ρ_th (e.g. the blue line) yields lower values of s across a larger redshift range. Further, it is interesting to note that, at the same detector threshold, different distributions of chirp masses have similar magnification biases. This suggests that for the chirp mass models assumed in this paper, s does not vary significantly. We note that this conclusion is not necessarily true for any chirp mass distribution.
The large values of s for LVK shown in figure 2 can be further explained with figure 3, investigating the dependence of the observed number density of GWs as seen by an LVK-like experiment on both redshift and SNR threshold. We calculate the observed number densities using eq. (2.12), assuming an observation time of 1 year and that the intrinsic merger rate R_GW(z) follows directly the Madau-Dickinson rate [47,48]:
with R_0 providing the merger rate at z = 0, given by [42,43] as R_0 = 23.9 Gpc^-3 yr^-1. In particular, the left plot shows how a larger value of ρ_th yields a steeper curve at low redshift, and, conversely, a lower value of ρ_th produces a gentler one. Therefore, as redshift increases, so does the distance between the curves, showing the number of sources missed/detected when changing the minimum SNR. This is made more explicit in the plot on the right, where curves at fixed redshifts are plotted against ρ_th. Notably, the magnification bias can be read off directly here, as it is the slope at a given value of the SNR threshold for a fixed redshift. Thus, one can see that approaching higher thresholds and redshifts, the slope increases drastically; in particular, taking ρ_th = 8, when going from z = 0.3 to z = 0.4 the slope is evidently steeper, which results in the much steeper values of the bias in figure 2.
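A sketch of the intrinsic merger-rate model is given below. Since eq. (2.16) is not reproduced, the functional shape uses the standard Madau-Dickinson star-formation parameters (exponents 2.7 and 5.6, peak near z ≈ 1.9) as an assumption, normalised so that R_GW(0) = R_0 = 23.9 Gpc⁻³ yr⁻¹ as quoted from [42,43].

```python
import numpy as np

R0 = 23.9  # Gpc^-3 yr^-1, local BBH merger rate quoted from [42,43]

def madau_dickinson_shape(z, a=2.7, b=5.6, zp=1.9):
    """Standard Madau-Dickinson functional shape; the exponents and the peak redshift
    are the usual SFR values and are an assumption here (eq. (2.16) is not shown)."""
    return (1.0 + z) ** a / (1.0 + ((1.0 + z) / (1.0 + zp)) ** b)

def merger_rate(z):
    """Merger rate normalised so that R(0) = R0."""
    return R0 * madau_dickinson_shape(z) / madau_dickinson_shape(0.0)

for z in (0.0, 1.0, 2.0, 5.0):
    print(z, round(merger_rate(z), 1))
```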
We then plot in figure 4 the magnification bias for the third-generation detectors ET and CE. The increased sensitivity of these future experiments ensures that almost all sources are observed, and thus the bias will be extremely small. The two detectors differ only towards higher redshifts, with CE showing a steeper tail similar to the LVK case. This could be explained by the different sensitivity curves: CE is sensitive from 5 Hz, whereas ET from 1 Hz. However, GWs emitted at larger distances will result in a lower merging frequency, as shown in eq. (2.10), and thus a smaller frequency range observable by the detector. Therefore, at higher distances, this might affect the observed number of sources and give rise to the difference seen between the two biases in figure 4.
For further convenience, we provide functional forms for the magnification biases calculated so far. We fit a polynomial of order 3 with coefficients y = a + bx + cx² + dx³, and note the fitting values in table 1. Additionally, we opted to fit only the initial part of the curve for LVK, thus before the sudden steepening. As the curves all steepen when the number of observed sources is n_obs ∼ 120, we chose this value as the upper cut for the functional forms. The appropriate redshift intervals are listed in the final column of table 1. The evolution bias follows similarly. Using its definition in eq. (1.2), we find: We plot b_e for an LVK-like experiment in figure 5, and for the third-generation detectors ET and CE in figure 6. For the former, the lower sensitivity results in a lower number of detections and thus a much stronger correction. In fact, a strongly negative evolution bias implies that the survey observes a population which appears to become more sparsely distributed as the universe evolves. However, this is actually an effect of the low sensitivity of the experiment, which results in a lower number of detections; it thus appears as if the sources are intrinsically decreasing with redshift, whilst we should expect a larger volume of sources as we approach cosmic dawn. In fact, the improved sensitivity of the 3G detectors ET and CE results in correctly tracing the evolution of BBH mergers. Considering a Madau-Dickinson rate peaking around cosmic dawn as in eq. (2.16), i.e. z_c ∼ 2, the evolution bias shows a population of tracers becoming more densely distributed as the universe expands, until it reaches z = 2, where the trend inverts; then, as the tracers become more sparsely populated as the universe expands (i.e. the scale factor grows), the evolution bias becomes negative towards the present time. Finally, we fit cubic polynomials to the models for b_e^GWs calculated so far. As with the magnification bias for LVK, we only fit curves at redshifts for which n_obs > 120, which we report in the final column of table 2. This was to avoid the final part of the curves, which is drastically impacted by lower observation numbers and thus more uncertain.
SNIa Biases
The magnification bias for SNIa should be defined in the same manner as for galaxy samples, i.e.: Therefore, one only needs to model the number density of SNIa as a function of time and magnitude threshold.
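The functional forms quoted in the tables are plain cubic fits; a minimal example with numpy is shown below on invented (z, s) samples, with the coefficient order matching y = a + bx + cx² + dx³.

```python
import numpy as np

# hypothetical (z, s) samples standing in for a computed magnification-bias curve
z = np.linspace(0.1, 1.0, 10)
s = (0.02 + 0.1 * z + 0.3 * z**2 + 0.05 * z**3
     + 0.005 * np.random.default_rng(1).normal(size=z.size))

# numpy returns the highest-order coefficient first, so reverse to get (a, b, c, d)
d_, c_, b_, a_ = np.polyfit(z, s, deg=3)
print(f"y = {a_:.3f} + {b_:.3f} x + {c_:.3f} x^2 + {d_:.3f} x^3")
```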
SNIa event rate
In order to model the number density of SNIa, we employ a parameterisation described in [49], constructing a luminosity function Φ(M, z), with M being the peak absolute magnitude of the event. A key assumption in this analysis is that M is given by a narrow Gaussian distribution [50,51].
The starting point of the expression is the star formation rate (SFR). Initially we pick a standard cosmic star formation rate [52,53]: Further, the delay time between the formation of a binary and the subsequent SNIa explosion is modelled by [54] as a power-law distribution: Combining the above, we have the SNIa rate in units of yr^-1 Mpc^-3 [49]: where the factor C_SNIa = 0.032 M_⊙^-1 can be computed from the stellar mass range of 3 M_⊙ < M < 8 M_⊙ for SNIa and the initial mass function [55]. Additionally, the explosion efficiency η is taken as the canonical value of 0.04 [52]. In the left panel of figure 7 one can see the flat rate density of SNIa as a function of redshift.
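The rate construction above can be sketched numerically as a convolution of a star-formation history with a power-law delay-time distribution. The toy SFR shape, the minimum delay time, and the DTD slope below are assumptions (the SFR and DTD expressions referred to above are not reproduced); only η = 0.04 and C_SNIa = 0.032 M_⊙⁻¹ are taken from the text.

```python
import numpy as np
from scipy.integrate import quad

eta, C_snia = 0.04, 0.032   # explosion efficiency and Msun^-1 factor from the text
td_min = 0.04               # Gyr; assumed minimum delay time

def sfr(t):
    """Toy cosmic SFR vs cosmic time t [Gyr] (placeholder, peaks at t = 3.5 Gyr)."""
    return 0.1 * (t / 3.5) * np.exp(1.0 - t / 3.5)

def dtd(td):
    """Power-law (1/td) delay-time distribution, normalised over [td_min, 13.5] Gyr."""
    norm = np.log(13.5 / td_min)
    return 1.0 / (td * norm) if td >= td_min else 0.0

def snia_rate(t):
    """R_SNIa(t) ~ eta * C * Int SFR(t - td) * DTD(td) dtd (units are illustrative)."""
    upper = max(t - 0.1, td_min)   # avoid evaluating the SFR before ~0.1 Gyr (assumed)
    val, _ = quad(lambda td: sfr(t - td) * dtd(td), td_min, upper)
    return eta * C_snia * val

for t in (2.0, 6.0, 13.0):   # cosmic time in Gyr
    print(t, snia_rate(t))
```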
Finally, assuming the absolute magnitudes of supernovae are Gaussian-distributed, we can produce the magnitude distribution of sources [49]: where G is a Gaussian distribution, M_* = −19.06 in the B-band and σ = 0.56. Φ(M, z) is the magnitude-equivalent of a luminosity function for SNIa. In the right panel of figure 7 one can see the distribution for different redshifts. Here we assume that the dispersion of the distribution remains constant.
The limiting absolute magnitude of detection is found using the limiting apparent magnitude of the detector in a specific band, the redshift of the event, and the cross-filter correction K. Recall that at peak brightness: with m and M being, respectively, the apparent and absolute magnitude, and where we assume that the single or cross-filter K-correction is simply a constant offset (note that we neglect errors on the measured apparent magnitude). Thus, the number of sources that have an absolute magnitude smaller than the limiting value, and so are bright enough to be observed, is just the integral of the luminosity function over the allowed magnitudes:
Table 3. Coefficients of a third-order polynomial fit to the magnification bias for SNIa surveys with different limiting magnitudes. We report the redshift interval in the final column. Note, the bias is 0 below the redshift range reported for each model.
Magnification and evolution biases of SNIa
As the magnification bias is computed at a fixed redshift, the K-correction can be safely considered constant and thus neglected when differentiating. The magnification bias can then be evaluated using eq. (3.1): Finally, we find: Thus, using eq. (3.6), we can compute the limiting absolute magnitude at each given redshift from the value of the limiting apparent magnitude of a telescope, and compute the magnification bias as prescribed in eq. (3.8) (see figure 8). The evolution bias for SNIa is computed similarly, from eq. (1.2): Both biases are computed up to the value of redshift at which we observe less than 2σ of the magnitude distribution; this is an arbitrary cut to select only the portion of observations which would be statistically significant.
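The chain from apparent-magnitude limit to observed fraction can be sketched as follows, using the standard distance-modulus relation and a Gaussian CDF for the magnitude distribution. The use of the Planck18 cosmology, a vanishing K-correction, and the raw finite-difference response to the cut (before any normalisation like that of eq. (3.8) is applied) are assumptions of this illustration.

```python
import numpy as np
from scipy.stats import norm
from astropy.cosmology import Planck18 as cosmo

M_star, sigma = -19.06, 0.56   # peak-magnitude Gaussian quoted in the text (B-band)
K_corr = 0.0                   # assumed constant (here vanishing) K-correction

def M_lim(z, m_lim):
    """Limiting absolute magnitude from the apparent limit via the distance modulus."""
    d_pc = cosmo.luminosity_distance(z).to_value("pc")
    return m_lim - 5.0 * np.log10(d_pc / 10.0) - K_corr

def frac_observed(z, m_lim):
    """Fraction of the Gaussian magnitude distribution brighter than the cut."""
    return norm.cdf(M_lim(z, m_lim), loc=M_star, scale=sigma)

def dlnn_dmlim(z, m_lim, eps=1e-3):
    """Logarithmic response of the observed counts to the magnitude cut."""
    up, dn = frac_observed(z, m_lim + eps), frac_observed(z, m_lim - eps)
    return (np.log(up) - np.log(dn)) / (2 * eps)

for z in (0.3, 0.8, 1.2):
    print(z, frac_observed(z, m_lim=24.0), dlnn_dmlim(z, m_lim=24.0))
```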
From figure 8 we note that the magnification bias is fixed at zero until a value of redshift that depends on the magnitude cut. This is due to the fact that below such a distance all SNIa can be observed; at that redshift, however, the limiting absolute magnitude approaches the Gaussian distribution of SNIa, thus increasing the number of objects near the threshold (and, consequently, of objects being magnified in or out).
As we did for the GW biases, we fit a cubic polynomial of the form y = a + bx + cx² + dx³ to the values of s_SNIa calculated here, and we report them in table 3.
If we then relax the assumption that SNIa have a fixed brightness across all redshifts [56][57][58][59][60][61], then we can modify eq. (3.6) and add an extra nuisance term ∆m_evo(z) on the right-hand side, accounting for a potential evolution of the SNIa intrinsic luminosity with redshift.
Different models have been proposed to describe this evolution; however, we will consider only Model B from [57], also illustrated in [62]: We explore two sets of parameters for this, still in agreement with standard ΛCDM [59], i.e. ϵ_1 = 0.013 ± 0.06 and δ_1 = 0.29 ± 0.22, and ϵ_2 = 0.029 ± 0.052 and δ_2 = 2 ± 1.7. Both imply a population of SNIa which becomes intrinsically dimmer as the redshift increases. The biases are then calculated as before and plotted in figure 8.
If the intrinsic luminosity of SNIa is allowed to vary with redshift, the effect is particularly relevant for the evolution bias, whilst the magnification bias is only slightly affected. The reason lies in the nature of the luminosity function dΦ/dM in eq. (3.5). For SNIa of fixed luminosity, the magnitude distribution function is separable in redshift and absolute magnitude; hence, when computing its evolution bias, the integral over the Gaussian in magnitude (in eq. (3.10)) cancels out, and only terms related to the intrinsic redshift distribution of SNIa, and its derivative, remain. These have a small contribution, as the redshift distribution is assumed to be relatively flat, as seen in figure 7. Notably, the magnitude cut disappears, and thus the evolution bias becomes independent of it.
However, relaxing the assumption of a fixed luminosity brings in an extra redshift derivative of the Gaussian, as the peak magnitude now depends on redshift. This allows the evolution bias to depend on the magnitude cut, as the integral no longer cancels out.
Finally, we fit a cubic polynomial to these models for the evolution bias, reporting the coefficients of the functional forms in table 4.
Impact on observables
In this section, we will show the relevance of these biases for the luminosity distance clustering power spectra in two ways. Initially, we will investigate their impact on the relativistic corrections to the number counts which are most likely to be detected, namely the Doppler and lensing terms. Then, we will examine how these biases affect the angular power spectrum, exploring both auto-correlations and cross-bin correlations at different redshifts.
Impact on the number counts
The modelling of these biases is extremely important to investigate the relativistic corrections in the number density fluctuation in eq. (1.1). As shown in [37], when analysing clustering in luminosity distance space several terms are dependent on these two parameters, such as the lensing and Doppler terms. Notably, the lensing term in luminosity distance is dependent on both bias parameters, as opposed to the redshift-space case, which depends only on the magnification bias.
In particular, by computing the appropriate perturbation in luminosity distance and isolating each correction to the underlying matter density δ_n, the number density fluctuation in eq. (1.1) is recast into [37]: where we only report the main correction terms, with coefficients: Here r is the source's position, r the comoving distance, γ ≡ rH/(1 + rH), and where we defined: Note that β contains the magnification bias s and evolution bias b_e of the source type in question. Thus, we explore the impact of the biases we calculated in Sections 2.2 and 3 on the Doppler and lensing terms. We plot the Doppler amplitude A_D in figure 9, comparing the results for GWs (for an ET-like experiment) in the left panel and SNIa in the right one. We opt to show only one model of the biases for each population (i.e. the Power Law + Peak chirp mass distribution for GWs, and SNIa with fixed intrinsic luminosity) for simplicity. Furthermore, the difference in the Doppler correction with respect to the other models was found to be negligible. Additionally, we plot in grey the Doppler term with, respectively, the biases set to zero (solid line), only magnification (dashed) and only evolution (dash-dot). This clearly shows that the biases dominate the amplitude of the Doppler correction and are crucial for its analysis. In particular, in the case of GWs from an ET-like experiment, the correction traces the evolution bias significantly, whilst for SNIa, the higher values of the magnification bias impact the Doppler term and become the main contribution.
We then focus on the lensing amplitude, addressing the same set of biases as in figure 9. On the left of figure 10 we fix the source at z = 2 and explore A_L as a function of the distance r(z) to the object, thus looking at the integrand on the right-hand side of eq. (4.1). As we previously suggested in [37], the shape of this curve is strictly dependent on the value of the biases, together with the zero-crossing separating magnification near the source from de-magnification away from it. This is clear from the bottom left panel of figure 10, showing the lensing amplitude for SNIa surveys with different limiting apparent magnitudes: higher values of m_lim (a lower threshold), and thus a lower magnification bias, shift the zero crossing further from the observer, shrinking the redshift range in which magnification occurs (i.e. A_L > 0).
Impact on the angular power spectrum
After examining the impact of the biases on the amplitudes of certain corrections to the number counts, we explore their relevance in the angular power spectrum. Using a modified version of the code CAMB that we produced in earlier work [37], we can compute angular power spectra supplying different values of both the magnification and evolution bias. Whilst for GWs we use broad Gaussian windows (σ = 0.2) for the sample bins, motivated by the large uncertainty in the estimation of the luminosity distance of GWs [42,43], for SNIa we choose smaller bins (σ = 0.1), as the related distance uncertainties for LSST will be smaller [63][64][65].
Hence, we plot the percentage difference in the angular power spectrum at each scale when accounting for the biases compared to the case where they are both set to zero, for both GWs and SNIa, in the top and bottom panels of figure 11 respectively. We do this for three different redshifts, noting that z = 1.5 is outside what we defined as the validity range of the bias for SNIa assuming a limiting apparent magnitude m_lim = 25, and thus this particular case is not shown in the bottom right panel of figure 11. We recall that the SNIa validity limit was fixed at the distance at which 2σ of the magnitude distribution is observed.
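For reference, the quantity plotted in figures 11-13 is a simple percentage difference between two sets of C_ℓ, and the redshift bins are Gaussian windows with σ = 0.2 (GWs) or σ = 0.1 (SNIa). The toy spectra below stand in for the outputs of the modified CAMB code and are not real results.

```python
import numpy as np

def gaussian_window(z, z_mean, sigma):
    """Normalised Gaussian redshift window (sigma = 0.2 for GWs, 0.1 for SNIa)."""
    w = np.exp(-0.5 * ((z - z_mean) / sigma) ** 2)
    return w / np.trapz(w, z)

def percent_difference(cl_bias, cl_nobias):
    """Percentage difference between spectra with and without the biases."""
    return 100.0 * (cl_bias - cl_nobias) / cl_nobias

z = np.linspace(0.0, 3.0, 301)
window_gw = gaussian_window(z, z_mean=1.5, sigma=0.2)   # example GW bin

# toy spectra standing in for the CAMB outputs
ell = np.arange(2, 200)
cl_nobias = 1e-6 / ell**1.5
cl_bias = cl_nobias * (1.0 + 0.05 * np.log(ell))
print(percent_difference(cl_bias, cl_nobias)[:5])
```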
One can immediately note the difference between the two tracers. Whilst SNIa show an increasing strength of the biases at higher redshifts, the models considered here for GWs have a different impact. In fact, at low redshift the effect of the biases is simply at the percentage level, becoming negligible around z = 1.5, and rising steeply afterwards. This follows directly from the models of the biases studied here: the evolution bias (see figure 6) is close to −2 at very low redshifts, crosses 0 around z = 1.5, and then rises further.
We then examine the impact of the models of the biases considered when cross-correlating different redshift bins. We show two different examples of this: setting a background tracer at z_b = 1.5 and a foreground one at z_f = 0.5, and similarly with z_b = 2.0 and z_f = 1.0. This is shown in figure 12, with GWs in the top panels and SNIa in the bottom ones. Similarly to figure 11, the plots report the percentage difference between the angular power spectra with biases compared to those with the biases set to zero. It is clear how not accounting for the magnification and evolution biases can lead to large differences, depending on the tracer considered. In the top left panel, cross-bin angular power spectra for GWs show percentage-level differences, although the contrast increases drastically, by roughly an order of magnitude, when shifting to higher redshifts (top right). On the other hand, angular power spectra built using SNIa already show differences of several times those with biases set to zero; going to higher redshifts can push the percentage difference to a few orders of magnitude. In all cases, the difference tends to a constant value at larger values of ℓ, which depends on the redshift considered.
The large differences clearly show that cross-correlations require considering the impact of the magnification and evolution biases for both types of transient tracers. Additionally, cross-bin correlations allow the different bias models to be distinguished even at lower redshifts. This is clear from the plots on the left-hand side of figure 12, where each model produces a distinct difference with respect to the angular power spectrum without biases. Such contrast is not seen in the auto-correlations with GWs in the top panels of figure 11, even at z = 1.5, where the two models are roughly of the same order of magnitude. SNIa models already showed substantial differences in auto-correlations at high redshift, as shown in the bottom right panel of figure 11. However, the distinction becomes more significant in the cross-bin correlations, with differences of even an order of magnitude arising between different limiting-magnitude models, as shown in the bottom plots of figure 12.
Finally, we explore the impact of the biases modelled in this paper on the auto-correlation angular power spectrum across redshift. We plot the percentage difference between the non-zero-bias case and the zero-bias case as a function of redshift in figure 13. The step in the plot represents the binning used for GWs and SNIa, respectively z = 0.2 and z = 1; we therefore decided against a smooth curve for clarity. On the LHS we plot the difference calculated for GWs observed with an ET-like detector, and on the RHS the same for SNIa from a magnitude-limited survey. The former shows a (positive) deviation from zero from around z = 1, growing almost linearly with redshift. It also shows that before this point, the angular power spectrum with biases accounted for grows slightly and then turns back towards zero. An explanation for this can be traced to the behaviour of the magnification and evolution biases for ET and CE (shown in figure 6). Since s_GW is very close to zero, the largest contribution is given by the evolution bias; and considering that b_e^GW crosses zero around z < 1.5, this might explain the dip in the percentage difference for GWs in figure 13.
On the other hand, as shown in figure 8, the evolution bias of SNIa is very small, whereas s_SNIa is substantially larger than in the GW case. As seen in the Doppler kernel in figure 9 and the lensing amplitude in figure 10, the correction terms for SNIa are dominated by the impact of the large magnification bias; similarly, the difference in the auto-correlation angular power spectra between the case with biases and the one with them set to zero is significantly impacted by the magnification bias. This is clear from the shape of the curves shown in the RHS of figure 13: the difference is zero until the redshift at which the corresponding s_SNIa starts growing.
Figure 13 further stresses that for SNIa the models can be clearly distinguished from z ∼ 1; however, whereas auto-correlation angular power spectra with GWs do highlight the need to account for the biases, they do not provide large enough differences between the models considered. Such a distinction is only seen when cross-correlating separate redshift bins, as seen in figure 12.
Summary and conclusions
By modelling and ultimately measuring the bias properties of tracers, we can use them as probes of large-scale structure and cosmology. The bias properties of galaxies are a well-studied topic; with the advent of large data sets of transient sources, we need to learn how to describe their biases equally well. As we have alluded to, there are particular fundamental-physics motivations for wanting to use tracers that most naturally 'belong' in luminosity distance space, such as GW sources and SNIa.
The observed fluctuation of the source number counts, ∆_O, is the basis of many clustering analyses [7,8]. When considering transient tracers such as GWs or SNIa, this has to be computed in luminosity distance space, as the two types of object do not carry direct information about their redshift [37]. The expression for the number counts contains several relativistic corrections, notably Doppler magnification and lensing. These depend on the magnification and evolution biases, which can significantly alter their amplitudes.
In this paper, we first described the modelling of the magnification and evolution biases of gravitational waves from binary black hole mergers. We initially carefully described the necessity of defining them with respect to a specific detector and not to any single "intrinsic" quantity, as opposed to the traditional case in galaxy clustering. We also employed two different distributions of chirp masses, namely a Power Law + Peak and a Broken Power Law, consistent with the population analysis of the GWTC-3 catalogue by the LVK collaboration [42,43]. Furthermore, we examined the GW biases for three different SNR thresholds, noting that a higher ρ_th results in larger values of the biases. The lower sensitivities of the present terrestrial detectors (relative to 3G expected sensitivity curves) yield a strongly negative evolution bias. This implies that sources are becoming sparser with redshift; however, given the expectation of a peak around cosmic dawn (z ∼ 2), it is clear this is due to a decline in the detector's sensitivity at these redshifts. In fact, the third-generation observatories ET and CE should instead be able to perfectly trace the evolution of GWs from BBH mergers. Consequently, their related magnification biases are particularly small up to high redshift. We note that the method described for GWs from BBH mergers can be equally applied to different sources of GWs, such as neutron star mergers or black hole-neutron star binaries, provided the appropriate chirp mass distribution and merger rate are supplied. Further, we explored the biases for SNIa, investigating the impact of changing the magnitude threshold. Whilst the magnification bias is especially sensitive to this, the evolution bias for the SNIa models considered is independent of it. Additionally, the former can reach much larger values than the ones seen for GWs, while the latter is close to flat around zero, implying a population density that remains close to constant across redshift. We also explored the possibility of a population of supernovae with intrinsic luminosity evolving with redshift, using two different models proposed in the literature. We found that the magnification bias is only slightly altered, while the evolution bias changes significantly depending on the model adopted (i.e. whether the intrinsic luminosity of SNIa decreases or increases with redshift).
Finally, we investigated the effects of these biases on the relativistic corrections to the observed number counts and on the angular power spectrum. In both cases, the impact of the biases produces a significant difference with respect to cases where the biases were set to zero. We found that the lensing magnification is strongly dependent on the magnification bias, as expected. However, the relativistic Doppler correction shows a different behaviour depending on the tracer: for GWs it is sensitive to the evolution bias, whilst for SNIa the dominant bias is the magnification bias. This is explained by the different values taken by s and b_e for the two tracers: the former is very small for GWs, the latter for SNIa. Therefore, with one of the two parameters close to zero, the other has a stronger impact.
The same effect occurs when comparing angular power spectra with biases included to ones with biases set to zero, as in figure 13. This in turn would have an impact on any constraints derived from the angular power spectra in luminosity distance. For GWs, the difference between the angular power spectra is initially small, rising after the redshift at which b_e^GWs > 0. For SNIa, the difference is null until the distance at which s_SNIa > 0. Furthermore, when analysing the impact of the biases on the angular power spectrum, we found a percentage-level difference in auto-correlations with GWs between spectra with biases accounted for and spectra with biases set to zero. When the same analysis is applied to SNIa, the differences are greatly increased at high redshifts, i.e. when s_SNIa > 0. This shows the importance of accounting for the biases when computing angular power spectra; however, at least for GWs, the different models of the biases are distinguishable only at high redshifts. The separation between models is instead achieved more clearly when investigating cross-bin correlations between different redshifts. Figure 12 not only shows greatly increased differences with respect to the unbiased spectra, but also highlights the possibility of distinguishing the specific model of the bias. As these parameters are strictly dependent on population properties, distinguishing between them could help us infer details of the mass distribution of these tracers.
Having set up frameworks with which to model transient biases, our next step will be to understand how these impact constraints from cross-correlations in the era of stage IV galaxy surveys and the 3G era of gravitational wave detection. This impacts not only 3G cosmology, but may also be highly relevant for astrophysics and compact object formation channels, if we find that population properties can be simultaneously constrained.
no. URF\R1\180009). CC is supported by the UK Science & Technology Facilities Council Consolidated Grant ST/T000341/1.
B Chirp Mass distribution
We want to compute the PDF h(z) of the chirp mass M given the distributions of primary and secondary masses, g(m_1) and f(m_2). Let us recast these random variables (for simplicity) as m_2 = x, m_1 = y, M = z; their relation is given by the chirp mass definition, z = (xy)^{3/5} / (x + y)^{1/5}. (B.1) Now, looking at (B.1), we need to find the solutions for x(z, u), where we also rename y = u. We can recast the equation as a third-order equation for x of the form
Figure 1. Left: Fraction of sources detected, S(ρ_th/ρ_0), as a function of ρ_0, i.e. the characteristic SNR of the source. Low values of S imply either a low SNR threshold or loud events, whereas large S signifies a high threshold or quiet events. Center: Fraction of sources undetected at each random orientation. θ is equivalent to the ratio between the SNR threshold ρ_th and the characteristic SNR of the source, ρ_0. Right: Orientation PDF.
Figure 2. Magnification bias for an LVK-era experiment. We stop calculating s for each model when the number of observed sources falls below the arbitrary value of 100 events Gpc^-3; this is to suggest a limit of validity for the bias, as clustering analysis requires a larger number of sources. Solid lines indicate a Power Law + Peak model for the primary BH mass, whilst dashed ones show the Broken Power Law model. Fainter lines represent the range where the biases become more uncertain, as the number of observed sources crosses 120, getting closer to our arbitrary cut.
Figure 3. Modelling the number densities of GWs observable with LVK using eq. (2.5). Left: as a function of redshift for different (fixed) SNR threshold values; Right: as a function of ρ_th at different (fixed) values of redshift. At each given z, the value of the magnification bias is given by the slope of the corresponding line in the right-hand plot. The effect of increasing ρ_th starts to change significantly around z = 0.5, explaining the behaviour of s in figure 2.
Figure 4. Magnification bias for the third-generation detectors: left, ET; right, CE. As before, solid lines are for a Power Law + Peak distribution of the primary BH mass, and dashed ones are for the Broken Power Law model.
Figure 5. Evolution bias for an LVK-like survey. As previously, solid lines indicate a Power Law + Peak model for the primary BH mass, dashed the Broken Power Law model. Similarly to figure 2, faint lines represent a regime with an observed number of sources 100 < n_obs < 120, thus approaching our arbitrary cut for the biases.
Table 2. Coefficients of a third-order polynomial fit to the evolution bias for GW detectors for different SNR thresholds. Given the negligible difference in the biases between ET and CE over the redshift interval z ∈ [0.1, 3.5], we only report the fits to ET.
Figure 6. Evolution bias for the third-generation ground-based GW observatories ET (left) and CE (right). These will be able to trace the evolution of BBH mergers up to high redshift.
Figure 7.
Figure 8. Left: Magnification biases for a SNIa survey. Solid lines represent the biases for SNIa with fixed luminosity, whilst dash-dot lines illustrate the bias for SNIa with an intrinsic luminosity redshift dependence. Right: Evolution biases for a SNIa survey. Coloured lines show models for an intrinsic SNIa luminosity evolving with time, whilst the black line describes SNIa of fixed peak magnitude M_peak = −19.06. In particular, dash-dot lines represent an evolution with a power law of index δ_2 = 2, whilst dashed ones have δ_1 = 0.29. For the magnification bias, the two overlap almost completely, whilst this is not the case for the evolution bias, due to the extra redshift dependence. Note that the legend is shared between the two plots.
. 4 )= 27 Figure 9 .
Figure 9. Dimensionless Doppler kernel A D including the contribution of the biases.Left: the effect when considering GWs observed by ET (given the similarities of the biases between ET and CE we plot only the former); Right: the effects when considering SNIa with different magnitude cuts.For both cases, we plot in grey the contribution in the case of the biases set to zero (solid line), only non-zero magnification bias (dashed) and only non-zero evolution bias (dash-dot); in particular, the latter two are set for ρ th = 8 on the left, and m lim = 26 on the right.
Figure 11 .
Figure 11.Impact of the models of the biases considered on auto-correlation angular power spectra for GWs as seen by an ET-like experiment (top) and for a magnitude limited SNIa survey (bottom).We plot the percentage difference between angular power spectra with biases as opposed to without them.The bottom left panel shows models with limiting magnitude m lim > 25 overlapping; bottom right is missing m lim = 25 as this is outside our definition of validity limit for the bias for SNIa (i.e. less than 2σ of the magnitude distribution is observed).Dashed lines represent negative values.
27 Figure 12 .
Figure 12.Impact of the models of the biases considered on cross-bins correlations, plotting the percentage difference between angular power spectra with biases and those without.Left: correlating a background bin at z b = 1.5 with a foreground one at z f = 0.5; right: the same with background one set to z b = 2 and foreground one to z f = 1.As in figure11, top is for GWs for an ET-like experiment, and bottom for SNIa.The bottom plots lack the model with limiting magnitude m lim = 25 as it is outside the validity limit.
Figure 13 .
Figure 13.Impact of the biases considered on the angular power spectrum at ℓ = 10 across redshift.As before, we plot the percentage difference between the C ℓ with biases as opposed to those without.
We introduce the auxiliary variable u = y, so that the change of variables is {x, y} → {z, u}, with y(z, u) = u. (B.2) The joint PDF p(z, u) of the new variables z, u is related to that of x, y by the conservation of the volume in the space of probability, f(x) g(y) dx dy = p(z, u) du dz, (B.3) where we have assumed that f(x) and g(y) are independent. The differentials are related by the determinant of the Jacobian matrix of the transformation: dx dy = |J(z, u)| du dz. (B.4) Taking x_i as a root of the equation z = z(x, y), and using the fact that, from (B.2), y(z, u) = u, we can write the Jacobian as |J(z, u)| = |∂x_i(z, u)/∂z|. (B.5) Substituting (B.5) and (B.4) into (B.3), and summing over the possible roots, we find the combined PDF p(z, u) = Σ_{x_i} f(x_i(z, u)) g(u) |∂x_i(z, u)/∂z|, (B.6) and to obtain the PDF for z, h(z), we marginalise over u: h(z) = Σ_{x_i} ∫ du f(x_i(z, u)) g(u) |∂x_i(z, u)/∂z|. (B.7)
Inserting y = u into (B.1) and raising to the fifth power gives a cubic equation for x, x^3 + b x + c = 0, with b = -z^5/u^3, c = -z^5/u^2. (B.8) Depending on the sign of the discriminant of the cubic equation above, there are either one or three real roots x_i(z, u) to sum over in (B.7).
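The change of variables above can be checked numerically. The following is a minimal sketch (not from the paper): it solves the cubic (B.8) for every value of u, applies the marginalisation (B.7), and cross-checks the result against a Monte Carlo estimate of the chirp mass distribution. The uniform mass distributions on [5, 50] solar masses are purely illustrative stand-ins for the population models used in the text.

```python
# Numerical check of (B.2)-(B.8); mass distributions are illustrative, not the paper's.
import numpy as np

M_MIN, M_MAX = 5.0, 50.0

def g(y):  # primary-mass PDF g(m1), placeholder uniform choice
    return np.where((y >= M_MIN) & (y <= M_MAX), 1.0 / (M_MAX - M_MIN), 0.0)

def f(x):  # secondary-mass PDF f(m2), placeholder uniform choice
    return np.where((x >= M_MIN) & (x <= M_MAX), 1.0 / (M_MAX - M_MIN), 0.0)

def chirp_mass(x, y):
    return (x * y) ** 0.6 / (x + y) ** 0.2

def h_of_z(z, n_u=2000):
    """Evaluate h(z) from (B.7): solve the cubic (B.8) for each u and sum the
    real roots weighted by |dx/dz| from implicit differentiation of (B.8)."""
    u = np.linspace(M_MIN, M_MAX, n_u)
    du = u[1] - u[0]
    total = 0.0
    for ui in u:
        b, c = -z**5 / ui**3, -z**5 / ui**2          # coefficients of x^3 + b x + c = 0
        roots = np.roots([1.0, 0.0, b, c])
        real = roots[np.abs(roots.imag) < 1e-9].real
        real = real[real > 0]                         # masses must be positive
        db, dc = -5 * z**4 / ui**3, -5 * z**4 / ui**2
        for xi in real:
            dxdz = -(db * xi + dc) / (3 * xi**2 + b)  # implicit differentiation of (B.8)
            total += f(xi) * g(ui) * abs(dxdz) * du
    return total

# Cross-check against a Monte Carlo estimate of the chirp mass density.
rng = np.random.default_rng(0)
m1 = rng.uniform(M_MIN, M_MAX, 200_000)
m2 = rng.uniform(M_MIN, M_MAX, 200_000)
mc = chirp_mass(m2, m1)
z_test, width = 15.0, 0.5
mc_density = np.mean(np.abs(mc - z_test) < width / 2) / width
print(f"h({z_test}) analytic = {h_of_z(z_test):.4f}, Monte Carlo = {mc_density:.4f}")
```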
Table 1. Coefficients of a third-order polynomial fit to the magnification bias for GWs detectors for different SNR thresholds.
Table 4. Coefficients of a third-order polynomial fit to the evolution bias for SNIa surveys with different limiting magnitudes. As before, we report the mean squared error (MSE) and redshift interval in the final two columns. Note, the first model refers to a population of SNIa of fixed intrinsic luminosity, and yields a result independent of the limiting magnitude. Models δ_1 and δ_2 describe SNIa with intrinsic luminosity evolving with redshift at different rates. | 11,440.8 | 2023-09-08T00:00:00.000 | [
"Physics"
] |
A Dynamical Model of Equatorial Magnetosonic Waves in the Inner Magnetosphere: A Machine Learning Approach
Equatorial magnetosonic (EMS) waves, together with chorus and plasmaspheric hiss, play key roles in the dynamics of energetic electron fluxes in the magnetosphere. Numerical models, developed following a first-principles approach, that are used to study the evolution of high energy electron fluxes are mainly based on quasilinear diffusion. The application of such numerical codes requires statistical models for the distribution of key magnetospheric wave modes to estimate the appropriate diffusion coefficients. These waves are generally statistically modeled as a function of spatial location and geomagnetic indices (e.g., AE, Kp, or Dst). This study presents a novel dynamic spatiotemporal model for EMS wave amplitude, developed using the Nonlinear AutoRegressive Moving Average eXogenous machine learning approach. The EMS wave amplitudes, measured by the Van Allen Probes, are modeled using time lags of the solar wind and geomagnetic indices as inputs, as well as the location at which the measurement is made. The resulting model performance is assessed on a separate Van Allen Probes data set, where the prediction efficiency was found to be 34.0% and the correlation coefficient was 56.9%. With more training and validation data the performance metrics could potentially be improved; however, it is also possible that the EMS wave distribution is affected by stochastic factors and the performance metrics obtained for this model are close to the potential maximum.
There are a number of different approaches used to calculate the diffusion coefficients based on quasilinear theory, which require an estimate of the amplitude of various wave types Summers (2005); Albert (2008). Most of these models are statistical distributions of the wave amplitudes, parameterized by the location of observations and current values of geomagnetic indices Meredith et al. (2001); Glauert and Horne (2005); Pokhotelov et al. (2008); Li et al. (2011); Agapitov et al. (2011); Meredith et al. (2012); Horne, Kersten et al. (2013); Mourenas et al. (2013); Gao et al. (2014); Mourenas et al. (2016). Such a parameterization has an underlying assumption that only the instantaneous activity of geomagnetic indices and solar wind values influences the wave distribution and that the preceding state of the magnetosphere has no role. However, many studies have shown that the electron fluxes at Geostationary Earth Orbit (GEO) are influenced more by changes in the solar wind than by geomagnetic indices Paulikas and Blake (1979); Blake et al. (1997); Reeves et al. (2011); Balikhin et al. (2011); Boynton et al. (2013, 2015), and also that these parameters are temporally lagged with respect to the evolution of the electron fluxes Li et al. (2005); Balikhin et al. (2012); Boynton et al. (2013). Thus, such parameters that are statistically related to the fluences of electrons should also be included in the development of wave models. This motivated the development of the wave models by Aryan et al. (2014), in which the waves were parameterized according to time-delayed observations of the solar wind, and which were subsequently extended to multi-parameter chorus and hiss wave models Aryan et al. (2016, 2017). Boynton et al. (2018) investigated the most significant solar wind and geomagnetic index control parameters for lower band chorus (LBC) waves using the error reduction ratio (ERR) technique. The ERR is able to assess a wide range of nonlinearities from the solar wind and geomagnetic indices and their respective time lags. Boynton et al. (2018) found that the AE index coupled with the solar wind velocity controlled the evolution of the LBC waves throughout most of the inner magnetosphere, especially in the regions where LBC waves are generally observed.
In this study, equatorial magnetosonic (EMS) waves measured by the Van Allen Probes spacecraft are modeled using Nonlinear AutoRegressive Moving Average eXogenous (NARMAX) machine learning techniques.
The EMS waves are whistler mode emissions that propagate almost perpendicular with respect to the external magnetic field and are observed both inside and outside the plasmasphere Russell et al. (1970); Laakso et al. (1990). It is widely accepted that EMS waves are predominantly confined to approximately 3° of the magnetic equator Russell et al. (1970); Laakso et al. (1990);Cornilleau-Wehrlin et al. (2003); Nemec et al. (2005), though, it has been shown that some EMS waves may be observed at higher latitudes Aryan et al. (2019). They are observed between the proton gyrofrequency and the lower hybrid resonance frequency and generated as a result of proton ring distributions formed during magnetic storms at ring current energies of the order of 10 keV Perraut et al. (1982); Boardsen et al. (1992); Chen et al. (2011);Ma et al. (2014); Balikhin et al. (2015). It has been shown that EMS waves are able to interact with electrons through Landau resonance and accelerate electrons to relativistic speeds Horne et al. (2007). Horne et al. (2007) found that the bounce and drift averaged energy diffusion rates for magnetosonic waves are comparable to those for whistler mode chorus. Boardsen et al. (2016) performed a statistical survey of the fast magnetosonic wave mode detected by the Van Allen Probes mission and found that the overall intensity of EMS waves increases with AE index. Mourenas et al. (2013) presented simplified analytical expressions of the pitch angle and momentum quasi-linear diffusion rates of magnetospheric electrons in the presence of fast magnetosonic waves and demonstrated a good precision over a wide energy range between 100 keV and 2 MeV.
The motivation of this paper is to develop the first dynamic spatiotemporal model of the EMS waves. This could potentially be employed in numerical codes that involve finding solutions of the diffusion equations, replacing the statistical wave models with dynamic wave models. More accurate dynamical wave models should increase the accuracy of numerical codes that predict the radiation belt electron fluxes.
Data
The wave data were collected by the Van Allen Probes, which had a highly elliptical orbit with perigee ∼1.1 R E , apogee ∼5.9 R E , an inclination of 10.2°, and period of ∼9 h Mauk et al. (2014). The period considered in this study spanned from January 01, 2013 to December 31, 2017, where the model was trained on data from January 01, 2013 to December 31, 2015 and validated on data from January 01, 2016 to December 31, 2017. The background magnetic field measurements come from the fluxgate magnetometer (FGM), sampling at 64 Hz while the wave data comes from survey data from the WFR instrument, generating a full spectral matrix for 65 quasi-logarithmically spaced frequency channels in the range ∼2 Hz-∼11 kHz every 6 s. These field instruments are part of the Van Allen Probes Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) Kletzing et al. (2013).
EMS waves are observed close to harmonics of the proton gyrofrequency (ω_p) up to the lower hybrid frequency (ω_LH). They exhibit almost linear polarization (ellipticity ϵ < 0.2), propagate perpendicular to the local magnetic field, typically with 88° < θ_Bk < 90°, and have a high magnetic compressibility (δB_‖/B_0), since the wave magnetic field is oriented along the local magnetic field. Occurrences of EMS waves were identified using a similar method to Boardsen et al. (2016). The search criteria used were that the emissions occurred in the frequency range ω_p < ω < ω_LH and that the compressibility δB_‖/B_0 > 0.7. This latter criterion proved better at identifying EMS waves than criteria based on the ellipticity or propagation direction. The search criteria were used to generate mask arrays indicating the occurrence or absence of waves. A blob analysis was then employed to determine the frequency/time limits of the waves. Blob analysis is a technique borrowed from the field of computer vision, which analyses a binary image (an image containing two states/colors) to determine the number and size of continuous areas of one of the image values. A blob analysis was used to determine the areas in the binary spectrogram where the search criteria were met. A further constraint, namely a minimum area for each blob, was set to remove the "noise" due to single pixels or small areas on the binary image that fulfill the search criteria. Finally, a nearest neighbors analysis was performed to ensure that the areas detected by the blob analysis extended in both time and frequency space, which removes any single bad spectra. The output is a list of times of individual spectra and the frequency range in which EMS waves were observed. For each spectrum identified as containing EMS waves, the maximum amplitude and its occurrence frequency were recorded. For spectra with no EMS waves, fill values of 10^-2 pT were recorded, the reason being that it is important to know when EMS waves do and do not occur when training the model. This spectral information was then combined with satellite ephemeris data, where values for the McIlwain L-shell and Roederer L were based on the satellite location and field line mapping using the Olson-Pfitzer quiet time model Olson and Pfitzer (1982). The Olson-Pfitzer model was used as it provides a good model for the average external magnetic field value in comparison to measurements Friedel et al. (2005). This model was also adopted by the Panel for Radiation Belt Environment Modeling for improving space radiation models at the time when the data set was developed and has been used in a number of studies Meredith et al. (2012, 2018); Meredith, Horne, Kersten et al. (2014); Meredith, Horne, Li et al. (2014). The wave amplitudes, together with spatial locations in L-shell, magnetic local time (MLT), and magnetic latitude (MLAT), were then resampled with a resolution of 1 h.
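As a rough illustration of the mask and blob-analysis steps described above (not the authors' pipeline), the sketch below assumes a spectrogram of compressibility values and a boolean in-band mask have already been built from the EMFISIS survey data; the 0.7 threshold follows the text, while the minimum blob area and the simple extent check standing in for the nearest-neighbour analysis are illustrative choices.

```python
# Illustrative mask/blob analysis for EMS wave identification in a spectrogram.
import numpy as np
from scipy import ndimage

def find_ems_events(compressibility, in_band, min_area=8):
    """Return a labelled array of candidate EMS wave 'blobs'.
    compressibility, in_band: 2-D arrays indexed [frequency, time]."""
    mask = (compressibility > 0.7) & in_band           # search criteria from the text
    labels, n_blobs = ndimage.label(mask)               # connected-component (blob) analysis
    # Remove small blobs ("noise" from isolated pixels), the minimum-area cut.
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_blobs + 1))
    for blob_id, size in enumerate(sizes, start=1):
        if size < min_area:
            labels[labels == blob_id] = 0
    # Keep only blobs extending in both time and frequency (drops single bad spectra).
    kept = np.zeros_like(labels)
    for blob_id in np.unique(labels[labels > 0]):
        fs, ts = np.nonzero(labels == blob_id)
        if fs.ptp() > 0 and ts.ptp() > 0:
            kept[labels == blob_id] = blob_id
    return kept

# For each time column containing a kept blob, the maximum amplitude and its
# frequency would then be recorded; columns with no blob get the 10^-2 pT fill value.
```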
Both the solar wind and geomagnetic index data used in this study were taken from the 1-minute OMNIweb data set (https://omniweb.gsfc.nasa.gov/ow_min.html). The solar wind data were from the Advanced Composition Explorer and WIND spacecraft, propagated to the bow shock, and were then averaged to a 1 h resolution. The 1 h time resolution was mainly chosen due to the error in propagation from L1 to the Earth, which increases as the time resolution increases.
Methodology
In this study, NARMAX methodology is employed to model the EMS waves. The NARMAX model was initially proposed by Billings (1985a, 1985b) and is defined by ŷ(t) = F[y(t−1), …, y(t−n_y), u_1(t−1), …, u_1(t−n_u), …, u_m(t−1), …, u_m(t−n_u), e(t−1), …, e(t−n_e)], where an estimate of the output ŷ at time t is a nonlinear function F of past outputs y, inputs u, and residuals e (with e(t) = y(t) − ŷ(t)); m is the number of inputs; and n_y, n_u, and n_e are the respective lags.
The nonlinear function F can be represented by polynomials, rationals, wavelets, among others Billings (2013). In this study, the nonlinear function is set to be a polynomial. When F is expanded to a high degree polynomial there will be many monomials, most of which are very similar to each other and many of which may not influence the system. In a polynomial model, these monomials will consist of linear and nonlinear coupled inputs, outputs, and noise terms of different lags up to the degree of polynomial selected for the model. The Forward Regression Orthogonal Least Squares (FROLS) algorithm is used to find a small subset of monomials from the polynomial (often referred to as the term dictionary) that best represents the system. The FROLS algorithm optimizes the driving parameters and their coefficients by searching for the most influential monomials in the expanded function F using the ERR. In the next step, all other monomials are orthogonalized relative to the selected monomial and the orthogonalized monomial with the highest ERR is selected. This process of selecting the monomial with the highest ERR, orthogonalized to all the previously selected monomials, is repeated until a stopping criterion is satisfied. In this study, the Adjustable Prediction Error Sum of Squares criterion is employed Billings and Wei (2008). The NARMAX procedure then involves the statistical validation of the model using correlation tests Billings and Voon (1986); Billings and Zhu (1989, 1995). These correlations check that the residuals of the model are unrelated to the inputs. If there is a correlation between the residuals and the inputs, the tests will indicate either a biased term within the model that needs to be removed or any nonlinear inputs missing from the model.
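For concreteness, the following is a compact, illustrative version of the FROLS selection loop (not the authors' implementation): candidate monomials are supplied as columns of a matrix, the term with the highest ERR is selected at each step, and the remaining candidates are orthogonalised against it. Estimating the final coefficients by least squares on the selected terms, and the subsequent correlation tests, are omitted here.

```python
# Greedy FROLS term selection by Error Reduction Ratio (simplified sketch).
import numpy as np

def frols(candidates, y, max_terms=10, err_tol=1e-3):
    """candidates: (N, K) matrix, one column per monomial in the term dictionary;
    y: (N,) output. Returns indices of selected terms and their ERR values."""
    Q = candidates.astype(float).copy()       # working (orthogonalised) candidates
    selected, errs = [], []
    sigma = y @ y
    for _ in range(max_terms):
        # ERR of each remaining candidate against the output.
        num = (Q.T @ y) ** 2
        den = np.einsum('ij,ij->j', Q, Q) * sigma
        err = np.where(den > 0, num / den, 0.0)
        err[selected] = 0.0                    # never re-select a chosen term
        best = int(np.argmax(err))
        if err[best] < err_tol:
            break
        selected.append(best)
        errs.append(float(err[best]))
        # Orthogonalise all remaining candidates with respect to the chosen column.
        q = Q[:, best]
        proj = (Q.T @ q) / (q @ q)
        Q = Q - np.outer(q, proj)
        Q[:, best] = q                         # keep the selected column intact
    return selected, errs
```

In the paper the term dictionary would contain polynomial monomials of the lagged inputs up to degree three; here the columns are whatever candidate regressors are supplied.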
The NARMAX FROLS methodology was initially developed for control systems engineering problems but has since been employed in a diverse range of scientific fields. For example, NARMAX has been applied to analyzing the adaptive changes in the photoreceptors of Drosophila flies Friederich et al. (2009) and has been used to model the tide in the Venice Lagoon Wei and Billings (2006). It has also been applied to a number of problems in space physics. The solar wind inputs were the velocity v, density n, dynamic pressure p, and the interplanetary magnetic field (IMF) factor B_f deduced by Boynton, Balikhin, Billings, Wei et al. (2011), where θ = arctan(B_y/B_z). These solar wind and geomagnetic indices were chosen as inputs owing to past statistical wave models. These wave models are often parameterized by the AE index Meredith et al. (2001) and also by solar wind values Aryan et al. (2016). Aryan et al. (2016) also showed an asymmetry in the B_z component between north and south IMF, which is why B_f is chosen as an input, as this variable takes into account the north-south asymmetry due to dayside reconnection. One of the advantages of the NARMAX procedure is that multiple inputs can be used to define the initial model. If any of these inputs has no relation to the output, this input will not be selected in the final model. Since the EMS waves are generated as a result of proton ring distributions formed during magnetic storms at ring current energies Balikhin et al. (2015); Perraut et al. (1982), the SYM-H index, a ring current index and a metric for magnetic storms, was chosen as an input to the model. The lags employed for the solar wind inputs were 0, 1, 2, …, 12, 14, 16, …, 24, 28, 32, …, 48 h; put another way, the lags were every hour between 0 and 12 h, every 2 h between 12 and 24 h, and every 4 h between 24 and 48 h. These solar wind parameters were time shifted to the bow shock and thus a zero time lag can be employed as an input. The geomagnetic indices used were the SYM-H and AE indices, and the time lags employed were 1, 2, 3, …, 12, 14, 16, …, 24, 28, 32, …, 48 h, similar to the solar wind inputs with the exclusion of the zero time lag. The zero time lag was not included for the geomagnetic indices as the model is aiming to be a forecast model. A wide range of lags was chosen for the initial NARMAX model. Again, if any of these time lags does not play a role in the evolution of the EMS waves, then the FROLS algorithm will not select these lags in the final model and the correlation tests in model validation will not indicate missing inputs with these lags.
EMS waves are complicated to model from data, as the distribution of the waves evolves in space as well as time. One solution to this problem is to include the spatial position of the measurement as an input to the model. Therefore, the instantaneous position of the spacecraft at the time of measurement was used as an input. This includes the L-shell L, the sine and cosine of the MLT, and the cosine of the MLAT.
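A hedged sketch of how such a lagged and spatial input matrix could be assembled from the hourly-resampled series is given below. The DataFrame column names are assumptions made for illustration only, and the sixth power of cos(MLAT) anticipates the input choice settled on later in the text.

```python
# Build the lagged solar wind / index inputs and the spatial inputs (illustrative).
import numpy as np
import pandas as pd

SW_LAGS  = list(range(0, 13)) + list(range(14, 25, 2)) + list(range(28, 49, 4))
IDX_LAGS = [lag for lag in SW_LAGS if lag > 0]       # no zero lag for the indices

def build_inputs(df: pd.DataFrame) -> pd.DataFrame:
    """df: hourly-resampled frame with columns 'v', 'n', 'p', 'Bf', 'AE', 'SYMH',
    'L', 'MLT', 'MLAT' (column names assumed for illustration)."""
    cols = {}
    for name in ['v', 'n', 'p', 'Bf']:                # solar wind inputs
        for lag in SW_LAGS:
            cols[f'{name}_t-{lag}'] = df[name].shift(lag)
    for name in ['AE', 'SYMH']:                       # geomagnetic indices
        for lag in IDX_LAGS:
            cols[f'{name}_t-{lag}'] = df[name].shift(lag)
    # Instantaneous spacecraft position at the time of measurement.
    cols['L'] = df['L']
    cols['sin_MLT'] = np.sin(2 * np.pi * df['MLT'] / 24.0)
    cols['cos_MLT'] = np.cos(2 * np.pi * df['MLT'] / 24.0)
    cols['cos6_MLAT'] = np.cos(np.deg2rad(df['MLAT'])) ** 6
    return pd.DataFrame(cols).dropna()
```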
In this study, the nonlinear function F was set to be a third degree polynomial. Initially, a second degree polynomial was trialed, however, during the statistical validation stage of the NARMAX model training, it was evident that there were many missing nonlinearities in the model as the correlation tests for most inputs with the residuals were not satisfied. The correlation tests were satisfied when the polynomial degree was increased to three, after a number of other small adjustments to the model.
For example, the cosine of the MLAT was initially combined up to the cubic power; however, during the statistical validation stage of the NARMAX model training, the correlation tests indicated a missing nonlinear cos(MLAT) term at zero time lag, even though the model had already selected a cube of cos(MLAT). Therefore, cos³(MLAT) was employed as the input for the next iteration of the FROLS algorithm, instead of cos(MLAT), to check whether higher powers of cos(MLAT) were missing from the model. This would allow the model to select up to the ninth power of cos(MLAT). In a subsequent run, cos³(MLAT) was again combined to higher powers in the NARMAX model, indicating that the correlation tests were correct in identifying this missing higher power of the cos(MLAT) input. However, the correlation tests still indicated a missing cos(MLAT) term at zero time lag. Eventually, cos⁶(MLAT) was settled on as the input to the model instead of cos(MLAT), as this satisfied the correlation tests. This is most likely due to EMS waves occurring in a narrow band around the geomagnetic equator, where the intensity decreases sharply further away from the equator Russell et al. (1970). Therefore, the NARMAX model used in this study for the EMS wave amplitude B_w was a third-degree polynomial function of the lagged solar wind and geomagnetic index inputs and of the spatial inputs described above. It should be noted that the autoregressive and moving average terms are excluded from this model, as the spacecraft does not return to exactly the same position on each orbit. In this case, the model reduces to a Volterra series model (consisting only of the nonlinear exogenous input terms).
Model and Performance
The EMS wave model was trained on Van Allen Probe-A data from January 01, 2013 to December 31, 2015 and the resultant model was then validated on separate Van Allen Probe-A data from January 1, 2016 to December 31, 2017, which is just over one apsidal period of the Van Allen Probes giving a full MLT coverage. Here, solar wind and geomagnetic indices from the period were used as inputs to the model, along with the spatial co-ordinates of Probe-A to compute an estimate of the EMS waves along the track of Probe-A.
The performance of the model was evaluated using two metrics, the correlation coefficient (CC) and the prediction efficiency (PE), both computed on the logarithm of the EMS wave amplitude. The CC is defined by Equation 3 and the PE by Equation 4. These performance metrics have been applied in many previous studies for assessing geospace models Baker et al. (1990); Klimas et al. (1996); Temerin and Li (2006). The CC for the validation period was 56.9% and the PE was 34.0%. Figure 1 shows the measured and estimated data that correspond to the periodic transit of the spacecraft in its orbit as it tracks from perigee to apogee, where the peaks in wave amplitude correspond to the perigee and the troughs in wave amplitude correspond to a high L shell. In this 15 day period, there is an interval between October 21 and 22 during which the measured wave amplitude does not exceed 10 pT, followed by an interval on October 25 in which the amplitudes increase to 10^4 pT. These amplitude changes are successfully reproduced by the model. In this period, the lower cut off, when there are no EMS waves measured, is shown by the flat periods at 10^-2 pT. If the model output includes the same on/off cut off of the EMS waves as the measured data, where model predicted EMS wave amplitude below 10^-2 pT is set to 10^-2 pT, the performance of the model improves slightly to a PE of 34.7% and a CC of 57.5%. From the NARMAX EMS wave model, the wave amplitude can be mapped out to the whole inner magnetosphere by using all locations as spatial inputs for each time. From this we will be able to see how the EMS wave intensities evolve in space and time. Figure 2 shows snapshots of this reconstruction from 2 to 7 R_E at an MLAT of 0°, which is also shown as a video supplied in the supporting material. Figure 2 shows the EMS wave amplitude throughout the inner magnetosphere for 8 snapshots in panels (a-h), dating from 0000 UTC on October 24, 2016 to 1800 UTC on October 25, 2016. Panels (i-n) show the solar wind velocity, density, dynamic pressure, IMF factor, AE, and SYM-H input parameters, respectively. The vertical lines in panels (i-m) signify the times of the eight EMS wave snapshots shown in panels (a-h). This figure illustrates how the wave amplitude varies in time and space with changing solar wind and geomagnetic indices. In panel (a), there is low wave amplitude throughout the inner magnetosphere. The wave amplitude then builds through panels (a-d), from 0000 UTC to 1800 UTC October 24, 2016, where it can be seen that the AE index increases to remain over 400 nT 2 h prior to (a), the SYM-H index starts to drop below -20 nT, IMF factor spikes are over 6 nT, and the solar wind density and dynamic pressure increase slightly after (a), while the solar wind velocity remains approximately constant at around 400 km/s. Prior to (d), there is a decrease in the IMF factor and AE index, and an increase in SYM-H, which leads to the reduction in EMS wave amplitude shown in panel (e). After (e), the EMS wave amplitude builds up again to panel (h) at 1800 UTC on October 25, 2016, during which the IMF factor and AE index increase and SYM-H decreases, while the other solar wind variables remain constant until after (f), at which point density, pressure and velocity all increase. From panels (g)-(h) the EMS wave amplitudes increase for L < 4 R_E on the dayside, while decreasing for L > 4 R_E. This coincides with a decrease in the AE index 2-3 h prior.
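Equations 3 and 4 are not reproduced in this excerpt; assuming the standard definitions of the two metrics, a minimal sketch of their evaluation on the logarithm of the wave amplitude, including the 10^-2 pT floor used for the on/off variant, is as follows.

```python
# Prediction efficiency and correlation coefficient on log wave amplitude.
import numpy as np

def prediction_efficiency(y_obs, y_mod):
    """PE = 1 - sum((y_obs - y_mod)^2) / sum((y_obs - mean(y_obs))^2), assumed standard form."""
    return 1.0 - np.sum((y_obs - y_mod) ** 2) / np.sum((y_obs - np.mean(y_obs)) ** 2)

def correlation_coefficient(y_obs, y_mod):
    return np.corrcoef(y_obs, y_mod)[0, 1]

def evaluate(b_obs_pT, b_mod_pT, floor_pT=1e-2):
    """Apply the 10^-2 pT floor to the model output (the on/off cut-off variant),
    then compute both metrics on the logarithm of the amplitudes."""
    b_mod_pT = np.maximum(b_mod_pT, floor_pT)
    y_obs, y_mod = np.log10(b_obs_pT), np.log10(b_mod_pT)
    return prediction_efficiency(y_obs, y_mod), correlation_coefficient(y_obs, y_mod)
```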
Discussion
The aim of this study was to investigate how machine learning can be applied to develop a dynamical model of EMS waves in the inner magnetosphere. The model has been used to map the evolution of the waves in time to help picture how they respond to changes in solar wind and geomagnetic indices, which is difficult to picture from tracks from satellite data or static statistical wave models.
As stated previously, the final EMS wave model was trained on Van Allen Probe-A data from January 01, 2013 to December 31, 2015. Initially, a smaller training set was trialed for 2013 to the end 2014, however, this resulted in the MLT terms not being selected by the FROLS algorithm. This was probably due to the lack of MLT coverage from just 2 years of the Van Allen Probes mission, so that the algorithm did not have enough data to detect any relationship between EMS waves and MLT, since the apsidal period of the Van Allen Probes is just under 2 years. When the data set was expanded to the end of 2015, MLT was selected by the FROLS algorithm. Better results could be obtained if the model could be trained on two apsidal periods, however, this would leave less data for validation.
Using 3 years of data for training still means that there is only a small data set for validating the model from Van Allen Probe-A data. Probe-B could also be used for validation for the entirety of the mission period, since it is in a different location than Probe-A, on which the model was trained. However, the Probe-B data set can only be used if it differs from the Probe-A data set. Probe-B roughly follows the same path as Probe-A, with Probe-B in a slightly slower orbit whose lag relative to Probe-A changes over time. This could mean that the measurements from the two spacecraft may not differ by much on the hourly sampled time scales when the two spacecraft are close to each other, but may differ significantly when the two spacecraft are far away from each other.
To see if the readings of A and B differ depending on the time gap between the two spacecraft, the measurements of A were binned as the spacecraft passed intervals of the L shells for outbound and inbound tracks of the orbit. These intervals were L = 2, 2.25, 2.5, …, 4.5, 4.75, 5. The measurement of the next pass of B at this L shell was then recorded and the time gap between A and B was noted. The wave amplitude measurements of A and B were then correlated as a function of L shell and time gap to see the autocorrelation of the EMS waves in space and time delay. Figure 3 shows the surface plot of the correlations between the A and B measurements of EMS waves, with the time delay between the measurements of A and B on the x axis, the L shell on the y axis, and the correlation between the A and B measurements of EMS waves. The figure shows a drop off in correlation after a 1.5 h delay, which means that the measurements from A and B differ significantly enough to use both during periods where the spacecraft are separated by at least 1.5 h. Therefore, in future studies, data sets from A and B could be stitched together for training and validation of the model, potentially leading to models with increased performance.
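A possible implementation of this pairing-and-binning analysis is sketched below. The column names, the assumption of time-sorted data frames, and the gap bins are choices made for illustration; only the L-shell intervals follow the text.

```python
# Correlate Probe-A and Probe-B wave amplitudes as a function of L shell and A-B time gap.
import numpy as np
import pandas as pd

L_EDGES = np.arange(2.0, 5.25, 0.25)        # L = 2, 2.25, ..., 5

def ab_autocorrelation(df_a: pd.DataFrame, df_b: pd.DataFrame,
                       gap_edges=np.arange(0.5, 6.5, 0.5)) -> pd.DataFrame:
    """df_a / df_b: time-indexed, time-sorted frames with columns 'L' and 'Bw'."""
    rows = []
    for lo, hi in zip(L_EDGES[:-1], L_EDGES[1:]):
        a = df_a[(df_a.L >= lo) & (df_a.L < hi)]
        b = df_b[(df_b.L >= lo) & (df_b.L < hi)]
        for _, rec in a.iterrows():
            later = b[b.index > rec.name]
            if later.empty:
                continue
            nxt = later.iloc[0]              # next Probe-B pass of this L bin
            gap = (nxt.name - rec.name).total_seconds() / 3600.0
            rows.append((0.5 * (lo + hi), gap, np.log10(rec.Bw), np.log10(nxt.Bw)))
    pairs = pd.DataFrame(rows, columns=['L', 'gap', 'logBw_A', 'logBw_B'])
    pairs['gap_bin'] = pd.cut(pairs.gap, bins=gap_edges)
    return (pairs.groupby(['L', 'gap_bin'])
                 .apply(lambda g: g.logBw_A.corr(g.logBw_B))
                 .rename('correlation').reset_index())
```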
It is also possible that even with more training data covering a wider spatial range the model performance may not increase. This would mean that the EMS wave distribution throughout the inner magnetosphere is not deterministic but are also largely impacted by stochastic factors.
The system may also have some highly nonlinear features that the polynomial NARMAX model struggles to capture. In this case, a different model structure could be trialed to see if there is a significant performance increase. One structure to try would be a rational NARMAX model Zhu and Billings (1993), which is defined as the ratio of two polynomial NARMAX models. Another option is a wavelet-NARMAX model Billings and Wei (2005), where a polynomial NARMAX model is used to characterize any smoothly varying trends and a wavelet model is used to characterize any rapid dynamics. The EMS wave model developed in this study could be integrated with numerical models of the radiation belts, such as CIMI Fok et al. (2014), the BAS-RBM Horne et al., and others. Instead of calculating diffusion coefficients from the static statistical wave models, the diffusion coefficients could be calculated from the dynamical NARMAX EMS wave model. One issue is that many numerical codes run in adiabatic invariant space. As such, since the current model is in real space, a co-ordinate transform from real to adiabatic co-ordinates will be required for some models.
One of the main advantages of NARMAX methodologies over other machine learning techniques is that the models are interpretable: it is possible to inspect the terms or, in this case, the monomials that make up the polynomial NARMAX model. The monomials are selected one at a time by the FROLS algorithm using the ERR, where, at each step, the monomial with the highest contribution to the output variance (ERR) is selected as the model term. When inspecting the EMS wave model, the top terms in the model will have a greater contribution to the evolution of the waves than those at the bottom end of the model. The top terms here are the spatial locations, with L and cos⁶(MLAT) coupled with the AE index making up the first and second terms of the model. This is understandable, as it can be seen in Figure 1 how the wave amplitude changes as the Van Allen Probes track through the orbit with changing L. Also, since EMS waves are mainly observed within ∼20° of the magnetic equator Aryan et al. (2019), MLAT should be an important factor, and the fact that it is coupled to the AE index could mean that larger disturbances in AE may lead to EMS waves occurring further away from the equator. The most influential non-spatial input parameter is the AE index, which was also reported to be the main control parameter of the lower band chorus waves by Boynton et al. (2019).
Figure 3. Autocorrelation of the equatorial magnetosonic (EMS) waves from Van Allen Probe-A and Probe-B measurements, with the time delay between the measurements of A and B on the x axis, the L shell on the y axis, and the correlation between the A and B measurements of EMS waves.
Conclusions
The NARMAX machine learning technique has been applied to develop a dynamical spatiotemporal model of the EMS waves in the inner magnetosphere. The model was then used to map out the dynamical EMS wave response across the inner magnetosphere to changing solar wind conditions.
The performance of the model has been tested on Van Allen Probes data, which resulted in a PE of 34.0% and CC of 56.9%. Compared to other signals in space weather, such as modelling the Dst index or electron fluxes Wei et al. (2004); Boynton, Balikhin, Billings, Sharma et al. (2011);Boynton et al. (2015), the EMS wave model performance metrics are not as high. With more training and validation data, which cover a wider spatial range, the performance of a data based machine learning model could potentially be improved. This could involve stitching together various datasets from various past and present missions. However, it is also possible that the EMS wave distribution is more affected by stochastic factors than other signals and thus the performance metrics obtained by the model deduced in this study are close to the potential maximum.
Data Availability Statement
The Van Allen Probes EMFISIS data used in this study were obtained from https://emfisis.physics.uiowa. edu/data/index. The solar wind and geomagnetic index data were from OMNIweb (https://omniweb.gsfc. nasa.gov/ow_min.html). The output of the model developed during this study will be made available at http://www.ssg.group.shef.ac.uk/USSW/UOSSW.html.
Acknowledgments
The work was performed within the project RadSat and has received financial support from the UK NERC under grant NE/P017061/1. MB is grateful to ISSI for supporting the international team on "Complex Systems Perspectives Pertaining to the Research of the Near-Earth Electromagnetic Environment." H. Aryan is grateful for RBSP-ECT and EMFISIS funding provided by JHU/ APL Contract 967399 and 921647 under NASA's Prime Contract NAS5-01072. | 7,082.6 | 2021-06-01T00:00:00.000 | [
"Physics",
"Computer Science",
"Environmental Science"
] |
Smart Mobility : Services , Platforms and Ecosystems
Smart mobility is booming and comprises an important part of the development of smart cities. City bikes are already widely used in many cities and new types of vehicles, such as scooters, are entering the market. This opens new niche markets for vehicle fleet operation and maintenance, and creates challenges for effective services, due to the existence of expanding heterogeneous vehicle fleets located in large geographical areas, and the inclusion of new types of vehicles with operation and maintenance requirements.
Introduction
Smart mobility is booming and comprises an important part of the development of smart cities.City bikes are already widely used in many cities and new types of vehicles, such as scooters, are entering the market.This opens new niche markets for vehicle fleet operation and maintenance, and creates challenges for effective services, due to the existence of expanding heterogeneous vehicle fleets located in large geographical areas, and the inclusion of new types of vehicles with operation and maintenance requirements.
The market for smart mobility is increasing rapidly.Mobility as a service (MaaS) (Docherty, Marsden & Anable, 2018) is becoming an increasingly popular model that is changing transportation chains for people in many cities (Loidl, Witzmann-Müller & Zagel, 2019).This transportation disruption creates a new niche market for commercial actors operating and maintaining the whole vehicle fleet on behalf of a city.Currently, there are several different types of companies trying to gain a share in this market, starting from local small maintenance providers and ending with global technology companies that provide smart mobility systems, such as city bike systems.The winner in the new growing market will be the player who is capable of managing service operations at lower cost, while at the same time ensuring a sufficiently high quality for a fast-growing vehicle fleet (Pulkkinen, Jussila, Partanen & Trotskii, 2019).Therefore, utilizing data to fulfil the above-mentioned requirements will be one critical success factor in achieving a significant market share.In current-day practice, there are several challenges in operating and managing an expanding vehicle fleet.Small local companies cannot easily expand to other cities, and big technology companies that deliver one type of smart mobility system, may encounter problems in operating and managing other vendors' systems, among other examples.
We draw on service-dominant (S-D) logic, defined as dynamic, continuing value co-creation through resource integration and service exchange, which has been constructed by an increasingly large number of academics (Lusch, Vargo & Fisher, 2014; Vargo & Lusch, 2017). "Resources", in this context, refers to broad knowledge and skills. S-D logic is moving toward a general theory of marketing and requires more midrange theoretical frameworks and practical use cases. One S-D logic diffusion is ecosystem services (Vargo & Lusch, 2017). Business ecosystem development has been the focus of many researchers over recent decades (Järvi & Kortelainen, 2017). The development of information and communications technology (ICT) has come to be seen as a cornerstone of the digital business ecosystem (DBE), in which a digital platform is used to create knowledge from data, and this knowledge is then utilized in business ecosystems (BEs) value co-creation (Senyo, Liu & Effah, 2019).
In this study, we provide novel insight into building and managing growth in a new emerging market: the operation and maintenance of a heterogeneous and expanding vehicle fleet in a smart city environment. There are several different types of players in this emerging market and a dominant player is still missing. Based on our empirical findings, we identified three key characteristics of a growing business and the ability to reach a leading position: 1) co-creation through resource integration and service exchange is preferable for responding to market demands; 2) a digital platform is critical to create the necessary knowledge for resource integration and service exchange; and 3) smart services glue the ecosystem and platform together and create the outcome that solves the defined business problem. Most importantly, all three elements - ecosystem, platform and smart services - create a uniform environment in which to grow the business in a new emerging market.
If everyone is moving forward together, then success takes care of itself.
- Henry Ford, Founder of Ford Motor Company
The ecosystem can either expand or shrink, depending on the decision-making and behaviour of all of the individual actors belonging to the ecosystem. Therefore, ecosystem dynamics has been the focus of several researchers (Senyo et al., 2019; Tsujimoto, Kajikawa, Tomita & Matsumoto, 2018).
In the literature, there is a research gap relating to the operation and maintenance of smart mobility systems, due to the previous focus on developing ecosystems that connect users to smart mobility systems (Docherty et al., 2018;Faber, Rehm, Hernandez-Mendez & Matthes, 2018;Ji, Cherry, Han & Jordan, 2014;Loidl et al., 2019).On the other hand, empirical evidence on how to manage knowledge and the stability of BEs remains limited (Jacobides, Cennamo & Gawer, 2018).
Accordingly, this paper aims to address service development for commercial actors operating and maintaining vehicle fleets and to consider how the ecosystem should be utilized to gain a significant share in the new growing market.Building on existing research, we aim to answer the following research question: "What is the most effective way to build and manage a sustainable and expanding ecosystem for vehicle fleet operation and maintenance?"An ecosystem is presented in Fig. 1.The paper explores service business development that utilizes an ecosystem to improve the business' competitive position in a new market.Service business development creates artefacts to maintain and operate the expanding and heterogeneous vehicle fleet, in a way that enables the commercial actor to gain a significant market share.The results also present new practical guidelines generally to ecosystem actors for the creation of an expanding ecosystem, and some specific practical learning in this specific new market.As well, the results present one practical use case that more effectively bridges practice and theory in the area of ecosystem services, as one subdiscipline in S-D logic.
Smart Mobility
The development of a sharing economy is reshaping many markets (Acquier, Carbone & Massé, 2019). New business models that utilize a sharing economy model in the mobility market have gained momentum in many cities (Hamari, Sjöklint & Ukkonen, 2016; Yin, Qian & Shen, 2018). One example of a sharing economy model is the so-called MaaS model, in which consumers do not own vehicles, but are provided with transport through a service provider. A bike-sharing system is another example of a sharing economy, and this system has experienced a tremendous boom in many cities (Loidl et al., 2019). Additional mobility solutions are expected to penetrate into markets. Already, the first shared electric scooters are in commercial use and drones have been piloted for package transport in Finland. Consequently, we can state that the market for different mobility solutions using a sharing economy model is growing rapidly.
Naturally, this opens up new business opportunities.One such opportunity is vehicle fleet operation and maintenance, since companies providing transport services do not always have the capability to operate and maintain their vehicle fleet effectively.The core competence for the transport service providers is different from that required for the technical operation and maintenance of the fleet.Consequently, there are companies focusing on this new niche market, providing vehicle fleet operation and maintenance.In the literature, there are several studies on managing mobility as part of a sharing economy for mobility users, along with methods of mobility system planning (Docherty et al., 2018;Faber et al., 2018;Ji et al., 2014;Loidl et al., 2019).On the other hand, there is a gap in the literature relating to the successful operation and maintenance of mobility systems.The operation and maintenance of a vehicle fleet fits well into the platform economy model, because there are consumers for the maintenance (vehicles) and providers of the maintenance (maintenance personnel).
Digital Business Ecosystem
The origin of the term "ecosystem" comes from biology.Tsujimoto et al. (2018) define it as "a biological system composed of all the organisms found in a particular physical environment, interacting with it and each other".Later it was also applied in business, in which a "business ecosystem" is an economic community of individuals and organizations, operating outside of their traditional industry boundaries (Moore, 1993;Senyo et al., 2019).According to Iansiti and Levien (2004), the three critical success factors in a business ecosystem are productivity, robustness and the ability to create niches and opportunities for new firms (Korpela, Kuusiholma, Taipale & Hallikas, 2013).
Due to the emergence and exploitation of digital technology, new ways of co-creating value in the BE have been created.This has led to the development of the "digital business ecosystem" (DBE) concept, which combines the two main tiers of digital platforms and BEs, and is defined as a "sociotechnical environment of individuals, organizations and digital technologies with collaborative and comparative relationships, aiming to co-create value through shared digital platforms" (Senyo et al., 2019).
DBEs consist of different individuals and organizations known as actors.Typically, there are different types of actors with different goals for an ecosystem, and their different decision-making and behavioural principles are important for creating a sustainable DBE.This behaviour creates the dynamic of a DBE, which leads to either expansion or shrinkage of the system (Tsujimoto et al., 2018).If one actor has a strategic intention to design the whole ecosystem, then this actor is called the designing actor.The designing actor can manage the ecosystem strategically if the person or organization understands and manages the behaviour of the ecosystem dynamics (Tsujimoto et al., 2018).
The sustainability of restricted DBEs has been the focus of many studies.When the designing actors have tight control over other actors, the leader-follower relationship weakens the sustainability of the ecosystem (Joo & Shin, 2018).On the other hand, there are different target goals for different actors, such as those relating to shared costs, shared risks, increases in flexibility, etc. (Graça & Camarinha-Matos, 2017).In a collaborative DBE, where the behaviour and decisionmaking of different actors fits naturally into a common goal, the ecosystem behaviour is coherent, supporting the sustainability and expansion of the system (Tsujimoto et al., 2018).The customers in this ecosystem are also actors and their participation varies according to business characteristics (Joo & Shin, 2018).
The digital platform is used to create and share knowledge in the ecosystem. The interoperability of data and knowledge among all actors is therefore critical for a sustainable DBE (Figay, Ghodous, Khalfallah & Barhamgi, 2012; Selma et al., 2012; Vernadat, 2009). Finally, we can state that a sustainable and expanding DBE must have a collaboration target as a common output of the whole ecosystem, and the individual behaviour and decision-making of all actors must be aligned with that collaboration target. The designing actor has a leading role in managing the dynamics of the ecosystem to ensure that the behaviour of all actors supports a coherent ecosystem.
Service Ecosystem S-D logic represents dynamic value co-creation through resource integration and service exchange.This has attracted a great deal of attention from a large number of academics from various disciplines (Vargo & Lusch, 2017)."Resources" refers to the broad knowledge and skills used to create a benefit.This conceptualization of a service using S-D logic reflects resource integration for the creation of value through a network with a common purpose, rather than only connections of resources, people or product flows.This approach shifts the focus from the firm-centric production of outputs, to activities and processes in which different actors' resources are integrated to reach their common collaboration target (Ketonen-Oksi & Valkokari, 2019;Wieland, Hartmann & Vargo, 2017).
The development of ICT enables new models for cooperation in the design, production, delivery and consumption of services (Anttiroiko, Valkama & Bailey, 2014).Organizations are increasingly digitally connected to each other, leading to the construction of digital ecosystems that form the basis for new service ecosystems and BEs (Nachira, Dini & Nicolai, 2007).
One area of smart services concerns mediating the roles between providers and end customers of a service ecosystem.Digitally connected resources and organizations boost this development (Alt, Demirkan, Ehmke, Moen & Winter, 2019).
The conceptual exploration of service ecosystems has just begun, and evidence-gathering and real applications are needed to deepen the understanding of these ecosystems in different circumstances (Vargo & Lusch, 2017).
Methodology
This research aims to develop new services by creating artefacts as a solution to an unsolved business problem.
As well, it uses the developed new services as the use case in a coherent ecosystem, to define the guidelines needed to create a sustainable and growing ecosystem.The relevance and importance of the business problem is opportune, as smart mobility is currently booming in Finland.Bike-sharing systems are already in use in the capital region (Helsinki), and in the cities of Turku and Kuopio.A new, full-scale system is coming to the city of Oulu.A bike-sharing pilot scheme for research purposes is starting in the city of Hämeenlinna (Ruohomaa & Salminen, 2019).Additionally, an electric scooter-sharing system is already in use in Helsinki.On the other hand, all sharing systems require certain infrastructure, such as stations and ticket machines, to run the overall system.All of the vehicles and the infrastructure must be maintained, creating a growing market for the operation and maintenance of these systems.
In order to determine the answer to the research question, a more detailed objective was defined.This was done in collaboration with commercial actors operating and maintaining a vehicle fleet of more than 2,000 vehicles and other infrastructure equipment in 19 cities.The objective was to find a new way of providing services that is more efficient, easy to expand, fulfils defined quality requirements, and utilizes knowledge created by a digital platform.A data strategy framework (Pulkkinen et al., 2019) was rigorously utilized to develop a new way of providing services.The development of new services resulted in three new artefacts: a Smart Mobility Ecosystem (SME), a Smart Mobility Platform (SMP) and Smart Mobility Services (SMS) as described in Fig. 2. SMP functionalities were subjected to technical testing and a few iterations were made to develop the final functionalities.This process resulted in the structure of the SMP presented in Fig. 3, and in general requirements for the SMP.Following a design science methodology (Peffers et al., 2007), we also present a more general theoretical framework for smart services in a DBE environment.
Evaluation
The evaluation was performed to ensure that the developed artefacts fulfil the defined objective, which was to develop a new way of providing services that is more efficient, easy to expand and fulfils defined quality requirements.In practice, the evaluation was performed in workshops, where researchers and personnel from a commercial company verified the services.Personnel from the company have extensive experience in providing operation and maintenance services for different types of vehicle systems.A new method of providing services utilizing the SMP was also tested by researchers and maintenance personnel, using the idea of a "proof of concept" in the field.Additionally, the roadmap to create the SME was rigorously discussed and analysed by the senior management of the commercial actors.
Results
The development of new services that fulfil the defined objective above, resulted in three new artefacts: the SME, SMP and SMS.The next objective was to find a new, more efficient and easy way to expand provision of services that fulfil the defined quality requirements, and which utilize knowledge created by the SMP.
Smart Mobility Ecosystem
A sustainable and expanding SME requires a leading company, which is the designing actor in a bounded or restricted ecosystem.The designing actor needs to create a coherent ecosystem in which the behaviour of all actors is aligned with their collaboration target.In this case, the designing actor is a commercial actor operating and maintaining the vehicle fleet.Other actors include the customer, the local maintenance providers, the vehicle system suppliers, and the consumers.
For the creation of a coherent SME by the designing actor, the following design criteria must be fulfilled by each actor.
The customer is typically the city owning the vehicle system.The customer defines the service scope and quality requirements in the service level agreement (SLA), and selects an operation and maintenance provider through competitive bidding.Accordingly, the customer's requirement with regard to the collaboration target is for services which fulfil the SLA requirements at the lowest price.
The designing actor must have the capability to operate and maintain a heterogeneous vehicle fleet over a geographically large area.The service scope and quality requirements are defined in the SLA separately by each customer and will vary, which increases the complexity.
The collaboration targets services that fulfil the SLA requirements, at a cost level that represents the lowest price in competitive bidding, repeatedly, in order to gain a significant market share.
The local maintenance providers typically have first priority to directly service their customer, who will typically be in the same city.They may have some advantages such as local knowledge and relationships, but their capability to scale up operations to new types of vehicle, and especially to expand to new locations is limited.This makes their chances of growing to become a major player relatively small.Should they lose the contract to the designing actor, they may become a subcontractor to the designing actor and in this way, they become a member of the ecosystem and therefore the SMP is available to support their local operation.
The basis of a coherent SME is a service operation that is effective enough to be competitive with regard to cost while simultaneously fulfilling the SLA's scope and quality requirements, with the help of the SMP.This defines the high technical requirements for an SMP and is a mandatory requirement for creating a coherent SME.If the SLA requirement is not fulfilled via costcompetitive service operation in the SME with the help of the SMP, the local maintenance provider has a good chance of making a contract directly with the customer, without the SME.The local maintenance provider's collaboration target is to provide local services as defined in the SLA by utilizing knowledge created by the SMP.
The vehicle system supplier must ensure the integration of the vehicle system and the SMP.It is important to have all the required data in the SMP, in order to manage the service operation and maintenance, as defined above.The vehicle system supplier's collaboration target is to provide an application program interface (API), that includes all of the data needed to optimize the operation and maintenance available to be connected to the SMP.
The consumers use the vehicle system, and the quality of the system represents their expectations with regard to the collaboration targets. Vehicle systems that do not fulfil their quality requirements will have a low usage level, and ultimately, the SME will decline. In practice, this means that all vehicles must be in good condition to drive, available when needed by consumers, and that there must be a place to easily leave vehicles after usage. Accordingly, the consumers' quality requirements need to be included in sufficient detail in the SLA requirements, which is the basis of the collaboration target for the SME.
Smart Mobility Platform
The SMP must provide a common application for the operation and maintenance of the vehicle fleet, as shown in Fig. 3.It defines the interactions of various actors within the SME and provides common interfaces for interacting in a well-defined way with the platform.The SMP must be capable of extending its functionality, as well as being easily integrated with other applications.
Based on the SMP's characteristics and functionality, general requirements were formulated.The provided requirements do not cover all the possible scenarios or required features, but they provide a solid guideline to follow, which considerably improves the probability of creating a coherent SME.On the other side, it also simplifies the design and life-cycle management of the systems.
1. The SMP must be highly modular. This means that different parts of the platform must be completely independent of each other. For example, the user interface for maintenance personnel must be independent of the platform's core logic. This requirement guarantees the ease of deployment and integration of the system, and increases its adaptability.
2. The SMP must provide well-documented and consistent APIs, both for integrating different internal modules into a single application and for connecting to external services. This requirement is essentially an extension of the first requirement.
3. The SMP must provide methods for automatic task generation, management and assignment to proper maintenance teams, based on the current state of the vehicle fleet (a minimal illustrative sketch of this requirement follows this list). This requirement drastically increases the efficiency of operation and maintenance, and helps to ensure that the provided services fulfil the SLA requirements. It relies heavily on data collected from the vehicles and requirements defined by the SLA.
4. The SMP must provide all relevant information about maintenance tasks and the state of the vehicle fleet, and should be utilized for reporting.
5. The SMP must provide dedicated user interfaces for different actors.
6. The SMP must define and provide a common interface for easy extension of the platform to cover new types of vehicles.
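The sketch below is a purely illustrative reading of requirement 3: it maps a snapshot of the fleet state plus a handful of SLA rules to maintenance tasks. The entities, fields, and thresholds are assumptions, not part of the SMP described in the paper.

```python
# Illustrative automatic task generation from fleet state and SLA rules.
from dataclasses import dataclass
from typing import List

@dataclass
class Vehicle:
    vehicle_id: str
    city: str
    battery_pct: float
    fault_codes: List[str]
    days_since_inspection: int

@dataclass
class SLARules:
    min_battery_pct: float = 20.0            # assumed threshold, not from the paper
    max_days_between_inspections: int = 30   # assumed threshold, not from the paper

@dataclass
class MaintenanceTask:
    vehicle_id: str
    city: str
    task_type: str

def generate_tasks(fleet: List[Vehicle], sla: SLARules) -> List[MaintenanceTask]:
    """Translate the current fleet state plus SLA rules into maintenance tasks,
    which the platform would then assign to the local maintenance team of each city."""
    tasks = []
    for v in fleet:
        if v.battery_pct < sla.min_battery_pct:
            tasks.append(MaintenanceTask(v.vehicle_id, v.city, "recharge"))
        if v.fault_codes:
            tasks.append(MaintenanceTask(v.vehicle_id, v.city, "repair"))
        if v.days_since_inspection > sla.max_days_between_inspections:
            tasks.append(MaintenanceTask(v.vehicle_id, v.city, "inspection"))
    return tasks
```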
Smart Mobility Services
The anchoring point in answering our research question, "What is the most effective way to build and manage a sustainable and expanding ecosystem for vehicle fleet operation and maintenance?", relates to the SMS. In this new way of providing services, the actors' resources are applied so that the ecosystem is successful, expanding, and sustainable in the long run. This is called a coherent ecosystem. In practice, it means that the smart services outcome fulfils the collaboration targets by utilizing the unique knowledge created by the platform, as described in Fig. 4. Accordingly, smart services can be defined such that the collaboration targets are reached through co-creation with ecosystem actors utilizing the knowledge created by the platform.
Discussion
The smart mobility market is a typical example of a relatively young industry which is growing fast. This means that there are relatively young companies in the industry, and that many are still in the start-up phase, waiting to penetrate the market. The leading vehicle system suppliers are not focusing on service business, but are instead giving full attention to the delivery of their smart mobility systems as a response to the growing demand. The service market is still very young and there is not yet any dominant player in the global market.
Based on our research, the biggest challenge in the creation of an SME is the integration of different vehicles on the SMP. This means that the vehicle system suppliers do not have API interfaces with all of the necessary data for vehicle operation and maintenance. In fact, they often have API interfaces, but these interfaces support only the connection of consumers to the system. Without an SMP that can connect and integrate data from different vehicle system suppliers, it will remain difficult to develop a robust (Iansiti & Levien, 2004) and sustainable (Figay et al., 2012) SME.
The further development of this industry will be interesting. The importance of effective operation and maintenance will grow as the industry becomes more mature, and, undoubtedly, the vehicle system suppliers' interest in new services will increase in the future, as has been the case in many other industries.
Obviously, the player who creates the first SMP and coherent ecosystem on top of the platform will come to occupy the dominant position in this fast-growing service market. A real roadmap to building an SME is still missing, because the vehicle system suppliers' data is mandatory, and their interest in opening this type of interface is not yet in place. How this will be solved is not yet clear, but it is likely that cities, just like customers, could have some role to play, by requiring these interfaces early in the bidding phase of new smart mobility systems. In this way, the vehicle system suppliers would need to open their interfaces to the community of users.
Smart services as defined in this paper represent a dynamic, continuing narrative of value co-creation through resource integration and service exchange. Accordingly, this paper provides one use case using S-D logic in a real-world scenario. Our research has presented concrete guidelines to create a coherent ecosystem in one specific industry and identified practical problems that need to be solved in order to obtain a major benefit from the ecosystem.
Conclusion
A new approach to services was presented with regard to the operation and maintenance of a heterogeneous and expanding vehicle fleet in the new, booming smart mobility market. The new method of providing services consists of an SME and an SMP, connected through SMS to co-create value for ecosystem customers.
A coherent SME, where the decision-making and behaviour of all actors support the common collaboration targets, is sustainable and can expand. The ecosystem is restricted, and one actor in particular, the commercial actor operating and maintaining the whole vehicle fleet, has a strategic intention to ensure its development. Other actors in the ecosystem comprise the local maintenance providers, vehicle system suppliers, customers, and consumers. In this article, the requirements were presented from each actor's point of view in order to create a coherent, sustainable and expanding ecosystem in the smart mobility environment. Empirical evidence from one case was presented regarding how designing actors can manage knowledge creation and sustainability of the ecosystem. Another result was the identification of technical requirements for the SMP as a basis for the sustainable and expanding ecosystem, thus ensuring a coherent ecosystem from the platform perspective.
Technical integration of different types of vehicles
Figure 2. The Artifacts: Smart Mobility Ecosystem, Smart Mobility Services and Smart Mobility Platform
Figure 3. Smart Services needs to reach the collaboration targets by utilizing the knowledge created from data
Figure 4. Smart Services needs to reach the collaboration targets by utilizing the knowledge created from data
| 5,973.8 | 2019-09-26T00:00:00.000 | ["Engineering", "Environmental Science", "Computer Science"] |
Robust localized zero-energy modes from locally embedded PT-symmetric defects
We demonstrate the creation of robust localized zero-energy states that are induced into topologically trivial systems by insertion of a PT-symmetric defect with local gain and loss. A pair of robust localized states induced by the defect turns into zero-energy modes when the gain-loss contrast exceeds a threshold, at which the defect states encounter an exceptional point. Our approach can be used to obtain robust lasing or perfectly absorbing modes in any part of the system.
The widespread study of topological systems originated from the classification of Hermitian topological systems [15]. Currently, nontrivial extensions of closed topological systems to their open counterparts attract increasing interest, connecting this area to non-Hermitian concepts such as parity-time (PT) symmetry [16], and resulting in novel topological applications and phenomena such as topological mode selection and lasing [17][18][19][20][21][22].
Whilst not completely settled, an understanding of such genuinely non-Hermitian topological effects is also emerging [23][24][25][26][27][28][29][30][31][32][33], where one has to account for a much larger range of possible symmetries and resulting universality classes [34][35][36]. A major complication in these endeavors is the break-down of the conventional bulk-boundary principle. In particular, a range of studies have identified non-Hermitian degeneracies known as exceptional points (EPs) as a mechanism to create robust defect states, even when starting from systems that are trivial in their Hermitian limit [37][38][39][40].
These observations suggest that the boundary at the interface of two non-Hermitian systems can be enough to induce a topological transition, even when the coupling configuration in the bulk (which completely determines the topological phase in Hermitian systems) does not change. Nonetheless, so far, most studies of this phenomenon still utilized systems that either already possessed topological states in the Hermitian limit [17,18,34,[41][42][43][44][45], or altered the coupling configuration in some suitable way [37-40, 46, 47].
In this work, we demonstrate that a PT-symmetric defect embedded into the topologically trivial phase of a Hermitian system is indeed sufficient to create localized symmetry-protected defect states. The creation of these states is again manifested by an EP, and the states reside in the band gap, as desired for many applications [48,49]. In particular, single-site non-Hermitian defects are good candidates for designing conventional photonic crystal lasers [50][51][52][53], and a wide range of other applications such as strain field traps [54] and strong photon localization [55].
Utilizing a PT-symmetric defect has a range of additional benefits. The PT symmetry facilitates the emergence of exceptional points, which have been used to control lasing emission [56], enhance sensing [57,58], create coherent perfect absorption [59], and can induce directional transport [60] and conical diffraction [61]. However, the relation of these effects to topological transitions has not been addressed in these studies. Given that we demonstrate the appearance of the defect states without any change of the coupling configuration, our study paves the path to create robust localized states on demand and at any part of the lattice. This widens the scope for practical applications in quantum sensing, topological memories, and topological lasing, where one might desire to create or eliminate a robust localized zero mode at any location within a given structure.
With such practical applications in mind, we demonstrate the creation of these states for a specific structure of experimental interest, namely, a periodic dimer chain that in the passive case corresponds to a Su-Schrieffer-Heeger (SSH) chain [62] in its trivial coupling configuration. In its topologically nontrivial counterpart configuration, topological lasing utilizing edges or interfaces has been demonstrated in a number of studies [19][20][21][22]. In particular, in Ref. [19], the lasing of an SSH edge state was facilitated by pumping the system only at the edge. In contrast, we create a localized defect mode suitable for lasing inside the trivial phase, by only utilizing the non-Hermiticity induced by gain and loss. This long-living state is pinned to the centre of the band gap, and, as is typical for states emerging in EPs, is accompanied by a second robust mode of shorter life time.
Model.-We consider a non-Hermitian one-dimensional dimer lattice with periodic boundary conditions, representing, e.g., evanescently coupled microdisk resonators [63], as shown in Fig. 1. While non-Hermiticity can be obtained in different ways, we consider the case where the real part of the resonance frequency of the coupled modes is ω_0, while the imaginary part γ (with γ > 0 representing gain and γ < 0 representing loss) in each dimer unit cell is antisymmetric. The coupled-mode equations that describe the dynamics in this lattice are given by
i dψ_n/dt = (ω_0 + iγ) ψ_n + k φ_n + c φ_{n−1},
i dφ_n/dt = (ω_0 − iγ) φ_n + k ψ_n + c ψ_{n+1},
where ψ_n and φ_n are the modal field amplitudes in the nth gain and loss disk. We assume that the intra-dimer couplings k and inter-dimer coupling c between the adjacent disks are real and fulfill k > c, and without loss of generality set ω_0 = 0. Periodic boundary conditions are obtained by requiring Ψ_{N+n} ≡ (ψ_{N+n}, φ_{N+n})^T = Ψ_n ≡ (ψ_n, φ_n)^T, with N being the total number of dimers in the lattice. The Hermitian system corresponds to a periodic variant of the celebrated SSH chain [62]. This system possesses a chiral symmetry, which guarantees that the real energy spectrum is symmetric about E = 0, as well as separate parity and time-reversal symmetries. In the non-Hermitian case these symmetries are broken, but the balanced gain and loss makes the system PT-symmetric [16], i.e., the combination of parity and time-reversal still holds. As a consequence, the complex resonance-energy spectrum is symmetric with respect to the axis Im E = 0, where the occurrence of pairs of complex-conjugated energies signifies the so-called PT-broken phase. Furthermore, instead of the chiral symmetry the non-Hermitian system displays a non-Hermitian charge-conjugation or particle-hole symmetry [17,38,39], i.e., the combination of the chiral symmetry with the time-reversal symmetry, which guarantees that the resonance-energy spectrum is symmetric with respect to the axis Re E = 0. Notably, this permits the existence of unpaired modes with Re E = 0, which is the key feature that we will exploit in the following.
FIG. 1: (a) Hermitian dimer lattice with a coupling defect. Resonators in the same dimer are coupled by the intra-dimer coupling strength k, while the inter-dimer coupling strength c is assumed to fulfill c < k. The defect is created by changing the intra-dimer coupling strength in a single dimer, denoted as n = 1, from k to d. In this system, the defect modes are always hybridized, and never become zero modes. (b) PT-symmetric version of the lattice of panel (a), where the orange (green) circles depict resonators with the gain (loss) of strength γ. We show that this lattice can support localized zero modes, which emerge through an exceptional point when d is sufficiently small and γ sufficiently large. (c) Modified set-up in which the PT-symmetric defect is embedded into the Hermitian system. This lattice can support the same type of zero modes as the model in panel (b), which demonstrates that such modes can be induced by embedding a non-Hermitian defect into a topologically trivial Hermitian system.
FIG. 2: Defect modes in the Hermitian model of Fig. 1(a). (a) Quantized band structure for a system of 100 resonators with couplings c = 0.5 and k = 1, and no defect, d = k = 1. (b) Changing the defect coupling to d < k moves two modes from the band edges (identified by the red dots in (a)) into the gap. (c) Energy spectrum for d = 0, upon which the two sites on the defect dimer become the edges of a system with open boundary conditions. (d) The two modes always remain weakly hybridized, with mode profiles that are localized symmetrically on both of these effective edges. Note that in this and the following representations of the mode profiles, resonators are numbered so that the first and last resonators are those of the defect dimer.
FIG. 3: Defect modes in the non-Hermitian model of Fig. 1(b), in analogy to Fig. 2 but with a finite gain-loss parameter γ = 0.2. (a) Compared to the Hermitian case, the band structure of the system without a defect is similar, but the gap is reduced. (b) Changing the defect coupling to d < k again creates two defect modes, but these now undergo an additional transition in which they become zero modes with Re E = 0. This corresponds to an exceptional point, which for the given parameters occurs at d = 0.272. (c) Energy spectrum at the exceptional point. (d) At the exceptional point the mode profiles are still symmetric, but this symmetry is violated beyond the exceptional point, as shown in Fig. 4.
For an infinitely long homogeneous system, the band structure of the above model is given by E^(±)(q) = ±√(c² + k² − γ² + 2ck cos q), where q is the Bloch wave number [64]. For the Hermitian case with γ = 0, schematically depicted in Fig. 1(a), the two bands are separated by a gap of size Δ_H = 2(k − c). This gap closes when k = c, when the chain becomes non-dimerized, signalling a band inversion as one passes from one topologically distinct coupling configuration to the other. For a nonzero value of γ [Fig. 1(b)] the gap size reduces to Δ_NH = 2√((k − c)² − γ²), which for small values of γ can be approximated as Δ_NH ≈ Δ_H − γ²/(k − c). Therefore, the gap for the non-Hermitian lattice is smaller than the gap for the Hermitian lattice. In particular, the gap becomes zero at γ = γ_EP = k − c, where the first two modes at the edge of the Brillouin zone merge with each other in an exceptional point. For k − c < γ < k + c, a part of the dispersion is purely imaginary, corresponding to states with degenerate resonance frequency Re E^(±)(q) = 0. For γ = k + c all the modes from both bands have merged, creating a flat band of states with different life times [65,66]. The same features hold for the finite system, where the periodic boundary conditions lead to the quantization q = q_m = 2πm/N of the Bloch wave number. This gives rise to discrete resonance frequencies E^(±)_m = E^(±)(q_m), where m = 0, 1, 2, . . . , N is the index of the associated super-mode.
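As a quick numerical illustration (a sketch of my own, not from the paper), the quoted dispersion and gap formulas can be checked directly; the values k = 1 and c = 0.5 match the figures discussed below, and γ = k − c = 0.5 indeed closes the gap.

```python
# Evaluate the bulk dispersion E(q) = +/- sqrt(c^2 + k^2 - gamma^2 + 2*c*k*cos q)
# and the gap Delta_NH = 2*sqrt((k - c)^2 - gamma^2) quoted in the text.
import numpy as np

k, c = 1.0, 0.5
q = np.linspace(-np.pi, np.pi, 401)              # Bloch wave numbers across the Brillouin zone

for gamma in (0.0, 0.2, 0.5):                    # gamma = k - c = 0.5 closes the gap
    E = np.emath.sqrt(c**2 + k**2 - gamma**2 + 2*c*k*np.cos(q))   # upper band E^(+)(q)
    gap = 2 * np.emath.sqrt((k - c)**2 - gamma**2)
    print(f"gamma = {gamma:3.1f}:  min Re E(q) = {E.real.min():.3f},  gap = {np.real(gap):.3f}")
```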
At this point, let us assume that we can adiabatically change the value of one of the intra-dimer couplings k to some other value d < k [see Fig. 1(a)]. This defect continues to preserve the symmetries of the system, with the exception of translation symmetry, and causes the emergence of two defect modes in the gap. The situation for the Hermitian case is illustrated in Fig. 2. Panel (a) shows the band structure of the periodic system with d = k, which is symmetric as dictated by the chiral symmetry, and gapped. As shown in panel (b), by decreasing the value of d from k toward zero, two defect modes appear, which depart from the band edges and move into the gap. Both modes are related by the chiral symmetry, and each mode has a finite weight at both edges of the system. For a finite system, these two defect modes are therefore hybridized edge modes separated by a non-zero gap, and thus not topologically robust.
The contrasting situation of the non-Hermitian system with γ ≠ 0 is shown in Fig. 3. As depicted in panel (a), for γ < k − c the band structure of the periodic system (d = k) remains symmetric and real, as dictated by PT and particle-hole symmetry, but the gap is smaller than in the corresponding Hermitian case. As shown in panel (b), when reducing d below the value of k, two defect modes again emerge from the band edges and move into the gap. However, unlike in the Hermitian case, these modes meet in an exceptional point for a non-zero value of d [Fig. 3(c)], meaning that the non-Hermitian system can support zero-energy modes without changing the coupling configuration in the bulk of the system. These zero-energy modes remain exponentially localized around the defect position, and right at the exceptional point are symmetrically localized around the defect, as shown in panel (d).
FIG. 4: Mode profiles of the two defect modes for fixed defect coupling d = 0.27 and increasing gain-loss parameter γ. The upper panels show the mode with Im E > 0, which is preferentially localized on the gain site of the defect dimer, whilst the lower panels show the mode with Im E < 0, which is preferentially localized on the lossy site. This asymmetry increases for increasing γ, hence, as one moves deeper into the PT-broken phase.
We note that the position of exceptional points in a PT-symmetric system is not robust, but their existence is: changing parameters, such as introducing disorder, simply shifts the exceptional point to another position in parameter space. Recently, robust exceptional points have been proposed in Ref. [67] with application in robust exceptional-point sensing; however, those robust exceptional points are not spatially localized and thus may not provide strong feedback for lasing applications. Here, we encounter an exceptional point that signals the emergence of a pair of modes with robust frequency Re E = 0, in a system in which the bulk band structure remains real and has not undergone a topological transition. At the same time, this represents a mechanism to selectively break the PT symmetry of one predetermined mode in the whole system.
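The following is a minimal numerical sketch (my own, not the authors' code) of the finite ring with a coupling defect and balanced gain and loss, assuming the coupled-mode Hamiltonian written above. Diagonalizing it isolates the pair of modes inside the gap, which for the Fig. 3 parameters (c = 0.5, k = 1, γ = 0.2, d ≈ 0.272) sit at Re E ≈ 0.

```python
# Sketch: build the 2N x 2N coupled-mode Hamiltonian of the PT-symmetric dimer ring
# with a single coupling defect d, and locate the defect modes inside the bulk gap.
import numpy as np

def dimer_ring_hamiltonian(N=50, k=1.0, c=0.5, gamma=0.2, d=0.272):
    """Site 2n is the gain disk psi_n, site 2n+1 the loss disk phi_n of dimer n.
    The defect dimer (n = 1 in the text, index 0 here) carries coupling d instead of k."""
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for n in range(N):
        g, l = 2 * n, 2 * n + 1               # gain and loss site of dimer n
        H[g, g] = 1j * gamma                  # net gain +i*gamma
        H[l, l] = -1j * gamma                 # net loss -i*gamma
        intra = d if n == 0 else k            # intra-dimer coupling (defect on dimer 0)
        H[g, l] = H[l, g] = intra
        nxt = (2 * (n + 1)) % (2 * N)         # gain site of the next dimer (periodic ring)
        H[l, nxt] = H[nxt, l] = c             # inter-dimer coupling
    return H

H = dimer_ring_hamiltonian()
E = np.linalg.eigvals(H)
# Bulk modes satisfy |Re E| >= sqrt((k - c)^2 - gamma^2); anything well below that is a defect mode.
gap_half_width = np.sqrt((1.0 - 0.5) ** 2 - 0.2 ** 2)
defect_modes = E[np.abs(E.real) < 0.9 * gap_half_width]
print("defect-mode energies:", np.round(defect_modes, 4))
```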
We now continue to characterize these modes in detail as one passes over the exceptional point into the PT-broken phase. We note that this can be done in two ways, by fixing γ and decreasing d, or by fixing d and increasing γ. In Fig. 4 we show the mode profile for a fixed defect coupling d = 0.27 and different values γ = 0.2, 0.3, and 0.4 of the gain and loss parameter, while other parameters of the lattice are the same as in Fig. 3. Larger values of γ indicate that the system is deeper in the broken phase. The upper panels are associated with the zero-energy modes with a positive imaginary part of their eigenvalues, Im E > 0, whilst the lower panels show the mode profiles of the mode with a negative imaginary part, Im E < 0. As γ is increased, these modes become asymmetric, preferentially localized either on the gain site of the defect (for Im E > 0) or on the corresponding loss site (for Im E < 0), as is typical for PT-broken states. We confirmed numerically that these features remain robust under the introduction of disorder in the couplings, up to a threshold that depends on how deep one is situated inside the broken phase. In addition, we found that when keeping d fixed the imaginary part of the zero modes changes only weakly with such disorder.
FIG. 5: As Fig. 4, but for the model of Fig. 1(c), where the zero modes are induced by a PT-symmetric defect, without any changes to the coupling configuration (again c = 0.5 and k = 1). This transition occurs at an exceptional point at γ = 1; the panels show the modes for (a,b) γ = 1, (c,d) γ = 1.2, and (e,f) γ = 1.5.
The described mechanism of zero-mode creation translates to a wide class of PT-symmetric defects. Consider, for instance, a periodic lattice where there is no defect in the couplings, and the gain and loss parameter is zero everywhere except in one unit cell, as schematically depicted in Fig. 1(c). In practice this means that all the resonators remain passive with no net gain or loss except for the two resonators of one unit cell, one with net gain γ > 0 and the other with net loss −γ. With this defect embedded into the ring, we find that localized defect states occur for any finite value of γ, and that their spectral position moves inside the gap as γ approaches k − c, mimicking the scenario with gain and loss at the edges of a finite system [37]. The defect states again turn into zero modes in an exceptional point, which occurs at γ = k and hence coincides precisely with the exceptional point for an isolated dimer. Similar to the case with defect coupling, the defect states are spatially symmetric at the exceptional point, but become increasingly asymmetric deeper in the broken phase, as shown in Fig. 5.
FIG. 6: Averaged Petermann factor for the model of Fig. 1(c), and the two system sizes N = 10 (blue) and N = 100 (red). This confirms the existence of an exceptional point at γ = 1, beyond which one encounters the robust zero modes. For the model of Fig. 1(b), the inset shows K as a function of d and γ.
Eigenfunction analysis.-To further illuminate the conditions under which the defect states enter the PT-broken phase, where they become robust zero-energy modes, we turn to the analysis and characterization of the bi-orthogonal set of eigenvectors [68]. Let ⟨L_n| and |R_n⟩ denote the left and right eigenvectors corresponding to the eigenvalue E_n of a general non-Hermitian Hamiltonian H, i.e., ⟨L_n|H = E_n⟨L_n| and H|R_n⟩ = E_n|R_n⟩. The vectors can be normalized to satisfy ⟨L_n|R_m⟩ = δ_nm, upon which Σ_n |R_n⟩⟨L_n| = 1, with the sum running over all modes. Here the number of modes equals the dimension of the Hilbert space, which for our systems is the total number of resonators, 2N.
An observable that measures the non-orthogonality of the modes, and can be used to identify the proximity to the exceptional point in the presence of finite-size effects, is the so-called Petermann factor, which determines the quantum-limited linewidth of lasers [69][70][71]. At an exceptional point, the eigenvectors associated with the degenerate eigenvalue coalesce, leading to a Petermann factor that diverges as K ∼ 1/|γ − γ_PT| [72,73].
We have studied the Petermann factor averaged over all modes, which takes the value 1 if the eigenfunctions of the system are orthogonal, and is larger than one otherwise. As shown in the main panel of Fig. 6, for the model of Fig. 1(c) the mean Petermann factor indeed diverges at γ = 1, which signals the transition of the defect modes into the symmetry-broken phase. For the model in Fig. 1(b), the inset of Fig. 6 shows how the transition depends on the interplay of d and γ in a system of N = 100 resonators. In this inset, the red ridge delineates the transition line, so that the phase with robust localized zero modes is found above this curve. Therefore, this technique can be used to determine the transition reliably for finite systems as a function of the system parameters.
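A compact way to reproduce this diagnostic numerically is sketched below (again my own illustration, not the authors' code): build the Hamiltonian of the embedded PT-symmetric defect of Fig. 1(c), obtain left and right eigenvectors, and average K_n = ⟨L_n|L_n⟩⟨R_n|R_n⟩/|⟨L_n|R_n⟩|² over all modes. The average grows sharply as γ approaches the exceptional point at γ = 1.

```python
# Sketch: averaged Petermann factor for the embedded PT-symmetric defect (Fig. 1(c) type):
# uniform couplings k, c; gain/loss +/- i*gamma only on the defect dimer.
import numpy as np
from scipy.linalg import eig

def embedded_defect_hamiltonian(N=50, k=1.0, c=0.5, gamma=0.9):
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for n in range(N):
        g, l = 2 * n, 2 * n + 1
        H[g, l] = H[l, g] = k                      # uniform intra-dimer coupling
        nxt = (2 * (n + 1)) % (2 * N)
        H[l, nxt] = H[nxt, l] = c                  # inter-dimer coupling (periodic ring)
    H[0, 0], H[1, 1] = 1j * gamma, -1j * gamma     # gain/loss only on the defect dimer
    return H

def mean_petermann(H):
    w, vl, vr = eig(H, left=True, right=True)      # columns vl[:, n], vr[:, n] pair with w[n]
    K = [np.vdot(vl[:, n], vl[:, n]).real * np.vdot(vr[:, n], vr[:, n]).real
         / abs(np.vdot(vl[:, n], vr[:, n])) ** 2 for n in range(len(w))]
    return np.mean(K)

for gamma in (0.5, 0.9, 0.99, 1.01, 1.2):
    K_bar = mean_petermann(embedded_defect_hamiltonian(gamma=gamma))
    print(f"gamma = {gamma:4.2f}   mean Petermann factor = {K_bar:10.2f}")
```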
Conclusions.-In summary, we demonstrated that robust zero modes can appear when a non-Hermitian defect is embedded into the topologically trivial phase of a Hermitian system. Our detailed analysis reveals that these states become robust at an exceptional point that is independent of the bulk structure, and that this phenomenon carries over to a range of PT-symmetric defects. In the form presented here, the described systems could be realized on a variety of platforms in which SSH models with gain and loss have already been realized [18-21, 41, 42, 44]. Our approach can be easily extended to higher dimensions, and could provide useful insights also for the study of disordered systems, for which robust dynamical effects of localized modes have been recently reported [74].
| 4,537.4 | 2020-06-07T00:00:00.000 | ["Physics"] |
Composing Visual Music: Visual Music Practice at the Intersection of Technology, Audio-visual Rhythms and Human Traces
Creators of visual music face the challenge of retaining their own artistic impetus amidst an overwhelming choice of instruments, aesthetics, practice, techniques and technologies brought about by the impinging presence of a vast sea of data and tools. Navigating the data-driven ephemerality of artistic technology and its market-driven constraints by utilising strategies similar to composer Ron Kuivila’s (1998) for getting ‘under’, ‘over’ and ‘into’ will be examined with the aim of elucidating methodologies for creating works that other artistic practitioners may find useful. Leading pioneers of visual music were, of necessity, innovators of technology as well as visual musicians and artists. There is an intrinsic tension between developing new technology in order to re-imagine how music can be made visible and technological pioneers succumbing to the fascination of exploring the technology itself. Understanding aspects of perception, such as rhythm, is key to developing new technologies and processes in ways that avoid this pitfall and keep the experience of visual music central. Audio-visual synchronisation and rhythm are vital to create, in the seminal computer artist John Whitney’s words: ‘an art that should look like music sounds’ (1980: front dust jacket). Integrating the body, human traces and especially the human voice into visual music compositions underpins the key objective which is to create work that is non-narrative, ‘abstracted animation’ 1 (Watkins, 2015), and yet suffused with human presence and emotion. Visual music can be perceived as overly repetitive, cold and alienating if it seems to embody a purely mechanical alignment of music to image, or if it seems disengaged from both human emotions and natural imagery. This paper is part of an on-going investigation into developing methodologies for composing new abstract visual music pieces and, ultimately, parameters for a visual musical instrument.
Introduction
As Friedmann Dahn asserts, '[visual music] should be visible music, music made visible or, to expand the term an equal and meaningful synthesis of the visible and audible, and is therefore ultimately its own art form' (Dahn in Lund & Lund, 2009: 149).
Painters, artists working in light, animators, musicians and V-Js have all contributed to the long and rich history of visual music (Watkins, 2016). Within visual music the possibility of creating a synthesis of the visible and audible has been debated in terms as varied as synaesthesia and a 1:1 mapping (see Figure 1); new possibilities afforded by current technology and new research into perception and multi-modality has given this debate new life (Gallese, 2016). Visual music will be considered in the light of: audio-visual perception, rhythm, audio-visual synchronisation, technology and human traces.
Perception of Sound, Visuals and Audio-visuals
The theorist Adrian Klein (1930: 37) posited: 'somehow or other, we have got to treat light, form and movement, as sound has already been treated. A satisfactory unity will never be found between these expressive media until they are reduced to the same terms.' However, our physiological, cognitive and emotional responses to visuals and sound are very different. Visual musicians, exemplified by seminal artists, such as Jordan Belson, have the ambition to communicate in a similar manner to music. Belson stated: I don't want there to be any ideas connected to my images, and if there are any there, if anybody sees any, those are entirely in the eyes of the beholder […] Actually, the films are not meant to be explained, analysed, or understood. They are more experiential, more like listening to music (Belson in Brougher et al., 2005: 148).
Listening can be immersive. When music provokes emotions the listener attends to both the music and their own reaction more closely. This is a reinforcing cycle.
Generally, Grewe et al. state, this process is implicit (Grewe, Nagel, Kopiez, & Altenmüller, 2007: 313): 'Our own mind seems to react automatically to music. We sense no effort; music is re-creation, but yet it is the listener's re-creation.' Emotional responses to music are influenced by individual hinterlands, prior experiences, expectations, memories and associations.
Hearing and sight function very differently; we process acoustic waves into a perception of sound in a completely different way to how we process electromagnetic waves into vision. 'Synaesthesia is an involuntary response in one sense, such as sight, triggered by the stimulation of another sense, such as hearing' (Watkins, 2016).
Creating a meaningful synthesis of visuals and sound would be greatly simplified if we were all synesthetes and, additionally, we all experienced sound-as-colour and colour-as-sound in a similar way. Clearly this is not part of normative cognition but the idea of synaesthesia has given impetus to visual music. I would argue that the idea of synesthesia, or pseudo-synaesthesia, is an expression of the cross-modal integration of the senses.
When viewing audio-visual works, physiological, perceptual, cognitive and emotional effects are intertwined. Audio-visual pieces have a different effect from audio or visual pieces alone, as the seminal American editor Walter Murch points out: 'We never see the same thing when we also hear; we don't hear the same thing when we see as well' (Chion, Gorbman, & Murch, 1994: xxii). Rudolf Arnheim, the German-born perceptual psychologist and visual theorist, suggests: 'the ear is the tool of reasoning; it is best suited to receive material that has been given shape by man already - whereas seeing is a direct experience, the gathering of sensory raw material' (2007: 195). Experientially I find 'shaped' audio much easier to listen to. However, as French theorist Michel Chion observes, 'sound more than image has the ability to saturate and short-circuit our perception' (Chion et al., 1994: 33). Sound generally has more of a direct physiological effect than vision; for example, film viewers' breathing can be changed by the breathing noises on the soundtrack of a film. The direct effect of sound may be due to how sound is experienced; sound is in the air, surrounding viewers. In contrast, screen-based images are localised to the screen. The audience will remember the images more readily and understand the images more rapidly if the sounds support the images, as the British musicologist Nicholas Cook argues (see Figure 1); this enables a faster and deeper immersion in the work.
As Chion (1994) argues the immersive effect of audio-visuals is not due to synesthesia but is trans-sensorial in nature. Some perceptions are unique to eye or ear, for example, whereas colour is only experienced visually, pitches and the interrelationships between pitches are only experienced auditorially. However, the majority of perceptions, including perception of rhythm, texture, and material affect both senses. Forms such as music, radio and silent film are less sensorially complete than audio-visuals and so allow the audience to engage their imaginations to fill the sensory gaps. The equivalent mode of engaging the audience in an audio-visual work is the metaphoric use of sound, i.e. reassociating less expected sounds with images to enrich their relationship by adding a measure of ambiguity. As Murch argues: The metaphoric use of sound is one of the most fruitful, flexible, and inexpensive means: by choosing carefully what to eliminate, and then reassociating different sounds that seem at first hearing to be somewhat at odds with the accompanying image, the filmmaker can open up a perceptual vacuum into which the mind of the audience must inevitably rush (Chion, 1994: xx).
To make this metaphoric use of sound possible the viewer must accept that, as Chion defines his ' audio-visual contract ' (1994: 222): 'the elements of sound and image to be participating in one and the same entity or world'. Though a limited use of acousmatic sounds will not break audience immersion, viewers generally expect to see the causes of the sound in the images on screen; this is how synchronisation and synthesis, in Chion's terms 'synchresis', occurs and the images gain ' added value', added emotion or information, from sound. Clearly this is predicated on the viewer being able to bridge the gap between the 'reassociated' sounds and images. This leads to an examination of the relationship between similarity and difference that is at the core of audio-visual media. Nicholas Cook analyses the relationship of vision and audio in this way: 'The pre-condition of metaphor -and if I am right, of crossmedia interaction -is what I shall call an enabling similarity…Rather than simply representing or reproducing an existing meaning, it participates in the creation of a new one ' (2000: 70). What this means is that the images and audio are not so similar as to be redundant nor so different as to be contradictory and in contest with each other; rather the images and audio dynamically complement each other and thus a new meaning is constructed (see Figure 1).
In non-representative work, such as ' abstracted animation' the role of rhythm is crucial in determining whether the result of the difference test is ' contrary' or ' contradictory'. Completely arrhythmic, asynchronous audio-visuals are likely to be ' contradictory' and in contest with each other.
Rhythm and Audio-Visual Synchronisation
Rhythm is a vital component in creating a meaningful synthesis of vision and audio.
The visual music instrument designer and composer Fred Collopy concludes that: Rhythm has played a particularly important role in the thinking of painters who have been interested in the relationship of music to their work. There is a rhythmic element to each of the three dimensions. The changing of colors is rhythmic, the ways in which forms are arranged (even in static images) is often described in terms of rhythm, and movement in time is inherently rhythmic. This suggests that rhythm constitutes a particularly rich point of entry for the design of instruments and for the development of technique for playing visuals in performance with music (2000: 360).
Perception of musical rhythm 2 has been extensively researched. Timing and tempo rely on the individual listener's perception and cognition: the listener organises their understanding of a rhythm. As Henkjan Honing, the Dutch theorist of music cognition, concludes: 'A listener does not perceive rhythm as an abstract unity, as is notated in a score, nor as a continuum in the way that physicists describe time' (2013: 380). The Flemish musicologist Mark Leman (2008) has found physiological correlations, looking at the effects of embodied phenomena, such as walking speed and heart rate, on the perception of pulse and tempo. The pulse is identified by Smalley (1997) as the smallest rhythmic structure in tonal music. We have an innate skill to find a musical pulse when listening to a varying rhythm (Honing, 2012). We are particularly attuned to listening for the onset of beats, as, in evolutionary terms, prediction is a powerful tool (Huron, 2007), and so the onset, the very start of the beat, garners most attention.
2 Musical rhythm consists of meter (a beat, either single or compound), rhythmical structure (shorter groups of sequential patterns of emphasised beats that are grouped into a long hierarchically based grouping of groups), tempo (the impression of speed or change of speed), and timing (nuances of when notes are played, slightly 'early' or 'late' or mechanically regular).
Audio-visual rhythm has some similar effects. As Chion reminds us: Rhythm is an element of film vocabulary that is neither specifically auditory nor visual…the phenomenon strikes us in some region of the brain connected to the motor functions and it is solely at this level that it is decoded as rhythm (1994: 136).
Audio-visual synchronisation relies on the coincidence of action in the image with an auditory emphasis such as a beat. Chion defines these coincidences as 'sync points', an ' audio-visually salient synchronous meeting of a sound event and a sight event' (1994: 233). Synchronous points are similar to a musical chord in that they vertically divide the audio-visual flow, shaping it and creating phrases. Moreover, each sync point emphasises a point in time and imprints an audio-visual moment more heavily in our memories.
There are different types of synchronisation. The most obvious is at the level of a pulse, an image event coinciding with a short duration audio event. An ' absolute synchronisation point' 3 is most impactful and the most percussive; usually the audio coincidence is the onset of an accented beat. Visual coincidences include: a flash frame or a cut, or the movement of the subject in the frame, especially at the height of the action (for example a punch making contact), or used metaphorically (for example the gun shots exactly on the beat in Edgar Wright's Baby Driver (2017)), or movement of the camera (whether the camera is real or virtual). I would argue that we see so much moving image constructed with ' absolute synchronisation points' that we also have statistically learned expectations that are consistently being fulfilled. This fulfilment of expectations does not become boring because we are given very different opportunities of association with the audio-visuals, there are endless nuances in the execution and there are many possible variations. There is a continuum of synchronised points from ' absolute' to 'metaphorical'.
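As a toy illustration of this idea (my own, not drawn from the literature discussed here), candidate 'absolute synchronisation points' can be located by pairing detected audio onsets with visual event times within a tolerance of roughly one frame; the event lists below are invented placeholders.

```python
# Locate candidate 'absolute synchronisation points': audio onsets and visual events
# (cuts, peaks of action) that coincide within about one frame at 25 fps.
import numpy as np

audio_onsets = np.array([0.50, 1.00, 1.52, 2.00, 2.47])   # seconds, e.g. from an onset detector
visual_events = np.array([0.50, 1.48, 2.00, 2.80])        # seconds, e.g. cuts or peak poses
tolerance = 0.04                                           # one frame at 25 frames per second

sync_points = [(a, v) for a in audio_onsets for v in visual_events if abs(a - v) <= tolerance]
print("sync points (audio s, visual s):", sync_points)
```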
The degree of realism, the stretch of ' en creux' in moments of 'synchresis' also affects our sense of audio-visual synchronicity. If the audio appears naturalistic we only pay attention to the visuals. If the audio creates, in Chion's terms, a gap, and if we can bridge the metaphor, this bridging emphasises the moment. If the gap is too wide the audio-visuals become asynchronous. Asynchronicity emphasises the distinct media within the audio-visual work; it gives a much greater recognition of audio appealing to our auditory senses and images appealing to our visual sense as the two media pull apart. As Honing (2012) argues, to a great extent synchronicity is subjective. We favour synchronicity over asynchronicity; we prefer synchronous works and we often perceive non or randomly synchronised stimuli as synchronous.
For example, when turning on music and windscreen wipers in a car and feeling that the music and motion of the wipers coincide, or that raindrops running down a window coincide with randomly chosen music. We are wired for apophenia, 4 wired to see patterns and create connections from unconnected events. As Chion (1994: 211) asserts: 'disorder with no apparent goal is intolerable for human beings. We cannot resist giving it structure and form, a teleology, a shape and direction, even when it itself has none'. 5 Listeners categorise metres and rhythmic genres from their remembered experiences (Snyder, 2000) and form expectations of both. This applies to periodic temporal structures and changing temporal structures such as a bouncing ball, or speech. Honing states: 'We actually tend to hear rhythm and timing in what one might call "clumps"' (2013: 380). Putting the beat into the hierarchy of a rhythm structure may be (statistically) learned. We favour the rhythms we know the best. As Huron argues: 'It is easier to process, code, or manipulate representations when they are mentally attached to events or objects' (Huron, 2007: 124). 'Event-related binding' (Huron's term) refers to how we unify phenomenal experiences; in vision we bind shape, colour and object recognition, in audio we bind timbre, pitch, loudness and location. Additionally we seek to lighten our cognitive task by tackling the relationship between a small number of elements, discerning neighbouring relationships (this uses less short-term memory than distant relationships), and discerning the amount of change (rather than a meta-level change in the rate of change). All these perceptual tendencies inform our appreciation of visual music. Many artists have sought absolute audio-visual synchronisation. This is in sympathy with our liking for synchronous events and 'event-related binding'. Laszlo Moholy-Nagy claimed that: 'to develop creative possibilities of the sound film the acoustic alphabet of sound writing will have to be mastered; in other words, we must learn to write acoustic sequences on the sound track without having to record real sound' (1947: 277). Seminal visual musicians, such as Norman McLaren, realised Moholy-Nagy's creative vision by producing a visual optical soundtrack in the soundtrack area of celluloid film. He used several means, including creating an optical soundtrack by photographing shapes or by manually painting or scratching individual frames. His Synchromy is close to a 1:1 mapping of sound and shape; he used the same shapes to create the soundtrack as to create the visuals. The piece starts by introducing each note-shape singly. He did, however, add colour variation and visual repetition, with the intention of making the visuals more interesting. The piece is engaging at the start, but then the visuals become overly predictable and repetitive, and the variations weaken the absolute synchronisation without adding interest.
4 The psychiatrist Klaus Conrad initially coined the term 'apophany' in the 1950s, from the Greek apo [away from] and phaenein [to show], to emphasise that delusion can appear to be revelatory to schizophrenics. Over time the meaning has changed to the propensity for seeing connections between phenomena that are not related.
5 Psychology is beyond the scope of this paper, but this echoes the Gestalt psychologist Max Wertheimer's assertion that the perception and interpretation of incomplete or contradictory images is always into the simplest form, the 'Law of Pragnanz' (1938: 71-88).
When it was made in 1971 it was a technical feat; it required creating each pitch optically and filming them in sequence on to the sound track and a great number of optical passes to achieve the multi-layered visuals. Today's digital processes offer much greater speed, ease and flexibility. A 1:1 mapping of data can be cold and mechanical (Watkins, 2015). At first one experiences 'pure' visual music with pleasure as the 1:1 mapping fulfills one's prediction of the relationship between music and image, but soon the very predictability of this relationship dulls the pleasure of the experience.
John Whitney did not use 1:1 mappings, but developed 'differential dynamics', i.e. linked, nested, or interrelated motion paths, the result of which is that shapes are overlaid, creating harmonic visual patterns via computer algorithms. 6 This was a result of noting that rhythm in music and rhythm in vision are very different: '[Rhythm,] often referred to as the drive of a piece of music, is almost automatically enhanced with metrical or cyclical consistency and repetition. Rock musicians know this - perhaps too well. On the other hand, the most difficult visual quality to compose into a composition, as every abstract filmmaker may know, is the same driving propulsive thrust with a visually rhythmic metrical cycle' (1980: 69).
As technology has exponentially increased in power, it has allowed composers of visual music such as Bret Battey to create much more complex patterns and more complex links between audio and vision. Composers such as Battey have achieved this by using the same algorithms to create both sound and image. The current work Shadow Sounds (see Figure 3) is a test of creating and composing with 'audio-image units'. 7 Non-verbal vocalisations such as 'ooh', 'ah', 'eeh' and 'pah' are not mapped but visualised using an animator's skills. Thomas Wilfred's Lumia is an inspiration for the flowing animations. Each sound and animation is consistently used together, as one 'audio-image unit'. 'It is built from individual vocal gestures that are analogous to notes in tonal music' (Watkins, 2016). The work has 'absolute synchronisation points', but ultimately the amorphous animations needed to be more nuanced to reflect the audio shapes more clearly and so create a more meaningful synthesis between the visual and the audio.
A Continuum of Audio-visual Synchronisation
There are other types of audio-visual synchronisation beyond matching audio beats, for example the widely perceived feeling that higher pitched notes with brighter tones, and lower notes with darker tones, match each other better. Similarly, higher in the screen correlates with higher notes, lower in the screen correlates with lower notes, ascending motion matches ascending musical pitches, and descending motion matches descending pitches. Such correspondences are more effective when either the visuals or the audio are complex than when both visuals and audio are complex. I would argue that when both imagery and audio are complex the viewer tends to discern some occasions of local synchronisation or some phrasing, but that the patterns quickly become too complex to enjoy and the piece tends towards fragmentation or audio-visual dissonance in the mind of the audience.
7 'Audio-image unit' is my own term; it refers to instances of audio and animation that always appear together, synchronised in the same way; these may be combined in any number of combinations.
My feeling and intuition for combining audio and visual elements comes from working for many years as an animator and (mainly) timing animation and live action to audio. I would argue that audio-visual synchronisation also corresponds to the motion embodied in sounds; most sound has a forward impetus, a vectorisation, which means the sound cannot be reversed without changing. The composer Denis Smalley states that sound-making gestures create, in his term, 'spectromorphological life' giving sounds a strong forward impetus (1997: 111). This is in contrast to artificial sounds, for example white noise, which can be reversed without changing; these sounds lack impetus. In contrast Reservoir (see Figure 5) has a much more metaphorical audio-visual synchronisation. Many diegetic sounds were combined with impressionistic images.
The sound data for Reservoir was captured at the same time as the point-of-view footage, whilst circling the reservoir on foot. All the sounds are acousmatic; the viewer sees the effect of the sounds on the camera movement and not the makers of the sounds. The diegetic audio increases the sense of place and time; using these sounds in the order they were captured keeps the original acoustical geography of the circular walk intact. 'I abstracted and re-timed the imagery and created a layered time montage through re-synching visual and audio components' (Watkins, 2016). The audio events play in real-time; they are not tied to the images in the realist manner of synchronous sound but form audio-visual chords with the step-framed images, which synthesise the impressionistic visual and the diegetic audio in a meaningful way.
Audio-visual synchronisation is linked to anticipation. As Chion describes: 'the listener's anticipation of the cadence come to subtend his/her perception. Likewise, a camera movement, a sound rhythm, or a change in an actor's behaviour can put the spectator in a state of anticipation ' (1994: 58). There is a tension around anticipation of audio, visual and audio-visual events; we derive pleasure from predicting events.
As Huron states, in relation to music: 'Pleasantness is directly correlated with predictability ' (2007: 173). But we also like some surprise.
Repeated listening changes the experience; the listener expects to hear the surprises of the first listening repeated. Huron states that 'repeated listening makes the music more predictable. Veridical memories for music hold an extraordinarily refined level of detail. Listeners are highly sensitive to the slightest changes from familiar renditions ' (2007: 241). Chion (1994) argues that, because we are wired for speech, the ear processes faster than the eye, therefore, replaying a rapid image sequence will not allow the viewer to distinguish more. However, this does not take into account an animator's intensive viewing. When I am working as an animator I view sequences that I am working on numerous times, mute and with audio, in real-time and frame-by-frame. I view sequences just looking at the foreground or subject, or concentrating on the background, or just transitions, fragmenting the sequence to see the details ever more clearly. This intensive repeated viewing has a similar effect on me as the repeated listening cited above. I build an extraordinarily in-depth, detailed memory of the audio-visual piece; when it is played I anticipate every moment and if even one frame is altered it jumps out, even though the piece is playing in real-time at 25 frames per second. This ability to re-mix and review is very much a product of our digital technology.
Technology and Data
Creating works using current technology and processes affords opportunities (see above) and poses potential problems and challenges. Technology is ephemeral. It can be superseded and then be unavailable or it can be ubiquitous and clichéd. New technologies seem to offer new creative potential, the energy of the pioneer is felt, but when they become ubiquitous they quickly become clichéd, for example using data derived from volume to control lighting at an event; making lights brighter as the music is louder in real time. Additionally, as Professor of Digital Creativity at the University of Greenwich Gregory Sporton (2015) Another way of aiding engagement is to create a piece of music that is also an instrument. Laurie Spiegel's computer program Music Mouse (1985) was simultaneously a piece of music and an instrument. Eno + Chilvers' Bloom app (2008) is advertised as being a combination of an instrument, composition and artwork.
The parameters for the visual music instrument were defined through composition: by creating a composition that is also an instrument and additionally provides parameters or 'rules' for visual music composition and so stays ' over' technology.
Bjork's Biophilia (2011) demonstrates the expanding possibilities for multiple outputs providing many levels of engagement. Biophilia is a multi-disciplinary, crossplatform release, encompassing an album, live shows, website, an iPad application for each track, and a film documenting the project.
Other technological influences on the work include the Mellotron (1963); a pre-synthesizer instrument used for the beginning of Strawberry Fields (The Beatles, 1967). It had a keyboard that played tape loops; one key played one sound. The pitch of the sound could be altered, by varying the speed at which it was played.
Additionally there was control over tone and volume. The concept of linking one motion, the pressing of one key, to one sound, within a process that allows both pitch and volume to be altered fed into the design and process for Watkins' Sky (2017) (see Figure 8).
Data was used to humanise abstract animation: data from images of landscape and data of human traces. Given that the material is digital video, which lacks the tangible physicality of film, this humanising data is especially important. As Guy Sherwin, the pre-eminent British film artist, points out, when talking about the materiality and processes of celluloid film: 'For an artist materials matter, they become important' (Lumière, 2011). Reservoir (see above) explores the materiality of digital video, using an old format and resolution of digital image, layering it and colourising it until the image almost disintegrates.
The gathered data becomes the artistic material. To create Horizon (2014) (see Figure …) … As Kaplan observes: 'Many of the fascinations afforded by the natural setting might be called "soft fascination". Clouds, sunsets, snow patterns, the motion of the leaves in a breeze - these readily hold the attention, but in an undramatic fashion. Attending to these patterns is effortless, and they leave ample opportunity for thinking about other things.'
A fruitful way forward in the face of the challenges and opportunities of technology and data is to combine Kuivila's staying 'over' technology with the use of data as artistic material that can be, in Le Grice's term, 'retrieved' to form new experiences: moments of, in Kaplan's term, 'soft fascination'.
Human traces
In this age of burgeoning artificial intelligence in the arts, human experience and input seems ever more crucial. My ideal visual music starts with the human voice.
For Ambience (see Figure 7), audio was used that embodies emotions in the form of traditional songs, sung on vowels only: 'to underpin the abstract movement of light and colour with human motivation and emotion' (Watkins, 2016). Particle systems were used to gain more detailed control of the flowing shapes. Nuance was added by directly translating some human traces, for example turning tracking data from the singer's head movement into the movement of a particle emitter, and the data from the singer's mouth movement into circles of colour.
Sky (2017) combines new elements with processes from Shadow Sounds (audio-visual units based on non-verbal vocalisations), Ambience (particle flows) and Horizon (gathering data to create an 'abstracted animation'). The process was initiated by creating a library of unique animated shapes driven by vowels and consonants. Twelve vowels that move from the front to the back of the mouth, including ih, uh, aw and oo, were used. The consonants B, D, G, V, L, Z and M were chosen to give a distinct range of sounds. These 12 vowels each had 15 variations: the vowel is sung by itself, and the 7 consonants were placed both before and after the vowel. Clearly vowels and consonants cannot be cut together but must be sung individually, i.e. Z and ih do not sound the same as Zih.
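To make the structure of this library concrete, here is a small illustrative sketch (my own, not the author's production tooling) that enumerates the naming scheme described above; only four of the twelve vowels are listed, and all names are placeholders.

```python
# Enumerate the 'audio-image unit' library: each vowel sung alone, plus each of the
# seven consonants placed before and after it, giving 15 variations per vowel.
VOWELS = ["ih", "uh", "aw", "oo"]                 # 4 of the 12 vowels named in the text
CONSONANTS = ["B", "D", "G", "V", "L", "Z", "M"]

def unit_names(vowel):
    """The 15 variations of one vowel: alone, consonant-before, consonant-after."""
    return ([vowel]
            + [f"{c}{vowel}" for c in CONSONANTS]
            + [f"{vowel}{c}" for c in CONSONANTS])

library = {v: unit_names(v) for v in VOWELS}
for vowel, names in library.items():
    print(vowel, len(names), names[:4], "...")    # 15 variations per vowel
```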
Informed by my research (see above), the onset of the sound and the visuals are absolutely synchronised, and the development has a looser correlation that is predicated on my choices as an animator. Using parameters such as sound shapes, motion paths in 3-dimensional space, velocity, density, textures, particle shapes, fine lines and motion blur, a library of 'audio-image units' was created. The animations are inspired by the impetus and 'spectromorphological life' of the sounds. The singer, Martin Nelson, asked if the shapes were programmed from the sound data, as they seemed to fit so well. It was pleasing that using the sensibilities of an animator resulted in animations that felt so right to him. Given that rhythm is vital to create meaningful audio-visual synchronisation, the rhythmic nature of the 'audio-image units' is emphasised, chiefly by using sounds with little pitch variation. The frequencies are contained within about two pitches, allowing a rich, reverberant, resonant human dissonance. This is distinct from the many data-driven transpositions of pitch/frequency and/or volume-to-image that are common within visual music.
Isolating each 'audio-image unit' and then blending them together into a visual music composition allows great flexibility and the possibility for other composers to use these animations as an instrument. The background of Sky is created from footage of clouds treated as data: re-timed, layered, colourised and revealed through particle animations. Sky is under three minutes long. It is the first part of 24 parts that will make up an hour-long piece. Like Richter's Rhythmus 21 and later McLaren's Synchromy, it is purposefully simple at the beginning in order to aid the viewers' understanding of the relationship between visuals and sung sounds.
Conclusion
This paper delineates an evolving visual music practice, underpinned by an animator's fervour, current research into audio-visual perception and the canon of visual music. In response to work that has a more mechanical mapping the aim is to use current technology to create new visual music that affords 'soft fascination'; works of ' abstracted animation' that are suffused with human presence and emotion.
This work starts with the audio: audio that has the emotion of the human voice in the sung non-verbal sounds. Visual music depends upon a meaningful synthesis of visuals with audio. These visuals depend on an animator's sensibility; they are not mechanistically or algorithmically produced from the audio. 'Audio-image units' are created that are initiated by an 'absolute synchronisation point' and develop to embody a diversity of audio-visual synchronisation. A nuanced use of audio-visual rhythm is developing through using these 'audio-image units' to simultaneously compose light, form, movement and sound into longer phrases, sections and pieces. Thus the two very distinct perceptions of light and sound have been synthesised in a meaningful way, without reducing them to limited, cold, mechanical terms. I hope this approach will be useful to others practicing in this area.
| 7,306.6 | 2018-04-04T00:00:00.000 | ["Art", "Computer Science"] |
Mixed MXenes: Mo 1.33 CT z and Ti 3 C 2 T z freestanding composite films for energy storage
MXenes are a class of 2D materials with outstanding properties, including high electronic conductivity, hydrophilicity, and high specific capacitance. In particular, Mo 1.33 CT z MXene has a high specific capacitance, whereas films of Ti 3 C 2 T z MXene possess high flexibility and high electronic conductivity. The fabrication of composite materials based on these two MXenes is therefore motivated, taking advantage of combining their good properties. In this article, we introduce a one-step approach to prepare composite MXene films using pristine Mo 1.33 CT z and Ti 3 C 2 T z MXenes. The composite films display superior flexibility and electronic conductivity, as well as high capacitance, up to 1380 F cm−3 (460 F g−1), in 1 M H 2 SO 4 . A capacitance retention of 96% is obtained after 17,000 cycles. In addition, the capacitance retentions are about 56% and 25% at scan rates of 200 mV s−1 and 1000 mV s−1, respectively. A significant rise in the capacitance at high rates, 875 F cm−3 (282 F g−1) at a current density of 20 A g−1, is achieved by using a 3 M H 2 SO 4 solution. The use of composite MXenes as negative electrodes for asymmetric supercapacitor devices, as well as lithium-ion batteries, is also discussed. This work suggests new pathways for the use of MXene composites with double transition metals (Mo and Ti) in energy storage devices.
Introduction
Nowadays, rechargeable energy storage devices, such as batteries and supercapacitors, are of great importance in our daily life due to their use in portable devices, electric vehicles, and wearable electronics. Therefore, ongoing exploration targets either improvement in the performance of currently available materials, or the discovery of novel materials. MXenes, discovered 10 years ago, are considered one of the most promising materials for energy storage. They have a general formula of M n+1 X n T z , where M = transition metal (e.g. Ti, Mo, V, Nb, Ta, etc.), X = C and/or N, and T = surface termination groups (e.g. OH, F, Cl, or O) [1,2], which can significantly influence the electronic, structural, and electrochemical properties of the MXene films [3][4][5]. MXenes are typically prepared from their parent 3D MAX phase with the general formula M n+1 AX n , where A is typically Al [6]. Chemical etching using acid [1], molten salt [7], or ionic liquids [8] are approaches used for the selective removal of A elements from MAX phases. MXenes have various application-inspiring properties, including high electronic conductivity, tunable mechanical and tribological properties [9], and high gravimetric and volumetric capacitances [10][11][12][13]. Therefore, MXenes have been explored for various applications, such as sensors [14,15], metal-ion capacitors [16,17], plasmonic devices [18,19], hydrogen evolution catalysts [20,21], water purification [22], sorbents for urea removal [23], electromagnetic interference shielding [24], metal-ion batteries [25,26], and supercapacitors [27][28][29].
MXenes have previously shown outstanding potential for energy storage applications [30], and various approaches have been developed to improve and tune the electrochemical performance of MXenes, including the formation of composite materials with carbon-based materials (e.g. graphene, graphene oxide, carbon nanotubes) [31][32][33], transition metal oxides [34], or transition metal chalcogenides [35]. Alternatively, other modifications can be employed to increase the MXene film porosity [36,37] and reduce the restacking of MXene flakes during vacuum filtration, such as freeze drying [12,27], natural sedimentation [38], blade coating [39], template digestion [29], fast gelation [40], and laser writing [41]. Recently, composite materials constructed from Ti 3 C 2 T z MXene and nanostructured Nb-based carbides (e.g. Nb 2 CT z MXene) [42] or nitrides (e.g. NbN nanoparticles) [43] have been reported, which show enhanced performance in supercapacitor applications (low-rate capacitance of about 370 F g − 1 for Ti 3 C 2 T z /Nb 2 CT z and 1000 F cm − 3 for Ti 3 C 2 T z /NbN nanoparticles). Therefore, there is a need to explore combinations of Ti 3 C 2 T z MXene with other types of MXenes that possess high specific capacitance, such as Mo 1.33 CT z MXene.
Mo 1.33 CT z MXene was discovered in 2017 [44][45][46]. It was obtained by selective etching of the Al and Sc or Y elements from the parent i-MAX phases (Mo 2/3 Sc 1/3 ) 2 AlC [45] and (Mo 2/3 Y 1/3 ) 2 AlC [47] with in-plane chemical ordering of the M elements. The Mo 1.33 CT z MXene displayed a promising volumetric capacitance (1150 F cm − 3 ) [45], which was further improved with post-etching treatment [48]. Composite materials of Mo 1.33 CT z MXene with PEDOT:PSS polymer [49] and positively charged lignin polymer [50] were also studied for symmetric and asymmetric supercapacitor devices, respectively. Later on, high-mass-loading, flexible composite electrodes of Mo 1.33 CT z MXene combined with cellulose were reported, featuring a high areal capacitance of up to 1.4 F cm − 2 [51].
Ti 3 C 2 T z MXene is the most thoroughly studied MXene in the literature, with a high electronic conductivity (1000-6500 S cm − 1 ) [30] compared to Mo 1.33 CT z (2.9 S cm − 1 ) [52]. On the other hand, as mentioned above, the Mo 1.33 CT z MXene has a higher specific capacitance. A combination of the two MXenes may therefore increase the conductivity and flexibility of the composite films and improve their electrochemical performance. Previous studies have shown that the etching of the MAX phase with double transition metals, Mo 2 TiAlC 2 , can result in the formation of Mo 2 TiC 2 T z MXene with a volumetric capacitance limited to 413 F cm − 3 at 2 mV s − 1 [53]. Accordingly, there is motivation to establish an alternative protocol for preparing MXene films containing double transition metals, Mo and Ti, while maintaining high capacitance and electronic conductivity. In this article, we report a straightforward approach for fabricating a series of composite MXene films by simply mixing the as-prepared Ti 3 C 2 T z and Mo 1.33 CT z MXenes. The composite MXene films display outstanding electrochemical performance in terms of high capacitance (1380 F cm − 3 , 460 F g − 1 ), capacitance retention, and rate capability. In addition, the composite MXene films are flexible and feature a high electronic conductivity.
Synthesis, morphology, and structure of the composite MXene films
The Mo 1.33 CT z and Ti 3 C 2 T z MXenes were prepared by selective chemical etching of the corresponding 3D MAX phases (see experimental section and Fig. S1). The composite MXene films were prepared using the approach shown schematically in Fig. 1. The aqueous suspensions of Mo 1.33 CT z and Ti 3 C 2 T z MXenes were mixed by hand shaking in given weight ratios, which after vacuum filtration produced a series of Mo 1.33 CT z -Ti 3 C 2 T z composite films (see Table 1). The overall film loadings were about 2.1 mg cm − 2 , which can be considered relatively high. It should be noted that hand shaking has an advantage over other approaches such as magnetic stirring or sonication, because it is simple, fast, and straightforward, and it decreases the chance of oxidation of the MXenes or the formation of MXene flakes with small flake size [54].
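As a rough illustration of the mixing arithmetic behind Table 1, the sketch below converts a target dry-mass ratio and film loading into suspension volumes. The membrane diameter and the exact suspension concentrations used here are assumptions for illustration only (the Experimental section quotes concentration ranges of 1-2 and 4-7.5 mg ml − 1); this is not the authors' protocol, just the bookkeeping it implies.

```python
# Minimal sketch: suspension volumes needed for a target dry-mass ratio and
# film loading. Concentrations and membrane diameter are illustrative only.
import math

def mixing_volumes(ratio_mo, ratio_ti, total_mass_mg, c_mo_mg_ml, c_ti_mg_ml):
    """Return (V_Mo, V_Ti) in ml for a Mo1.33CTz:Ti3C2Tz weight ratio."""
    frac_mo = ratio_mo / (ratio_mo + ratio_ti)
    mass_mo = frac_mo * total_mass_mg
    mass_ti = total_mass_mg - mass_mo
    return mass_mo / c_mo_mg_ml, mass_ti / c_ti_mg_ml

# Example: a 3Mo:1Ti film at ~2.1 mg cm^-2 on an assumed 38 mm membrane
area_cm2 = math.pi * (3.8 / 2) ** 2
total_mass = 2.1 * area_cm2                      # mg of MXene in the film
v_mo, v_ti = mixing_volumes(3, 1, total_mass, c_mo_mg_ml=1.5, c_ti_mg_ml=5.0)
print(f"Mo1.33CTz suspension: {v_mo:.1f} ml, Ti3C2Tz suspension: {v_ti:.1f} ml")
```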
The as-prepared Mo 1.33 CT z MXene films were brittle (Fig. 2a) and showed a relatively low electronic conductivity (about ~ 1 S cm − 1 , Fig. S2a) [52]. In contrast, the Ti 3 C 2 T z films featured high flexibility (see Fig. 2e and its inset), as well as a high electronic conductivity (~ 4000 S cm − 1 , not shown). The detailed morphology and structure of the pristine MXenes, Mo 1.33 CT z and Ti 3 C 2 T z , have been described elsewhere [10,45,46]; the single-sheet materials have average lateral dimensions of approximately a few hundred nanometers for both Mo 1.33 CT z and Ti 3 C 2 T z . Upon adding 75% (weight%) of Mo 1.33 CT z to 25% of Ti 3 C 2 T z , the composite film that was formed, hereafter referred to as 3Mo:1Ti, showed an enhanced electronic conductivity of about 24 S cm − 1 , as determined by 4-point probe measurements (see Fig. S2a). Furthermore, the 3Mo:1Ti film displayed higher flexibility and better handling than the as-prepared Mo 1.33 CT z MXene film (see Fig. 2b and its inset). Likewise, the mixing of 50% (weight%) Mo 1.33 CT z and 50% Ti 3 C 2 T z resulted in the formation of a flexible MXene film (see Fig. 2c and its inset), referred to as the 1Mo:1Ti film, with an even higher electronic conductivity (about 140 S cm − 1 , Fig. S2a). These electronic conductivity values for the composite MXene films exceed the previously reported conductivity values for a Mo 1.33 CT z MXene composite with PEDOT:PSS conducting polymer (18 S cm − 1 , Fig. S2a) [49]. It is clear that, as the percentage of the highly conducting Ti 3 C 2 T z MXene increases, the electronic conductivity increases. However, when 25% (weight%) of Mo 1.33 CT z was added to 75% of Ti 3 C 2 T z , agglomeration of the laminated flakes was observed (see Fig. 2i) and a corrupted film was formed, limiting its practical application (see Fig. 2d). Fig. S2j shows a schematic illustration of the composition ratios of the prepared composite MXene films and the trend in agglomeration.
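The conductivities quoted above were determined by 4-point probe measurements. A minimal sketch of the standard thin-film conversion from a probe reading to conductivity is given below; the film thickness and the voltage/current values are illustrative assumptions, not data from this work.

```python
# Standard thin-film 4-point-probe conversion: sheet resistance
# Rs = (pi/ln2) * (V/I) for a film much thinner than the probe spacing,
# and conductivity sigma = 1/(Rs * t). Thickness below is illustrative.
import math

def conductivity_S_per_cm(voltage_V, current_A, thickness_um):
    sheet_resistance = (math.pi / math.log(2)) * voltage_V / current_A  # ohm/sq
    thickness_cm = thickness_um * 1e-4
    return 1.0 / (sheet_resistance * thickness_cm)

# e.g. 1 mA forced through a ~10 um film giving a 32 mV drop
print(f"{conductivity_S_per_cm(0.032, 1e-3, 10):.0f} S cm^-1")
```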
The Ti 3 C 2 T z and Mo 1.33 CT z MXenes were produced using different etching approaches and different delaminating agents: LiCl and TBAOH, respectively (see experimental section). In order to understand the reason behind the agglomeration, we therefore investigated the effect of the presence of such agents on the MXenes, by adding an excess amount of TBAOH and LiCl to the Ti 3 C 2 T z and Mo 1.33 CT z MXenes, respectively. Interestingly, the agglomeration was only observed upon adding excess LiCl to Mo 1.33 CT z MXene, whereas the addition of excess TBAOH to Ti 3 C 2 T z MXene did not cause agglomeration (see Figs. S2k and l). Accordingly, the agglomeration observed in the 1Mo:3Ti sample can be attributed to the excess amount of LiCl originating from the etching procedure of the Ti 3 C 2 T z MXene, causing agglomeration of the Mo 1.33 CT z MXene [55]. In other words, Ti 3 C 2 T z MXene can possibly act as a binder to the Mo 1.33 CT z MXene in the 3Mo:1Ti film [56], resulting in the formation of a flexible, freestanding, mixed MXene electrode. However, when the concentration of Ti 3 C 2 T z increases, the possibility of Mo 1.33 CT z agglomeration increases due to the presence of a greater amount of residual LiCl in the mixture, and thus agglomeration occurs for the film with the 1Mo:3Ti ratio. Care should therefore be taken when preparing mixed MXene films owing to the complicated aqueous chemistry of MXenes [57][58][59]. Future studies are motivated to address the agglomeration by using similar delaminating agents for the two MXenes. To further investigate the origin of the agglomeration, XPS analysis was applied to the different composite MXene films. The XPS survey spectra and the C 1s XPS spectra of the pristine and composite MXene samples are shown in Note S1. Fig. 3a and b show the XPS spectra of the Ti 2p regions and Mo 3d regions for the composite films. Notably, the 1Mo:1Ti and 3Mo:1Ti films displayed similar oxide levels for both Ti 3 C 2 T z and Mo 1.33 CT z (see Table S4). However, the 1Mo:3Ti film showed the same oxide percentage for Ti 3 C 2 T z , but a higher oxide content for the Mo 1.33 CT z (the value increased from ≈ 10% to 47%). The latter observation suggests that the agglomeration observed in the 1Mo:3Ti sample can also be attributed to the oxidation of the Mo 1.33 CT z upon adding the Ti 3 C 2 T z MXene.
Fig. 1. Schematic illustration for the synthesis of Mo 1.33 CT z -Ti 3 C 2 T z composite films.
The SEM cross-section images for the Mo 1.33 CT z (Fig. 2f), 3Mo:1Ti (Fig. 2g, and Figs. S2f-g), 1Mo:1Ti (Fig. 2h, and Figs. S2h-i), and Ti 3 C 2 T z (Fig. 2j) films indicated that the pristine MXene films had a more compact structure than those of the composite films. For instance, the 3Mo:1Ti film shows larger spacing between the MXene layers (i.e., it is more porous) than those of the pristine MXene films. In addition, the composite MXene films showed a slightly curved layer morphology, which can enhance in-plane ion transport and hence increase the number of accessible active sites during electrochemical cycling, as described in earlier reports on modified Ti 3 C 2 T z MXene [29]. This curved layer morphology was observed over a large part of the composite MXene films, as confirmed by SEM images collected from different regions (see Figs. S2f-i). It should be noted that the EDX elemental mapping shows an even distribution of the Mo and Ti elements inside the composite films (see Figs. S2b-e), reflecting the films' homogeneity.
The X-ray diffraction (XRD) patterns for the pristine Ti 3 C 2 T z and Mo 1.33 CT z showed typical patterns, with 00l reflections and 000l reflections, respectively (see Fig. 3c). The 002 peak of the Ti 3 C 2 T z MXene was located at 7.5° (corresponding to a d-spacing of about 11.7 Å), and the Mo 1.33 CT z 0002 reflection was observed at 5.9° (corresponding to a d-spacing of about 15.0 Å). Notably, the composite films' low-angle peaks were found at 5.5° and 5.3° (corresponding to a d-spacing of 16.0-16.5 Å). As the content of the Mo 1.33 CT z MXene increases, the shift toward lower angles increases. The corresponding increase in the d-spacing of the composite films in comparison to the Ti 3 C 2 T z and Mo 1.33 CT z pristine MXenes suggests that the composite MXene films can host more intercalated ions than the pristine MXenes, and hence attain a higher capacitance.
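The quoted d-spacings follow directly from Bragg's law and the Cu K α wavelength given in the experimental section; a short consistency check is sketched below, with the peak positions taken from the text.

```python
# Consistency check of the quoted d-spacings using Bragg's law,
# d = lambda / (2 sin(theta)), with Cu K-alpha (1.54 Å).
import math

def d_spacing_angstrom(two_theta_deg, wavelength=1.54):
    theta = math.radians(two_theta_deg / 2)
    return wavelength / (2 * math.sin(theta))

for label, two_theta in [("Ti3C2Tz 002", 7.5), ("Mo1.33CTz 0002", 5.9),
                         ("composite", 5.5), ("composite", 5.3)]:
    print(f"{label}: 2theta = {two_theta} deg, d = {d_spacing_angstrom(two_theta):.1f} Å")
```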
Based on the results of the XRD, SEM, and conductivity measurements discussed above, the composite films are expected to provide superior electrochemical performance for supercapacitors and batteries owing to their high electronic conductivity and their less compact stacking [29,30]. In the following discussions, we provide a general exploration of the electrochemical performance of mixed MXene films. We focus mainly on supercapacitors, but a brief investigation of their performance in lithium-ion batteries is also presented.
Electrochemical performance of the composite MXene films in supercapacitors
Electrochemical performance on gold/stainless-steel current collectors
The electrochemical performance of the composite films was studied using a three-electrode stainless-steel Swagelok cell with a gold current collector, see Fig. 4. A stable electrochemical behavior was observed for the composite MXene films when using a potential window of − 0.3 ~ 0.3 V vs. Ag/AgCl [51]. The electrodes were initially precycled for 30 cycles at a scan rate of 10 mV s − 1 (see Fig. S5). Only the Ti 3 C 2 T z electrodes showed slight anodic oxidation during the first CV cycle, which then stabilized in subsequent cycles. As a general trend, the electrode capacitance increased during the precycling process, as a result of the structure opening up and increasing the interfacial surface area of the electrode accessed by the electrolyte. As can be seen from the CV shapes in Fig. 4a, typical pseudocapacitive CVs were observed for the pristine MXenes as well as for the composite electrodes. Likewise, the constant-current measurements showed a similar behavior to the CV experiments, with a sloping plateau, see Fig. 4b. The 3Mo:1Ti electrodes delivered discharge capacitances of about 324 and 176 F g − 1 at current densities of 3 and 10 A g − 1 , respectively, while the 1Mo:1Ti electrodes delivered discharge capacitances of about 226 and 142 F g − 1 at the same current densities.
The variation of the gravimetric and volumetric capacitances of the composite MXene films with scan rate is shown in Fig. 4c and d. Notably, the composite MXene electrodes featured higher gravimetric and volumetric capacitances than the pristine Ti 3 C 2 T z (blue circles in Fig. 4c and d) and Mo 1.33 CT z (purple triangles in Fig. 4c and d) MXenes. The higher capacitance may have originated from a larger space between the MXene sheets in the composite electrodes, as a consequence of their curved layer morphology (see SEM images in Figs. 2 and S2). This is in agreement with previous studies on modified Ti 3 C 2 T z MXene [29]. This, in turn, results in an increase in the interfacial surface area of the electrodes accessed by the electrolyte, which allows in-plane ion transport and thus increases the number of accessible active sites during electrochemical cycling (see schematic illustration in Fig. 4e and f) [29].
In particular, at a low scan rate (2 mV s − 1 ) the 3Mo:1Ti electrodes displayed high gravimetric (460 F g − 1 ) and volumetric (1380 F cm − 3 ) capacitances (red squares in Fig. 4c and d). At higher scan rates of 200 and 1000 mV s − 1 , however, the capacitance dropped to 82 F g − 1 (247 F cm − 3 ) and 27 F g − 1 (80 F cm − 3 ), respectively. Consequently, capacitance retentions of about 18% and 6% were obtained at scan rates of 200 and 1000 mV s − 1 , respectively. On the other hand, the 1Mo:1Ti electrodes (black diamonds in Fig. 4c and d) featured a lower capacitance (326 F g − 1 , 1011 F cm − 3 ) at a scan rate of 2 mV s − 1 . When the scan rate was raised to 200 and 1000 mV s − 1 , the 1Mo:1Ti electrodes showed capacitances of 81 F g − 1 (252 F cm − 3 ) and 33 F g − 1 (103 F cm − 3 ), respectively. Therefore, the capacitance retentions were about 25% and 10% at scan rates of 200 and 1000 mV s − 1 , respectively. It can immediately be seen that the rate performance of the 1Mo:1Ti electrodes was much better than that of the 3Mo:1Ti electrodes. One possible reason for the lower rate performance of the 3Mo:1Ti electrodes is the increased iR-drop resulting from their lower electronic conductivity [60]. It should also be noted that both MXenes, Mo 1.33 CT z and Ti 3 C 2 T z , have mixed surface terminations (T) of F, OH, and O [10,44], and accordingly the mixed MXene composites most likely maintain similar terminations. Therefore, the surface terminations of the mixed MXene films possibly affect the electrochemical behavior (in H 2 SO 4 solutions) in an analogous way to that reported earlier for pristine MXenes [3,4,10,45].
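The retention figures quoted in this paragraph are simply the high-rate capacitance divided by the 2 mV s − 1 value; a minimal check using the numbers above:

```python
# Capacitance retention = C(high rate) / C(low rate); reproduces the
# percentages quoted above for the 3Mo:1Ti and 1Mo:1Ti electrodes.
def retention(c_high, c_low):
    return 100 * c_high / c_low

print(f"3Mo:1Ti @ 200 mV/s:  {retention(82, 460):.0f}%")   # ~18%
print(f"3Mo:1Ti @ 1000 mV/s: {retention(27, 460):.0f}%")   # ~6%
print(f"1Mo:1Ti @ 200 mV/s:  {retention(81, 326):.0f}%")   # ~25%
print(f"1Mo:1Ti @ 1000 mV/s: {retention(33, 326):.0f}%")   # ~10%
```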
The long-term cycling of the 1Mo:1Ti (see Fig. S6a) and 3Mo:1Ti (see Fig. S6b) electrodes at a current density of about 10 A g − 1 showed stable behavior over 10,000 cycles, with a coulombic efficiency approaching 100%. However, the capacitance retentions were about 80% for both composites. One possible reason for the fading is the dissolution of Ti 3 C 2 T z upon raising the potential above 0.2 V [61]. Therefore, we performed analogous experiments where the upper cut-off potential was limited to 0.2 V (i.e., a potential window between − 0.3 and 0.2 V). Notably, the change in capacitance with scan rate for the 1Mo:1Ti electrode obtained using an upper cut-off potential of 0.2 V was identical to that obtained using an upper cut-off potential of 0.3 V (see Fig. S6c). Furthermore, the fading was successfully reduced, and a capacitance retention of about 96% was obtained after 17,000 cycles for the 1Mo:1Ti electrodes (see Fig. S6e). In comparison, the capacitances of the 3Mo:1Ti electrodes were reduced upon using an upper cut-off potential of 0.2 V (see Fig. S6d), while the capacitance retention was improved in comparison to that using an upper cut-off potential of 0.3 V (see Fig. S6f). The 3Mo:1Ti electrodes showed impressive stability over 27,000 cycles. Furthermore, the morphology of the electrodes was maintained after the long-term cycling (see Fig. S7). The coulombic efficiency was almost 100%, reflecting the high reversibility of the electrochemical cycling. In addition, the increase in capacitance during electrochemical cycling seen in Fig. S6f can be explained by the fact that the composite electrode structure is further opened up during the long-term cycling (see Fig. S7).
Composite MXene films based on Mo 1.33 CT z prepared from Y-based MAX phase
The abovementioned composite films were obtained using Mo 1.33 CT z MXene prepared from the (Mo 2/3 Sc 1/3 ) 2 AlC MAX phase. However, motivated by the comparatively low abundance of Sc, we also investigated alternative pathways. As noted in the introduction, Mo 1.33 CT z MXene can also be derived from the (Mo 2/3 Y 1/3 ) 2 AlC i-MAX phase [47], and this material was used to prepare analogous composite films, hereafter denoted 3Mo[Y]:1Ti.
The CV shape of the 3Mo[Y]:1Ti electrodes at a scan rate of 10 mV s − 1 , shown in Fig. 5b, was very similar to that of the 3Mo[Sc]:1Ti, reflecting the fact that the electrochemical behavior of the two films is similar. Likewise, the variation of capacitance with scan rate showed a higher specific capacitance than those of the pristine MXenes but, analogous to the 3Mo[Sc]:1Ti, with a minor difference in the rate performance (see Fig. 5c). These results clearly indicate that the morphology, structure, and electrochemical performance of the composite MXene films are independent of the origin of the pristine Mo 1.33 CT z MXene. This allows the use of the (Mo 2/3 Y 1/3 ) 2 AlC i-MAX phase during MXene production.
Electrochemical performance on glassy carbon current collectors
The use of a glassy carbon current collector can expand the potential window of the MXene electrochemical cycling, since H 2 evolution is suppressed to a more negative potential than with gold/stainless-steel [62]. The electrochemical performance of the composite films was therefore studied using a three-electrode plastic Swagelok cell with glassy carbon current collectors (see Fig. 6). The use of glassy carbon not only enabled the expansion of the potential window (− 0.5 ~ 0.2 V vs. Ag/AgCl), but also improved the rate capability of the 1Mo:1Ti electrodes (see Fig. 6b). As can be seen from the CV shapes in Fig. 6a, typical pseudocapacitive CVs were observed for the pristine MXenes and the composite electrodes. However, a couple of redox peaks were observed upon expanding the potential window. These peaks were not detected on a stainless-steel substrate (see Fig. S8) due to the incomplete reduction of the electrodes at the lower cut-off potential (− 0.3 V). The peak-to-peak separation increases when going from the 1Mo:1Ti to the 3Mo:1Ti electrodes, which can be attributed to the increased iR-drop resulting from the lower electronic conductivity of the 3Mo:1Ti electrodes compared to that of the 1Mo:1Ti electrodes. The latter observation explains the better rate performance of the 1Mo:1Ti electrodes compared to the 3Mo:1Ti electrodes (see Fig. 6c and d).
As a general trend, the composite MXene electrodes delivered higher gravimetric and volumetric capacitances than the pristine Ti 3 C 2 T z (blue circles in Figs. 6c and d) and Mo 1.33 CT z (purple triangles in Figs. 6c and d) MXenes. As mentioned above, the higher capacitance can be attributed to the presence of more space between the MXene layers of the composite electrodes due to a more curved layer morphology [29]. The 3Mo:1Ti electrodes (red squares in Figs. 6c and d) delivered a high gravimetric (430 F g − 1 ) and volumetric (1290 F cm − 3 ) capacitance at a low scan rate (2 mV s − 1 ); however, the capacitance dropped to 55 F g − 1 (165 F cm − 3 ) and 15 F g − 1 (45 F cm − 3 ) at scan rates of 200 and 1000 mV s − 1 , respectively. In other words, the capacitance retention was about 13% and 4% at scan rates of 200 and 1000 mV s − 1 , respectively. In contrast, the 1Mo:1Ti electrodes (black diamonds in Figs. 6c and d) delivered a lower capacitance (304 F g − 1 , 942 F cm − 3 ) at a low scan rate (2 mV s − 1 ), while at scan rates of 200 and 1000 mV s − 1 the electrodes delivered capacitances of 165 F g − 1 (516 F cm − 3 ) and 74 F g − 1 (230 F cm − 3 ), respectively. This corresponds to capacitance retentions of about 55% and 25% at scan rates of 200 and 1000 mV s − 1 , respectively. The long-term cycling of the electrodes was stable over 10,000 cycles, with excellent retention and coulombic efficiency (see Fig. S9). The increase in capacitance during electrochemical cycling matches the results obtained using a stainless-steel cell and, as mentioned above, this can be attributed to the opening up of the structure of the composite electrodes. A test was performed to investigate the reproducibility of the electrochemical results (see Figs. S9b and c). The results revealed that the electrochemical behavior was the same for electrodes obtained from the same mixed MXene film as well as for another film prepared using reproduced pristine MXene batches. This reflects the good homogeneity and reproducibility of the mixed MXene films and the robustness of the synthesis protocol.
A pathway for improving the performance of the MXene electrodes is the use of 3 M rather than 1 M H 2 SO 4 [51,62]. Fig. 7a shows a comparison between the CVs, at a scan rate of 10 mV s − 1 , of the 1Mo:1Ti electrodes in 3 M (black solid line) and 1 M (purple dotted line) H 2 SO 4 . When the 3 M H 2 SO 4 electrolyte was used, a significant increase in the capacitance was observed. Furthermore, a promising rate performance was obtained; for instance, at a current density of 20 A g − 1 , the 1Mo:1Ti electrodes delivered a capacitance of about 282 F g − 1 (875 F cm − 3 ), and 80% of the capacitance was retained after 10,000 cycles with a coulombic efficiency approaching 100% (see Fig. 7c and d). Likewise, the 3Mo:1Ti showed an analogous CV shape to that of the 1Mo:1Ti film (see Fig. 7b). In addition, an enhancement of the specific capacitance and rate capability was also obtained. For example, at current densities of about 10 and 100 A g − 1 , the 3Mo:1Ti electrodes delivered capacitances of about 1200 F cm − 3 (400 F g − 1 ) and 864 F cm − 3 (288 F g − 1 ), respectively (see Figs. S10 and S11). However, the capacitance retention was about 85% after 2000 cycles at a high current density of 100 A g − 1 (see Fig. S11). Therefore, the use of 3 M H 2 SO 4 can enhance the capacitance and rate performance. This can be attributed to the higher conductivity of 3 M compared to 1 M H 2 SO 4 solutions [63]. However, it also reduces the long-term stability as a result of the dissolution of active material in the electrolyte, as well as the presence of parasitic side reactions such as H 2 evolution, which depends on the concentration of H + in the electrolyte solution [64]. The low coulombic efficiency at low rates (not shown) further confirmed the presence of parasitic side reactions. The dissolution of the active materials was also revealed by the change in the electrolyte color after long-term cycling.
Table S5 and Fig. S12 summarize a comparison of the supercapacitor performance of the Mo 1.33 CT z -Ti 3 C 2 T z electrodes reported in this work with other state-of-the-art Ti- and Mo-based MXene electrodes. The comparison illustrates the superior performance of the mixed MXene electrodes in terms of high capacitance, good rate performance, and high capacitance retention during long-term cycling.
Composite MXene films used in asymmetric supercapacitor devices
The 3Mo:1Ti film showed the highest capacitance among the investigated composite MXene films. Therefore, it was chosen as the negative electrode for an asymmetric supercapacitor (ASC) with an activated carbon positive electrode. As shown in Fig. 8a, the composite MXene (3Mo:1Ti) has a working potential window of − 0.5 ~ 0.2 V (vs. Ag/AgCl), whereas the activated carbon has a potential window of 0.0 ~ 1.0 V (vs. Ag/AgCl). Therefore, the asymmetric device featured a cell voltage of about 1.5 V. Cyclic voltammetry and constant-current measurements were used to test the electrochemical performance of the ASC. The device was precycled for 200 cycles at 10 mV s − 1 to obtain a stable electrochemical response with a typical pseudocapacitive CV shape (see Figs. 8b and S13). The ASC delivered discharge capacitances of about 57, 42, and 14 F g − 1 at scan rates of 2, 20, and 200 mV s − 1 , respectively (see Fig. 8b and d). In other words, capacitance retentions of about 74% and 25% were obtained upon increasing the scan rate by factors of 10 and 100, respectively, reflecting the good rate performance of the ASC. Likewise, the constant-current measurements showed sloping charge/discharge profiles, with discharge capacitances of about 50 and 10 F g − 1 at current densities of about 0.5 and 5 A g − 1 , respectively (see Fig. 8c and d). The device featured an energy density of about 16-18 Wh kg − 1 and a good capacitance retention (~75%) after long-term cycling at 5 A g − 1 .
Composite MXene films used in lithium-ion batteries
Inspired by the superior performance of the composite MXene film (3Mo:1Ti) in supercapacitors, we tested it as a negative electrode material for a Li-ion battery. Cyclic voltammetry and galvanostatic charge-discharge techniques were used to examine the electrochemical behavior for lithiation/delithiation in the potential window 0.05 ~ 3.0 V vs. Li + /Li. The electrodes were first cycled at 5 mV s − 1 for about 200 cycles to obtain a steady-state electrochemical performance. The capacity of the electrodes increased during this precycling (see Fig. S14a), indicating an increase in the interfacial surface area accessed by the electrolyte as a result of the structure opening up upon electrochemical cycling. Analogous behavior was previously observed for Mo 2 CT z MXene [65]. The CVs showed a pseudocapacitive shape with a redox pair at potentials of 0.8 and 1.4 V (vs. Li + /Li), in agreement with previous reports on molybdenum-based MXenes [53,65]. Anasori et al. showed that the electrochemical lithiation mechanism of molybdenum-based MXenes can be described as adsorption/intercalation of Li-ions for lower cut-off potentials ≥ 1.6 V (vs. Li + /Li), and a conversion reaction below 0.6 V (vs. Li + /Li) [53]. Considering our lower cut-off potential of 0.05 V (vs. Li + /Li), we possibly have both lithiation mechanisms. The CV rate performance (see Figs. S14b-d) showed capacities of about 106, 81, 46, 36, 24, and 15 mAh g − 1 at scan rates of 0.05, 0.1, 0.5, 1, 5, and 50 mV s − 1 , respectively. Likewise, the composite MXene electrodes delivered capacities of about 80, 51, 24, 19, and mAh g − 1 at current densities of about 10, 20, 100, 200, and 1000 mA g − 1 , respectively (see Figs. S15a-c). Furthermore, the low-rate capacity was maintained after cycling at a high rate (see Fig. S15b). In addition, the composite MXene films showed a capacity retention of about 89% after about 400 cycles at a current density of 200 mA g − 1 .
Previous studies showed that Li-ion adsorption on O terminations is more favorable [66] than on OH and F terminations, which can significantly affect the accessible capacity [67][68][69]. For example, the theoretical capacities of O-terminated and OH-terminated Mo 2 TiC 2 T z were 181 and 43 mAh g − 1 , respectively [53]. Given that Mo 1.33 CT z has fewer of the favorable O terminations and instead possesses mixed terminations with a higher F content (proposed formula Mo 1.2 CO 0.7 (OH) 0.5 F 1.1 ) [44], which is not favorable for Li-ion adsorption [67][68][69], the accessible capacity of Mo 1.33 CT z MXene is expected to be lower than that of other molybdenum-based MXenes with more favorable O terminations (e.g. Mo 2 CT z ) [44]. It should also be noted that the accessible capacity of MXene-based electrodes can be significantly influenced by the method of electrode manufacturing [37], and it has been shown that vacuum-dried MXene electrodes can possess a lower capacity than those prepared by natural sedimentation [38] or freeze drying [70]. Therefore, future studies on the composite MXene electrodes are recommended, to explore the potential for improvements in the accessible capacity for Li-ion intercalation by modifying the pathways for electrode manufacturing.
Altogether, the mixed MXene approach, comprising the formation of composites with double transition metals (Mo and Ti), enabled improvement in the properties of each individual MXene. Depending on the mixing ratio, the electronic conductivities of the mixed MXene films were 24-140 times higher than that of Mo 1.33 CT z MXene, and the gravimetric capacitances of the mixed MXene films were 1.3-1.9 times higher than that of Ti 3 C 2 T z . Moreover, the specific capacitance of the mixed MXenes outperformed the previously reported values for the double transition metal MXene (Mo 2 TiC 2 T z ) [53]. Future studies are motivated to explore other MXene composites, targeting optimized properties for superior performance in energy storage applications.
Conclusion
In summary, we reported a one-step protocol for fabricating composite MXene films based on Ti 3 C 2 T z and Mo 1.33 CT z MXenes. The results were independent of the parent material used for derivation of the Mo 1.33 CT z MXene, the (Mo 2/3 Y 1/3 ) 2 AlC or (Mo 2/3 Sc 1/3 ) 2 AlC i-MAX phase. The composite films were extremely flexible and showed good electronic conductivity (140 S cm − 1 ). The SEM cross-sections showed that the composite films are more porous and possess a more curved layer morphology, which may enhance in-plane ion transport and increase the number of accessible active sites for electrochemical reactions. Furthermore, the EDX elemental mapping showed a homogeneous distribution of the Mo and Ti elements within the films.
The composite films featured a high gravimetric and volumetric capacitance in comparison to the pristine MXene films. At a low scan rate, the 3Mo:1Ti film showed a higher capacitance than the 1Mo:1Ti film, whereas the performance of the 1Mo:1Ti film was superior to that of the 3Mo:1Ti at a high scan rate. This can be attributed to the roughly six times higher electronic conductivity of the former film (1Mo:1Ti). Good capacitance retention was observed for the 1Mo:1Ti and 3Mo:1Ti films, 96% and 109% after more than 17,000 and 27,000 cycles, respectively. The use of a glassy carbon current collector can expand the potential window of cycling (− 0.5 ~ 0.2 V vs. Ag/AgCl), compared to (− 0.3 ~ 0.2 V vs. Ag/AgCl) on gold/stainless steel. The use of 3 M H 2 SO 4 improved both the capacitance and the rate performance; however, it reduced the reversibility at low rates due to parasitic side reactions. Furthermore, an asymmetric supercapacitor (ASC) of a composite MXene film and activated carbon featured an operating voltage of 1.5 V, a capacitance of about 57 F g − 1 , and an energy density of about 16-18 Wh kg − 1 . The composite MXene was also tested as a negative electrode for a lithium-ion battery, delivering a capacity of about 106 mAh g − 1 .
Experimental
The parent MAX phases were synthesized as follows. Powders of Ti 3 AlC 2 were produced, starting with a mixture of TiC (Alfa Aesar, 98+%), Ti (Alfa Aesar, 98+%) and Al (Alfa Aesar, 98+%) in a 1:1:2 molar ratio, which were mixed together using a mortar and pestle for 5 min. They were then inserted into an alumina tube furnace with Ar gas flowing. The furnace was heated at a rate of 5 °C/min up to 1450 °C and held for 280 min, then cooled down to room temperature. The resulting material is a lightly sintered Ti 3 AlC 2 sample, which was then crushed into a powder of particle size < 60 µm using a mortar and pestle (see Fig. S1).
To convert the Ti 3 AlC 2 to Ti 3 C 2 T z flakes, half a gram of Ti 3 AlC 2 powder was added to a premixed 10 ml aqueous solution of 12 M HCl (Fisher, technical grade) and 2.3 M LiF (Alfa Aesar, 98+%) in a Teflon bottle. Prior to adding the MAX powder to the HCl-LiF solution, the latter was placed in an ice bath. After adding the MAX powder, the whole mixture was kept in the ice bath for 0.5 h to avoid the initial overheating that can result from the exothermic nature of the reaction. The bottle was then placed on a magnetic stirrer hot plate in an oil bath and held at 35 °C for 24 h, after which the mixture was washed through 3 cycles of 40 ml of 1 M HCl, followed by 3 cycles of 40 ml of 1 M LiCl (Alfa Aesar, 98+%). Then, the mixture was washed through repeated cycles of 40 ml of distilled water until the supernatant reached a pH of approximately 6. After washing, 45 ml of distilled water were added and deaerated by bubbling N 2 gas through it. It was then sonicated using an ultrasonic bath for 1 h. The resulting suspension was centrifuged for 20 min at 2000 rpm. The supernatant produced had a concentration of 4-7.5 mg ml − 1 . More details can be found in Ref. [10].
To produce Mo 1.33 CT z , one gram of the (Mo 2/3 Sc 1/3 ) 2 AlC i-MAX phase powder was added to 20 ml of 48% HF and stirred for 24 h at room temperature. After the reaction, the product was washed with deionized water. After washing, the multilayer MXene was delaminated into single/few-layer MXene by intercalation with 10 ml of a solution of 54-56 wt% tetrabutylammonium hydroxide, TBAOH (Sigma Aldrich, Sweden), which was shaken manually for 5 min. Extra TBAOH was removed by centrifuging at 5000 rpm for 5 min and by carefully rinsing (three times) with water. Then, water was added to the intercalated powder and the mixture was shaken for 5 min, for delamination into single- or few-layered MXene. Finally, a homogeneous delaminated Mo 1.33 CT z MXene suspension was obtained by centrifuging for 30 min at 3500 rpm, with a concentration of 1-2 mg ml − 1 .
Two grams of (Mo 2/3 Y 1/3 ) 2 AlC powders (− 450 mesh) were immersed in a Teflon bottle containing 40 ml of 25 vol% HF (Sigma Aldrich, St. Louis, USA). The mixture was stirred using a Teflon-coated magnetic stirrer bar for 120 h at RT. The resulting mixture was washed with N 2 -deaerated deionized (DI) water for several cycles until the pH was ≈ 6 (typically, 8 cycles were needed). For each washing cycle, 40 ml of DI water were added to the multilayer powder in a centrifuge tube, which was hand-shaken for 1 min and then centrifuged at 5000 rpm for 1 min, after which the supernatant was decanted. For delamination, 1 g of the multilayer powder was added to 5 ml of an aqueous solution of 54-56 wt% TBAOH. The mixture was hand-shaken for 5 min, and then washed 3 times, using 40 ml of deaerated DI water each time. Subsequently, 50 ml of deaerated DI water was added to the intercalated powder and hand-shaken for 5 min, followed by centrifuging at 2500 rpm for 30 min. The resultant supernatant contained delaminated single- or few-layer Mo 1.33 CT z flakes at a concentration of 3 mg ml − 1 .
Synthesis of composite MXene films
The prepared MXene suspensions were mixed according to the weight ratios summarized in Table 1, and then shaken by hand for 30 s, followed by vacuum filtration using a Celgard 3501 membrane.The films were allowed to dry in air, peeled off the Celgard membrane, and stored under inert Ar atmosphere for further use.
Material characterization and electrochemical measurements
A scanning electron microscope (SEM, LEO 1550 Gemini) fitted with an energy-dispersive X-ray (EDX) detector was used to explore the morphology, thickness, and homogeneity of the composite MXene films. X-ray diffraction (XRD) patterns were collected using a PANalytical diffractometer equipped with a Cu K α radiation source (λ = 1.54 Å, step size 0.0084°, time per step 20 s).
X-ray photoelectron spectroscopy (XPS) measurements were performed on freestanding film samples of Mo 1.33 CT z , 3Mo:1Ti, 1Mo:1Ti, 1Mo:3Ti, and Ti 3 C 2 T z using a surface analysis system (Kratos AXIS Ultra DLD, Manchester, UK) with monochromatic Al K α (1486.6 eV) radiation. The sample was mounted on double-sided tape and grounded to the sample stage with copper contacts. The X-ray beam irradiated the sample surface at an angle of 45° with respect to the surface and provided an X-ray spot of ≈ 300 × 800 µm. Charge neutralization was performed using a co-axial, low-energy (~0.1 eV) electron flood source to avoid shifts in the recorded binding energy (BE). XPS spectra were recorded for the Ti 2p, Mo 3d, and C 1s regions. The analyzer pass energy used for all regions was 20 eV, with a step size of 0.1 eV. The BE scale of all the XPS spectra was referenced to the Fermi edge (E F ), which was set to a BE of zero eV. The peak fitting was carried out using CasaXPS Version 2.3.16RP 1.6 in the same manner as in Refs. [65,71]. The XPS survey spectra were used to obtain the global atomic elemental percentages. For that measurement, the analyzer pass energy was set to 160 eV and the step size was 0.1 eV.
All electrochemical analyses were done using three-electrode stainless-steel and plastic Swagelok cells with gold and glassy carbon current collectors, respectively. A piece of Celgard 3501 soaked in 1 M or 3 M H 2 SO 4 solution was used as the separator. An Ag/AgCl (3.5 M KCl) electrode and a circular piece of activated carbon (YP-50, Kuraray, Japan) were employed as the reference and counter electrodes, respectively. The pristine MXene and composite MXene films were used as the working electrodes. These working electrodes had a diameter of about 4.0 mm and their overall mass varied between 250 and 270 µg. Cyclic voltammetry and constant-current techniques were used to explore the electrochemical activity of the electrodes. For the measurements using gold/stainless-steel cells, potential windows of − 0.3 ~ 0.3 V or − 0.3 ~ 0.2 V vs. Ag/AgCl were used, whereas a potential window of − 0.5 ~ 0.2 V was used for the measurements made using glassy carbon/plastic cells. The packing density of the films was calculated from their areal mass loading and their thickness.
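As a hedged sketch of how the reported numbers relate to these measurements, the snippet below computes a gravimetric capacitance from a constant-current discharge (C = IΔt/(ΔV·m)) and converts it to a volumetric value via the packing density (areal loading divided by thickness). The discharge time and film thickness used here are illustrative placeholders, not measured values from this study.

```python
# Gravimetric capacitance from a constant-current discharge and conversion
# to a volumetric value via the packing density. Placeholder inputs only.
def gravimetric_capacitance(current_A, discharge_time_s, potential_window_V, mass_g):
    return current_A * discharge_time_s / (potential_window_V * mass_g)

def packing_density_g_cm3(areal_loading_mg_cm2, thickness_um):
    return (areal_loading_mg_cm2 * 1e-3) / (thickness_um * 1e-4)

mass = 260e-6                                            # g, within the 250-270 ug range
c_g = gravimetric_capacitance(3 * mass, 90, 0.6, mass)   # 3 A/g for an assumed 90 s over 0.6 V
rho = packing_density_g_cm3(2.1, 7)                      # 7 um thickness is an assumption
print(f"C_grav = {c_g:.0f} F g^-1, C_vol = {c_g * rho:.0f} F cm^-3")
```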
The ASC experiments were carried out using an activated carbon positive electrode and a composite MXene film negative electrode. The electrodes' diameter was 4.0 mm and their masses (m + and m − ) were balanced according to the equivalent stored charge (Q + = Q − ), using the relation Q +/− = specific capacitance × m +/− . Cyclic voltammetry and galvanostatic charge-discharge techniques were employed to examine the electrochemical behavior of the ASC devices, and the cell voltage was varied between 0 and 1.5 V. The energy density (E) was calculated using the equation E (Wh kg − 1 ) = 0.5 × specific capacitance (F g − 1 ) × (potential window (V)) 2 / 3.6.
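The charge-balancing and energy-density relations above translate directly into a few lines of code; a minimal sketch follows, where the single-electrode capacitances used for the mass ratio are placeholders rather than values reported in this work.

```python
# Charge balancing (Q+ = Q-) and energy density for the asymmetric cell,
# following the equations given above. Electrode capacitances are placeholders.
def mass_ratio(c_pos_F_g, c_neg_F_g):
    """m+/m- such that c_pos*m+ = c_neg*m- (equal stored charge)."""
    return c_neg_F_g / c_pos_F_g

def energy_density_Wh_kg(cell_capacitance_F_g, cell_voltage_V):
    return 0.5 * cell_capacitance_F_g * cell_voltage_V ** 2 / 3.6

print(f"m+/m- = {mass_ratio(100, 400):.1f}")                 # placeholder capacitances
print(f"E = {energy_density_Wh_kg(57, 1.5):.1f} Wh kg^-1")   # ~17.8, within the 16-18 range
```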
The electrochemical performance of the composite MXene in a lithium-ion battery was measured using a coin cell with a lithium metal counter electrode and a composite MXene working electrode. The cells were assembled in an Ar-filled glove box (O 2 and H 2 O levels ≤ 1 ppm). A piece of glass fiber paper was used as the separator, and 1 M LiPF 6 in a 1:1 mixture of ethylene carbonate (EC) and diethyl carbonate (DEC) was used as the electrolyte. Cyclic voltammetry and galvanostatic charging/discharging in the potential window 0.05 ~ 3 V (vs. Li + /Li) were used to test the lithium-ion cells.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 4. Electrochemical performance in 1 M H 2 SO 4 using a gold/stainless-steel current collector: (a) CVs at a scan rate of 10 mV s − 1 for Ti 3 C 2 T z (blue dotted line), Mo 1.33 CT z (purple dashed line), 1Mo:1Ti (black solid line), and 3Mo:1Ti (red solid line). (b) Potential-capacitance profiles of 1Mo:1Ti (dotted lines) and 3Mo:1Ti (solid lines) electrodes, at current densities of 3 (blue) and 10 (purple) A g − 1 . (c) and (d) Variation of the gravimetric and volumetric capacitances, respectively, with the logarithm of scan rate for Ti 3 C 2 T z (blue circles), Mo 1.33 CT z (purple triangles), 1Mo:1Ti (black diamonds), and 3Mo:1Ti (red squares) electrodes. (e) and (f) Schematic illustrations for single-type MXene and composite MXene films, respectively, showing the in-plane ion transport in the composite MXene owing to a more curved layered morphology. Black spheres represent the electrolyte ions.
Fig. 6. Electrochemical performance in 1 M H 2 SO 4 using a glassy carbon current collector: (a) CVs at a scan rate of 10 mV s − 1 for Ti 3 C 2 T z (blue dashed line), Mo 1.33 CT z (purple dashed line), 1Mo:1Ti (black solid line), and 3Mo:1Ti (red solid line). (b) Variation of the volumetric capacitance with the logarithm of scan rate using stainless-steel (red diamonds) and glassy carbon (blue diamonds) substrates. (c) and (d) Variation of the gravimetric and volumetric capacitances, respectively, with the logarithm of scan rate for Ti 3 C 2 T z (blue circles), Mo 1.33 CT z (purple triangles), 1Mo:1Ti (black diamonds), and 3Mo:1Ti (red squares) electrodes.
Fig. 7. Electrochemical performance in 3 M H 2 SO 4 using a glassy carbon current collector: (a) CVs at a scan rate of 10 mV s − 1 of 1Mo:1Ti electrodes in 3 M H 2 SO 4 (black solid line) and 1 M H 2 SO 4 (purple dotted line). (b) CVs at a scan rate of 10 mV s − 1 for 1Mo:1Ti (black solid line) and 3Mo:1Ti (red solid line). (c) Potential-capacitance profiles of 1Mo:1Ti electrodes at a current density of 20 A g − 1 . (d) Long-term cycling of 1Mo:1Ti electrodes at a current density of 20 A g − 1 : capacitance (red diamonds), retention (black circles), and coulombic efficiency (blue circles).
Fig. 8. Electrochemical performance of the asymmetric supercapacitor in 1 M H 2 SO 4 using a glassy carbon current collector: (a) CVs at a scan rate of 10 mV s − 1 of the 3Mo:1Ti electrode (blue line) and activated carbon (red line) electrodes using a 3-electrode setup. (b) CVs of the ASC at different scan rates. (c) Potential-capacitance profiles of the ASC at different current densities. (d) Variation of the gravimetric capacitances of the ASC with the logarithm of scan rates (black diamonds) and current densities (red squares). Inset in (d) shows the long-term cycling of the ASC at 5 A g − 1 .
Table 1. Summary of the composite Mo 1.33 CT z -Ti 3 C 2 T z electrodes prepared in this study. | 11,418.4 | 2021-06-24T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Promoter conservation in HDACs points to functional implications
Background Histone deacetylases (HDACs) are the proteins responsible for removing the acetyl group from lysine residues of core histones in chromosomes, a crucial component of gene regulation. Eleven known HDACs exist in humans and most other vertebrates. While the basic function of HDACs has been well characterized and new discoveries are still being made, the transcriptional regulation of their corresponding genes is still poorly understood. Results Here, we conducted a computational analysis of the eleven HDAC promoter sequences in 25 vertebrate species to determine whether transcription factor binding sites (TFBSs) are conserved in HDAC evolution, and if so, whether they provide useful information about HDAC expression and function. Furthermore, we used tissue-specific information of transcription factors to investigate the potential expression patterns of HDACs in different human tissues based on their transcription factor binding sites. We found that the TFBS profiles of most of the HDACs were well conserved in closely related species, for all HDAC promoters except HDAC7 and HDAC10. HDAC5 had particularly strong conservation across over half of the species studied, with nearly identical profiles in the primate species. Our comparisons of TFBSs with the tissue-specific gene expression profiles of their corresponding TFs showed that most HDACs had the ability to be ubiquitously expressed. A few HDAC promoters exhibited the potential for preferential expression in certain tissues, most notably HDAC11 in gall bladder, while HDAC9 seemed to have less propensity for expression in the nervous system. Conclusions In general, we found evolutionary conservation in HDAC promoters that seems to be more prominent for the ubiquitously expressed HDACs. In turn, when conservation did not follow the usual phylogeny, human TFBS patterns indicated possible functional relevance. While we found that HDACs appear to be uniformly expressed, we confirm that the functional differences in HDACs may be less a matter of location of activity than a question of which proteins and which acetyl groups they may be acting on. Electronic supplementary material The online version of this article (10.1186/s12864-019-5973-x) contains supplementary material, which is available to authorized users.
Background
Histone deacetylases (HDACs) remove the acetyl group from lysine residues of the N-terminal tail of core histones, allowing the repression of transcription. These metal binding proteins are mostly active in large multiprotein complexes, and can also act on non-histone proteins. Human histone deacetylases require zinc, and have been grouped into different classes based on their sequence similarity to homologues they have evolved from in yeast. Class I HDACs (1,2,3,8) are most similar to yeast RPD3 protein, while Class II HDACs (4,5,6,7,9,10) are homologues of yeast HDA1 [1]. HDAC11 forms Class IV on its own, sharing features from both Class I and Class II enzymes [1], while the sirtuin enzymes, which require NAD+ for catalysis and were formerly categorized as Class III HDACs, have evolved independently.
The HDAC proteins have been well characterized, and the position of their active site(s), their genomic position and cellular localization are well established (Table 1). Their modes of action have been investigated extensively over the last two decades, with particular emphasis on HDAC inhibitors as possible drugs for use in cancer therapy [4,5]. However, the high level of similarity between the HDACs and the seemingly interchangeable nature of their activity makes them a complex family of proteins that has proven difficult to fully decipher [6].
Early studies on HDAC evolution found evidence of an ancient family of proteins with de-acetylase activity [7]. At the time, research focused on phylogenetic studies of protein sequences for the characterization of vertebrate HDAC active domains and localization signals to infer functional overlap and clues of alternative functions [8]. It was quickly ascertained that HDAC1 and HDAC2 are closely related and work in concert most of the time, and more recent work confirms that one is not a direct substitute for the other [9]. HDAC3 is equally widely expressed, interacts with Class II HDACs, and affects a wide range of cellular processes [10][11][12].
HDAC8, while also considered a Class I HDAC, seems to have evolved from a separate, equally ancient lineage that works on multiple substrates and is involved in several pathways [13,14]. This histone deacetylase has a particular structure/function conformation that is different from its human homologues [15] and a propensity for de-fatty-acetylation [16]. Such findings suggest that some of the functional differences among HDACs may be linked to the nature of the acetyl compounds that they remove from their substrate proteins, which has profound implications on how we view and investigate this enzyme family.
Traditionally considered as recruiters for Class I HDACs due to their low catalytic activity when compared to other histone deacetylases [17], Class II HDACs 4, 5, 7, and 9 are now known to be active in their own right, playing a central role in regulating gene expression relating to muscle development, tissue differentiation and other pathways [18]. Extensive research has shown that HDAC4 is involved in a myriad of roles [19,20], while HDAC5 is increasingly implicated in axon regeneration [21] and in cardiovascular contexts [6]. HDAC7 seems to play an important role for bone development [22] and in diabetes [23], while a flurry of recent articles have similarly associated HDAC9 with several disease pathways including various cancers and stroke [24][25][26].
HDACs 6 and 10 seem to have an interesting relationship and are often classified separately as Class IIB, as both have two highly similar catalytic domains, although the second domain of HDAC10 is considered inactive [1]. Comparative sequence analysis indicated that HDAC10 and HDAC6 may have shared a common ancestor at some point in vertebrate evolution [7], and ideas as to how both evolved separate functions are beginning to emerge. HDAC6 was the first histone deacetylase that was shown to work on a non-histone protein, tubulin, and is predominantly cytosolic [27], while recent work indicates that HDAC10 acts as a polyamine deacetylase [28]. Like other HDACs, both seem to be active in a variety of developmental and pathological contexts.
The only Class IV member, HDAC11, is arguably also the least well understood of the HDAC family. Recent reviews focus on its role in the immune system [29,30], to the exclusion of other roles it may play that have not yet been discovered.
Given their vital and extensive roles in the regulation of gene expression and protein activity in eukaryotic genomes both in and out of the nucleus, relatively little is known about the regulation of HDAC expression. We expect that the transcription of HDACs does not differ markedly from other genes whose promoters are regulated by histone phosphorylation and acetylation [31], which they, as the histone deacetylases, are necessarily involved in [32]. Regulation of HDACs in cancer cells by the ubiquitous transcription factors Sp1 and Sp3 has been well investigated [33], and their expression profile has been studied in some disease cases [34,35]. Furthermore, HDACs are subjected to the same array of post-translational modification as other proteins [36]. Increasing evidence is being accumulated about HDAC roles in development, housekeeping functions, and disease onset and progression, none the least in cancers. In plants, histone deacetylases have been shown to act on a wide array of molecules, including N-acetyleserotonin [37]. Despite our expanding knowledge, there seems to be an urgent need to elucidate their separate functions and intersections of function, and to better understand how their own expression is regulated. Turning to computational methods as potential guides to bench experimentation, we conducted an in-depth analysis of HDAC promoter sequences with two questions in mind: Are transcription factor binding sites (TFBSs) conserved in HDAC evolution, and if so, do they provide useful information about HDAC transcriptional regulation and HDAC function?
These questions were fueled by recent literature on the slow evolution of TFBSs [38,39] and their potential use in highlighting gene expression patterns (reviewed in [40]). Given that there is no gold standard to assess methods for TF analysis [41], and as divergent as promoter sequences can be among closely related species (e.g. [42]) and among the promoter regions of closely related genes (e.g. [43]), there seems to be enough signal in them to imply functional relevance [39] which can then be confirmed by experimental data. Given the functional overlap that HDACs seem to have, and increasing evidence of their ubiquity in the human system, we were curious whether there were any signals in their promoters which could help deepen our understanding of this enzyme family.
Evolutionary conservation of TFBSs in HDAC promoters
We found that human TFBS patterns in HDAC promoters are evolutionarily conserved across all HDACs, with only HDACs 5, 7 and 10 indicating unusual patterns of TFBS distribution along the promoter region. In Fig. 1 we present the HDAC1 promoter alignment as an example of the Genomatix output, showing promoter sequences aligned according to the quantitative phylogenetic distances between their TFBS patterns. Here, the TFBS patterns appeared in the predictable evolutionary groupings, with apes (H. sapiens, G. gorilla and P. troglodytes) and rodents (R. norvegicus and M. musculus) forming clades. We observed similar patterns in HDACs 2, 3, 4, 6, 8, 9 and in HDAC11 (Additional file 2: Figure S1, Additional file 3: Figure S2, Additional file 4: Figure S3, Additional file 5: Figure S4, Additional file 6: Figure S5, Additional file 7: Figure S6 and Additional file 8: Figure S7). HDACs 8 and 9 had evolutionarily conserved TFBS patterns in closely related organisms (Additional file 6: Figure S5, Additional file 7: Figure S6). Notably, a HDAC8 equivalent was absent in P. troglodytes while an HDAC9 equivalent was missing from G. gorilla.
The promoter of HDAC5 was conserved for the greatest number of species (Fig. 2), and had two fish species (P. reticulata and D. rerio) clustering close to two old world monkeys, while promoters of HDAC5 homologues from rat and mouse occupied different ends of the dendrogram. Promoter sequences of HDACs 7 and 10 also showed transcription factor binding site patterns where the classic phylogenetic lineages did not hold true. The TFBS profile of human HDAC7 (Fig. 3) appeared to be most similar to that of pig S. scrofa, green monkey C. sabaeus and rat R. norvegicus, while the TFBSs from other primate promoters appeared to follow different patterns. The promoter of human HDAC10 (Fig. 4) was most similar to that of rabbit, O. cuniculus, while the promoter sequences of other primate HDAC10s were more similar to that from horse, E. caballus.
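Although the promoter alignments shown here were generated with Genomatix, the underlying idea, ordering promoters by the similarity of their TFBS patterns, can be illustrated with a simple stand-in: binary presence/absence profiles, Jaccard distances and average-linkage hierarchical clustering. The species list and motif matrix below are invented purely for illustration; this is not the Genomatix algorithm.

```python
# Minimal stand-in for clustering promoters by TFBS pattern similarity.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

species = ["H.sapiens", "P.troglodytes", "G.gorilla",
           "M.musculus", "R.norvegicus", "D.rerio"]
# Rows = species, columns = TFBS families (1 = site predicted in the promoter).
profiles = np.array([
    [1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
], dtype=bool)

tree = linkage(pdist(profiles, metric="jaccard"), method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(species, clusters)))   # roughly separates primates from the rest
```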
TFBS patterns provide useful information on HDAC regulation and function
We analyzed the large amount of gene expression data available from mRNA studies [44] and found that most HDACs seem able to be expressed in most tissues, albeit with higher expression levels in some tissues than in others. For the most part, these expression profiles were based on experiments that were not targeting HDAC function per se, so drawing patterns of tissue specificity from these results may be challenging.
Using the previously gathered promoter motif data in a new context, we considered transcription factor binding site trimers present on HDAC promoters as indicators of expression in a given tissue. We used a non-exclusive set, defined as TFs expressed in most tissues, and a preferentially expressed set of TFs that were more highly expressed in the given tissues when compared to other tissues found in Genomatix (as described in Methods). A trimer was only considered if all 3 units (a unit being the transcription factor whose binding site is present on the relevant HDAC promoter) were expressed within the given tissue, which we took as evidence that the corresponding HDAC is expressed in the underlying tissue. To help identify overarching patterns, we collapsed the 59 non-disease tissue specificity designations available into the 11 human biological systems (Additional file 1: Table S1), namely the cardiovascular/hematopoietic, digestive, endocrine, excretory, immune/lymphatic, integumentary, muscular, nervous, respiratory, reproductive, and skeletal systems, with an additional designation for embryonic expression. In Table 2, we list the biological systems where we found at least one TFBS trimer in a human HDAC promoter sequence, and compare these results to previously reported tissue specificity of HDACs. Since we only used human data in this analysis, the absence of a TFBS trimer on an HDAC promoter could indicate that this HDAC is not expressed in the corresponding tissue, indicated in Table 2 using an "All Except" annotation.
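The trimer-to-system mapping just described can be sketched as follows. This is a minimal illustration, not the authors' code: the data structures (a promoter's ordered TFBS list, per-TF tissue sets, and the tissue-to-system table) are hypothetical stand-ins for the Genomatix annotations.

```python
# Hedged sketch of the trimer-based tissue inference described above.
from collections import defaultdict

def systems_with_trimer_support(promoter_tfbs, tf_expression, tissue_to_system):
    """promoter_tfbs: ordered list of TF names whose binding sites lie on one HDAC promoter.
    tf_expression: dict TF name -> set of tissues in which that TF is expressed.
    tissue_to_system: dict tissue -> one of the 11 biological systems (+ 'embryonic')."""
    supported = defaultdict(int)
    for i in range(len(promoter_tfbs) - 2):
        trimer = promoter_tfbs[i:i + 3]
        # a trimer counts for a tissue only if all three TFs are expressed there
        shared_tissues = set.intersection(*(tf_expression.get(tf, set()) for tf in trimer))
        for tissue in shared_tissues:
            supported[tissue_to_system[tissue]] += 1
    return dict(supported)
```

Running this once per HDAC promoter, first with the non-exclusive TF set and then with the preferentially expressed set, yields the two prediction columns summarized in Table 2.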
Class I HDACs 1, 2, and 3 are particularly well-studied and known to be ubiquitously expressed [9][10][11][12], validating our approach. HDAC8, known to be active on multiple substrates [15] and many different proteins [48], is only reported to be highly expressed in smooth muscle, while our results suggest that HDAC8 is another ubiquitous HDAC (Table 2). In fact, when non-exclusive TFs were included in this trimer analysis, almost all of the HDACs had fairly widespread tissue representation, with the exception of HDACs 7 and 11, which had fewer tissues represented in the results, suggesting a narrower expression range. When only preferentially expressed TFs were considered, HDAC3 had TFBS trimers from only three major systems: cardiovascular/hematopoietic, immune/lymphatic, and muscular. HDACs 9 and 10 also had these same three major systems represented as TFBS trimers in their promoters, in addition to the embryonic system in HDAC9 and the endocrine system in HDAC10 promoters. The promoter sequence of HDAC11 had no trimers present when the preferential expression filter was applied, suggesting that its expression is governed only through non-exclusive TFs.
This analysis of trimers also indicated that there were higher instances of TFs across the HDACs that were expressed in the nervous, immune and endocrine systems. However, this observation may reflect the fact that there were disproportionately more TFs listed under these systems in the Genomatix annotation than for other systems, such as respiratory or integumentary. Therefore, we fine-tuned our approach, treating TFBSs individually and determining the observed numbers of binding sites of TFs that appear in an HDAC promoter sequence and are expressed in specific tissues, and used a log2-fold change to assess the significance of our findings (see Methods) when we compared observed to expected numbers. In the heatmap in Fig. 5a, where we considered a non-exclusive set of TFs that were expressed in most tissues, we observed that HDACs 1, 4, 5, 6 and 8 appear to be largely associated with the majority of the 59 tissues. In turn, HDACs 2, 3, 9, 10 and 11 seem to be less likely expressed in most tissues. In particular, HDAC7 seems to be associated with expression in blood cells, and less in the embryonic system, while TFBSs for TFs associated with the nervous system seem to be under-represented in the promoter of HDAC9. Furthermore, HDACs 6 and 8 appear strongly over-represented in thyroid gland, lung and cartilage. HDAC10 had a low score for TFBSs associated with muscle tissue, while HDAC11 had a high score for TFBSs associated with the gall bladder.
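A minimal sketch of the enrichment score behind these heatmaps is shown below, following the expected-value and log2-fold-change definitions given in the Methods; the function name and the simple count-based inputs are assumptions for illustration only.

```python
# Minimal sketch of the log2-fold enrichment used for Fig. 5 style heatmaps.
import math

def log2_fold_change(observed, n_promoter_sites, n_tf_in_tissue, n_tf_total):
    """observed: TFBSs on promoter p whose TFs are expressed in tissue t.
    Expected count follows E_{p,t} = (x_t / |union of all tissue TF sets|) * n_p."""
    expected = (n_tf_in_tissue / n_tf_total) * n_promoter_sites
    if observed == 0 or expected == 0:
        return float("-inf") if observed == 0 else float("inf")
    return math.log2(observed / expected)

# A promoter is called enriched in a tissue if the score exceeds +1 and
# diluted if it falls below -1, mirroring the thresholds used in the text.
```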
As for the heatmap in Fig. 5b, where we considered the preferentially expressed set of TFs (those more highly expressed in a given tissue than in others), we observed a marked paucity of binding sites on HDAC promoters. Specifically, we found enrichment for TFBSs associated with the nervous system in the HDAC3 promoter, and for those associated specifically with neuroglia and smooth muscle in the promoter of HDAC7. Similar to the trimer results, we observed fewer TFs associated with the nervous system on the promoter of HDAC9, while the high specificity for expression in the gall bladder remained highlighted in the promoter region of HDAC11.
Discussion
In this work, we were curious whether there was any evolutionary conservation in HDAC promoter sequences, and if so, what it can tell us about the transcriptional regulation and function of the eleven human HDACs. Our results confirmed that, in general, there was evolutionary conservation in HDAC promoters, and in cases where this conservation did not align with currently accepted phylogeny, the pattern of TFBS arrangement on human promoters showed some similarity with different species, indicating a possible functional relevance. Unusual patterns in genetic sequence phylogenies suggest dynamic and relatively recent changes in the evolution of such sequences, implying evolutionarily recent changes in the way the corresponding HDACs are regulated [49]. In fact, promoter conservation among vertebrate species seems to be more prominent for the ubiquitously expressed HDACs, particularly for HDACs 1 and 2, suggesting that these have not undergone recent evolution, a hypothesis in line with literature on the evolution of so-called "housekeeping" and "essential" genes [50]. Those HDACs that exhibit an unusual pattern of TFBSs on their promoters also seem to have a propensity for expression in fewer tissues, as seen in our results for HDAC5 and its possible preferential association with the cardiovascular/hematopoietic, muscular, nervous and endocrine tissues (Table 2). The exceptional case in our results is HDAC11, which has a conserved promoter region that followed conventional species phylogenies. Yet, HDAC11 also exhibited a possible preferential association with expression in the gall bladder, a conclusion we recommend be followed up with laboratory experimentation.
Overall, our results imply that most HDACs can be ubiquitously expressed. For example, our results for the HDAC8 promoter region concur with studies into the evolution of HDAC catalytic domains, highlighting the relatively recent functional evolution of HDAC8 [8]. In turn, these are supported by recent discoveries about HDAC8 function [15,16,51], which further indicate that the differences between the HDACs may lie less in where they are expressed than in structure/function differences in their catalytic process.
Early phylogenetic studies using HDAC protein sequences did not report differences between the catalytic domain of HDAC7 and the other Class II HDACs [8]. Our findings showed that there was little evolutionary conservation in the promoter sequence of HDAC7, and that it has a broad tissue specificity spanning most biological systems, though perhaps not as strongly in embryonic tissues. Recent molecular investigations place HDAC7 in the endocrine and skeletal systems [23] as well as in the brain, where it plays a key role in memory formation [52]. Like HDAC8, the regulation of HDAC7 seems to have evolved separately from its functional relevance, making it particularly interesting for further experimental investigations similar to those that have taken place for HDAC8, as detailed above.
Conclusions
Previous studies have shown that quantitative differences in transcription factor binding are observable even in closely related species, yet only a weak correlation is found between binding variation and regulatory function [53]. Furthermore, TFBSs have a high evolutionary turnover rate, such that even closely related species may not have conserved binding sites on their promoters [54]. This may explain some of the differences we observed in TFBS patterns along the promoters of HDACs from different species. Given that low evolutionary conservation at the promoter level may not have a significant effect on gene expression [54,55], we posit that our two-pronged approach points to new avenues for studying the regulation of HDAC expression. Since the HDACs themselves are heavily involved in gene expression, further studies into the transcriptomic levels of these genes may prove useful for comparison with the TFBS patterns reported here. This will allow us to infer how sensitive the regulation of regulatory proteins is with regard to differences at both the binding site and transcriptomic level.
Exploring the biochemical role of each of the HDAC homologues in the different species we tested would shed further light on how these proteins have evolved, and why. Early thoughts about why there are eleven HDACs in the human system had focused on tissue or time specificity, and there is now enough information about their function to add several layers of complexity to this question. Our results suggest that all HDACs are ubiquitously expressed, and that the differences between them rest in which acetyl group they remove from a protein, and which proteins they act on, rather than where they act. Given their role in gene expression and the impact that dysregulation of HDACs can have on the health of an organism, it is crucial that a comprehensive analysis of the biochemical roles and transcriptional regulation of these enzymes is performed, so that better targeted therapeutics can be identified. With HDAC inhibitors gaining traction in the treatment of various cancers and other diseases, the ability to fully understand their regulation and function, including through experimental promoter validation, remains a crucial research priority.
Transcription factor binding sites
To determine TFBSs in the promoters of human HDAC genes, we considered promoter sequences starting from 1,200 base pairs upstream to 100 base pairs downstream of the transcriptional start site (TSS) as designated in the National Center for Biotechnology Information's (NCBI) Nucleotide database. Research in yeast found that conservation of TFBSs is highest within 200 bp upstream of the TSS [42]. Furthermore, there is evidence of multiple TSSs and alternative promoters per HDAC according to the Database for Transcriptional Start Sites (DBTSS) [56]. We therefore considered a range of 1,200 bp upstream of the NCBI's TSS to capture as many TFBS signals as might be present from possible alternative TSSs.
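A trivial helper along these lines, assuming simple strand-aware coordinates (the function name and 0-based convention are illustrative, not taken from the study):

```python
# Illustrative helper for the promoter window used here (1,200 bp upstream to
# 100 bp downstream of the annotated TSS).

def promoter_window(tss, strand, upstream=1200, downstream=100):
    """Return (start, end) genomic coordinates of the promoter region around a TSS."""
    if strand == "+":
        return tss - upstream, tss + downstream
    return tss - downstream, tss + upstream
```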
We extracted known and annotated HDAC sequences in 25 different species from the NCBI's Nucleotide database. To check their similarity and possible kinship we aligned organism-specific HDACs with their human counterparts. In particular, we established 11 HDAC groups of sequences that are annotated according to their corresponding human HDAC. Although we only considered known organism-specific HDACs, we reviewed their annotations and assigned a given organism-specific HDAC to the corresponding group if the corresponding sequence was most similar to the underlying human HDAC. We then extracted their promoter sequences for our analysis using the Genomatix software suite (www.genomatix.com), as its transcription factor database has a taxonomically relevant classification system that was applicable to all the considered species. The computational detection of TFBS motifs was based on scanning these promoter sequences through position weight matrices of corresponding transcription factors with MatInspector as implemented in the Genomatix software suite [57,58], which was also used to visualize TFBSs on the promoter sequences. We set a core similarity of 0.75 (maximum is 1.0) and a matrix similarity of the optimized value + 0.10 to find TFBSs. We used transcription factor motifs from transcription factor families that were found in either all species or only in vertebrates.
Table 2: Tissue and system specificity of histone deacetylases. Reported data are from cited references [5,17,45-47]. Predicted data are based on enrichment of TFBS trimers in the analysis described in the main text. Following TF groupings in Genomatix, non-exclusive refers to the presence of TFs that are active in most tissues, and preferential refers to TFs that are more highly expressed in these systems than in others. Dark bullet points denote predicted activity, outline bullet points denote a predicted lack of activity in these systems. Columns: Reported tissue specificity; Predicted non-exclusive system specificity; Predicted preferential system specificity.
Similarity of TFBS profiles
Every promoter is initially represented by a sequence of TFBSs. We normalized the presence of 3-mers (trimers) of TFBSs by their probability p(α1, α2, α3) along the promoter, where L is the number of binding sites on the promoter and αi refers to a particular transcription factor. Randomness in this data was reduced via the corresponding 2-mers and 1-mers: in a promoter sequence, the occurrence of a 3-mer m of TFBSs was determined as the ratio of its observed probability to the background probability p0 derived from the 2-mers and 1-mers when p0 ≠ 0, and set to 0 when p0 = 0. As a consequence, each promoter sequence was represented as a profile of trimers.
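A hedged sketch of this profile construction is given below; the exact lower-order normalization used in the study is not fully specified in the text, so the Markov-style background p0 here is an assumption for illustration.

```python
# Build a trimer profile from a promoter's ordered TFBS list, normalizing trimer
# counts by 2-mer/1-mer statistics (illustrative normalization, see lead-in).
from collections import Counter

def trimer_profile(tfbs_sequence):
    L = len(tfbs_sequence)
    tri = Counter(tuple(tfbs_sequence[i:i + 3]) for i in range(L - 2))
    duo = Counter(tuple(tfbs_sequence[i:i + 2]) for i in range(L - 1))
    uni = Counter(tfbs_sequence)
    profile = {}
    for (a, b, c), n in tri.items():
        p = n / (L - 2)
        # background expectation from 2-mers and 1-mers (Markov-style normalization)
        p0 = (duo[(a, b)] / (L - 1)) * (duo[(b, c)] / (L - 1)) / (uni[b] / L)
        profile[(a, b, c)] = p / p0 if p0 != 0 else 0.0
    return profile
```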
Comparing pairs of trimer profiles of TFBSs between species, we defined a distance between promoter sequences M and N as the cosine distance d(M, N) = 1 − (M · N)/(|M| |N|), where M · N is the dot product of the two trimer profiles and |M|, |N| are their norms. This similarity measure was used to determine all pairwise distances between promoter profiles of TFBSs between species. Distance matrices were used to reconstruct the trees using the neighbor-joining algorithm as implemented in the DendroPy Phylogenetic Computing Library [59]. The resulting dendrograms were visualized using FigTree, a freely available web-based software tool (http://tree.bio.ed.ac.uk/software/figtree/).
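The pairwise cosine-distance step can be reproduced with a few lines of NumPy, as sketched below; the resulting matrix can then be passed to a neighbor-joining implementation such as DendroPy's (the downstream tree-building call is left out here).

```python
# Pairwise cosine distances between trimer profiles (dicts of trimer -> score).
import numpy as np

def cosine_distance_matrix(profiles):
    keys = sorted({k for p in profiles for k in p})
    X = np.array([[p.get(k, 0.0) for k in keys] for p in profiles], dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0          # guard against all-zero profiles
    Xn = X / norms
    return 1.0 - Xn @ Xn.T           # entry (i, j) = cosine distance between profiles i and j
```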
Tissue specific TFBSs
The Genomatix database [57,58] was again used to determine the names and descriptions of the transcription factors as well as their recorded tissue-specific expression. According to Genomatix, "the tissue associations of matrix families are determined by automatic evaluation of all PubMed abstracts (co-citations of transcription factors and tissues) and subsequent manual curation." Specifically, we considered a non-exclusive set defined as TFs expressed in most tissues, and a preferentially expressed set of TFs that were more highly expressed in the given tissues when compared to other tissues. Only human promoters were used for tissue specificity analysis, due to availability of data.
As a general expected value for transcription factors that appear on a given promoter p and are expressed in a tissue t, E_{p,t}, we defined E_{p,t} = (x_t / |⋃_t x_t|) · n_p, where x_t is the number of transcription factors that are expressed in tissue t, |⋃_t x_t| is the total number of transcription factors in all tissues, and n_p is the number of transcription factor binding sites in the underlying promoter sequence p. We utilized this background distribution to determine the enrichment of a promoter sequence p in a tissue t, defined as f_{p,t} = O_{p,t}/E_{p,t}, where O_{p,t} is the observed number of transcription factor binding sites that appear in promoter sequence p and are expressed in tissue t. Given the difficulty of assessing significance in this domain, and the lack of direct activity data of TFs on HDAC promoters [41], we considered the log2-fold change of observed and expected TFBS frequencies, fc_{p,t} = log2(O_{p,t}/E_{p,t}), which allowed us to assess the prevalence of expressed TFBSs that occur in a given HDAC promoter sequence in a given tissue. As a consequence, a promoter sequence appears enriched in a given tissue if fc_{p,t} > 1 and diluted if fc_{p,t} < −1. | 6,006 | 2019-07-27T00:00:00.000 | [
"Biology"
] |
Reduction of the Parameters in MSSM
In the present work we search for renormalization group invariant relations among the various massless and massive parameters of the Minimal Supersymmetric Standard Model. We find that indeed several of the previously free parameters of the model can be reduced in favor of a few, among them the unique gauge coupling and the gaugino mass at the unification scale. Taking into account the various experimental constraints, including the B-physics ones, we predict the Higgs and the supersymmetric spectrum. We find that the lightest Higgs mass is in comfortable agreement with the measured value and its experimental and theoretical uncertainties, while the electroweak supersymmetric spectrum starts at 1.3 TeV and the colored one at ~4 TeV. Thus the reduced MSSM is in natural agreement with all LHC measurements and searches. The supersymmetric and heavy Higgs particles will likely escape detection at the LHC, as well as at the ILC and CLIC. However, the FCC-hh will be able to fully test the predicted parameter space.
Introduction
The main expectation of the particle physics community from a unified description of the observed interactions is to understand the present day large number of free parameters of the Standard Model (SM) in terms of a few fundamental ones. In other words, to achieve reduction of parameters at a fundamental level.
The traditional way to reduce the number of free parameters of a theory, which in turn would make it more predictive, is to introduce a symmetry. Grand Unified Theories (GUTs) are very good examples of this strategy [1][2][3][4][5]. In the case of minimal SU(5), because of the (approximate) gauge coupling unification, it was possible to reduce the gauge couplings of the SM and give a prediction for one of them. In fact, the LEP data [6] were interpreted as suggesting that a further symmetry, namely N = 1 global supersymmetry (SUSY) [7,8], should also be required to make the prediction viable. GUTs can also relate the Yukawa couplings among themselves; again, SU(5) provided an example of this by predicting the ratio M τ /M b [9] in the SM. Unfortunately, requiring more symmetry does not necessarily help, since additional complications are introduced due to new degrees of freedom that normally are needed, requiring in turn new ways and channels of breaking the symmetry, among others, which in general reduce the predictivity of a theory.
A natural extension of the GUT idea is to find a way to relate the gauge and Yukawa sectors of a theory, that is to achieve Gauge-Yukawa Unification (GYU). A symmetry which naturally relates the two sectors is SUSY, in particular N = 2 SUSY [10]. However, N = 2 supersymmetric theories have serious phenomenological problems due to light mirror fermions. Other theories such as superstring theories or composite models might provide relations among the gauge and Yukawa couplings, but have even more phenomenological problems. A successful strategy in relating dimensionless couplings has been developed in a series of studies [11][12][13][14][15][16][17][18][19][20][21]. It was based on searches for renormalization group invariant (RGI) relations. This program, called Gauge-Yukawa unification scheme, applied in the dimensionless couplings of supersymmetric GUTs, such as gauge and Yukawa couplings, had already celebrated successes by predicting correctly, among others, the top quark mass in the finite and in the minimal N = 1 supersymmetric SU (5) GUTs [14][15][16], SU (3) 3 [20] and later in the Minimal Supersymmetric Standard Model (MSSM) [21]. One of the impressive aspects of the RGI relations is that their validity can be guaranteed to all-orders in perturbation theory by studying the uniqueness of the resulting relations at one-loop, as was proven [22,23] in the early days of the program of reduction of couplings [22][23][24][25][26][27]. Even more impressive is the fact that it is possible to find RGI relations among couplings guaranteeing finiteness to all-orders in perturbation theory [28][29][30][31][32].
SUSY seems to be an essential ingredient for a phenomenologically successful realization of the above strategy. Nevertheless, its breaking has to be understood too in order to extend the successes to other sectors of the theory, such as the Higgs masses and the SUSY spectrum.
Indeed, the search for RGI relations has been extended to the soft SUSY-breaking sector (SSB) of these theories [19,33], which involves parameters of dimension one and two. The first important development in this programme concerned the combined reduction of couplings and masses in supersymmetric theories [19]. In this work the coefficients of the soft SUSY-breaking terms were reduced in order to minimize the number of independent parameters. The scheme of dimensional renormalization was used with mass parameters introduced similarly to couplings. Then the differential equations of the renormalization group also involve derivatives with respect to the masses. It is characteristic for dimensional renormalization that those β-functions which carry a dimension are linear or quadratic forms in the dimensional couplings and masses, while the coefficients of these polynomials depend on the dimensionless couplings only. Since in this approach the mass parameters enter similarly to the couplings, masses are included with the couplings in the reduction process. In this way non-trivial constraints on the soft SUSY-breaking terms were obtained which are compatible with renormalization and lead to surprisingly simple sum rules [34].
Another very important development concerning the renormalization properties of the SSB was made in Refs. [35][36][37][38][39][40][41], based conceptually and technically on the work of Ref. [42]: the powerful supergraph method [43][44][45][46] for studying supersymmetric theories was applied to the softly broken ones by using the "spurion" external space-time independent superfields [47]. In the latter method a softly broken supersymmetric gauge theory is considered as a supersymmetric one in which the various parameters such as couplings and masses have been promoted to external superfields that acquire "vacuum expectation values". Based on this method the relations among the soft term renormalization and that of an unbroken supersymmetric theory were derived. In particular the β-functions of the parameters of the softly broken theory are expressed in terms of partial differential operators involving the dimensionless parameters of the unbroken theory. The key point in the strategy of Refs. [38][39][40][41] in solving the set of coupled differential equations so as to be able to express all parameters in a RGI way, was to transform the partial differential operators involved to total derivative operators. This is indeed possible to be done on the RGI surface which is defined by the solution of the reduction equations. The last has very important consequences in the finite theories since the finiteness of the dimensionless sector can be transferred to the SSB sector too.
In parallel to the above theoretical developments certain phenomenological issues have been established too. For a long time a rather constrained universal set of soft scalar masses has been assumed in the SSB sector of supersymmetric theories, not only for economy and simplicity but for a number of other reasons too: (a) they were part of the constraints that preserve finiteness up to two loops [48,49], (b) they are RGI up to two loops in more general supersymmetric gauge theories, subject to the condition known as P = 1/3Q [33] (where all relevant details and definitions can be found), and (c) they appear in the attractive dilaton-dominated SUSY-breaking superstring scenarios [50][51][52]. However, further studies have shown that there exist a number of technical problems, all due to the fact that the universality assumption for the soft scalar masses is very restrictive. For instance, (i) in finite unified theories universality predicts that the lightest supersymmetric particle is a charged particle, namely the superpartner of the τ lepton, (ii) the standard radiative electroweak symmetry breaking of the MSSM does not work with universal soft scalar masses [52], and (iii), which is more serious, the universal soft scalar masses lead to charge and/or color breaking minima deeper than the standard vacuum [53]. In addition, criticisms arose on an aesthetic basis, i.e. that the universality assumption is too strong to be put in by hand, given that it does not result from something fundamental. A way out was indirectly already suggested in Ref. [19], where the solutions found among soft scalar masses were very different from the universal one. Moreover, a more careful look suggested the existence of a "sum rule" among the soft scalar masses and the gaugino mass. This interesting observation was made in Ref. [34], where it was examined in N = 1 Gauge-Yukawa unified theories at one loop for the non-finite case and then at two loops for the finite case [54]. The sum rule manages to overcome all the unpleasant phenomenological consequences mentioned above. Moreover, it was proven [41] that the sum rule for the soft scalar masses is RGI to all orders for both the general as well as the finite case. Finally, the exact β-function for the soft scalar masses in the Novikov-Shifman-Vainstein-Zakharov (NSVZ) scheme [55][56][57] for softly broken supersymmetric QCD has been obtained [41].
Using the above tools and results it was possible to study and predict the spectrum of the full finite models in terms of a few input parameters. A particular finite model was selected out of this examination and provided us with the prediction for the lightest MSSM Higgs boson in the range of 121-126 GeV [58][59][60][61], four and a half years before the experimental discovery [62,63]. 1 Identifying the lightest Higgs boson with the newly discovered state one can restrict the allowed parameter space of the model. A similar analysis was done for the reduced MSSM [21].
In the present work we examine the reduced MSSM using the "exact" relations among soft scalar and gaugino masses, following the original analysis suggested in ref [19]. Obviously the reduced MSSM in the present case is much more constrained as compared to the previous one [21], which was enjoying the benefit of the relaxed "sum rule". The results are confronted with the relevant flavor physics results. We evaluate the full SUSY spectrum (for sfermions restricted to the third generation), which turns out to be rather heavy, and in particular we calculate the lightest MSSM Higgs-boson mass. Here, in contrast to previous evaluations, an improved calculation is employed that yields more reliable results for heavy SUSY masses. The light Higgs-boson mass is naturally found in the region of 124 − 129 GeV.
Reduction of Parameters
The reduction of couplings was originally formulated for massless theories on the basis of the Callan-Symanzik equation [22,23]. The extension to theories with massive parameters is not straightforward if one wants to keep the generality and the rigor on the same level as for the massless case; one has to fulfill a set of requirements coming from the renormalization group equations, the Callan-Symanzik equations, etc., along with the normalization conditions imposed on irreducible Green's functions [64]. There has been a lot of progress in this direction starting from Ref. [19], as already mentioned in the Introduction, where it was assumed that a mass-independent renormalization scheme could be employed so that all the RG functions have only trivial dependencies on dimensional parameters, and then the mass parameters were introduced similarly to couplings (i.e. as a power series in the couplings). This choice was justified later in [65,66], where the scheme independence of the reduction principle has been proven generally, i.e. it was shown that apart from dimensionless couplings, pole masses and gauge parameters, the model may also involve coupling parameters carrying a dimension and masses. Therefore here, to simplify the analysis, we follow Ref. [19] and we too use a mass-independent renormalization scheme.
We start by considering a renormalizable theory which contains a set of (N + 1) dimension-zero couplings, (ĝ_0, ĝ_1, ..., ĝ_N), a set of L parameters with mass-dimension one, (ĥ_1, ..., ĥ_L), and a set of M parameters with mass-dimension two, (m̂²_1, ..., m̂²_M). The renormalized irreducible vertex function Γ satisfies the RG equation D Γ = 0, with D = µ ∂/∂µ + Σ_i β_i ∂/∂g_i + Σ_a γ^h_a ∂/∂h_a + Σ_α γ^{m²}_α ∂/∂m²_α + Σ_{I,J} Φ_I γ^{φ_I}_J δ/δΦ_J, where µ is the energy scale, β_i are the β-functions of the various dimensionless couplings g_i, Φ_I are the various matter fields, and γ^{m²}_α, γ^h_a and γ^{φ_I}_J are the mass, trilinear coupling and wave function anomalous dimensions, respectively (where I enumerates the matter fields). In a mass-independent renormalization scheme, the γ's take the form γ^h_a = Σ_b γ^{h,b}_a h_b and γ^{m²}_α = Σ_β γ^{m²,β}_α m²_β + Σ_{a,b} γ^{m²,ab}_α h_a h_b, where γ^{h,b}_a, γ^{m²,β}_α and γ^{m²,ab}_α are power series in the g's (which are dimensionless) in perturbation theory.
We look for a reduced theory in which only the primary coupling g and a subset of the dimensionful parameters are independent, and the reduction of the remaining parameters is consistent with the RG equations (1), (2). It turns out that a set of consistency relations among the β- and γ-functions of the original and of the reduced parameters has to be satisfied; using Eqs. (3) and (4), these relations reduce to the reduction equations for the dimensionless and dimensionful parameters. They ensure that the irreducible vertex function of the reduced theory has the same renormalization group flow as the original one. The assumption that the reduced theory is perturbatively renormalizable means that the functions ĝ_i, f^b_a, e^β_α and k^{ab}_α, defined in (4), should be expressed as power series in the primary coupling g. The expansion coefficients can be found by inserting these power series into Eqs. (5), (6) and requiring the equations to be satisfied at each order of g. It should be noted that the existence of a unique power series solution is a non-trivial matter: it depends on the theory as well as on the choice of the set of independent parameters. It should also be noted that in the case that there are no independent mass-dimension-one parameters (ĥ), the reduction of these terms naturally takes the form ĥ_a = f_a(g) M, where M is a mass-dimension-one parameter which could be a gaugino mass corresponding to the independent (gauge) coupling. If, on top of that, there are no independent mass-dimension-two parameters (m²), the corresponding reduction takes the analogous form m̂²_α = e_α(g) M².
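The generic reduction ansatz just described can be summarized compactly as follows; this is a sketch in generic notation, where the coefficients ρ_i^{(n)}, f_a and e_α are model dependent and not specified here, and the odd-power series for the dimensionless couplings is the form conventionally used in gauge-Yukawa reduction.

```latex
% Generic reduction ansatz (sketch): dimensionless couplings reduced in favor of
% the primary coupling g, dimensionful parameters in favor of the mass scale M.
\[
  \hat{g}_i(g) \;=\; \sum_{n\ge 0} \rho_i^{(n)}\, g^{\,2n+1}, \qquad
  \hat{h}_a \;=\; f_a(g)\, M , \qquad
  \hat{m}^2_\alpha \;=\; e_\alpha(g)\, M^2 .
\]
```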
Reduction of dimensionless parameters in the MSSM
Hereafter we are working in the framework of the MSSM, assuming though the existence of a covering GUT. The superpotential of the MSSM (where again we restrict ourselves to the third generation of sfermions) is defined in terms of Q, L, t, b, τ, H_1, H_2, the usual superfields of the MSSM, while the SSB Lagrangian contains the soft scalar masses, the gaugino masses and the trilinear couplings, where φ represents the scalar component of all superfields, λ refers to the gaugino fields, and in the last brace the scalar components of the corresponding superfields appear. The Yukawa couplings Y_{t,b,τ} and the trilinear couplings h_{t,b,τ} refer to the third generation only, neglecting the first two generations. Let us start with the dimensionless couplings, i.e. gauge and Yukawa. As a first step we consider only the strong coupling and the top and bottom Yukawa couplings, while the other two gauge couplings and the tau Yukawa coupling will be treated as corrections. Following the above line, we reduce the Yukawa couplings in favor of the strong coupling α_3; using the RGEs for the Yukawa couplings, this leads to relations of the form α_{t,b} = G²_{t,b} α_3. This system, in which the top and bottom Yukawa couplings are reduced with the strong coupling, is dictated by (i) the different running behaviour of the SU(2) and U(1) couplings compared to the strong one [67] and (ii) the incompatibility of applying the above reduction to the tau Yukawa coupling, since the corresponding G² turns negative [21]. Adding now the two other gauge couplings and the tau Yukawa coupling in the RGEs as corrections, we obtain the corrected coefficients G²_{t,b} of Eq. (11). Note that the corrections in Eq. (11) are taken at the GUT scale and under the assumption that the derivatives d(α_{t,b}/α_3)/dg_3 vanish there. Let us comment further on our assumption above, which led to Eq. (11). In practice we assume that, even including the corrections from the rest of the gauge couplings as well as the tau Yukawa coupling, at the GUT scale the ratios of the top and bottom couplings α_{t,b} over the strong coupling are still constant, i.e. their scale dependence is negligible. Rephrasing it, our assumption can be understood as a requirement that in the ultraviolet (close to the GUT scale) the ratios of the top and bottom Yukawa couplings over the strong coupling become least sensitive against the change of the renormalization scale. This requirement sets the boundary condition at the GUT scale, given in Eq. (11). Alternatively one could follow the systematic method to include the corrections to a non-trivially reduced system developed in Ref. [69], but considering two reduced systems: the first one consisting of the "top, bottom" couplings and the second of the "strong, bottom" ones. We plan to return with the full analysis of the latter possibility, including the dimensionful parameters, in a future publication.
In the next order the corrections are assumed to take the form of higher-order terms in the strong coupling. The coefficients J_i are then determined, first for the case where only the strong gauge coupling and the top and bottom Yukawa couplings are active, and then for the case where the other two gauge couplings and the tau Yukawa coupling are added as corrections.
Reduction of dimensionful parameters in the MSSM
We move now to the dimension-one parameters of the SSB Lagrangian, namely the trilinear couplings h_{t,b,τ} of Eq. (10). Again, following the pattern of the Yukawa reduction, in the first stage we reduce h_{t,b}, while h_τ will be treated as a correction.
The trilinear couplings h_{t,b} are reduced in favor of the gluino mass M_3, following the generic dimension-one reduction of Section 3. Using the RGEs for the two trilinear couplings, together with the one-loop relation between the gaugino mass and the gauge coupling RGE, the corresponding reduction coefficients are determined. Adding the other two gauge couplings as well as the tau trilinear coupling h_τ as corrections, these coefficients receive the corresponding correction terms. Finally we consider the soft squared masses m²_φ of the SSB Lagrangian. Their reduction, according to the discussion in Section 3, takes the form m²_φ = c_φ M₃². The one-loop RGEs for the scalar masses then reduce to an algebraic system (where we have added the corrections from the two gauge couplings, the tau Yukawa coupling and h_τ), which we solve for the coefficients c_{Q,u,d,H_u,H_d}, while G²_{t,b}, ρ_{1,2,τ} and ρ_{h_τ} have been defined in Eqs. (11), (12) and (13), respectively. For our completely reduced system, i.e. g_3, Y_t, Y_b, h_t, h_b, the coefficients of the soft masses take definite values, obeying the celebrated sum rules. The µ parameter of the superpotential cannot be reduced, at least not in a simple way of the form µ = c_µ M_3 g_3 as an ansatz at one loop. The parameter m²_3 in the SSB sector could in principle be reduced in favor of µ and M_3, but in our analysis we keep m²_3 as an independent parameter. However, it should be noted that the requirement of radiative electroweak symmetry breaking (EWSB) relates µ and m²_3, and leaves only one of them as an independent parameter, which we choose to be µ.
Phenomenological constraints
In this section we will briefly describe the phenomenological constraints that we apply to the parameter space of the reduced MSSM, as described above.
Flavor constraints
As additional constraints we consider four types of flavor constraints, where SUSY is known to have a possible impact. We consider the flavour observables BR(b → sγ), BR(B_s → µ⁺µ⁻), BR(B_u → τν) and ΔM_{B_s}. The uncertainties are the linear combination of the experimental error and twice the theoretical uncertainty in the MSSM (if no specific MSSM estimate is available we use the SM uncertainty).
For the B_u decay to τν we use the limit given in Refs. [71,77,78]. As our final flavor observable we include ΔM_{B_s}, with the experimental value ΔM_{B_s}^{exp} taken from Refs. [79,80]. Our theory evaluations are obtained with the code SuFla [77].
We do not include a bound from the cold dark matter (CDM) density. It is well known that the lightest neutralino, being the lightest supersymmetric particle (LSP) in our model, is an excellent candidate for CDM [81]. However, the models could easily be extended to contain (a) small R-parity violating term(s) [82][83][84][85]. They would have a small impact on the collider phenomenology discussed here (apart from the fact that the SUSY search strategies could not rely on a 'missing energy' signature), but would remove the CDM bound completely. Other mechanisms, not involving R-parity violation (and keeping the 'missing energy' signature), that could be invoked if the amount of CDM appears to be too large, concern the cosmology of the early universe. For instance, "thermal inflation" [86] or "late time entropy injection" [87] could bring the CDM density into agreement with the WMAP measurements. This kind of modifications of the physics scenario neither concerns the theory basis nor the collider phenomenology, but could have a strong impact on the CDM derived bounds. (Lower values than the ones permitted by the experimental measurements are naturally allowed if another particle than the lightest neutralino constitutes CDM.) We will briefly comment on the anomalous magnetic moment of the muon, (g − 2) µ , at the end of Sect. 6.
The light Higgs boson mass
Due to the fact that the quartic couplings in the Higgs potential are given by the SM gauge couplings, the lightest Higgs boson mass is not a free parameter, but predicted in terms of the other model parameters. Higher-order corrections are crucial for a precise prediction of M h , see Refs. [88][89][90] for reviews.
The spectacular discovery of a Higgs boson at ATLAS and CMS, as announced in July 2012 [62,63], can be interpreted as the discovery of the light CP-even Higgs boson of the MSSM Higgs spectrum [91] (see also Refs. [92,93] and references therein). The experimental average for the (SM) Higgs boson mass is taken to be M_h^{exp} ≈ 125.1 GeV [94]. Adding a 3 (2) GeV theory uncertainty [95][96][97] for the Higgs boson mass calculation in the MSSM, we arrive at M_h = 125.1 ± 3.1 (2.1) GeV as our allowed range. For the lightest Higgs mass prediction we used the code FeynHiggs [95,97,98] (version 2.14.0 beta). The evaluation of Higgs boson masses within FeynHiggs is based on the combination of a Feynman-diagrammatic calculation and a resummation of the (sub)leading logarithmic contributions of the (general) type log(m_t̃/m_t) in all orders of perturbation theory. This combination ensures a reliable evaluation of M_h also for large SUSY mass scales (see Sect. 6 below). With respect to previous versions, several refinements in the combination of the fixed-order and log-resummed calculations have been included, see Ref. [97]. They resulted not only in a more precise M_h evaluation for high SUSY mass scales, but in particular in a downward shift of M_h at the level of O(2 GeV) for large SUSY masses.
In our previous analysis [21] the Higgs boson mass was calculated using a "mixed-scale" one-loop RG approach, which captures only the leading corrections up to two-loop order. Consequently, our new implementation of the M_h calculation is substantially more sophisticated and in particular reliable for high stop mass scales. Furthermore, in that previous analysis no B-physics constraints were used, which now pose relevant constraints on the allowed parameter space and thus on the prediction of the SUSY spectrum.
Numerical analysis
In this section we analyze the particle spectrum predicted by the reduced MSSM. So far the relations among reduced parameters in terms of the fundamental ones derived in Sects. 3 and 4 had a part which was RGI and another part originating from the corrections, which are scale dependent. In our analysis here we choose the unification scale to apply the corrections to the RGI relations. It should be noted that we are assuming a covering GUT, and thus unification of the three gauge couplings, as well as a unified gaugino mass M at that scale. Also to be noted is that in the dimensionless sector of the theory, since Y_τ cannot be reduced in favor of the fundamental parameter α_3, the mass of the τ lepton is an input parameter and consequently ρ_τ is an independent parameter too. At low energies, we fix the values of ρ_τ and tan β using the mass of the tau lepton m_τ(M_Z). For each value of ρ_τ there is a corresponding value of tan β that gives the appropriate m_τ(M_Z). Then we use the value found for tan β together with G_{t,b}, as obtained from the reduction equations and their respective corrections, to determine the top and bottom quark masses. We require that both the bottom and top masses are within 2σ of their experimental values, which singles out large tan β values, tan β ∼ 42−47. Correspondingly, in the dimensionful sector of the theory ρ_{hτ} is a free parameter, since h_τ cannot be reduced in favor of the fundamental parameter M (the unified gaugino mass scale). µ is a free parameter, as it cannot be reduced in favor of M_3 as discussed above. On the other hand m²_3 could be reduced, but here it is chosen to be left free. However, µ and m²_3 are restricted by the requirement of EWSB, and only µ is taken as an independent parameter. Finally, the other parameter in the Higgs-boson sector, the CP-odd Higgs-boson mass M_A, is evaluated from µ, as well as from m²_{H_u} and m²_{H_d}, which are obtained from the reduction equations. In total we vary the parameters ρ_τ, ρ_{hτ}, M and µ.
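The acceptance criteria applied to each scan point can be illustrated with the toy check below. It is a minimal sketch only: the experimental central values and errors for the quark masses are illustrative placeholders (the analysis takes them from Ref. [78] at the 2σ level), the Higgs window follows the 125.1 ± 3.1 (2.1) GeV range quoted above, and the point dictionary is an invented stand-in for the output of the actual spectrum and FeynHiggs/SuFla evaluations.

```python
# Toy illustration of the acceptance criteria applied to each scan point.
MH_CENTRAL, DMH = 125.1, 3.1          # GeV; use 2.1 for the tighter window

def accept(point, mt_exp=(173.1, 0.9), mb_exp=(2.83, 0.10)):
    """point: dict with predicted 'mt', 'mb_MZ', 'Mh' in GeV and a 'flavor_ok' flag.
    The quark-mass reference values above are placeholders, not the scan's inputs."""
    ok_mt = abs(point["mt"] - mt_exp[0]) <= 2 * mt_exp[1]       # top pole mass within 2 sigma
    ok_mb = abs(point["mb_MZ"] - mb_exp[0]) <= 2 * mb_exp[1]    # running m_b(MZ) within 2 sigma
    ok_mh = abs(point["Mh"] - MH_CENTRAL) <= DMH                # light Higgs mass window
    return ok_mt and ok_mb and ok_mh and point["flavor_ok"]

print(accept({"mt": 172.5, "mb_MZ": 2.90, "Mh": 126.0, "flavor_ok": True}))  # True
```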
We start our numerical analysis with the top and the bottom quark masses. As mentioned above, the variation of ρ τ yields the values of m t (the top pole mass) and m b (M Z ), the running bottom quark mass at the Z boson mass scale, where scan points which are not within 2σ of the experimental data are neglected. This is shown in Fig. 1.
The experimental values, taken from Ref. [78] with uncertainties at the 2σ level, are indicated by the horizontal lines. One can see that the scan yields many parameter points that are in very good agreement with the experimental data.
We continue our numerical investigation with the analysis of the lightest MSSM Higgs-boson mass. The prediction for M_h is shown in Fig. 2 as a function of M (the common gaugino mass at the unification scale) in the range 1 TeV ≲ M ≲ 6 TeV. The lightest Higgs mass ranges in approximately 124−129 GeV, where we discard the "spread-out" points with possibly lower masses, which result from a numerical instability in the Higgs-boson mass calculation. One should keep in mind that these predictions are subject to a theory uncertainty of 3 (2) GeV, see above. The red points correspond to the full parameter scan, whereas the green points are the subset that is in agreement with the B-physics observables as discussed above (which do not exhibit any numerical instability). The inclusion of the flavor observables shifts the lower bound for M_h up to ∼ 126 GeV. The horizontal lines in Fig. 2 show the central value of the experimental measurement (solid), the ±2.1 GeV uncertainty (dashed) and the ±3.1 GeV uncertainty (dot-dashed). The requirement to obtain a light Higgs boson mass value in the correct range yields an upper limit on M of about 5 (4) TeV for M_h = 125.1 ± 2.1 (3.1) GeV.
Naturally the M h limit also sets an upper limit on the low-energy SUSY masses. The full particle spectrum of the reduced MSSM (where we restricted ourselves as before to the third generation of sfermions) compliant with the B-physics observables is shown in Fig. 3.
In the upper (lower) plot we impose M_h = 125.1 ± 3.1 (2.1) GeV. Including the Higgs mass constraints in general favors the somewhat higher part of the SUSY particle mass spectra. The tighter M_h range cuts off the very high SUSY mass scales. The lighter SUSY particles are given by the electroweak spectrum, which starts around ∼ 1.3 TeV. They will mostly remain unobservable at the LHC and at future e⁺e⁻ colliders such as the ILC or CLIC, with only the very lowest mass range, below ∼ 1.5 TeV, possibly observable at CLIC (with √s = 3 TeV). The colored mass spectrum starts at around ∼ 4 TeV, which will remain unobservable at the (HL-)LHC. However, the colored spectrum would be accessible at the FCC-hh [101]. The same applies to the heavy Higgs-boson spectrum. The four "new" Higgs bosons will likely remain outside the reach of the (HL-)LHC, ILC and CLIC, again with the very lowest part of the spectrum potentially accessible at CLIC. However, the full Higgs boson spectrum would be covered at the FCC-hh [101].
In Tab. 1 we show three example spectra of the reduced MSSM, which span the mass range of the parameter space that is in agreement with the B-physics observables and the Higgs-boson mass measurement. The four Higgs boson masses are denoted as M_h, M_H, M_A and M_H±. mt_{1,2}, mb_{1,2}, mg and mτ_{1,2} are the scalar top, scalar bottom, gluino and scalar tau masses, respectively. mχ±_{1,2} and mχ0_{1,2,3,4} denote the chargino and neutralino masses. The rows labelled "light" correspond to the spectrum with the smallest mχ0_1 value (which is independent of the upper limit on M_h). This point is an example for the lowest M_h values that we can reach in our scan. As discussed above, the heavy Higgs boson spectrum starts above 1.4 TeV, which is at the borderline of the reach of CLIC with √s = 3 TeV. The colored spectrum is found between ∼ 4 TeV and ∼ 6 TeV, outside the range of the (HL-)LHC. The LSP has a mass of mχ0_1 = 1339 GeV, which might offer the possibility of e⁺e⁻ → χ0_1 χ0_1 γ at CLIC. All other electroweak particles are too heavy to be produced at CLIC or the (HL-)LHC. The rows labelled "δM_h = 2.1 (3.1)" have the largest mχ0_1 for M_h ≤ 125.1 + 2.1 (3.1) GeV.
Table 1: Three example spectra of the reduced MSSM. "light" has the smallest mχ0_1 in our sample; "δM_h = 2.1 (3.1)" has the largest mχ0_1 for M_h ≤ 125.1 + 2.1 (3.1) GeV. All masses are in GeV and rounded to 1 (0.1) GeV (for the light Higgs mass).
While, following the mass relations in the reduced MSSM, the mass spectra are substantially heavier than in the "light" case, one can also observe that the smaller upper limit on M_h results in substantially lower upper limits on the various SUSY and Higgs-boson masses. However, even in the case of δM_h = 2.1 GeV, all particles are outside the reach of the (HL-)LHC and CLIC. On the other hand, all spectra offer good possibilities for their discovery at the FCC-hh [101], as discussed above.
Finally, we note that with such a heavy SUSY spectrum, despite the large values of tan β, the anomalous magnetic moment of the muon, (g −2) µ (with a µ ≡ (g −2) µ /2), gives only a negligible correction to the SM prediction. The comparison of the experimental result and the SM value shows a deviation of ∼ 3.5 σ [102][103][104]. Consequently, since the results would be very close to the SM results, the model has the same level of difficulty with the a µ measurement as the SM.
To summarize, the reduced MSSM naturally results in a light Higgs boson in the mass range measured at the LHC. On the other hand, the rest of the spectrum will (likely) remain inaccessible at the (HL-)LHC, ILC and CLIC, where such a heavy spectrum also results in a SM-like light Higgs boson, in agreement with LHC measurements [105]. In other words, the model is naturally in full agreement with all LHC measurements. It can be definitively tested at the FCC-hh, where large parts of the spectrum would be within the kinematic reach.
Conclusions
In the present paper we have examined the reduced MSSM, in which we first calculate the exact relations among soft scalar and gaugino masses at the unification scale. This constitutes an interesting improvement w.r.t. previous analyses [21], which relied on the existence of a "sum rule" among soft scalar and gaugino masses, where, due to the "simple" nature of the constraint, agreement with experimental data could be realized more easily. It should be noted that in the reduced MSSM the "sum rule" is still valid. However, here we have the exact relations among these masses, and consequently the dimensionful SSB mass relations are of the same exact type as those among the dimensionless couplings.
In our phenomenological analysis we have derived the spectrum of the reduced MSSM as a function of the common gaugino mass at the GUT scale. The light Higgs boson mass was evaluated with the latest (preliminary) version of FeynHiggs [97], which yields more reliable results in the case of very large SUSY mass scales, as turns out to be the case in our analysis. The resulting spectrum was confronted with various B-physics constraints. We find that the lightest Higgs mass is in very good agreement with the measured value and its experimental and theoretical uncertainties. The SUSY Higgs-boson mass scale is found above ∼ 1.3 TeV, rendering the light MSSM Higgs boson SM-like, in perfect agreement with the experimental data. The electroweak SUSY spectrum starts at 1.3 TeV and the colored spectrum at ∼ 4 TeV. Consequently, the reduced MSSM is in natural agreement with all LHC measurements and searches. The SUSY and heavy Higgs particles will likely escape detection at the LHC, as well as at the ILC and CLIC. On the other hand, the FCC-hh will be able to fully test the predicted parameter space.
"Physics"
] |
An annotated checklist of Tetranychidae (Acari: Trombidiformes) of the Transcarpathian region (Ukraine)
The first checklist of spider mites (Tetranychidae) of Transcarpathia, Ukraine is compiled based on the revision of collection materials stored in the I. I. Schmalhausen Institute of Zoology of the National Academy of Sciences of Ukraine. The mite collections of I. Akimov, A. Putrashik, and of the authors were studied, thus covering a 45-year-long period of research of spider mites in the study region. The checklist includes 28 species of 10 genera of tetranychid mites, which is about 40% of the species diversity of spider mites in Ukraine. For each species, information is provided on the number of individuals (males, females, nymphs, and larvae), host plants, record localities (for own collections, with geographic coordinates), as well as data on distribution in other regions of Ukraine. The largest part of the collection includes the findings of common species, such as Amphitetranychus viennensis, Bryobia rubrioculus, and Panonychus ulmi. The genus Eotetranychus is represented by the largest number of species (8). Two species (Eotetranychus quercicola Auger & Migeon, 2014 and Schizotetranychus beckeri Wainstein, 1958) are recorded for the first time for the fauna of Ukraine. In addition, 21 species of spider mites are noted for the first time for the territory of Transcarpathia. Three of the 11 species previously indicated for Transcarpathia, namely Oligonychus brevipilosus Zacher, 1932, Oligonychus lagodechii Liv. et Mitr., 1969, and Schizotetranychus jachontovi Reck, 1953, are not represented among the collection materials. In addition to the two taxa noted here for the first time in Ukraine, five species of tetranychids (Bryobia praetiosa, Bryobia lagodechiana, Eurytetranychus furcisetus, Schizotetranychus spireafolia, and Tetranychus frater), which were previously discovered in other regions of Ukraine, should be included in the electronic database Spider Mites Web as those recorded in Ukraine. Eight host plant species are indicated for the first time for six spider mite species (Eupatorium cannabinum for B. praetiosa; Armoracia rusticana, Betonica officinalis, and Melilotus officinalis for B. lagodechiana; Picea abies for E. furcisetus; Ribes nigrum for A. viennensis; Quercus robur for E. quercicola; and Salix glauca for S. shizopus).
Introduction
Transcarpathia, or Zakarpattia Oblast, is quite specific in comparison with other regions of Ukraine [Gerenchuk 1981]. Features of the relief and climate, the hydrological regime, and a wide variety of natural biotopes create unique and promising conditions for investigating the fauna of this region.
The Transcarpathian vegetation is also quite diverse and includes almost half of the entire floristic richness of Ukraine.
However, only a few studies have been conducted on the economically important plant-inhabiting mites of the family Tetranychidae in this region. Until recently, there was only one publication dedicated to the diversity of spider mites of Transcarpathia, in which 11 species of tetranychids were noted [Putrashyk 2011]. The collected material was, however, only partly identified and published in that work. A few years ago, the entire collection of A. Putrashyk was kindly transferred to the scientific collections of the I. I. Schmalhausen Institute of Zoology. As a result of the revision of the materials, we have discovered some errors in the identification of specimens, and, accordingly, in the publication of Putrashyk. Three of the eleven species listed in the cited publication [Putrashyk 2011], namely Oligonychus brevipilosus Zacher, 1932, Oligonychus lagodechii Liv. et Mitr., 1969, and Schizotetranychus jachontovi Reck, 1953, have not been found in the collection. Also, a number of previously unpublished specimens of tetranychid mites from Transcarpathia collected by I. A. Akimov [1976] were identified in the collection materials housed in the Institute. Finally, the authors' own mite collections gathered in 2016-2017 and 2021 are also included in the present study.
Here we present the first checklist of Tetranychidae from the Transcarpathian region of Ukraine.
Material and Methods
The checklist is based on the results of the revision of collection materials gathered by Ihor Akimov (1976), Alla Putrashyk (2008-2012), and Olha Zhovnerchuk and Andreia Dudynska (2016-2017 and 2021). Record localities, host plants, and the number of specimens of each species are presented. Samples collected in 2016 and later are provided with coordinates and height above sea level; for older samples these data are not available. Data on the species distribution in other regions of Ukraine are also given.
Genera and species are given in alphabetical order. The names of host plants are given according to GBIF (Global Biodiversity Information Facility). All specimens are housed in the scientific collections of the I. I. Schmalhausen Institute of Zoology of the National Academy of Sciences of Ukraine. The species is widely distributed in the world [Migeon & Dorkeld 2006-2022] and in Ukraine [Vojtenko 1969; Mitrofanov et al. 1987; Akimov & Zhovnerchuk 2010; Zhovnerchuk 2014b; Akimov & Zhovnerchuk 2016].
Tribe Hystrichonychini Pritchard & Baker, 1955
The world fauna of spider mites includes 22 genera of that tribe [Migeon & Dorkeld 2006-2022]. In the present study, the tribe is represented by one genus and a single species.
Genus Tetranycopsis Canestrini, 1889
This genus of the tribe Hystrichonychini is represented by only 10 species in the database Spider Mites Web. Three of those species are noted for the fauna of Ukraine [Migeon & Dorkeld 2006-2022]. Only one species of the genus is found in the Transcarpathian region.
Genus Eurytetranychus Oudemans, 1931
There are 19 species of this genus in the world fauna. One species has been recorded in Ukraine [Migeon & Dorkeld 2006-2022]. We noted two species of the genus in the Transcarpathian region, and a new host plant for one of them. The species is found in all geographical zones of Ukraine on Salix [Akimov 1965; Vojtenko 1969; Mitrofanov et al. 1987; Zhovnerchuk 2014b; Akimov & Zhovnerchuk 2016]. Salix glauca is a newly recorded host plant for this mite.
The species was described by Wainstein from western Russia [Migeon & Dorkeld 2006-2022]. The find is new for Ukraine.
Genus Tetranychus Dufour, 1832
This genus is represented by 153 species in the world fauna; only seven of them are recorded in the fauna of Ukraine [Migeon & Dorkeld 2006-2022]. In the Transcarpathian region, we noted three species that are widely distributed in the world, as well as a rare species that is not listed in the cited database for the territory of Ukraine.
Discussion
The checklist of tetranychid mites of the Transcarpathian region of Ukraine currently includes 28 species of 10 genera. Two species, Eotetranychus quercicola and Schizotetranychus beckeri, have been recorded for the first time for the fauna of Ukraine. For the first time, 20 species of spider mites have been listed for the territory of Transcarpathia. Of the 28 species of spider mites, 25 species were found on deciduous trees, bushes, and grasses, and only three species on conifers. The genus Eotetranychus, represented by eight species, is the most diverse in the studied collection. Four genera (Amphitetranychus, Neotetranychus, Panonychus, and Tetranycopsis) are represented by only one species each. In addition to the two species of tetranychids new for the fauna of Ukraine, another five species (B. praetiosa, B. lagodechiana, E. furcisetus, S. spireafolia, and T. frater) that have been previously recorded in other regions of Ukraine (steppe zone [Akimov 1965; Akimov & Zhovnerchuk 2016], forest zone [Vojtenko 1969], forest-steppe zone [Akimov & Zhovnerchuk 2010; Zhovnerchuk 2014a-b; Zhovnerchuk & Chumak 2018; Zhovnerchuk et al. 2021], and Crimea [Mitrofanov et al. 1987]) can be added to the electronic database Spider Mites Web (Migeon & Dorkeld 2006-2022). According to the Spider Mites Web database, there are 65 species of tetranychid mites known to occur in Ukraine [Migeon & Dorkeld 2006-2022]. Even taking into account the fact that another seven species indicated in this publication have not been previously included in that list of species for Ukraine, the species richness of spider mites found in Transcarpathia (28 species) is about 40% of the group's richness in the whole of Ukraine. If we analyse the species composition of tetranychids in different natural zones of Ukraine, the highest richness (40 species) is found in the steppe zone [Akimov & Zhovnerchuk 2016], followed by the forest-steppe zone with 37 species [Zhovnerchuk 2014b], whereas the lowest species richness (20) is noted for the zone of mixed forests [Vojtenko 1969]. In Hungary, neighbouring Transcarpathia, 46 species of tetranychid mites are currently known, while in Romania and Slovakia eight and seven species are known, respectively [Migeon & Dorkeld 2006-2022]. Obviously, this can be explained by the degree of exploration of these territories. At the same time, a number of species indicated for the Hungarian fauna may as well be found in the territory of the Transcarpathian region of Ukraine upon further thorough research.
"Biology"
] |
Optimal Fair Scheduling in S-TDMA Sensor Networks for Monitoring River Plumes
Underwater wireless sensor networks (UWSNs) are a promising technology to provide oceanographers with environmental data in real time. Suitable network topologies to monitor estuaries are formed by strings coming together to a sink node. This network may be understood as an oriented graph. A number of MAC techniques can be used in UWSNs, but Spatial-TDMA is preferred for fixed networks. In this paper, a scheduling procedure to obtain the optimal fair frame is presented, under ideal conditions of synchronization and transmission errors. The main objective is to find the theoretical maximum throughput by overlapping the transmissions of the nodes while keeping a balanced received data rate from each sensor, regardless of its location in the network. The procedure searches for all cliques of the compatibility matrix of the network graph and solves a Multiple-Vector Bin Packing (MVBP) problem. This work addresses the optimization problem and provides analytical and numerical results for both the minimum frame length and the maximum achievable throughput.
Introduction
River-fed sediment plumes in estuaries and deltas are important to monitor because of their influence on water quality and the environment. The techniques employed to monitor nearshore environments can be classified into two main categories: remote and in situ methods. For remote sensing, satellite devices (the AVHRR radiometer [1], images from MODIS-Aqua [2]) or unmanned aerial vehicles [3] have been used. In situ measurements can be taken by means of underwater sensors (e.g., river drifters [4] or video remote sensing [5]). Underwater Wireless Sensor Networks (UWSNs) are a very promising and convenient instrument in oceanography, in particular for pollution monitoring and offshore exploration [6]. Sediment plumes may show different patterns due to currents and wind. Figure 1 presents a possible deployment of a UWSN intended to cover the area of interest. There are two types of nodes in the network: sensor and sink nodes. Sink nodes collect data from sensor nodes and serve as network gateways. The shallow water acoustic channel is highly hostile. Therefore, the choice of an efficient MAC protocol is essential to the design of a UWSN [7]. Two multihop transmission mechanisms from sensors to sink nodes are possible: broadcast or point-to-point. The latter is the chosen option for the present work. Concerning the choice between channel-partitioning and random access protocols [8,9], time-division multiplexing (TDM) is the preferred technique because of its simplicity and power efficiency. To overcome its limited throughput, Spatial Time-Division Multiple Access (STDMA), which is a collision-free multihop channel access protocol [10], is used in the present work.
Since all node locations are equally important in terms of data acquisition, transmission fairness [11] is a scheduling objective. In this analysis, fairness means that all nodes transmit the same amount of their own data in the long term, regardless of their distance from the sink node.
In this paper, a network with a single sink will be analyzed. Two different gateway locations are considered, and it will be shown that the gateway location has a strong influence on the network throughput. Previous works by other authors deal with fairness scheduling in STDMA networks. Wang et al. proposed a scheduling algorithm, but they emphasized adaptive scheduling instead of the shortest frame [12]. Concerning UWSNs, Diamant and Lutz proposed an STDMA protocol for ad hoc UWSNs where fairness was considered but not uniformly achieved [13]. Chitre et al. demonstrated that the optimal schedule for random networks is periodic and presented a computationally efficient algorithm that finds good schedules [14], while our work presents a new procedure that finds the optimal scheduling when the location of the nodes is known. Xiao et al. also presented an algorithm to find optimal scheduling in TDMA networks, but only for linear (one-row) topologies in UWSNs [15]. Our procedure determines the optimal fair scheduling for the case of saturated load condition (i.e., the sensor nodes always have data to transmit) in a network whose topology follows the estuary shape. Analytical expressions for the frame length and numerical results for the throughput are presented as well.
Network Description and Scheduling
Before analyzing the STDMA network scheduling in depth, some aspects should be considered. In the network topology shown in Figure 1(b), the nodes are located at the vertices of an equilateral triangular mesh, and they are stationary. Two possible gateway locations are shown in Figure 2: gateway on the corner and in the center. The main reason to consider these two locations is that they are the two limiting cases for performance and cost of the network deployment. If a network with a central gateway is chosen, the maximum throughput is obtained at the expense of a higher cost, due to the larger distance from the gateway to the shore.
As the word indicates, a plume has the shape of a large feather; that is, it covers an area longer than it is wide, as shown in Figure 1(a). To fit this area of interest, the chosen network topology consists of three or six (depending on the gateway position) strings coming together to the gateway. Figure 2 also shows the throughput of every node in a 13-node network (12 sensor nodes and a gateway). Neighbor nodes are in the transmission range of each other, and nonadjacent nodes are not, because of transmission power control [15]. Transmit mode is simplex; that is, a node that is transmitting does not receive simultaneously, and vice versa. After an initial synchronization phase, the forwarding table (shown by the arrows in Figure 2) will be set and will remain static.
The amount of data acquired by the sensors means that every node always has a packet ready for transmission (saturated load condition). Time is divided into equally long slots. The long propagation delays of acoustic waves and the associated spatiotemporal uncertainty are taken into account by considering a time slot that includes not only the transmission time but also the propagation time and a guard time. When a node transmits, it does so at a constant binary rate: the channel data rate, R_c, equal for all nodes. A fair frame is defined as the set of slots needed for all nodes to successfully send one and only one packet of their own data to the gateway. Thus, network operation is periodic, the period being the frame duration. Simultaneous transmissions are allowed in order to minimize the frame length. This is the benefit of Spatial TDMA [10].
TDMA scheduling is the assignment of slots to nodes in order to find a suitable periodic frame. In TDMA scheduling, two types of assignments are possible: node-oriented [16] and link-oriented [17]. In acoustic networks, when transducers (projectors and hydrophones) are not directional, the node-oriented assignment is recommended. The first step in STDMA scheduling is to determine the compatible nodes, which are those nodes that can transmit simultaneously without causing any intranet interference. There are two possible types of transmission incompatibilities [17]: type 1 occurs when a node transmits while its neighbors in the same string are transmitting too; type 2 occurs when a node simultaneously receives from two or more different transmitting nodes. Scheduling will cope with the incompatibilities in the network. The next step in STDMA scheduling is to find the shortest fair frame. This requires solving an optimization problem under two constraints: (i) only compatible nodes can be planned in the same slot and (ii) the number of transmissions of every node must fulfill the fairness requirement of the network.
Fair Frame Optimization
This section details the proposed algorithm to find the optimal fair frame. Let N be the number of sensor nodes (labeled 2, ..., N+1; node 1 is the gateway), let T_0 be the throughput of a single node, and let T_n be the aggregated throughput of node n, that is, the throughput due to the data collected by node n plus the data received from upstream nodes and forwarded by node n. A frame is a particular set of time slots, where every slot may contain simultaneous transmissions of compatible nodes. In order to set a fair behavior in the network, the gateway should have received exactly T_0 from each node of the network by the end of the frame. This constraint forces a number of transmissions for every node in the frame, given by the set t = {t_2, t_3, ..., t_{N+1}}. For instance, in Figure 2(a), t = {4, 4, 4, 3, 3, 3, 2, 2, 2, 1, 1, 1}. The procedure used to find the shortest fair frame consists of two steps: (A) look for all sets of compatible nodes and (B) formulate and solve the combinatorial optimization problem to find the shortest fair frame. A third step, to remove the excess transmissions, is a prudent practice to avoid overloading the nodes which are closer to the gateway.
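As a rough illustration of how the fair demand set t can be computed, the following sketch (not taken from the paper; the forwarding table and node labels are assumed for illustration) counts, for each sensor node, its own packet plus every upstream packet it must relay per frame.

```python
# Hypothetical sketch: derive the fair demand t_n from a static forwarding
# table (sensor node -> next hop towards the gateway, node 1). Each node must
# transmit once for its own packet plus once for every upstream packet it relays.
def fair_demand(parent):
    demand = {n: 0 for n in parent}
    for node in parent:              # every node's packet travels down its path
        hop = node
        while hop != 1:              # stop at the gateway (node 1)
            demand[hop] += 1
            hop = parent[hop]
    return demand

# Assumed 13-node layout (12 sensors + gateway) with three strings of four nodes
parent = {2: 1, 5: 2, 8: 5, 11: 8,
          3: 1, 6: 3, 9: 6, 12: 9,
          4: 1, 7: 4, 10: 7, 13: 10}
print(fair_demand(parent))
# nodes next to the gateway transmit 4 times, string ends only once,
# reproducing the pattern t = {4, 4, 4, 3, 3, 3, 2, 2, 2, 1, 1, 1}
```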
Compatible Nodes.
Let G = (V, E) be the network graph, where V is the set of nodes. The cardinality of V is N+1 (|V| = N+1), and E is the set of edges. As shown in Figure 2(a), in our network |E| = N and there is a single edge leaving node n, the so-called edge e_{n-1} (because sensor nodes are numbered from 2 to N+1). The elements of the compatibility matrix [10] will be 1 if the corresponding pair of edges can be active simultaneously and 0 otherwise. To illustrate the concept of the compatibility matrix, an example is provided in Figure 3, where we can note that node 6 has no compatible nodes (the row and column of the matrix corresponding to its edge e_5 contain only zeros), since when node 6 is transmitting, (i) node 3 cannot transmit because it is receiving from node 6, (ii) neighbor node 9 cannot transmit because node 6 is not in the receiving mode, (iii) neighbor nodes 5 and 7 would interfere at node 3, (iv) neighbor nodes 8 and 10 cannot transmit because node 6's transmissions would interfere at nodes 5 and 7, (v) nodes 11, 12, and 13 (labeled "pn" in Figure 3(a)) cannot transmit because node 6 would interfere at nodes 8, 9, and 10, and (vi) nodes 2 and 4 (labeled "cn" in Figure 3(a)) cannot transmit because they would interfere at node 3.
The network relay scheme can be represented by an oriented graph. The cover of cliques, C = {Q_1, Q_2, ..., Q_ℓ}, is the set of maximal cliques in a graph. The natural number ℓ is unknown a priori. Many algorithms are available in the technical literature [18] to find C. Our preferred algorithm is that in [19], due to its efficiency and simple implementation. Every clique Q_k contains an edge or a group of edges that can be active without conflict; obviously, any subset of Q_k also satisfies that requirement. Every edge in the graph is contained in at least one clique of the cover, and every time slot in the frame will contain one clique (or a subset of a clique) of the cover, which ensures the transmission compatibility in that slot.
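A minimal sketch of the clique-search step is shown below. The paper relies on the algorithm of [19]; here networkx's Bron–Kerbosch implementation is used only as a stand-in, and the small compatibility matrix is invented for illustration rather than taken from Figure 3.

```python
# Illustrative sketch only: enumerate maximal cliques of a compatibility graph
# (vertex i represents link e_i; an edge means the two links may share a slot).
import networkx as nx
import numpy as np

compat = np.array([[0, 0, 1, 1],
                   [0, 0, 0, 1],
                   [1, 0, 0, 0],
                   [1, 1, 0, 0]])      # symmetric, zero diagonal (made-up example)

G = nx.from_numpy_array(compat)
cover = list(nx.find_cliques(G))       # maximal cliques = maximal sets of compatible links
print(cover)                           # e.g. [[0, 2], [0, 3], [1, 3]] -- candidate slot contents
```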
To find the optimum scheduling of transmissions in (1)-(5), an algorithm that solves MVBP problems, based on the arc-flow graph formulation [22], is used.
Excess Transmissions.
The constraints in (2) mean that the demand for transmissions may exceed the initial set t. The work in [22] states that, otherwise, the MVBP solver algorithm may exclude other optimal solutions. In our case, the demand should be fulfilled exactly because of the expected fair behavior of the network. If it is exceeded, two inconveniences arise: (i) a possible traffic bottleneck, because the extra data cannot be delivered to the gateway in a frame, and (ii) a waste of energy due to unnecessary transmissions, as the energy consumed by the nodes is a critical parameter in UWSNs. The easiest solution is removing the excess transmissions that exist in the frame.
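A minimal sketch of this clean-up step is given below, assuming a simple data layout (a frame as a list of slots, each slot a list of transmitting nodes); it is not the authors' implementation, only an illustration of dropping transmissions beyond the fair demand.

```python
# Sketch under assumed data structures: keep only the first t_n transmissions
# of each node and drop later, excess ones, so the frame length is unchanged
# while wasted transmissions disappear.
def trim_excess(frame, demand):
    remaining = dict(demand)          # transmissions each node still owes
    trimmed = []
    for slot in frame:
        kept = []
        for node in slot:
            if remaining.get(node, 0) > 0:
                kept.append(node)
                remaining[node] -= 1
            # otherwise: excess transmission, removed
        trimmed.append(kept)
    return trimmed

frame = [[2, 6], [3, 5], [2, 4], [2, 3]]   # toy schedule (slots of compatible nodes)
demand = {2: 2, 3: 2, 4: 1, 5: 1, 6: 1}    # fair demand t_n
print(trim_excess(frame, demand))          # -> [[2, 6], [3, 5], [2, 4], [3]]
```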
Results
For the sake of simplicity, the STDMA protocol has been assumed to be ideal (error-free channel), and the performance of the network has been calculated under these circumstances. In a realistic channel, the packet error rate must be taken into account. Long propagation delays suggest that the preferred error detection and correction technique is FEC (Forward Error Correction). In this case, the throughput is decreased by a factor equal to the redundancy factor of the FEC overhead, but the optimal fair scheduling remains unchanged.
The procedure described in the previous section has been used for networks of different sizes to obtain the shortest fair frame. The frame length L, shown in Figure 4, cannot be known a priori because the problem is NP-complete. We have analyzed networks with up to 42 nodes and used a polynomial fitting algorithm to find analytical expressions for L, which are shown in Table 1. These results can help to design a network, since they allow calculating a lower bound for the time needed to get a complete data packet from every node. It is remarkable that when the gateway is in the center, the frame length always equals the number of sensors (L = N). This means that its scheduling has the shortest length.
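The curve-fitting step can be reproduced along the following lines; the (N, L) sample points below are placeholders, since the measured frame lengths of Figure 4 are not tabulated here.

```python
# Sketch of fitting an analytical expression L(N) to optimal frame lengths.
# The data points are invented for illustration, not taken from Figure 4.
import numpy as np

N = np.array([3, 6, 9, 12, 15, 18, 21])      # number of sensor nodes (placeholder)
L = np.array([4, 8, 13, 17, 22, 26, 31])     # optimal fair frame length (placeholder)

coeffs = np.polyfit(N, L, deg=1)             # least-squares fit of a linear law L ~ a*N + b
L_fit = np.poly1d(coeffs)
print(coeffs, L_fit(42))                     # coefficients and extrapolation to N = 42
```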
The number of transmissions in a frame, Σ_{n=2}^{N+1} t_n, is an important figure concerning energy consumption. It depends only on the set t. When the network has three or six branches and N is a multiple of three, the number of transmissions in the optimal fair frame follows a quadratic law in N, in whose expression "mod" stands for the modulo operation. These results are shown in Figure 5. It is noteworthy that the average number of transmissions per node, Σ_{n=2}^{N+1} t_n / N, follows a linear law with N. The normalized throughput is defined as the ratio between the binary data rate through the gateway and the channel data rate, R_c. In the present case, this figure can be calculated as the ratio between the number of sensor nodes and the number of slots in a frame, N/L. Using the length of the optimal fair frame shown in Table 1, the normalized throughput is given in Figure 6. It can be seen that for networks with the gateway in the center the normalized throughput is 1, and that it is possible to get more than 70% of that ideal throughput in networks with up to 12 sensor nodes with the gateway on the corner. We consider this a manageable performance loss, taking into account that a gateway close to the shore is more convenient.
Conclusion
In this paper, a procedure that determines an optimal frame for an STDMA UWSN with a fairness requirement has been presented. The network consists of three or six strings coming together at a gateway. The scheduling procedure uses two algorithms: one to find cliques in an oriented graph and an MVBP problem solver to find the shortest frame. Analytical expressions for the optimal frame lengths have also been presented. Two gateway locations were considered: at the center and on the corner of the network. Under ideal conditions, the
Figure 1: (a) Area of interest in an estuary: with West drift (A) or East drift current (B). (b) Proposed topology of a network with one gateway and 12 sensor nodes covering the area of interest (labeled B, in case (a)).
Figure 3: Network with 13 nodes and a corner gateway when node 6 is transmitting. (a) Dark gray: neighbor nodes; light gray: parent (pn) and child (cn) of neighbor nodes. (b) Compatibility matrix.
Figure 4: Length of the fair frame in optimal STDMA scheduling.
Figure 5: Total number of transmissions in a fair frame.
Notation for the frame-searching problem: y_s = 1 if time slot s is used and 0 otherwise; x_{n,s}: number of times that node n is assigned to time slot s (binary, because one node can transmit only once in one slot); L: length of the shortest frame. The constraints involve t_n: the demand for node n, equal to the fair-frame value, and W_k: the capacity of the k-th dimension, which in our case is the number of elements of the largest clique in the cover, W_k = max({|Q_1|, |Q_2|, ..., |Q_ℓ|}) for all k.
Table 1: Length of the optimal fair frame. | 3,637.2 | 2016-02-29T00:00:00.000 | [
"Computer Science"
] |
Magnetic flux distribution in a ferromagnetic material magnetized by U-shaped electromagnets of different geometric dimensions types
A model experiment has been carried out to study the changes in magnetic induction in a homogeneous isotropic sample that is locally magnetized with attached U-type electromagnets of different geometrical dimensions. The study was aimed at finding out the magnetic flux distribution at different locations within the sample and determining the effect that the geometry of the attached electromagnets has on this distribution.
Introduction
Magnetic methods of non-destructive testing and evaluation are widely used for assessing the structural state, determining the strength characteristics, and analyzing the phase composition of products made of ferromagnetic materials [1][2][3][4]. In non-destructive testing of products under industrial conditions, magnetic characteristics are most often measured locally by using attached magnetic devices (AMD) that include magnetization facilities (coils, permanent magnets, electromagnets) and gaging elements (pickup coils, flux gates, Hall probes, magnetoresistors, etc.) [5]. The geometrical sizes of test products in most cases by far exceed the AMD sizes. Therefore, it is important that the magnetization facility should ensure magnetization or magnetization reversal of the evaluated area of the test product along a close-to-major hysteresis loop. In order to choose the optimum AMD configuration and adequately interpret the results of local magnetic measurements during in-production control of the structure and mechanical properties of metallurgical products (roll stock, forged pieces, rolls, etc.) without using costly "classical" testing methods, it is important to know the magnetic flux distribution in near-surface and internal layers of test objects.
When evaluating surface-hardened products [6,7], a more complicated problem arises. To assess the characteristics of the surface-hardened layer, it is necessary to concentrate magnetic flux in the hardened layer, whereas when evaluating the properties of the viscous bulk of the product, one needs to ensure significantly deeper magnetic-flux penetration [8].
Thus, depending on the local magnetic evaluation task, the topography of magnetic-flux distribution in the product as well as the conditions of magnetization and magnetization reversal in separate tubes of the product's magnetic flux should be known [9]. Some experimental studies of magnetic-flux topography in ferromagnetic products that are locally magnetized by an attached electromagnet have been reported [10][11][12][13][14]. Theoretical calculations of the steady-state magnetic-flux distribution that are based on the system of Maxwell equations and the fundamental integral equation of magnetostatics have been performed only for some particular cases. Some works on the numerical modeling of flux and field distributions inside ferromagnets have been published recently [15][16][17][18]. In most experimental studies, methodological obstacles that are related to imperfection of measuring transducers impose certain restrictions on the shape of test products, and the shape, in its turn, affects the magnetic-flux distribution in these samples. For example, in [10] the width of the plate coincides with the width of the electromagnet poles; this considerably reduces lateral magnetic-flux straying and increases magnetization depth. The same is confirmed by modeling results in [15]. Miniature high-sensitivity measuring transducers are currently used to take measurements of magnetic-flux parameters without the above restrictions on the sample shape.
The aim of this work is to study experimentally the changes in magnetic induction in a homogeneous isotropic sample, with allowance for lateral straying, when the sample is locally magnetized by attached U-type electromagnets of different sizes, in order to examine the spatial magnetic-flux distribution in the sample and to determine the effect that the geometry of the attached electromagnets has on this distribution.
Experimental procedure and material
A compound model sample of 40Kh chromium steel, which is extensively used in machine building, was prepared for the experiments. The sample consisted of two parts, A and B, with dimensions 100 × 70 × 90 and 85 × 70 × 90 mm, respectively. A schematic drawing of the sample is presented in Fig. 1.
A 10 × 10-mm groove was milled on the inner surface of part A that is in contact with part B. A 10 × 10 × 10-mm measuring insert made of steel of the same grade could be displaced along the entire groove length. Three Hall probes were attached to the insert, one on each of its three mutually orthogonal facets. The insert is schematically shown in Fig. 1b.
Auxiliary 40Kh-steel 10 × 10-mm inserts with lengths of 2, 5, 10, 20, 40, and 50 mm were manufactured to ensure the homogeneity of the magnetic medium that was tested. These inserts were used to fill the entire length of the groove. During the experiments, parts A and B were fayed together, with the measuring insert being fixed in the groove. To avoid distortions of magnetic flux in the model sample, all the above parts were subjected to identical thermal treatment (quenching at a temperature of 860 °C followed by tempering at a temperature of 300 °C in a vacuum furnace) so as to ensure identical magnetic and mechanical properties.
The geometrical dimensions and the number of turns of the electromagnets that were used to magnetize the sample are listed in Table 1. The electromagnet current was varied within a range of −2.6 to +2.6 A using a regulated power supply unit. The origin of a Cartesian coordinate system was put at the center of the electromagnet's neutral cross section, as shown in Figs. 2a and 2b. The Hall probes that registered induction along the x-, y-, and z-axes are denoted as Px, Py, and Pz, accordingly. At the beginning of the measurements, the electromagnet was placed on the preliminarily demagnetized sample so that the Hall probe Px was at the center of the neutral cross section of the electromagnet at a depth of 5 mm [at a point with coordinates (0; 0; 5)]. Measurements with the electromagnets were carried out in the following manner. The measuring insert was installed so that its upper facet coincided with the surface of the model sample (see Fig. 2a). Given such an arrangement, the Hall probes had the following coordinates: Px (x = 0, y = 0, z = 5); Py (x = -5, y = 5, z = 5); and Pz (x = -5, y = 0, z = 10). A cycle of measurement of magnetic properties was executed in this initial position (see Fig. 2b). The entire system was demagnetized in all cases prior to the measurements, with complete demagnetization performed after each measurement cycle. In order to study lateral straying, the attached electromagnet was sequentially displaced along the y-axis in 5-mm steps into positions 1', 1'' and so on (see Fig. 2b), with the entire measurement cycle of hysteresis loops by the Hall probes repeated every time. Then, the electromagnet was returned to the original position, and similar measurements were taken with the electromagnet displaced in 5-mm steps along the x-axis. After measurements for the different prescribed values of x had been taken, the electromagnet was returned to the initial position.
The next series of measurements was taken after sequential 5-mm displacements of the measuring insert, with the free space filled every time with the auxiliary inserts. For the sake of clarity, Fig. 3a schematically presents the attached magnetic devices. It can be seen that the x-component of the magnetic induction Bx, both for AMD 1 and for AMD 2, increases when approaching the electromagnet pole starting at half the interpolar gap, attains its maximum approximately at the border of the inner pole edge, and then decreases to zero. When the electromagnet moves still further away from the measuring insert, Bx changes sign, reaches its maximum near the outer pole edge, and then rapidly drops down to zero. When the AMD is displaced from the measuring insert along the y-axis, the x-component of the magnetic induction Bx decreases, the maxima near the borders of the poles become less pronounced, and at y = 20 mm the x-component of the magnetic-flux density only slightly changes in the interpolar zone.
Results and discussion
The x-component of the magnetic induction Bx changes in such a way only at a relatively small depth of z = 5 mm. At z = 10 mm (Fig. 4a), the maxima near the borders become less pronounced (for any displacement along the y-axis), while the sign-reversal position of Bx is displaced farther from the center. At z = 20 mm (Fig. 4b), the x-component of the magnetic induction Bx does not change sign for any of the values of the x-coordinate that are indicated in the figures.
Figure 5 shows dependences of the z-component of the induction Bz for the maximum applied field for different positions along the y-axis (y = 0, 5, 10, 15, 20 mm) at the depth of z = 5 mm. For clarity, the figures schematically indicate the AMDs. The magnetic-induction component Bz at the center of the interpolar gap equals zero for any displacement along the y-axis, increases when approaching the pole, reaches its maximum value beneath the pole center, and decreases to zero when moving further away from the measuring insert. For deeper locations of the measuring insert, the values of Bz are smaller, but the overall type of the dependences is retained. For the maximum applied field, the magnetic-induction component By (Fig. 6) depends on the displacement along the y-axis in the following way. At y = 0 mm, By is equal to zero everywhere; then it grows when approaching the pole, reaches its maximum value near the border of the inner pole edge, and decreases to zero when the AMD moves further away from the measuring insert.
Displacement along the y-axis alters the dependence of By in the following manner. The maximum position is shifted closer to the pole center, whereas the maximum value of By itself first increases, reaches its largest value at y = 10-15 mm, and then decreases (Fig. 7). The lack of monotonicity and the presence of a maximum on the By(y) curve can be explained by the fact that in this case two "concurrent" processes occur. On the one hand, the magnetic-flux density should, on the whole, decrease to zero when the AMD is displaced along the y-axis, but, on the other hand, the ratio By/|B| monotonically increases from zero and does not exceed unity, given such a displacement. Figure 8 presents dependences of the modulus of the total magnetic-induction vector |B| for the maximum applied field for different positions along the y-axis (y = 0, 5, 10, 15, 20 mm) at depths of z = 5, 10, 20 mm, as measured with AMD 1. It can be seen that the modulus of the total magnetic-induction vector |B| increases when approaching the pole and reaches its maximum value at the inner pole edge (x = 17 mm). At the pole center, the value of |B| decreases, and then it reaches its second maximum approximately 2-3 mm away from the outer pole edge (x = 28 mm). As the AMD moves further away from the measuring insert, the modulus of the total magnetic-induction vector |B| decreases and tends to zero. Such a type of changes in |B| takes place only at a relatively small depth of z = 5 mm. At z = 10 and 20 mm, the first maximum (near the inner pole edge) stops being very pronounced, while the second maximum (near the outer pole edge) disappears. Displacement of the AMD along the y-axis also diminishes the maxima of |B| near the edges of the poles. The fact that the maxima are located near the pole edges can be explained as follows. The main contribution to the value of the modulus of the total magnetic-induction vector is rendered by the Bx and Bz components (the By component contributes much less). However, the magnetic-induction component Bx turns to zero approximately beneath the center of a pole of the attached magnetic device (see Figs. 3 and 4), thereby reducing the modulus |B|. Therefore, a small decrease in the modulus of the magnetic-induction vector is observed beneath the pole center as compared with the pole edges. At z = 20 mm, the component Bx no longer demonstrates sign reversal beneath the pole, which is manifested in the lack of double maxima of |B|. It should be noted that the sign reversal of Bx beneath the pole center is accounted for by the presence of stray fluxes in the transducer-sample circuit. When magnetizing products, it is important to know the depth to which they are magnetized. Figure 9 presents dependences of the modulus of the total magnetic-induction vector |B| at the center of the neutral cross section (x = 0, y = 0) on the distance z to the sample surface (Fig. 9a) and on z/√Se (Fig. 9b). The dependences were measured when the sample was magnetized by three electromagnets with different geometrical dimensions, where Se is the cross-section area of the relevant AMD. The values of the currents in the measurements with the electromagnets of AMD 2 and AMD 3 were set so as to ensure a field of 4.2 kA/m beneath their poles, the field which is created beneath the pole in the case of magnetization with AMD 1 operating at the maximum current. It can be seen from Fig.
9a that the value of the total magnetic-induction vector |B| monotonically decreases with the depth z, with its value dependent, to a considerable degree, on the geometrical dimensions of the attached magnetic device (the cross-section area of the poles of the electromagnet and the interpolar distance). It should be noted that at small depths, in near-surface layers of the neutral section, the smaller the gap, the greater the values of |B|; whereas the greater the cross-section area of the AMD poles, the larger the magnetization depth. The latter statement is illustrated more visually in Fig. 9b. At small depths of less than approximately 1.3√Se mm, the value of |B| depends, to a considerable degree, on the distance d between the centers of the electromagnet poles, while at larger depths it depends only on the cross-section area of the AMD pole. Reducing the pole section area, in particular its width, leads to increasing lateral straying and, accordingly, to decreasing the magnetization depth and flux density in the test object. Kostin et al. [17] performed numerical modeling and obtained quantitative estimates of the lateral straying and of the percentage of magnetic flux that goes beyond the profile of the electromagnet poles out of the overall magnetic flux in the magnetic circuit. They demonstrated that, first of all, the greatest lateral straying of magnetic flux is observed in the neutral cross section and, secondly, the lateral straying depends, to a considerable degree, on both the ratio of the width of the magnetized plate to the pole width and the interpolar distance, provided the cross sections of the poles are equal. Based on the experimental data for all the AMDs, the fraction of the magnetic flux that leaves the boundaries of the poles, relative to the overall magnetic flux that passes through the neutral cross section, was calculated. The magnetic flux was calculated as the sum of the products of B and the cell area over square 5 × 5-mm cells into which the neutral cross section was divided, assuming the induction to be the same within any cell (Table 2). As compared to AMD 3, AMD 1 has an almost twice larger distance between the pole centers, with the pole width of AMD 1 exceeding that of AMD 3 by only 1.12 times. It can be seen from the results provided in the table that the reduction in lateral straying of magnetic flux in the neutral cross section and, hence, the growth of the magnetization depth are affected to a much greater degree by a reduction in the ratio of the width of the magnetized plate to the AMD pole width than by a decrease in the ratio of the interpolar gap to the pole width.
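A small sketch of that cell-wise flux estimate is given below; the induction values and the pole-footprint mask are invented placeholders, not the measured data behind Table 2.

```python
# Hedged sketch: estimate the flux through the neutral cross section by summing
# B over square 5 x 5 mm cells (uniform induction assumed within each cell) and
# separate the part of the flux lying outside the pole footprint (lateral straying).
import numpy as np

cell = 5e-3                                   # cell side, m
B = np.array([[0.9, 1.1, 1.0, 0.3],
              [0.8, 1.0, 0.9, 0.2],
              [0.2, 0.3, 0.2, 0.1]])          # |B| per cell, T (illustrative values)
under_pole = np.array([[1, 1, 1, 0],
                       [1, 1, 1, 0],
                       [0, 0, 0, 0]], bool)   # cells lying within the pole profile

flux_total = (B * cell**2).sum()
flux_stray = (B[~under_pole] * cell**2).sum()
print(f"stray fraction = {flux_stray / flux_total:.2f}")
```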
Conclusion
A technique has been developed for determining the topography of magnetic induction in a ferromagnet that is magnetized by an attached magnetic device. The distributions of all the magnetic-induction components have been measured over the volume of a bulky homogeneous isotropic sample of steel 40Kh that is locally magnetized by attached U-type electromagnets with different geometries.
It has been established that if the width of the magnetized plate is greater than the electromagnet-pole width, the distribution of magnetic flux in the plate at a distance from the plate surface of less than 1.3√Se is affected by both the cross-section area of the AMD poles and the distance between the centers of the poles of the electromagnet.
It has been demonstrated experimentally that the main factor that significantly promotes the growth of the magnetization depth is a reduction of the ratio of the magnetized plate width to the AMD pole width, owing to the corresponding reduction in the fraction of strayed magnetic flux.
The above results can prove useful in magnetic non-destructive testing when choosing the most expedient disposition of measuring transducers within the space between electromagnet poles, in magnetic structural analysis when adjusting the depth of magnetization of a test product, as well as in verifying the adequacy of numerical modeling of the processes of magnetization and magnetization reversal of products made of steel 40Kh.
The study was partially supported by UB RAS project No 15-10-1-22.
Fig. 2. Disposition of electromagnet on the sample and orientation of Hall probes (a); electromagnet displacement (view from above) (b).
Figure 3 shows how the value of the x-component of magnetic induction Bx, as measured with AMD 1 and AMD 2 for the maximum applied field, depends on the distance x to the plane of the AMD's neutral cross section for different positions along the y-axis at a depth of z = 5 mm from the sample surface.
Fig. 3. Schematic arrangement of AMD 1 and AMD 2 on the sample and values of the magnetic-induction component Bx at different distances from the center of the AMD neutral cross section at depth z = 5 mm from the sample surface at (a) y = 0; (b) 5; (c) 15; and (d) 20 mm. Curves 1 and 2 were obtained with AMD 1 and AMD 2, respectively.
Fig. 5. Schematic arrangement of AMD 1 and AMD 2 on the sample and values of the magnetic-induction component Bz at different distances from the center of the AMD neutral cross section at depth z = 5 mm from the sample surface for (a) y = 0; (b) 5; (c) 15; and (d) 20 mm. Curves 1 and 2 were obtained with AMD 1 and AMD 2, respectively.
Fig. 9. Dependence of the modulus of the total magnetic-induction vector |B| at the center of the neutral cross section (a) on the distance z to the sample surface and (b) on the value of z/√Se. Curves 1, 2, and 3 were obtained with AMD 1, AMD 2, and AMD 3, respectively.
Table 1. Geometrical dimensions of the U-type electromagnets used.
Table 2. Fraction of the lateral straying of magnetic flux in the neutral cross section of the plate when it is magnetized by AMDs with different geometrical dimensions. | 4,286.8 | 2018-01-01T00:00:00.000 | [
"Physics"
] |
Experimental Study of Wind Booster Addition for Savonius Vertical Wind Turbine of Two Blades Variations Using Low Wind Speed
The wind turbine is a tool used in a Wind Energy Conversion System (WECS). A wind turbine produces electricity by converting wind energy into the kinetic energy of a spinning rotor, which drives a generator. The Vertical Axis Wind Turbine (VAWT) is designed to produce electricity from winds at low speeds. Vertical wind turbines are of two types: Savonius and Darrieus. This research investigates the effect of adding a wind booster to a Savonius vertical wind turbine with two-blade and three-blade variations. The power generated by the wind turbine is calculated with an energy analysis method based on the concept of the first law of thermodynamics. The result obtained is that the highest blade power of the Savonius wind turbine without a wind booster is (16.5 ± 1.9) W at a wind speed of 7 m/s with a tip speed ratio of 1.00 ± 0.01, while the Savonius wind turbine with a wind booster has the highest power of (26.3 ± 1.6) W at a wind speed of 7 m/s with a tip speed ratio of 1.26 ± 0.01. On average, the power of the Savonius vertical wind turbine increases by 56% after the wind booster is used.
Introduction
The more advanced a nation, the greater the electricity needs of its people. Power plants supply this electricity demand, and the majority of them still rely on fossil energy sources, although the availability of fossil energy is becoming limited and its negative impact on the environment is considerable. The use of fossil fuels produces greenhouse gases [1]. These gases absorb heat energy that the Earth would otherwise emit into space, causing a warming of the troposphere.
Efforts to reduce global warming continue to be carried out. Various countries have introduced innovations, ranging from regulations that limit the use of energy sources contributing to global warming to the adoption of renewable energy. Renewable energy is energy that can be renewed and used continuously; its sources include the sun, wind, gas, and others. It provides many advantages, such as being available almost anywhere and having, in general, a small effect on the environment. One of these sources is wind energy, which is clean energy whose production process does not pollute the environment [2].
The use of renewable energy is still limited, owing to a lack of research, even though it can be an investment in the future. Wind power has been used for 3000 years. At the beginning of the modern industrial era in the 20th century, the use of wind energy sources was replaced by fossil fuel engines or electricity networks [3].
Wind turbines are the devices used in the Wind Energy Conversion System (SKEA). Wind turbines produce electricity by converting wind energy into kinetic energy through the blades contained in the turbine, which rotate the shaft of the generator to produce electricity [4]. Turbines are divided into two types: Horizontal Axis Wind Turbines (HAWT) and Vertical Axis Wind Turbines (VAWT). The Horizontal Axis Wind Turbine (HAWT) is designed to produce electricity from wind at high speed, whereas the Vertical Axis Wind Turbine (VAWT) is designed to produce electricity from wind at low speeds [5].
One of the turbines classified as a VAWT is the Savonius wind turbine. This wind turbine has a simple construction, operates independently of the wind direction, and starts at low wind speeds; it was developed and patented by Sigurd J. Savonius in the 1920s. The best rotor has an efficiency of 31%, while the efficiency of the prototype is 37% [6].
Based on data from BMKG, in Indonesia, especially in the city of Semarang, the average wind speed is around 2.5 m/s [7]. Even though wind speeds are low, Indonesia has wind potential that is available almost throughout the year. This makes it possible to develop a small-scale wind power plant system.
The purpose of this study was to analyze the value of the power obtained based on the speed of the incoming wind that pushed the turbine blade and the value of wind
Flow Chart
The flow chart used in the research process can be seen in
Wind Power Analysis
Wind power is the energy that can be produced by wind at a certain speed hitting a wind turbine over a certain area. The wind power carried by the wind that hits the turbine can be obtained through equation 1 [8]:

P_w = ρ A V_i^3 / (2k)   (1)

where k is a conversion factor with a value of 1.0 kg/(N·s²), ρ is the density of air, A is the cross-sectional area, and V_i is the speed of the wind entering the wind turbine. The values of ρ and A are obtained from equations 2 and 3, while the value of V_i is measured using an anemometer.
ρ = P / (R T)   (2)

where P is the environmental pressure with a value of 1 atm, R is the specific gas constant with a value of 287.05 J/(kg·K), and T is the ambient temperature with a value of 305 K.
A = H D   (3)

where H is the blade height of 0.65 meters and D is the blade diameter of 0.57 meters. Blade power is the power extracted by the blade from the wind. Blade power can be obtained through equation 4 [8].
Blade Power Analysis
where Ve is the speed of the wind coming out of the turbine. The Ve value is measured using an anemometer.
Power Coefficient (Cp) Analysis
The power coefficient (Cp) is the ratio between the blade power and the power available in the wind. The Cp value is obtained through equation 5 [8]; its theoretical maximum is 0.593, according to Betz's theory.
Tip Speed Ratio Analysis
Tip speed ratio (TSR) is the ratio of the blade tip speed to the wind speed in wind turbines [9]. The TSR value is obtained through equation 6 [8].
where ω is the blade angular speed (rad/s), given by ω = 2πN/60, with N being the blade rotational speed (rpm).
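The following sketch strings equations 1-3, 5, and 6 together as reconstructed above; since the blade-power expression (equation 4) is not reproduced in the text, the blade power is treated as a measured input, and all numerical values are illustrative rather than the paper's measured data.

```python
# Hedged sketch of the analysis chain. P_blade is taken as a measured input
# because equation 4 is not reproduced here; the TSR form omega*R/V is the
# assumed reading of equation 6. Numbers are illustrative operating values.
import math

P_atm, R_gas, T = 101_325.0, 287.05, 305.0   # Pa, J/(kg*K), K
H, D = 0.65, 0.57                            # blade height and diameter, m

rho = P_atm / (R_gas * T)                    # eq. 2: air density, kg/m^3
A = H * D                                    # eq. 3: swept area, m^2

def wind_power(v_in, k=1.0):                 # eq. 1, with conversion factor k
    return rho * A * v_in**3 / (2 * k)

def tip_speed_ratio(rpm, v_in):              # assumed eq. 6: omega*R / V, omega = 2*pi*N/60
    omega = 2 * math.pi * rpm / 60
    return omega * (D / 2) / v_in

v_in, rpm, P_blade = 7.0, 295.0, 26.3        # example operating point (illustrative)
Cp = P_blade / wind_power(v_in)              # eq. 5: power coefficient
print(f"P_wind = {wind_power(v_in):.1f} W, Cp = {Cp:.2f}, TSR = {tip_speed_ratio(rpm, v_in):.2f}")
```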
Validation
The study used for this validation is the one conducted by Natapol Korprasertsak and Thananchai Leephakpreeda in 2016 [10], which examined the influence of wind speed on rotational speed, as shown in Fig. 4 (validation of the wind speed effect on RPM). Fig. 4 shows that the rotational speed increases with wind speed, because the wind speed affects how fast the blade rotates: the greater the wind speed, the faster the blade rotation. The black line of the graph illustrates the rpm values from the previous study, while the red line shows the rpm values from the present study. Both the black and the red curves show the same increasing tendency.
Analysis and Discussion
The value of the blade power increases along with the increase in wind speed, as shown in Fig. 5. This is because the blade power is strongly influenced by the speed of the incoming wind and the speed of the outgoing wind: the greater the difference between the incoming and outgoing wind speeds, the greater the power extracted from the wind by the turbine. This trend is in accordance with the results of research conducted by Nur Alom and Ujjwal K. Saha [11]. The tip speed ratio also increases with wind speed, as shown in Figure 6. This is due to the important factor in the TSR calculation, namely the ratio of the blade tip speed to the wind speed: the greater this ratio, the higher the tip speed ratio. This trend is in accordance with the results of research conducted by Frederikus Wenehenubun, Andy Saputra, and Hadi Sutanto [12]. Fig. 7 shows the results of a study conducted by W. El-Askary et al. [13]. From Fig. 8, it is known that the biggest power coefficient value for two blades without a wind booster is 0.26 at a TSR of 0.94, while for two blades with a wind booster it is 0.38 at a TSR of 1.2. From formula (5), the power coefficient value is affected by the wind power and the blade power: the greater the ratio of blade power to wind power, the larger the power coefficient value.
Conclusion
The test results show that the highest blade power, produced by the two-blade wind turbine with a wind booster, is 26.37 watts at a wind speed of 7 m/s. The highest TSR value for the wind turbine using the wind booster is 1.26, at a wind speed of 7 m/s. The power coefficient produced by the two-blade wind turbine without a wind booster, shown in Figure 7(a), is 0.26 at a TSR value of 0.94. The power coefficient produced by each variant is different; for the two-blade wind turbine with a wind booster, the highest power coefficient value is 0.38 at a TSR value of 1.20. | 2,098.6 | 2019-01-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Applying Gel-Supported Liquid Extraction to Tutankhamun’s Textiles for the Identification of Ancient Colorants: A Case Study
The identification of the dyes present on a linen fragment from the tomb of Pharaoh Tutankhamun is the objective of the present study. Fiber optic reflectance spectroscopy (FORS) was applied to the archaeological sample for preliminary identification of the dyes and to better choose the extraction methodology for different areas of the sample. The innovative gel-supported micro-extraction with agar gel and the Nanorestore Gel® High Water Retention (HWR) gel were applied to the archaeological sample after testing of the best concentration for the extraction of the agar gels substrates, performed on laboratory mock-ups by means of UV–Vis transmittance spectroscopy. Immediately after extraction, Ag colloidal pastes were applied on the gel surface and Surface Enhanced Raman Scattering (SERS) analysis was performed directly on them. The combination of information deriving from FORS and SERS spectra resulted in the successful identification of both indigo and madder and, in hypothesis, of their degradation products.
Introduction
The use of dyed threads or dyed clothes in ancient Egypt could tentatively be traced back to the First Dynasty (3150-2925 BC), but it is only from the New Kingdom onwards (18th Dynasty, 1543-1292 BC) that cloth woven with colored thread was increasingly employed [1]. The most used fiber was linen and since not everyone could afford high quality linen, thread was often dyed using ochres rather than plant dyes [2]. The first was indeed utilized for dyeing in brown or dark red shades by hydrating iron oxide and mixing it with clay. Vibrant red textiles were obtained through the use of Mediterranean plants such as madder (Rubia tinctorum) or alkanet (Alkanna tinctoria) [3,4]. Yellow tones, instead, were achieved from safflower (Carthamus tinctorius) or turmeric (Curcuma longa), while indigo (Indigofera tinctoria) was the main source of blue colors. Other shades, such as green colors, were obtained by mixing blue and yellow dyestuffs [2,4]. In recent decades, the interest in studying organic dyestuff historically used to dye textiles has grown. These objects, in fact, possess great cultural importance and, in particular, the study of organic colorants allows for achieving historical and technological information about manufacturing, trades, exchanges, and civilization evolutions. However, the analysis of historical textile samples represents a complex challenge from the analytical point of view due to the usually limited amount of samples available, the low concentration of chromophores in the original material, and the presence of possible degradation products [5]. Textile fibers are generally susceptible to degradation mechanisms involving physicochemical and, consequently, mechanical processes that are eventually caused by microbiological attacks. These can result in the integrity loss of the fiber itself. The dye, in turn, can be subjected to photo-oxidative reactions, leading to fading and alteration of the original chromatic features [6]. Consequently, the place and methods of preservation of textile are fundamental for defining its current state of conservation. From this point of view, the archaeological contexts could represent "extreme" situations in which the microbiological attack is disadvantaged. In several cases (e.g., deserts, acidic peat bogs, alkaline lake muds, and perennial ice [7]), it is not uncommon to find remains of dyed fabric preserved up to the present day. Textile artifacts that survive are therefore extremely valuable; the definition of the dyeing matrices and the technologies employed to make the object not only help in reconstructing ancient cultures but also play a crucial role in developing and fine-tuning specific conservation methods.
Consequently, in the field of chemistry applied to cultural heritage, the analysis of the dye composition represents a powerful instrument, but it is also one of the most interesting analytical challenges. High pressure liquid chromatography coupled with an appropriate detector (diode array, HPLC-DAD; mass spectrometer, HPLC-MS) remains the most reliable and versatile identification method for organic colorants today [5,[8][9][10][11], while the identification of dyes directly on fabrics without separative methods is often complex because of the organic matrix, which interferes in most analytical techniques. Nonetheless, when dealing with objects of art, the application of techniques which require sampling is discouraged and a multi-technical approach which includes both non-invasive and micro-invasive techniques is always preferred. During the last decades, great effort has been undertaken to develop minimally invasive techniques with increased sensitivity. In this sense, fiber optic reflectance spectroscopy (FORS) and hyperspectral imaging in the UV-Vis-NIR range have been demonstrated to be efficient tools for the rapid, non-invasive, in situ preliminary characterization of many artistic materials [9, [12][13][14][15][16][17][18]. Additionally, Raman and Surface Enhanced Raman Scattering (SERS) spectroscopies have attracted the interest of many research groups for their ultrasensitive and high detection capability. In particular, in the latter case, SERS spectroscopy has proved to be a valid technique for the characterization and study of organic dyes [19][20][21][22][23][24]. Indeed, by exploiting metallic substrates, such as, for example, silver nanoparticles, it is possible to amplify the Raman signal significantly, and this enhancement allows for overcoming the problem of strong fluorescence emission-typical of organic compounds-due to a localized surface plasmon resonance (LSPR) phenomenon, which comes into play when the incident light has the same vibrational frequencies of the valence electrons in the metal nanoparticle [25,26]. In this perspective, the use of gel substrates for the micro-invasive extraction of dyes and their consequent analysis by the SERS technique has recently been applied to the study of cultural heritage [27][28][29][30][31]. The most used in this sense is the agar gel because it favors the interaction of silver nanoparticles with consequent enhancement of the SERS signals due to the shrinkage of its structure after drying [28]. For example, in 2015, Platania and colleagues [32] presented a methodology for the extraction and detection of indigo dyes in painting and textiles involving Ag-agar gel soaked into a reducing solution, resulting in a safe procedure for both laboratory samples and works of art. Despite this, other types of gels have been used, such as, for example, the Nanorestore Gel ® High Water Retention (HWR) gel-patented for cleaning surfaces-which was tested for the first time by Germinario in 2020 for the extraction of dyes from textiles [33]. Both agar and Nanorestore Gel ® were employed in the aforementioned work for the extraction of madder and cochineal from wool mockups using the state-of-the-art ammonia-based solution devised in 2016 by Lombardi and colleagues [34].
This methodology, stated as 'mild', tested, for the first time, a basic environment using ammonia, Na 2 EDTA, and NaCl for the extraction of anthraquinones in order to preserve glycosylated moieties, which are sensitive and may be lost in traditional acidic methodologies. The work by Germinario, through a multi-technical approach, proved highly successful for SERS analysis, allowing discrimination between madder and cochineal on both gels. The research was pushed forward and the methodology was revised and implemented for hydrophilic paint layers in the study by Bosi [30], where agar at different concentrations (ranging from 1% to 12%) and Nanorestore Gel ® were tested for the extraction of madder lake pigments showing positive outcomes in terms of non-invasiveness. Agar gel concentrations below 4% permitted the extraction without ripping the paint and did not show any color change on the mockups. From the operative point of view, moreover, the methodology (which is defined as "gel-supported liquid extraction") is even very simple. Briefly, the gel is cut into cylinders and soaked in the extraction solution for a certain amount of time. Then, these substrates are removed with tweezers and put in contact with the sample surface in order to extract the analytes present on the surface [30]. This makes this approach suitable for different typologies of laboratories.
For all these reasons, in the present work, we decided to investigate more deeply the extraction behavior of agar gel in the range of concentrations between 2% and 4%. In this way, in comparison to the previous literature, a deeper insight was gained in order to identify the agar gel concentration that couples the best extraction performance with good handling features for the analytical methodology [30]. Agar gel was tested along with Nanorestore Gel ® on both paint and textile laboratory mock-ups tinted with madder and indigo; spectroscopic analyses including UV-Vis and Raman SERS spectroscopy were performed on these samples and also served as a reference spectral database. In this way, the methodology was applied to a wider set of artist matrices (for instance, the tempera mock-ups) and with a more systematic approach in comparison to our previous works [30,33]. It was also employed for the diagnostics of a precious sample, an archaeological textile fragment from Tutankhamun's tomb. Preliminary spectroscopic analysis was conducted on the archaeological sample using FORS to understand which extraction procedure should be followed. Gels were then soaked in two different solutions according to the area of extraction (blue or red), Ag colloidal pastes were applied immediately after extraction, and SERS analysis was performed to characterize the dyes. The combination of gel-supported liquid extraction with the use of colloidal paste represents a different approach in comparison to the previous studies [30,33,35], which takes advantage of the formation of extended nanoclusters for the enhancement of the Raman signal. The application to this sample allowed for an actual valorization of the method in a real case study, and it integrated the gel-supported liquid extraction in an analytical protocol that also involved non-invasive techniques: this promotes the transferability of the new technology into routine cultural heritage diagnostics.
UV-Vis Spectroscopy
The UV-Vis spectra of agar gel in the concentration range between 2% and 4% have been compared to determine the concentration of agar that is the most appropriate to employ for the extraction of the real case study. The spectrometric measurements of agar are taken before the extraction and after the extraction both on the paint and textile mock-ups. In Figure 1, the transmittance spectra at three different concentrations (2%, 3%, 4% in water w/w) are reported. For all tested concentrations, the transmittance is higher in the samples pre-extraction than in the samples post-extraction. Furthermore, the spectra of agar gel after the extraction on the textile sample show lower transmittance values, probably due to the higher absorption of incident radiation of the extracted dye. The UV-Vis spectra of agar gel exhibit the same pattern at all concentrations after being soaked in the ammonia solution, along with Na2EDTA and NaCl. Up until 300 nm, transmittance values are constant, then they start to quickly rise. The spectra of agar gel after extraction on the paint mock-up show a wide absorption band between 480 nm and 550 nm, which is due to the electron transitions n → π* of carbonyl groups present in the chromophore alizarin [36]. Samples with 2% and 3% of agar gel post-extraction are more susceptible to the UV-Vis radiation than samples at Cw = 4%, with a more evident decrease in transmittance around 400 and 500 nm, which is due to the typical absorbance bands of madder, as shown in Figure 1. Spectra of agar gel after the extraction on the textile mock-up show slight absorption characteristic bands that confirm the major capability of extracting the dye from paint rather than textile. Nevertheless, when comparing all the concentrations, 2% agar gel and 3% agar gel clearly show the absorption band opposed to all other tested concentrations.
An evaluation of the invasiveness of the procedure for the mock-ups was performed by means of optical analysis and FORS-colorimetry. Observation under the microscope did not reveal gel residues on the surface of the mock-ups, and no change in the morphology of the mock-ups was observed. From the point of view of color changes, the calculation of the color variation ΔE00 using the CIEDE2000 formula resulted in values lower than three, which was considered the upper limit of rigorous color tolerance. With reference to these results, the methodology can be considered remarkably micro-invasive because it extracts the analytes without causing damage or visible color variations on the artist matrices. However, with reference to more sensitive materials, further details about these aspects can be found in previous works [30].
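A quick way to reproduce this kind of check is sketched below with scikit-image's CIEDE2000 implementation; the CIELAB values are invented placeholders, not the measured colorimetric data.

```python
# Illustrative check of the color-change criterion (Delta E00 < 3). The L*, a*, b*
# values below are made up for the example, not the FORS-colorimetry measurements.
import numpy as np
from skimage.color import deltaE_ciede2000

lab_before = np.array([52.0, 38.5, 21.0])   # CIELAB of the mock-up before extraction
lab_after  = np.array([51.2, 37.1, 20.4])   # CIELAB of the same spot after extraction

dE = float(deltaE_ciede2000(lab_before, lab_after))
print(f"Delta E00 = {dE:.2f} -> {'within' if dE < 3 else 'beyond'} rigorous tolerance")
```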
Gel Micro-Extraction In Situ
Based on tests conducted on laboratory mock-ups and results obtained with UV-Vis spectroscopy, it was decided to use 3% agar gel for the extraction in the real case study as it showed the best results in terms of extraction capacity without leaving any residue.
Both agar gel and Nanorestore Gel® worked well with the ammonia-based solution for the extraction of the red dyes. On the contrary, in the blue area it was possible to use only agar gel, since Nanorestore Gel® appeared to be incompatible with the indigo-reducing solution. Further evaluations must be performed on this aspect, since it would be interesting to understand whether the solution causes polymer degradation or rather the formation of sodium dithionite concretions inside the gel (Figure 2).
Figure 2. Optical microscope (4× magnification) images of Nanorestore Gel® soaked with the reducing extraction solution used for indigo extraction.
Fiber Optic Reflectance Spectroscopy (FORS)
FORS analyses on the archaeological samples were useful to hypothesize a first characterization of the dyes present. Indeed, by comparing the spectra acquired on the bluish area with that of the indigo mock-up (Figure 3a), it is possible to underline spectral similarities in the 650-700 nm region, where a broad absorption band typically attributed to the π → π* transition of the C=C double bond is present [37]. Moreover, by applying the first derivative, it is possible to observe an inflection point (flex) around 700 nm, which is perfectly in line with what is reported in the literature for indigo and woad [37]. However, it is important to highlight that the weak maximum in the violet region, observable in the mock-up and cited in the literature, is not visible in the spectrum of the archaeological sample [37].
Quite the opposite is evident when comparing the results obtained from the reddish area with the reference spectra of red dyes and laboratory mock-ups (Figure 3b, showing the comparison with a reference madder mock-up). Indeed, the presence of a wide absorption band between 400 and 700 nm prevents any attribution to the typical spectral features of known colorants of reddish shade.
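The first-derivative criterion used above for indigo (an inflection, or flex, around 700 nm) lends itself to a simple numerical treatment. The sketch below assumes a two-column text file of wavelength and reflectance; the file name, smoothing window and search range are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def flex_position(wl_nm, reflectance, window=9, lo=600.0, hi=800.0):
    # Locate the inflection point (maximum of the first derivative) of a
    # lightly smoothed reflectance spectrum within a chosen search range.
    r = uniform_filter1d(reflectance, size=window)
    d1 = np.gradient(r, wl_nm)
    mask = (wl_nm > lo) & (wl_nm < hi)
    return wl_nm[np.argmax(np.where(mask, d1, -np.inf))]

# Hypothetical FORS spectrum of the bluish area (columns: nm, reflectance).
wl, refl = np.loadtxt("fors_blue_area.txt", unpack=True)
print("Apparent flex at %.0f nm (literature value for indigo/woad ~700 nm)"
      % flex_position(wl, refl))
```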
Results obtained from the SERS spectra acquired on agar gel after extraction (Figure 4a) confirm the presence of an indigo dye on the blue area of the Tutankhamun fragment. Nonetheless, by comparing the results obtained from the extraction of the archaeological sample with those from the indigo mock-up (Figure 4b), the absence of the most intense signals of indigo is evident for the former, although characteristic peaks of both indigotin (1073 w, 1176 vw, 1369 w, 1470 vw) and indirubin (645 m, 971 m, 1404 w, 1585 vw) are present. Indeed, the signals of indirubin are generally more intense than those of indigotin, probably due to thermal degradation of the dye [38]. In particular, the peaks at 645, 971 and 1404 cm−1 can be attributed to bending modes of the C-C, C-H, and N-H bonds of indirubin [39], while the signal at 1073 cm−1 is attributed to a ring stretching and C-O rocking of indigotin [32]. The broad band centered at 1176 cm−1 also refers to indigotin and corresponds to C-C stretching and C-H bending [32]. The band at 1585 cm−1, instead, can be attributed to stretching modes of the C-C, C=O, and C=C bonds of indirubin [39]. Finally, the peak at 673 cm−1 is probably due to the agar gel, as proven by the comparison in Figure 4a, while the very intense signal at 1037 cm−1 has been hypothesized to be a degradation product, probably anthranilic acid, which is a known degradation product of indigotin. Indeed, according to Poulin [40], one of the main degradation products of indigo is isatin, which is formed when indigotin oxidizes; if the degradation goes further and a secondary reaction takes place, anthranilic acid is formed. In 2014, Chadha and colleagues reported a very intense SERS band at 1036 cm−1 attributed to the phenolic ring bending and to the NH2 rocking of anthranilic acid [41]. The strong affinity between anthranilic acid and the Ag-NPs reported in that work supports the hypothesis of the presence of anthranilic acid as a degradation compound on the blue area of the Tutankhamun textile fragment and also justifies the relatively strong intensity of the SERS signal we observed. However, further analysis through HPLC/MS could help to clarify the nature of this compound. SERS analysis performed on the gels after extraction from the red area of the Tutankhamun fragment gave good-quality spectra for both agar (Figure 5a) and Nanorestore Gel® (Figure 5b). In both cases, it is possible to recognize the characteristic peaks of madder; most notably, by comparing the spectrum acquired on agar gel after extraction from the archaeological sample with that from the madder mock-up, a clear correspondence between the peaks is observed (Figure 5a).
In particular, the peak at 1448 cm−1 is present in the spectrum acquired on agar and is attributable to C-O stretching, C-O-H bending, and C-H bending [20,28,42], whereas the peak around 1400 cm−1, which is a characteristic signal of purpurin, is very clearly seen in the spectrum acquired on the Nanorestore Gel®. Furthermore, the band around 1330 cm−1, present in both spectra, is typical of madder and refers to the content of alizarin and purpurin, while the shoulder at 1290 cm−1 and the band around 1590 cm−1 are typical signals of anthraquinone molecules and correspond to the C-C and C=O stretching of the ring, respectively [28]. The results for madder were eventually confirmed and supported by LC/MS data obtained after re-extraction from the gels, whose results are the subject of another publication by the same authors.
Figure 4. (a) Comparison between SERS spectra of 3% agar blank (gray) and 3% agar after extraction of the blue area (light blue); (b) comparison between SERS spectra of 3% agar after extraction of the indigo mockup (dark blue) and 3% agar after extraction of the blue area (light blue).
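The peak attributions listed above can be organised as a simple band-matching step: detect peaks in a baseline-corrected SERS spectrum and compare them with the reference wavenumbers quoted in the text. The sketch below is schematic; the file name, prominence threshold and matching tolerance are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

# Reference bands (cm^-1) taken from the attributions discussed in the text.
REFERENCE = {
    "indigotin": [1073, 1176, 1369, 1470],
    "indirubin": [645, 971, 1404, 1585],
    "agar gel": [673],
    "anthranilic acid (degradation)": [1037],
}

def assign_peaks(shift_cm1, intensity, tol=8.0, prominence=0.05):
    # Detect peaks in a normalized, baseline-corrected spectrum and match
    # them to the reference bands within +/- tol cm^-1.
    idx, _ = find_peaks(intensity / intensity.max(), prominence=prominence)
    found = shift_cm1[idx]
    return {name: [b for b in bands if np.any(np.abs(found - b) <= tol)]
            for name, bands in REFERENCE.items()}

# Hypothetical baseline-corrected SERS spectrum extracted from the gel.
shift, counts = np.loadtxt("sers_blue_gel.txt", unpack=True)
for compound, bands in assign_peaks(shift, counts).items():
    print(compound, "->", bands)
```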
Figure 5. (a) Comparison between SERS spectra of 3% agar gel blank (gray), 3% agar gel after extraction from madder mockup (light red), and 3% agar gel after extraction from red area of archaeological sample (Bordeaux); (b) comparison between SERS spectra of Nanorestore Gel® blank (gray) and Nanorestore Gel® after extraction from red area of archaeological sample (Bordeaux).
Conclusions
The multi-technical approach pursued in this research study, composed of gel microextraction and the subsequent spectroscopic characterization, enabled the detection of the dyes present on Tutankhamun's textile fragment without posing any threat to the artifact.
First, tests conducted on laboratory mock-ups made it possible to compare different agar gel concentrations, and UV-Vis transmittance spectroscopy contributed to the choice of 3% agar gel as the most suitable concentration.
The application of the FORS technique on the archaeological sample was useful to obtain preliminary data in a totally non-invasive way and to consequently hypothesize the composition of the dyes present. This was true for the bluish area where the presence of an indigoid compound was even supported by the comparison with a reference spectrum of indigo. On the contrary, for the reddish area, the reflectance spectrum did not allow us to retrieve any information because of the wide absorption in the visible light range.
The application of the gel-supported liquid extraction protocol allowed a microsampling of the dyes both on the blue and red areas. The consequent detection was performed by SERS spectroscopy directly on the gel by contact of its surface with a silver colloidal paste. The SERS spectra confirmed the presence of indigo dye, from the characteristic signals of indirubin and (with lower intensity) indigotin, while, for the red area, it was possible to observe the characteristic signals of madder dye by comparison with reference spectra of the madder mockup. It is interesting to evaluate the potential and the complementarity of SERS applied to the gel extraction in comparison to FORS. For the indigo dye, the combination of FORS and SERS was useful in effectively detecting the presence of the blue colorants and in providing information about degradation processes.
While FORS suggested the presence of indigotin, the SERS data results were indicative of decomposition products. In the case of madder, FORS could not provide information for the dye identification, and only the on-gel approach allowed a clear identification of the chromophores. These aspects highlight the effectiveness and the information potential resulting from the gel-supported liquid extraction methodology.
It is fundamental to mention some aspects of the study that require further research. First, in this work we limited the testing to a single type of laboratory-made gel and to one parameter, the polymer/gel concentration, but deeper studies would be necessary for a better optimization of gel-supported liquid extraction methodologies. Further gels could be studied, and the influence of other aspects (for instance, the thickness of the gel) should also be evaluated. The final protocol adopted for the historical sample requires integrating further analytical techniques, such as HPLC/MS, in order to provide the complete characterization of the dye chromophores. These aspects are the object of a future publication. Finally, regarding the hypotheses about degradation products observed by means of SERS, further analyses must be performed in order to confirm the presence of anthranilic acid.
Commercial Products
For the preparation of mock-ups, madder roots (Rubia tinctorum L.) and alum were purchased from Chroma Srl (Milano, Italy), while indigo in powder (Indigofera tinctoria L.) was purchased from Kremer Pigmente (Berlin, Germany). Cream of tartar (99.9%) and sodium carbonate (99.9%) were purchased at a local grocery shop.
Mock-Ups Preparation
A paint mock-up was prepared following traditional procedures reported in the literature [49]. First, a preparation layer was made by soaking 17 g of animal glue in 250 mL of water and leaving it overnight. The glue was then heated at 45 °C until completely melted. Later, approximately 100-150 g of gypsum was added to the solution, which was applied on a brick in eight perpendicular coats. The whole was left to dry for one week and then polished with sandpaper to make the surface smooth and homogeneous. The madder lake pigment was prepared following a recipe from Daniels et al. [49]. In brief, 5 g of madder roots were soaked in 150 mL of distilled water and left overnight. The roots in water were heated to 70 °C for 30 min. After filtration, 2.5 g of potassium alum was added to the solution and the temperature was brought to 80 °C. Meanwhile, 0.94 g of K2CO3 was dissolved in 25 mL of water and gently poured into the dye bath under continuous stirring. The lake pigment was left to precipitate overnight, filtered, and ground. The lake pigment was then mixed with egg yolk and applied in layers on the previously prepared brick.
Laboratory textile mock-ups were prepared by wrapping and compacting dyed wool yarn around a microscope slide to simulate the surface of a fabric. For the dyeing process, ancient recipes already described in the literature and historically employed for dyeing with natural dyestuffs were followed [4,33,34]. Concerning madder, the procedure was divided into two steps: mordanting and dyeing. The mordanting bath was prepared by mixing 310 mg of alum and 60 mg of cream of tartar in 250 mL of distilled water, and the solution was heated to 40 °C for ten minutes. It was then left to cool to 25 °C before adding 1 g of raw purged wool into the bath. Following this, the temperature was slowly raised to 80 °C over a period of 40 min, and the wool was maintained in the bath at this temperature for 1 h under gentle magnetic stirring. After that time, the bath was cooled at room temperature for over 20 min and the yarn was squeezed out and left to dry. A dyeing bath was prepared by soaking 1 g of crumbled madder roots in 400 mL of distilled water. The mordanted wool was added to a lukewarm bath while the temperature was increased to 80 °C over a period of 40 min and kept for 1 h under gentle magnetic stirring. Subsequently, the wool was left to cool in the bath for 30 min, then squeezed and washed repeatedly until the water was completely clear. Finally, it was left to dry.
The mechanism for dyeing with indigo involves a redox reaction (vat dyeing) in which indigo, which is usually insoluble in water, is reduced to its soluble leuco-form (leuco-indigo) under alkaline conditions [4,50]. This allows the dye to penetrate into the fibers; the final color is reached and maintained through oxidation during the drying process, which makes indigo insoluble in water again and impossible to wash out [50]. A dyeing bath was hence prepared by mixing 0.6 g of minced indigo powder in 10 mL of distilled water previously warmed to 45 °C. Then, a solution of 0.6 g of sodium carbonate dissolved in 6 mL of water and a solution of 1.5 g of sodium dithionite dissolved in 50 mL of lukewarm water (40-50 °C) were added. The whole solution was then heated to 55 °C and left at this temperature for 20 min. After that time, 3 g of raw purged wool was soaked in the bath and left for 10 min. The yarn was then extracted from the bath, squeezed, and left to air dry in order to allow the indigo to oxidize again and reach the final color. Finally, the wool was rinsed with distilled water until the rinse water was clear and then left to dry.
Archaeological Sample: Textile Fragment from Tutankhamun's Tomb
After collecting and cataloging the most valuable remains from the tomb of Pharaoh Tutankhamun, the archaeologist Howard Carter swept the remaining materials from the surfaces of the tomb and deposited them in a wooden box. The box was closed in 1933 and stored in the Egyptian Museum in Cairo until 2017. One year later, it was moved to the Grand Egyptian Museum in Giza (Egypt), where its materials started to be subjected to scientific analyses.
Fragments presented here are linen-dyed textile pieces dating back to 1325 BC, the year of Pharaoh Tutankhamun's death. They are part of a wider collection of textile objects (more than 750) discovered in the tomb and constitute the sole surviving royal wardrobe from the pharaonic period. This large number of textiles offers a significant glimpse into the use of fabrics in ancient Egypt, particularly during the 18th Dynasty. The textile under examination is woven with the tapestry technique, a distinctive method of weaving that incorporates decorative designs using colored threads on a loom. In particular, the fragments presented here have some tinted blue and red areas with some striped sections with both colors in them. More information about the sample can be found in the Figures S1-S3 and in previous publications [51].
Gel Preparation and Micro-Extraction In Situ
The micro-extraction in situ was carried out with two different kinds of hydrogels: agar gel and Nanorestore Gel® High Water Retention (HWR) [33]. In previous studies, different concentrations between 1% and 12% had been tested, with the best results for dye extraction observed at concentrations below 4% [9]. Therefore, here we tested different concentrations between 2% and 4% (2%, 2.5%, 3%, 3.5%, 4% in water w/w) of agar gel on naturally dyed wool and paint-layer mock-ups before carrying out the extraction on the real case-study samples. In brief, 0.16, 0.20, 0.24, 0.28 and 0.32 g of agar powder were dissolved in 8 mL of water, respectively, in separate beakers properly chosen to obtain a suitable thickness (~2 mm). The agar solution was then heated in a bain-marie at 100 °C for 10 min and then cooled down for half an hour; afterwards, the gels were stored in the fridge overnight before use. The Nanorestore Gel® was used as supplied. For UV-Vis analysis, agar gels at the different concentrations were cut into squares with 2 cm sides, while, for SERS measurements, the extraction was performed using both Nanorestore Gel® and agar gel cut into small cylinders (cut in half for the extraction of the archaeological sample because of the fragments' dimensions, for a final diameter of about 3.5-4 mm) obtained with the back of a Pasteur pipette and soaked in the respective extracting solutions for 90 min [33].
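As a quick check of the gel recipes above, the agar masses follow directly from the target concentration expressed with respect to the mass of water (w/w). The sketch below simply reproduces the reported masses under that assumption.

```python
def agar_mass(conc_w_w, water_g):
    # Agar mass (g) for a concentration expressed relative to the water mass.
    return conc_w_w * water_g

water_g = 8.0  # 8 mL of water, density ~1 g/mL
for c in (0.02, 0.025, 0.03, 0.035, 0.04):
    print("%.1f%% -> %.2f g of agar" % (c * 100, agar_mass(c, water_g)))
# Reproduces the 0.16, 0.20, 0.24, 0.28 and 0.32 g quoted for the 2-4% gels.
```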
For the extraction of the red area, we prepared a solution of 1 mM NH3/Na2EDTA (1:1) with 4.7 mM NaCl, following the procedure described in [34]. Both agar and Nanorestore Gel® were applied on the surface of the red area after the gel had lost 5% of its weight, and the extraction was left to proceed for 3 h. On the blue area, instead, the dye was extracted using a reducing solution containing NaOH/Na2S2O4 (1:2) dissolved in water [32]. The gels were subsequently applied directly on the area of extraction at 100% of their weight, but the exceeding solution was first removed using absorbent paper, letting it absorb the solution for 5 min [30]. The limited application time with respect to the ammonia-based extraction was decided, in this case, after several tests on the indigo mock-up, in order to prevent the formation of a salty halo on the archaeological sample, probably due to the presence of sodium dithionite.
Preparation of Ag-Colloidal Pastes
Ag-colloidal pastes were prepared by adapting the procedure already described in [52,53]. In brief, Ag colloids were prepared following Leopold and Lendl's methodology [54]: two solutions, one containing 0.021 g of NH2OH·HCl in 5 mL of MilliQ water and the other 0.02 g of NaOH in 5 mL of MilliQ water, were added to a solution of 0.017 g of AgNO3 in 90 mL of MilliQ water under gentle magnetic stirring to induce the formation of the colloids. Then, 10 mL of Ag colloid was centrifuged for 20 min at 4500 rpm and the supernatant was removed. Afterwards, the colloidal pastes were applied onto the gel surface immediately after dye extraction using a Pasteur pipette, and the gels were left to dry for 12 h [30].
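For orientation, the Leopold-Lendl recipe quoted above corresponds to low-millimolar final concentrations once the three solutions are combined. The short sketch below estimates them from the stated masses and volumes; it is an order-of-magnitude check, not part of the published procedure.

```python
# Approximate final molarities in the ~100 mL hydroxylamine-reduced Ag colloid.
MOLAR_MASS = {"AgNO3": 169.87, "NH2OH-HCl": 69.49, "NaOH": 40.00}  # g/mol
masses_g = {"AgNO3": 0.017, "NH2OH-HCl": 0.021, "NaOH": 0.020}
total_volume_L = 0.090 + 0.005 + 0.005  # 90 mL + 5 mL + 5 mL

for species, m in masses_g.items():
    millimolar = m / MOLAR_MASS[species] / total_volume_L * 1e3
    print("%-9s ~ %.1f mM" % (species, millimolar))
# Gives roughly 1 mM Ag+, 3 mM hydroxylamine and 5 mM NaOH, i.e. the usual
# millimolar regime of Leopold-Lendl colloids.
```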
Spectroscopic Analysis
UV-Vis transmittance spectra were collected in the wavelength range between 190 and 800 nm using a Perkin Elmer Lambda 1050+ spectrophotometer in the ENEA C. R. Frascati laboratories. Measurements were led by housing the sample in a homemade support specifically designed for measurements on solid samples and by exposing both sides of the gel (i.e., the one in direct contact with the mockup where the extraction was performed and the opposite side) to the radiation. For each side, three measurements were acquired in three different positions. Spectra were then averaged and processed using Origin9 (©OriginLab, Northampton, MA, USA).
Raman-SERS data were collected using a Horiba Jobin-Yvon HR-Evolution Raman spectrometer (Kyoto, Japan) coupled with a microscope equipped with a series of interchangeable objectives. In this case, a 20× magnification was chosen to select the area of analysis, while a 100× objective was used to focus the laser beam on the Ag-colloid spots observed on the gels to obtain good-quality SERS spectra. Samples were excited using a He-Ne laser (λ = 633 nm), whose intensity varied between 0.15 and 0.75 mW. The acquisition time and the number of acquisitions were varied for each sample to optimize the signal-to-noise ratio; up to five spectra were acquired at different points of the gels and, to ensure reproducibility, the data were averaged and processed using Origin9 (©OriginLab). A fifth-order polynomial baseline was subtracted as background and the adjacent-averaging smoothing method was applied to reduce noise.
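The background treatment described above (fifth-order polynomial baseline subtraction followed by adjacent-averaging smoothing) can be sketched in a few lines. A single global polynomial fit is a crude baseline estimate compared with what Origin performs interactively, and the file names and window sizes below are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def preprocess(shift_cm1, counts, poly_order=5, smooth_window=7):
    # Crude fifth-order polynomial baseline subtraction followed by
    # adjacent-averaging (moving-average) smoothing.
    baseline = np.polyval(np.polyfit(shift_cm1, counts, poly_order), shift_cm1)
    return uniform_filter1d(counts - baseline, size=smooth_window)

# Hypothetical averaging of up to five spectra acquired at different points.
spectra = [np.loadtxt("spot%d.txt" % i, unpack=True) for i in range(1, 6)]
shift = spectra[0][0]
mean_counts = np.mean([s[1] for s in spectra], axis=0)
clean = preprocess(shift, mean_counts)
```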
Preliminary analysis was carried out on the archaeological fragment using a BELPhotonic optical microscope (Bengaluru, India) equipped with interchangeable objectives. After a general visual evaluation, fiber optic reflectance spectroscopy (FORS) measurements were performed to obtain a first non-invasive hypothesis about the chemical class to which the natural dyes belong and thus to choose the extraction procedure to follow. Spectra were acquired using an EXEMPLAR LS BW TECH spectrometer (Plainsboro Township, NJ, USA), operating in the range of 180-1100 nm with a variable resolution from 0.6 to 6.0 nm. Samples were illuminated with a 5 W BW TECH BPS101 halogen lamp with an emission spectrum between 350 and 2600 nm and a color temperature of 2800 K. Radiation was sent to (and collected from) the samples using THORLABS RP22 optical fiber bundles provided with a measuring head at 45° inclination, suited to avoid the collection of specularly reflected radiation. Five measurements were acquired for each area (red and blue), and the spectra were then averaged and processed using Origin9 (©OriginLab). The same instruments and methodology were used to evaluate chromatic variations in the mock-ups of yarns dyed with madder and of the painting layer constituted by madder lake in egg tempera, by exploiting the color analysis tool of the software BWSpec version 4.10. For the evaluation of the color invasiveness, further details about the experimental procedure are provided in [30].
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/gels9070514/s1, Figure S1: The historical samples from Tutankhamun's tomb analysed in the paper, constituted by archaeological textile fragments; Figure S2: A 10× magnification image of a red area on the analysed textile fragment; Figure S3: A 10× magnification image of a bluish area on the analysed textile fragment.
Funding: This study was undertaken as part of "AGLAIA-Application of Gels supported Liquid extraction for Analysis and Identification of dyes in Artist matrices", Sapienza project "Bandi di Ateneo 2021" (funder: Sapienza University of Rome; code: 000004_21_ARCiccola). This work was also supported by the Ministry of Foreign Affairs and International Cooperation (MAECI) in the form of a grant in favor of foreign citizens and Italian citizens living abroad (IRE).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available because no actual database is available and they are part of a current project.
Ethics, bioethics and deontology in naturology teaching in Brazil
The aim of this study was to discuss how the disciplines of ethics, deontology and bioethics are configured within the scope of Naturology teaching in Brazil from a Social Bioethics point of view. A qualitative-descriptive approach was used by means of documental and field investigations, based on two Brazilian universities. Six lecturers participated in this study via semi-structured interviews analyzing six teaching projects, three from each of the two courses. The data collected underwent content analysis, which resulted in four categories: 1) General aspects of the disciplines that include the subjects of ethics, bioethics and deontology; 2) Bioethics as a discipline in the course of Naturology: themes and references for analyses; 3) Theoretical approaches in ethics within Naturology; 4) The study of deontology in Naturology teaching. The need to increase teaching hours in the disciplines of ethics and bioethics was highlighted, as well as the need for constant reflection on professional practice within the social reality of Brazil.
Health care in situations of extreme prematurity presents an ethical dilemma. The current literature includes studies that discuss therapeutic decision-making in situations of preterm birth from cultural, religious, financial, technological and moral points of view. Among the health professionals (neonatologists, obstetricians, nurses, physiotherapists, speech therapists, etc.) involved in the direct care of premature infants, setting the gestational-age limit, selected on the basis of evidence, at which intensive care becomes mandatory, optional, or offers no guarantee of success is a controversial and sensitive topic.
Currently, it is considered that the therapeutic decision between comfort and palliative care, avoiding intensive care, must be made entirely clear to parents, who should be informed about the clinical conditions involving extreme prematurity and the available therapeutic options [1]. Several publications report clear and consistent results about the decisions of mothers and pregnant women arising from the counseling process in situations involving risk of death of the baby, and share specific recommendations for medical intervention in cases of extreme prematurity [1-3].
It has been noted that morbidity has gradually replaced mortality in preterm infants at the different limits of viability [4]. Although a significant decline has been observed in the mortality of live births in extreme prematurity, the rate of neurodevelopmental injury remains high [5-8]. Forms of evaluation of the quality of neonatal care are sparse, relating not only to health services but also to the neuropsychomotor functionality and quality of life of patients [9]. Issues related to different areas of knowledge should be taken into account, as they influence transdisciplinary decision-making in neonatal intensive care units [10].
Given the variety and complexity of the factors involved in decisions about therapeutic interventions in conditions of extreme prematurity, this study discusses ethical aspects of decision-making considering the limit of viability, extreme prematurity and neurodevelopmental outcomes.
Method
This study is a critical review of the literature, using the PubMed/Medline, SciELO and Lilacs databases. A search was conducted with the descriptors "premature", "ethics" and "newborn, extremely low birth weight", registered in the Health Sciences Descriptors (MeSH), and the corresponding English terms "premature infant", "ethics" and "extremely low birth weight infant" (according to MeSH), in addition to the terms "viability threshold", "psychomotor development" and "therapeutic decision", considering also the respective intersections.
We included all studies from the last ten years, without language restriction. Only works presented solely as abstracts were excluded. Since the subject is vast and controversial, the following aspects were considered in this review: limit of viability, extreme prematurity, neurodevelopment and ethical considerations on therapeutic decisions.
Limit of viability
Criteria for the viability of premature infants vary between countries (developed and developing) and also depend on the type of health center assisting the mother and the newborn. It is possible, however, to establish the range of possible viability between 22 and 26 weeks [11,12]. Available data indicate that survival is extremely unlikely for newborns of less than 23 weeks of gestation and weighing less than 500 g at birth, with a virtually zero chance of survival [13-15].
A study by Doyle et al. [16] showed that only 10% of newborns at 22 weeks of gestation survived hospitalization in intensive care units, and none remained alive for more than six months. Similar results were found by Markestad et al. [17], for whom no newborn with a gestational age of less than 23 weeks survived. The results of these studies raise the possibility that withholding therapeutic investment in these newborns would be an ethically correct attitude.
For babies born at more than 23 weeks of gestation and weighing more than 500 g, survival and outcomes are uncertain and difficult to predict. These children fall within the "gray zone", and therapeutic decision-making should be based on careful evaluation of data relating to prenatal care, gestational age, birth weight and clinical condition at birth [4]. It must be noted, however, that the definition of the "gray zone" differs between studies and is also taken as the period of birth between 24 and 25 weeks [18]. Parikh et al. [19], however, showed an overall probability of survival without profound dysfunction of 62% to 63% when newborns at 25 weeks of gestation are subjected to intensive care. Therefore, in many centers, intensive care has been mandatory for children born at 25 weeks of gestational age [3]. However, in an interview with doctors from developing countries, infants with a gestational age of up to 25 weeks and a birth weight of 800 g were considered non-viable [12].
Most clinicians and researchers agree that the concept of the "gray zone" is the most consistent way to define the limits of viability for much of the population of preterm infants [20]. Classifying patients by the "gray zone" takes several factors into account, such as, for example, the ability of clinicians to correctly establish the gestational age reported by the woman in labor before and immediately after delivery, and whether regular prenatal visits were held with the obstetrician and the family.
The neonatologist should participate in the decision-making process before delivery and attend the delivery of all newborns close to this limit of viability, since below this limit the newborn is too immature to have a reasonable chance of survival, whereas above it there is a greater chance of survival without severe dysfunction. In addition to gestational age, other factors that positively influence the prognosis of the premature infant should be considered before making a decision: high weight for a given gestational age, singleton pregnancy, female sex and exposure to antenatal corticosteroids [18,21].
Extreme prematurity and neurodevelopment
Despite progress in the quality of perinatal care, which is reflected in a decrease in mortality, there is still a high risk of severe neurological injury [22]. The chance of survival without dysfunctions and/or significant deficiencies decreases with gestational age, although studies of the prevalence and outcome of the neurocognitive and neuropsychomotor deficiencies associated with extreme prematurity are heterogeneous [11,18].
Premature infants should be evaluated according to the international classification of functioning, which describes behavioral, socio-emotional and adaptive skills. The main anomalies identified in survivors are: developmental delay due to non-progressive chronic encephalopathy; blindness; deafness; and changes in social and cognitive skills [23].
Unpleasant early experiences can modulate endocrine function and change the pattern of development of neuronal circuits, which interferes with sensory, motor and cognitive systems. There are reports in the literature that premature newborns exposed to the stressful environment of a prolonged neonatal intensive care unit stay have abnormal brain and sensory development, hearing loss and language problems [24-27].
It is known that extremely premature infants show changes in visual development markers, even without apparent brain damage on imaging [28]. Critical events and processes occurring in an important phase of acquisition of the human visual system (the 20th to 40th week of gestation), such as exposure to excessive light or oxygen therapy, can induce retinopathy of prematurity, ranging from milder, treatable degrees to more severe degrees culminating in blindness. In addition to these consequences, changes have been revealed in the central control of the visual system in areas such as the thalamus, occipital cortex, hippocampus, and parietal and frontal lobes. All these factors predispose to inappropriate visual development, which interferes with the programming and learning of visual, visuocognitive and visuomotor functions [29-31].
In the long term, the dysfunctions observed in extremely preterm infants differ according to age group. Functional limitations commonly found at preschool age involve motor skills, self-care and communication; at school age, delays in education are identified in more than 50% of survivors; in adolescence, vocational limitations are still present and there are reports of psychiatric disorders [32]. These activities require well-established and functional attention networks, but these children exhibit impairment in the early development of attention, which can last throughout childhood [33], affecting subsequent learning steps.
Ethical considerations on therapeutic decisions
With premature birth, the decision to initiate and maintain intensive or palliative care is very difficult and involves a number of complex ethical issues. The introduction of advanced care can result in the survival of newborns who are severely compromised from the psychomotor, cognitive and affective points of view; on the other hand, not resuscitating or not providing intensive care at birth implies letting the baby die and can suppress the possibility of life for a premature infant who would have developed normally [14]. The team's dilemma in deciding whether a premature infant is to be considered viable or not lies in the recognition and perception of the personhood of that newborn, and in the value assigned to life according to cultural and religious factors [34].
The increasing technological advances in health care and the need to seek humane intervention make it imperative to reflect on bioethical issues in the routine of neonatal units. The new philosophical concepts and the failure of the biologist model have led to a rethinking of care practices, seeking to emphasize a humanistic and existential vision of care [35]. It should be noted in this context that the objective of bioethics is to seek benefit and ensure the integrity of the person, taking as a guideline the basic principle of protection of human dignity [36].
In neonatology, the principle of autonomy is viewed with reservations. After all, who determines what is best or most appropriate for newborn care: the professionals or the parents? Given that autonomy is the right of a person to make their own decisions and that babies are not able to express autonomy, parents are legally authorized to give consent for a treatment to be performed [13]. In this sense, continuous discussions and dialogues between the health team and the legal representatives are required for decisions to be made about the procedures to be used in the treatment of neonates [36].
Maintaining vital functions artificially, without reasonable expectation of recovery, may prolong the suffering of patients and their families, which ends up undermining the very dignity of the patient. This does not always represent an optimal balance between risks and benefits and may imply a very low quality of life; it can also lead to the exclusion of needy and viable patients [37] owing to the lack of resources to care for all premature infants. What has been discussed more recently is directing therapy toward palliative care for premature infants below the gray zone, which includes relief of pain and suffering for the newborn and, for the family, psychological support and guidance for subsequent pregnancies, particularly in cases of congenital malformations [38].
Health professionals should also consider the entire network of social support that surviving extremely premature infants need. Therefore, to improve the prognosis of functional outcome in those with mild to moderate dysfunction, community participation and family support must be optimized [32], since early intervention programs appear to be positive in the short and medium term [39]. The team may also seek to ensure easy access of patients to specialized centers, with post-discharge follow-up programs implemented by an interdisciplinary team and focused on serving those with the greatest deficiencies.
Final considerations
Innovations in advanced life support, greater specialization of health professionals, the frequency and adequacy of prenatal tests, and progress in the early diagnosis and perinatal intervention of correctable conditions are developments that enable the survival of infants with extremely low gestational age and birth weight, pushing the boundaries of viability. However, given this possibility, it has been observed that a large number of extremely preterm infants display neurobehavioral problems such as reduced cognitive attainment and attention-deficit/hyperactivity disorder in childhood, as well as psychiatric disorders in adolescence, even in the absence of non-progressive encephalopathy, implying varying degrees of neurocognitive limitation and physical and functional dependence. This raises the question of the importance of valuing not only the survival of premature infants but also the maintenance of their quality of life, since they are more prone to consequences in the short, medium and long term. There is still difficulty in defining the borderline level of prematurity that should guide decision-making regarding the therapy to be adopted (palliative or intensive), especially when considering the resources available in the unit that will receive the extremely premature infant. It is important to maintain family involvement in decision-making, so that the possibility of a full life for a potentially viable premature infant is not extinguished.
RATS TRAPPING AT DIFFERENT TYPES OF LOCATION IN LIWA BOTANICAL GARDEN, LAMPUNG, INDONESIA
Motivation/Background: Rats are cosmopolitan animals, meaning they can live in all types of places worldwide, including highlands, lowlands, rice fields, forests, and settlements. A high rat population can cause losses in various fields of human life. The Liwa Botanical Garden is one of the areas developed for tourism, so the presence of rats is important to note. Method: This study aims to determine the success rate of rat trapping in the Liwa Botanical Garden at different types of trapping location, namely bamboo groves, houses, river banks, and gardens. The traps were set every day for 10 days. The observations included the number of individuals, the species, and the sex and size of the trapped animals. All the data obtained were analyzed descriptively. Results: The live trapping yielded 11 individuals of small mammals belonging to three species, namely Rattus exulans, Hylomys suillus and Suncus murinus. Among the four trapping locations, the bamboo groves yielded the most captures, and females were trapped more often than males.
Introduction
The Liwa Botanical Garden is a recreational and educational facility for the community, especially in Lampung Province. Many types of animals occur in this area, among them small mammals such as rats. The Liwa Botanical Garden does not yet have much data about the animal species within it, especially rats. Rats are cosmopolitan animals, meaning they can live in all kinds of places, such as highlands, lowlands, rice fields, forests, beaches and settlements [1], so it is suspected that rats will also be found in the Liwa Botanical Garden. A high rat population can cause losses in various fields of human life. Rat trapping using single live traps is a practical and environmentally safe method. Environmental factors can affect several aspects such as human welfare and disease; in some cases, the tendency for disease is influenced by poor physical and biological environmental conditions that allow certain organisms to multiply.
Human life is often associated with wild animals such as rats. A high rat population can cause losses in various fields of human life: in settlements, rats can damage residential buildings, offices, schools, and industry. In agriculture, rats often threaten various agricultural products and crop cultivation. From the aesthetic point of view, the presence of rats reflects dirty environmental conditions and indicates poor environmental hygiene [2].
In the health field, rats also have a large impact: they can be reservoirs of several pathogens that cause disease in humans. Leptospirosis can be transmitted through rat urine and saliva, and bubonic plague is caused by the bites of fleas carried by rats. Rats can also transmit several other diseases, including murine typhus, salmonellosis, rickettsial diseases, rabies, and trichinosis. A zoonotic disease is a disease transmitted by rats or other animals to humans and vice versa; such diseases can be fatal if proper treatment is not received [2].
The right environmental conditions allow rats to breed very quickly. Factors that support rat reproduction include the availability of food, water, and shelter. Places where rats are likely to be found include traditional markets, settlements, and agricultural areas. Rats have a good sense of touch and good hearing and are classified as intelligent animals because they have a well-developed brain, which means they can learn. The behavior of rats is determined by instinct and by external factors such as temperature, day length, rainfall, and previous experience [3]. Monitoring the rat population is one way of preventing diseases caused by rats. Some of the rats found in tropical residential environments are Rattus tanezumi (house rat), Rattus norvegicus (sewer rat), and Rattus rattus tanezumi Temminck (roof rat) [4].
Materials and Methods
The research design is descriptive: the data were tabulated and then described. The variables in this study include the locations where the traps were set (bamboo clumps, houses, riverbanks, and gardens). The bait used was roasted coconut. Traps were placed at the predetermined locations and set in stages from the closest location to the farthest. Traps were set in the afternoon at 16.00 WIB (because rats are active at night) and collected the next day between 06.00 and 08.00 WIB. A total of 8 traps were used for the 4 location points, and trapping was carried out 10 times over 10 days. The captured animals were labelled according to the day and place of capture. After a trap was collected and the rat anesthetized with chloroform, the trap was washed and brushed with soap and then dried in the sun before being used again in the afternoon. The trapped animals were placed in plastic bags containing cotton soaked in chloroform. The animals obtained were then identified from quantitative and qualitative characters using an identification key, the Integrated Mouse Pest reference [5].
Results and Discussions
The trapping location in the Liwa Botanical Garden, West Lampung, lies at an altitude of 800-900 m above sea level. The weather conditions at the capture location are summarized in Table 1. Trapping was conducted every day over 10 days; heavy rain occurred at night on two days, the 4th and the 8th, which resulted in no rats being trapped the following mornings, although many rats were trapped on the subsequent days. The mammalian body requires more energy when the ambient temperature is high, and weather conditions can affect foraging behavior. These conditions cause rats to become hungry quickly and need more food, so they are lured into traps where bait is available as a food source. Trap-shyness occurs when rats adapt to the traps, so that in subsequent trapping sessions they are difficult to catch at the same location for some time. Rats have a good sense of smell and hearing and a well-developed brain, so they can learn from experience [6].
Rats, however, have poor eyesight. They are nocturnal animals: at night they move guided by their whiskers, long hairs that are sensitive to touch. Rats are attracted to sweet smells, especially those of human food. They usually feed at night and avoid crowded or noisy places, such as those with machine noise, but favor food-storage areas. They forage for food in places such as rubbish bins, cupboards, sewers, and kitchens.
Rattus exulans has the same appearance as most rats in general and the ability to adapt easily to various environments, from shrubland to forest. Hylomys suillus and Suncus murinus belong to the same order but to different families: Hylomys suillus to the family Erinaceidae and Suncus murinus to the family Soricidae. A clear difference is the shorter tail of Hylomys suillus. This small mammal inhabits the hills and is rarely found in lowland habitats; Hylomys suillus prefers areas with thick vegetation cover and builds its nests in bushes. Its prey consists of soft-bodied invertebrates such as earthworms, grubs, and insects. Differences between rats and shrews include the shape of the snout, the number and arrangement of the teeth, the size of the tail, walking speed, the droppings (faeces) and the odor produced. The shrew has a pointed snout; a shorter tail, indicating that it is not a good climber; a slow walk; smaller, wet droppings; and a very sharp odor from the anal glands around its anus, which serves to defend it and drive away enemies. The arrangement of teeth in its mouth is also very different from that of rats. Their diets also differ: rats eat cereals, fruits, and vegetables, while shrews eat insects (animal protein), both living and dead. Shrews have, however, adapted to food other than insects, namely leftovers of human food, behaving as omnivorous animals. The shrew found here belongs to the species Suncus murinus (house shrew), so called because of its habitat around our homes.
The trap used was a single live trap, 35 cm long and 25 cm wide, with an entrance on one side only. The working principle of a single live trap is that the trap door closes when the bait is pulled by the rat, and the rat is trapped. The trap success was 34.375% (Table 3), with a total of 11 individuals trapped in 10 days. The trapping success in the area can be attributed to the presence of food, water, and bushes that serve as rat nesting sites. Another factor is that the trapping location is adjacent to a river, which is a water source for rats. Trapping success can also be influenced by the choice of bait (roasted coconut) and the position in which the trap is laid: roasted coconut gives off a scent that attracts rats into the trap, presumably because of its strong aroma. The success rate of trapping at the different locations is presented in Table 3. Rattus exulans and Hylomys suillus were more commonly found in the bamboo clump habitat, while Suncus murinus occurred around houses and garden yards. The habitat of each species differs, but this does not limit the distribution area of the species. Rats control territorial areas, and the observations indicate that the bamboo clump habitat is a territorial area for Rattus exulans. Rattus exulans was most commonly found in bamboo clumps; it is thought that there are rat nests beneath the bamboo groves, given the smell of urine and the presence of rat droppings, because these rats like to nest in ground covered with bushes. Suncus murinus is a very adaptable species that can live around human habitation; in this study it was found in houses (3 individuals) and in garden areas (2 individuals). The captures on the riverbanks were the fewest, owing to the moderate rainfall: during heavy rain, water often overflows onto the riverbanks, leaving those locations wet.
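A hedged note on the trap-success figure quoted above: trap success is conventionally computed as captures per trap-night. The reported 34.375% corresponds to 11 captures over 32 trap-nights, which differs from the nominal effort of 8 traps over 10 nights (13.75%); the effective effort assumed in the sketch below is therefore an inference, not a value stated in the paper.

```python
def trap_success_percent(captures, trap_nights):
    # Trap success (%) = captures per trap-night x 100.
    return 100.0 * captures / trap_nights

print(trap_success_percent(11, 8 * 10))  # 13.75 with the nominal effort
print(trap_success_percent(11, 32))      # 34.375, the figure reported above
```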
Observation of the sex of the captured animals is needed to determine their mobility in the area; physically, the sex of rats can easily be distinguished between males and females. In this study, more female rats were obtained than males. According to Priyambodo [2], female rats are more easily captured than males because, within their groups, the females are the individuals foraging for their young, while the males play a greater role in protecting the nest or territory. Female rats are more easily caught during the breeding and lactation season because they need large amounts of food [7].
Conclusions and Recommendations
Three species of small mammals were caught in the Liwa Botanical Garden, namely Rattus exulans, Suncus murinus, and Hylomys suillus. Most individuals were captured in the bamboo groves, and among the captured animals females were the most numerous.
First time evidence of pronounced plateaus right above the Coulomb barrier in 8 Li + 4 He fusion
We investigate unprecedented experimental information on the fusion reaction induced by the radioactive projectile 8 Li on a 4 He gas target, at center-of-mass energies between 0.6 and 5 MeV. The main issue is the tendency of the dimensionless fusion cross section σ_f/(πƛ²) to form well visible plateaus alternating with steep rises. This finding is likely to be the most genuine consequence of the discrete nature of the intervening angular momenta observed so far in fusion reactions right above the Coulomb barrier. A partial-wave analysis, exclusively based on a pure quantal penetration fusion model and sensitive to the interaction potential, identifies a remarkably low-height barrier.
lation models, rather indicate that the link between the occurrence of an oscillation and the height of the corresponding centrifugal barrier might not be so simple [6]. Thus, attempting a totally different strategic approach is mandatory.
In this context, the behavior of light fusing systems should be investigated. Indeed, in light fusing systems, channel couplings may typically play a less important role. Moreover, very little has been experimentally established so far about the fusion of nonidentical light nuclei, lighter than 12 C + 12 C.
In the domain of possible applications, this kind of investigation could be an interesting and timely issue. The fusion between light ions plays an important role in astrophysical sites such as, for instance, evolving massive stars, white dwarf Type Ia supernovae and surface explosions of neutron stars [7-11]. Light nuclei are utilized in accelerator-based inertial fusion for energy production purposes [12].
In this work, we obtain for the first time experimental information on the fusion of 8 Li + 4 He → 12 B. The choice of this colliding system, unusual because of the radioactive 8 Li (τ = 1.21 s), is motivated by the possibility of exploiting the great opportunity provided by the almost total absence of internal bound excitations in both colliding partners.
The fusion cross section σ f is established at energies E cm = 0.6-5 MeV. It is determined dividing experimentally available 11 B + n exit channel data by the corresponding experimentally available branching ratio data.
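Because the fusion cross section is obtained by dividing the 11 B + n cross section by the branching ratio, its uncertainty follows from combining the two relative errors. The sketch below illustrates the propagation under the assumption of independent uncertainties; the numbers are placeholders, not values from Refs. [18-21] or [24,25].

```python
import numpy as np

def fusion_cross_section(sigma_11Bn, err_11Bn, br, err_br):
    # sigma_fus = sigma(11B + n) / BR, relative errors combined in quadrature
    # under the assumption that the two uncertainties are independent.
    sigma_f = sigma_11Bn / br
    rel = np.sqrt((err_11Bn / sigma_11Bn) ** 2 + (err_br / br) ** 2)
    return sigma_f, sigma_f * rel

# Placeholder numbers (mb and dimensionless BR), not values from the data sets.
sf, dsf = fusion_cross_section(120.0, 15.0, 0.85, 0.05)
print("sigma_fus = %.0f +/- %.0f mb" % (sf, dsf))
```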
Regarding the 11 B + n exit channel, three independent concordant sets of 8 Li + 4 He → 11 B + n reaction cross section data are identified in [13,14]. We used these three sets of unbiased values to explore the role of exotic cluster structures in [15] and to formulate the recommended cross section in [16]. The resulting analytical expression was adopted in the astrophysical network of [17]. These three experiments can be grouped according to the detected species into 11 B measurements [18,19] and neutron measurements [20,21]. In the 11 B measurements [18,19], a 4π multiple sampling ionization chamber (MUSIC) was used as an active gas target. The energy loss along the particle trajectories was measured and the detector thickness was sufficient to span the excitation function with a single beam energy. In the neutron measurement [20,21], a zero-energy-threshold 4π thermalization counter was used in combination with a passive gas target. The counter provided comparable sensitivity to all possible 11 B + n branches. Moreover, its characteristic capture time response [22] allowed unambiguous reaction-neutron yield separation even in the presence of an intense background level.
Fig. 1. Left panels: unprecedented experimental information for 8 Li + 4 He, a) fusion cross section; c) dimensionless cross section. Right panels: experimental information for 12 C + 12 C, b) fusion cross section [26]; d) dimensionless cross section. In the left panels, different symbols correspond to the various 11 B + n exit channel data sources considered in this work for the evaluation of the fusion cross sections shown in panels a), c): open squares [18], open circles [19], filled circle [20], filled square [21]. In panel d) all ordinates are divided by 20. The solid curves are the results of the MINUIT data fits described in the text. The dashed curve in panel c) is the evaluated extrapolation according to the adopted formalism.
All three data sets provide the requested 8 Li + 4 He → 11 B + n cross section summed over all 11 B final states.
For the sake of completeness we mention the other, exclusive, data set obtained by detecting 11 B and neutron coincident signals [23]. Such a set shows smaller cross sections than those considered above [18][19][20][21], in the entire energy region. Since there was a significant threshold on the neutron energy, some of the 11 B final states could completely escape detection, as discussed in detail in [13]. Therefore, such exclusive measurement cannot provide the cross section summed over all 11 B final states at each explored E cm . For this reason, the data set [23] is not suited for the specific aim of this work.
Concerning the experimental branching ratios of the dissociation of the 12 B* states into 11 B + n, the data given in [24,25] are considered. These branching ratios were obtained via the 9 Be(α, p) 12 B reaction.
We remark the sawtooth-like behavior of σ_f versus E_cm. However, with the error bars into play, a reliable oscillation analysis of the type in [3], based on the second derivative of the fusion cross section, is impossible. An alternative, more practicable, approach is necessary. We start by observing that the general trend of this excitation function for 8 Li + 4 He is considerably different from that of 12 C + 12 C.
In Figs. 1c-d we show these same fusion data from a different perspective [4]. We consider the dimensionless cross section expressed in units of πλ̄². For 8 Li + 4 He, σ_f/(πλ̄²) rises by as much as an order of magnitude with E_cm increasing from 0.6 to 2 MeV. However, the rise is not totally monotonic. In fact, two nearly horizontal plateaus clearly alternate with steep rises. The first plateau, between 1 and 1.8 MeV, corresponds to the prominent structure visible in Fig. 1a.
Previously known oscillatory structures, like those of the 12 C + 12 C cross section in Fig. 1b, plausibly arise from entrance channel effects, likely the progressive addition of higher partial waves with increasing energy, rather than from properties of the compound nuclear system [3][4][5]. In order to probe such an interpretation in the low-energy scenario of the 8 Li + 4 He data, we start by considering a formalism of ion-ion fusion that explicitly takes into account the angular momentum of the relative motion l. We assume a sharp angular momentum cut-off that allows all of the flux crossing the interaction barrier to fuse for values of l ≤ l_max.
We also consider l_max a monotonically increasing discrete function of E_cm (see e.g. [28]). Accordingly, the dimensionless cross section is calculated as

σ_f/(πλ̄²) = Σ_{l=0,Δl}^{l_max} (2l + 1) T_l,   (1)

where the step is Δl = 1 for 8 Li + 4 He fusion, whereas Δl = 2 for the identical even-even colliding-ion case, for which all odd partial-wave amplitudes vanish. In Eq. (1), T_l is the energy-dependent penetration probability of the l-th partial wave through the interaction barrier. In particular, when all the T_l contributing at a given energy tend to unity, more or less pronounced plateaus may appear at the altitudes

σ_f/(πλ̄²) = (l_max + 1)²   for Δl = 1,   (2)

σ_f/(πλ̄²) = (l_max + 1)(l_max + 2)/2   for Δl = 2.   (3)
For 12 C + 12 C, the altitudes (3) are drawn as horizontal lines in Fig. 1d. Only the lower plateau at 19 < E_cm < 25 MeV is located at the correct altitude, given by the large angular momentum l_max = 12. For 8 Li + 4 He, the altitudes (2) are drawn as horizontal lines in Fig. 1c. Here, by contrast, apparent quantitative agreement is achieved for both plateaus in the dimensionless cross section. Moreover, these altitudes correspond to much lower angular momenta than those in 12 C + 12 C. The first, lower plateau is constituted by 25% l = 0 and 75% l = 1 contributions, and is located between well-separated p-wave and d-wave barrier penetration rises. The other plateau is purely the saturation of all penetrabilities T_l at unity for energies well above the highest (l_max = 2) barrier.
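As a quick arithmetic illustration (ours, not part of the original analysis), the saturated plateau altitudes implied by the sharp cut-off sum in Eq. (1) can be tabulated directly; the helper name below is arbitrary.

```python
# Illustrative sketch (not from the paper): saturated plateau altitudes of
# sigma_f/(pi*lambdabar^2) = sum over l of (2l+1)*T_l when all T_l -> 1.

def plateau_altitude(l_max, dl=1):
    """Sum of (2l + 1) over l = 0, dl, 2*dl, ..., l_max with all T_l = 1."""
    return sum(2 * l + 1 for l in range(0, l_max + 1, dl))

# 8 Li + 4 He (dl = 1): plateaus quoted at l_max = 1 and l_max = 2
print([plateau_altitude(lm, 1) for lm in (0, 1, 2)])   # [1, 4, 9]
# 12 C + 12 C (dl = 2, odd partial waves vanish): plateau quoted at l_max = 12
print(plateau_altitude(12, 2))                          # 91
```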
It should be noted that the above quantitative characterization of the involved angular momenta is performed regardless of the barrier shape. Now, we address our analysis more deeply to the barrier shape, for both 8 Li + 4 He and 12 C + 12 C. To this purpose, we implement the above analytical model so that barrier parameter values and related uncertainties are simultaneously determined from the data using standard error propagation procedures. Each quantal barrier penetration coefficient T_l is approximated as in [29] by that of an inverted parabolic potential with l-dependent height B_l and intrinsic energy width ε_l,

T_l(E_cm) = {1 + exp[(B_l − E_cm)/ε_l]}⁻¹,  with  B_l = B_0 + l(l + 1)ℏ²/(2I),   (4)

where B_0 is the barrier formed in s-wave collisions by the interplay between the repulsive Coulomb and the attractive nuclear interactions; this is hereafter referred to as the Coulomb barrier. Iℏ⁻² is the moment of inertia at the radial distance R_CB of the Coulomb barrier height B_0. Last, we set ε_0 = ... = ε_{l_max} = ε. By inserting (4) into (1), data fits are performed using MINUIT, treating B_0, ε and I as free parameters. For 12 C + 12 C, a good fit is obtained in the whole data range by attenuating the transmission T_{l=14} by a factor 0 < A_14 < 1, treated as a fourth free parameter, and assuming that all higher partial waves cease to fuse above 20 MeV. The value A_14 = 0.35 ± 0.05 is obtained. For 8 Li + 4 He, we consider the partial waves up to l_max = 2, though the onset of a largely attenuated f-wave around 4 MeV cannot be excluded to within the error bars. Below that energy, the semiclassical estimates of the grazing angular momentum l_g coincide with those of l_max. In the range above about 4 MeV, l_g = 3. The resulting curves are shown in Figs. 1a-d. As an example, for 8 Li + 4 He, the agreement in Fig. 1a between the calculated sawtooth and the data behavior is remarkably good. In the sawtooth, each rise is determined by a given steep barrier penetration and each falloff is proportional to 1/E_cm. The values of the barrier parameters resulting from these data fits are listed in Table 1. We remark that for 12 C + 12 C the barrier parameter values are in agreement with those used in [5]. For the first time, here we give the uncertainties resulting from the data fit.

Table 1. Barrier height, intrinsic width and moment of inertia parameter sets determined for both reactions using the MINUIT data fitting procedure. The radial distances R_CB at the Coulomb barrier height, each determined assuming I = μR_CB², μ being the reduced mass, are listed in the last column.
8 Li + 4 He:   B_0 = 0.34 ± 0.05 MeV,   ε = 0.08 ± 0.02 MeV,   Iℏ⁻² = 1.72 ± 0.07 MeV⁻¹,   R_CB = 5.2 ± 0.2 fm
12 C + 12 C:   B_0 = 5.70 ± 0.12 MeV,   ε = 0.65 ± 0.14 MeV,   Iℏ⁻² = 5.88 ± 0.10 MeV⁻¹,   R_CB = 6.4 ± 0.11 fm
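A minimal numerical sketch of the fitted model is given below. It assumes the reconstructed Hill-Wheeler-type form of Eq. (4) (logistic penetrability with intrinsic width ε) together with the Table 1 central values for 8 Li + 4 He; function and variable names are ours, and the script is only meant to reproduce the qualitative plateau-rise pattern, not the published MINUIT fit.

```python
import numpy as np

# Illustrative sketch: sharp cut-off sum (1) with the reconstructed penetrability (4)
# and the Table 1 central values for 8 Li + 4 He. All energies in MeV.
B0, EPS, I_HBAR2 = 0.34, 0.08, 1.72     # B_0 (MeV), eps (MeV), I/hbar^2 (MeV^-1)

def barrier_height(l):
    """B_l = B_0 + l(l+1) hbar^2 / (2 I)."""
    return B0 + l * (l + 1) / (2.0 * I_HBAR2)

def penetrability(E_cm, l, eps=EPS):
    """Logistic (Hill-Wheeler-type) transmission through the l-th barrier."""
    return 1.0 / (1.0 + np.exp((barrier_height(l) - E_cm) / eps))

def dimensionless_sigma(E_cm, l_max=2, dl=1):
    """sigma_f/(pi lambdabar^2) = sum_l (2l+1) T_l(E_cm), sharp cut-off at l_max."""
    return sum((2 * l + 1) * penetrability(E_cm, l) for l in range(0, l_max + 1, dl))

for e in (0.6, 1.4, 2.5, 4.5):   # sample energies on and between the plateaus
    print(f"E_cm = {e:.1f} MeV -> sigma_f/(pi lambdabar^2) = {dimensionless_sigma(e):.2f}")
```

With these central values the script returns roughly 1, 4, 9 and 9 at the four sample energies, i.e. the plateau pattern discussed in the text.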
We now comment on the properties of the resulting 8 Li + 4 He potential.
The value of the radial distance R CB in Table 1 is quite consistent with summed projectile and target radii and with plausible values of the surface diffuseness parameter a of the exponential nuclear potential (see e.g. [5]). In particular a = 0.9 or 0.6 fm, depending on the nucleon radius r 0 = 1.2 or 1.3 fm, respectively. The same holds for 12 C + 12 C.
Similarly for the intrinsic width ε. In fact, σ_f/(πλ̄²) in (1) is proportional to the energy-weighted cross section σ_f E_cm. The first derivative of (1), using (4), is the weighted sum of l-dependent barrier distributions, each centered at B_l and having width very close to 2ε_l. Since the width of a typical barrier distribution is proportional to Z_1 Z_2 [5], ε is expected to increase by a factor 6 passing from 8 Li + 4 He to 12 C + 12 C, quite consistent with the value of 8.1 ± 2.7 resulting from Table 1.
Instead, the value of the height B_0 captures attention. In this regard, we recall that the experimental data in Fig. 1c seem to indicate the occurrence of a third plateau at the altitude σ_f/(πλ̄²) = 1, as expected from (2) for l_max = 0. Namely, the transmission T_0 ∼ 1 already at E_cm as low as 600 keV. This indicates that, actually, B_0 < 600 keV and that the available excitation function in Fig. 1a develops entirely above the Coulomb barrier. The fitting procedure does nothing but respond to this trend in the available data. We also recall that the barrier height is located at the larger solution of d/dR [V_N(R) + V_C(R)] = 0, so that no apparent simple scaling of B_0 with projectile and target charges can intuitively be envisaged. Consequently, the large reduction factor in Table 1, ∼16, relative to 12 C + 12 C might well be plausible, although in absolute terms the low value of B_0 determined here for 8 Li + 4 He represents a novelty that deserves further insight.
A relevant aspect in this matter is the evaluation of the experimental sensitivity to the fusion barrier and its dependence on the projectile-target system. In this context, the existence of plateaus in σ_f/(πλ̄²) is certainly of great importance. Once a well-pronounced extended plateau is formed, it acts as a pedestal for the well-separated sub-barrier rise of the next entering higher l-wave. If a plateau-rise-plateau alternation is observed, as in Figs. 1c-d, that data portion is primarily and extremely sensitive to the barrier height B_l and width ε_l. In fact, the barrier height B_l is directly identified, to a good approximation, by the energy at which the steeply rising data intercept the half-distance between the two horizontal plateaus; the barrier width is linked, to a good approximation, to the slope of the rise. The adoption of a fitting procedure of the type used above also serves to better determine the parameter uncertainties caused by the experimental errors. To quantify both the plateau resolving power and the sensitivity to the barrier shape, we adopt the dimensionless ratio of successive barrier separation to barrier width, ΔB/2ε, as an indicator. We then evaluate its values using the parameters in Table 1. In the 12 C + 12 C (Δl = 2) reaction case in Fig. 1d eight partial waves are involved. The effects of lower l-waves are barely outlined, when not completely obscured. A barrier-sensing plateau-rise-plateau scenario clearly manifests only at l_max = 12, at center-of-mass energies as high as 15-20 MeV above the Coulomb barrier B_0. The values ΔB/2ε = (2l_max + 3)ℏ²/(2Iε), increasing linearly with increasing l_max, are reported in Fig. 1d for five of the involved partial waves from l_max = 4 to l_max = 12. There, we observe that the transition from inflections to horizontal plateaus, i.e. from low to high sensitivity, takes place for 3 < ΔB/2ε < 3.5. The plateau at l_max = 12 is characterized by ΔB/2ε = 3.5, right at the transitional sensitivity. For 12 C + 12 C, it is also instructive that the fitting function extended to energies below 7 MeV, and to l_max < 4 (dashed segment in Fig. 1b), does not show any type of visible modulation. Hence, one can state with reasonable confidence that, among the structures observed at energies below 7 MeV (see e.g. [30] and references therein), none of those observed right above the fusion barrier, between 5.7 MeV and 7 MeV, should be identified as an oscillation with l_max < 4. In other words, in 12 C + 12 C fusion data at E_cm < 7 MeV, the structures right above the barrier should not have the same physical origin as those above 7 MeV in Fig. 1b.
Passing to the 8 Li + 4 He case in Fig. 1c, we stress once again that only three partial waves, l = 0, l = 1 and l = 2, contribute and that pronounced rise-plateau alternations are clearly observed already at the lower energies. Fig. 1c shows that, in this reaction case, the indicator values ΔB/2ε = (l_max + 1)ℏ²/(4Iε) amount to at least ∼3.4; namely, both the plateau resolution and the sensitivity to the barrier shape are significantly large, thanks to the small values of both I and ε. Consequently, in the 8 Li + 4 He case, it is precisely the data portion closest to the Coulomb barrier B_0 that imposes the most stringent constraints on the entire excitation-function data fit.
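The quoted indicator values can be checked with a few lines of arithmetic; the expressions below are the reconstructed ones discussed above and should be read as an illustration rather than as the authors' exact definition.

```python
# Illustrative check of the reconstructed indicator DeltaB/2eps with Table 1 values.
def indicator_dl1(l_max, I_hbar2, eps):
    """(l_max + 1) hbar^2 / (4 I eps) -- 8 Li + 4 He case (dl = 1)."""
    return (l_max + 1) / (4.0 * I_hbar2 * eps)

def indicator_dl2(l_max, I_hbar2, eps):
    """(2 l_max + 3) hbar^2 / (2 I eps) -- 12 C + 12 C case (dl = 2)."""
    return (2 * l_max + 3) / (2.0 * I_hbar2 * eps)

print(round(indicator_dl1(1, 1.72, 0.08), 2))    # ~3.6, first 8 Li + 4 He plateau
print(round(indicator_dl2(12, 5.88, 0.65), 2))   # ~3.5, 12 C + 12 C plateau at l_max = 12
```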
To summarize, in this work we obtain unprecedented experimental information on the fusion reaction induced by the radioactive projectile 8 Li on a 4 He gas target, at center-of-mass energies E_cm between 0.6 and 5 MeV, right above the Coulomb barrier. The main issue is the tendency of σ_f/(πλ̄²) to form two clearly visible plateaus alternating with steep rises. This is the observed fact. In the first instance, we can interpret this observation with a fusion model that solely includes the action of the relative-motion angular momentum. The plateau altitudes are found to correspond to the values given by (2) for l_max = 1 and l_max = 2, regardless of the fusion barrier shape. If this is the proper description, the clear jump between the undistorted plateau altitudes in the 8 Li + 4 He data in Fig. 1c is likely to be the most genuine consequence of the discrete nature of the intervening angular momenta observed so far in fusion reactions. Concerning the barrier shape determination, for 12 C + 12 C it was not possible to reproduce the fusion data in Fig. 1b well by coupled-channels calculations. Consequently, a pure single-barrier penetration fusion model is adopted here for both reactions. The Hill-Wheeler barrier penetration formula [29] is used. Most of the barrier parameters determined here for 8 Li + 4 He fusion are plausibly consistent with those for 12 C + 12 C fusion. The possible exception is B_0: for 8 Li + 4 He, the present work identifies a remarkably low Coulomb barrier that can possibly be linked to the presence of the loosely bound 8 Li. This result likely reflects the enhanced sensitivity to the fusion barrier interior achievable with very light systems. It appears clear that the Coulomb barrier shape found here for 8 Li + 4 He, if confirmed by further investigations, may provide additional constraints to the nuclear interaction potential in terms of tail slope and/or pocket depth. Further, parallel investigations should also be aimed at evaluating the role of possible nuclear-structure-induced resonances, an alternative process that does not seem quantitatively supported by the presently available experimental evidence (see e.g. [32]).
In conclusion, this work has shown for the first time the existence of pronounced plateaus right above the Coulomb barrier in 8 Li + 4 He fusion. These plateaus allow enhanced experimental sensitivity to the fusion barrier, given that the most barrier-sensing lowest partial waves are well separated. We expect that the present results for 8 Li + 4 He will promote further investigations of the fusion reaction mechanism between very light ions at energies much below the interaction barrier. For the moment, we believe that understanding the plateau origin in the cross section above the barrier will almost certainly be useful to corroborate the extrapolation to the important astrophysical region below the Coulomb barrier.
| 4,692.8 | 2016-02-10T00:00:00.000 | [
"Physics"
] |
Simulation study of electron cloud induced instabilities and emittance growth for the CERN Large Hadron Collider proton beam
The electron cloud may cause transverse single-bunch instabilities of proton beams such as those in the Large Hadron Collider (LHC) and the CERN Super Proton Synchrotron (SPS). We simulate these instabilities and the consequent emittance growth with the code HEADTAIL, which models the turn-by-turn interaction between the cloud and the beam. Recently some new features were added to the code, in particular, electrically conducting boundary conditions at the chamber wall, transverse feedback, and variable beta functions. The sensitivity to several numerical parameters has been studied by varying the number of interaction points between the bunch and the cloud, the phase advance between them, and the number of macroparticles used to represent the protons and the electrons. We present simulation results for both LHC at injection and SPS with LHC-type beam, for different electron-cloud density levels, chromaticities, and bunch intensities. Two regimes with qualitatively different emittance growth are observed: above the threshold of the transverse mode-coupling (TMC) type of instability there is a rapid blowup of the beam, while below this threshold a slow, long-term emittance growth remains. The rise time of the TMC instability caused by the electron cloud is compared with results obtained using an equivalent broadband resonator impedance model, demonstrating reasonable agreement.
I. INTRODUCTION
Instabilities, beam loss, and beam-size blowup due to electron cloud have been observed in several machines, such as the CERN Proton Synchrotron (PS), the Super Proton Synchrotron (SPS), as well as the KEKB and PEP-2 B-factories [1]. Therefore, they represent a concern for the future Large Hadron Collider (LHC) at CERN. In this paper we discuss simulations of transverse single-bunch instabilities using the code HEADTAIL [2,3].
During the passage of a bunch, the electrons are accumulated around the beam center (pinch effect) and, if the head of the bunch is slightly offset, the rest of the bunch will experience a net ''wake'' force.The instability is similar to the regular transverse mode-coupling instability (TMCI) and induces both a centroid and a head-tail motion, with a substantial emittance growth.
HEADTAIL is a particle-in-cell (PIC) code which models the interaction of a single bunch with an electron cloud on successive turns, with the simplification that the cloud is localized at a finite number of positions along the circumference, instead of being continuously spread over the entire ring.Recently, electric conducting boundary conditions have been implemented in the code [4].They replaced the previous open-space boundaries.A description of the new boundaries will be given in Sec.II.The sensitivity of the code to numerical parameters, in particular, to the number and location of the interaction points (IPs) between the cloud and the bunch will be discussed in Sec.III.This and the following four sections show simulation results for LHC at injection.In Sec.IV we investigate the TMC-type instability and the emittance growth above the threshold as a function of the electron-cloud density, the bunch intensity and the chromaticity.Below the threshold of the strong head-tail instability there is evidence of a regime with slow emittance growth.Some preliminary studies of this phenomenon will be presented in Sec.V. We also discuss first results from an attempt to model the real lattice (Sec.VI).Specifically we have modified the code in order to represent the beta function varying around the ring, instead of considering an average value.In Sec.VII the possibility to model the electroncloud effect with a broadband impedance [5] is discussed and the results compared with the PIC simulations.Then simulations for the SPS ring with LHC-type beam are presented (Sec.VIII).Here we assume the electron cloud to be concentrated in the dipole field regions.Finally, Sec.IX summarizes the results and draws an outline for future work and development.
II. HEADTAIL CODE AND THE NEW CONDUCTING BOUNDARY CONDITIONS
The code HEADTAIL for the single-bunch instability has been described in Refs.[6,7].The simulation models the turn-by-turn interaction of a single bunch with an electron cloud, which is assumed to be produced by the preceding bunches and is usually taken to be initially uniform.Its density is inferred from parallel simulations with the ECLOUD code [8].For the purpose of the simulation, the electron cloud is assumed to be concentrated at one or more interaction points around the ring and a fresh uniform electron distribution is created at each IP prior to each bunch passage.Both the protons and the electrons are represented by macroparticles.The bunch is also divided into longitudinal slices which interact with the cloud on successive time steps.The principle of the simulation is schematically illustrated in Fig. 1.
The transverse electric interaction between the electrons and the protons of each slice (and vice versa) is computed by a 2D PIC module taken from a beam-beam code [9].In between, the beam is transported around the ring, where the betatron motion in both planes is modeled by a rotation matrix.The synchrotron motion is included, so that the particles slowly mix longitudinally.In particular, they can move from one bunch slice to another during several turns.The effect of chromaticity is also modeled, via an additional rotation matrix.In the code there is the further possibility to include space charge and the effect of a broadband resonator.Feedbacks and various nonlinear fields are optionally available as well.
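The betatron transport between interaction points amounts to a rotation in normalized phase space. The following sketch is our illustration of that idea (constant beta function and zero alpha, i.e. a simplification of what any real lattice model would use); it is not the HEADTAIL implementation.

```python
import numpy as np

def betatron_map(u, up, beta, mu):
    """Linear betatron transport over a phase advance mu (rad) for a constant
    beta function (alpha = 0): rotate the normalized coordinates
    (u/sqrt(beta), up*sqrt(beta)) by mu and transform back."""
    c, s = np.cos(mu), np.sin(mu)
    un, upn = u / np.sqrt(beta), up * np.sqrt(beta)
    un_new = c * un + s * upn
    upn_new = -s * un + c * upn
    return un_new * np.sqrt(beta), upn_new / np.sqrt(beta)

# example: transport a proton between two IPs separated by a quarter betatron period
x_new, xp_new = betatron_map(u=1e-3, up=0.0, beta=66.0, mu=np.pi / 2)
```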
Recently new boundary conditions of a perfectly conducting chamber wall have been implemented, as an alternative to the previously applied open-space conditions.With conducting boundaries, the electric potential is assumed to be zero on the wall.A fast-Fourier-transform Poisson solver for a rectangular pipe, based on sine transformations, is used.The electric field can significantly differ from the open-space case especially in the proximity of the boundary wall.
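As an illustration of the type of solver described here (not the actual HEADTAIL implementation), a rectangular-pipe Poisson solver with perfectly conducting walls can be built from type-I discrete sine transforms, which enforce a vanishing potential on the boundary; the grid sizes and the test charge below are arbitrary.

```python
import numpy as np
from scipy.fft import dstn, idstn

def solve_poisson_dirichlet(rho, dx, dy, eps0=8.8541878128e-12):
    """Solve nabla^2 phi = -rho/eps0 on the interior of a rectangular, perfectly
    conducting pipe (phi = 0 on the wall) using type-I discrete sine transforms."""
    ny, nx = rho.shape
    rho_hat = dstn(rho, type=1)
    j = np.arange(1, ny + 1)[:, None]
    i = np.arange(1, nx + 1)[None, :]
    # eigenvalues of the 5-point Laplacian with homogeneous Dirichlet boundaries
    lam = (2.0 * (np.cos(np.pi * j / (ny + 1)) - 1.0) / dy**2
           + 2.0 * (np.cos(np.pi * i / (nx + 1)) - 1.0) / dx**2)
    phi_hat = -rho_hat / (eps0 * lam)          # lam < 0 everywhere, so no zero mode
    return idstn(phi_hat, type=1)

# tiny usage example: a Gaussian "beam" in a 4 cm x 2 cm rectangular chamber
nx, ny = 127, 63
x = np.linspace(-0.02, 0.02, nx)
y = np.linspace(-0.01, 0.01, ny)
X, Y = np.meshgrid(x, y)
rho = np.exp(-(X**2 + Y**2) / (2 * 0.002**2))  # arbitrary charge-density units
phi = solve_poisson_dirichlet(rho, x[1] - x[0], y[1] - y[0])
```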
Theoretical ratios of the horizontal electric field computed for open-space and for conducting boundaries, for a beam centered in a rectangular chamber of half-width a and half-height b, at the wall (x = a, y = 0), can be expressed through an analytical formula obtained by summing the contributions from the source and image charges [4]. This theoretical ratio is very satisfactorily reproduced by our Poisson solver.
The difference between the electric field in open space and in a rectangular box becomes more critical as we move closer to the box wall in both directions. Figure 2 shows the vertical component of the electric field along the line y = b/2 for a square chamber and for a rectangular chamber with a = 2b.
III. SENSITIVITY TO NUMERICAL PARAMETERS
For the purpose of checking the sensitivity to numerical parameters we have performed a series of simulations for the LHC at injection, assuming a typical electron-cloud density of 6 × 10^11 m^-3 [10]. Throughout this paper, if not stated otherwise, we use the bunch and numerical parameters listed in Tables I and II. In Fig. 3 we show the vertical emittance as a function of time for different numbers of electron macroparticles. A number of 10^5 macroelectrons at every IP was chosen in the following. If the cloud is initialized with a transversely uniform distribution inside the chamber, this value corresponds to about 6.1 macroparticles per cell (the number of grid points over 10σ is 128). The number of macroprotons is taken to be 3 × 10^5 and the bunch is divided into 70 slices, in order to resolve the transverse wakefield. Since during the passage of a bunch the electrons perform about 4 oscillations [11], this number of slices translates into about 17 time steps per oscillation period.
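For reference, the resolution bookkeeping quoted above can be reproduced directly (our illustration; the 10σ grid extent and the four electron oscillations per bunch passage are taken from the text):

```python
# Illustrative bookkeeping of the numerical-resolution estimates quoted in the text.
n_macro_electrons = 1e5
n_grid = 128                     # grid points spanning 10 sigma in each plane
print(f"macro-electrons per grid cell: {n_macro_electrons / n_grid**2:.1f}")  # ~6.1

n_slices = 70
oscillations_per_passage = 4     # electron oscillations during one bunch passage
print(f"time steps per oscillation: {n_slices / oscillations_per_passage:.0f}")  # ~17
```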
A key parameter which needs to be set carefully in the simulations is the number of beam-cloud interaction points per turn. The sensitivity to this parameter was first pointed out by Ohmi [12,13]. Figure 4 shows the horizontal and vertical emittance as a function of time for different numbers of IPs per turn. In the vertical plane there is clear evidence of a different behavior for a small number of IPs. Looking at the snapshot of the vertical bunch shape (Fig. 5) in the case of only 1 point of interaction per turn, the emittance growth appears incoherent and occurs almost uniformly along the entire bunch, while in the case of 5 IPs the growth is due to the strong head-tail instability. Hence, for the set of parameters listed in Table I, a number of IPs larger than 5 is required to capture the physics of the instability in the case of LHC at injection energy; in our simulations we have chosen nkick = 10.
The location of the points of interaction along the ring and the phase advance between them is also important.In the code, the IPs are normally equally spaced, their position is fixed along the ring and does not change from turn to turn.Simulations were also performed for a random phase advance between IPs, where only the total number of IPs over the circumference is given, but their location and phase advance along the ring are chosen randomly on every turn.Figure 6 shows that in this case for a small number of IPs the growth is larger than for a constant phase advance and that the convergence is very poor, but the change is monotonic and there is no evidence of two different types of behavior.The larger growth is probably due to additional noise introduced by the random choice of phase advance leading to a permanent small mismatch.
We have also tried to consider IPs whose positions were chosen randomly (instead of uniform spacing) but stayed constant from turn to turn, or to concentrate the IPs over one betatron wavelength only [14], but in neither case did we observe an improvement of convergence for smaller numbers of IPs. Moreover, the emittance-growth level was similar to the one obtained with equally spaced IPs. The effect of the distribution of rf cavities and regions with nonzero momentum compaction between the points of interaction has also been studied as a possible source of discrepancies for different numbers of IPs [15], but it was found to be insignificant, at least in the simulations for the LHC.
IV. INSTABILITY THRESHOLD AND EMITTANCE GROWTH IN LHC AT INJECTION
Using the parameters listed in Table I, we studied the effect of chromaticity, electron-cloud density, and bunch intensity on the development of the instability, again for the LHC at injection.
We first performed a scan of the electron-cloud density level in the chamber, over a range from 3 × 10^12 m^-3 down to 2 × 10^11 m^-3. Figure 7 shows that for 3 × 10^11 m^-3 only a very small slow emittance growth remains. This value is roughly consistent with the threshold predicted by the analytical two-particle model for the TMCI-type instability [16], which amounts to 4.3 × 10^11 m^-3 for these parameters, and it is similar to threshold values estimated for the KEK B-factory [16-18] and for the CERN SPS [5]. For the LHC at injection, the same threshold density of 3 × 10^11 m^-3 was first determined from simulations in [19]. Figure 8 displays the emittance-growth rise time as a function of the electron-cloud density on a logarithmic scale. This figure suggests that, although the emittance growth decreases for smaller electron-cloud densities, it never fully vanishes. Emittance growth on a longer time scale is therefore a concern even for moderate or low electron densities.
In Fig. 9 a scan of the bunch intensity, for an electron cloud of 6 × 10^11 m^-3 and low chromaticity, shows that at half the nominal bunch intensity we are below the threshold of the strong head-tail instability, and, at least for the first 50 ms, the emittance growth is strongly reduced. Assuming an electron-cloud density of 6 × 10^11 m^-3 at nominal bunch intensity, increasing the chromaticity helps to reduce the emittance growth (Fig. 10), until for very high values of Q' ≈ 30 we enter a second regime, without a rapid instability, but with a persistent slow emittance growth. The threshold value of chromaticity for which the strong head-tail instability is suppressed depends on the electron-cloud density. The relation found in our simulations (see Fig. 11) is almost linear, as predicted by analytical computations for the TMC instability due to a broadband-resonator model [20]. As indicated by Fig. 11, the second regime with slow emittance growth extends down to low electron densities and it can be found, below the TMCI threshold, even for zero chromaticity.
V. "SLOW EMITTANCE GROWTH" REGIME
A simulation campaign is ongoing to understand whether the persistent slow emittance growth which we found below the threshold is real or an artifact of the code. We note that similar growth has been observed in some measurements at the KEK B-factory [21]. Preliminary results show that increasing the number of macroprotons (NPR) helps to reduce this linear growth. However, the growth does not seem to approach zero in the limit of very large NPR, as illustrated in Fig. 12, which shows the dependence on 1/NPR.
Changing the longitudinal bunch extent in the simulations from 2σ_z to 4σ_z, together with the number of slices, seems to modify the behavior. Figure 13 shows that considering 4σ_z of a Gaussian bunch while keeping the number of macroprotons constant causes some artificial instability, probably due to the small number of macroprotons in the tails, which may introduce a large numerical noise.
Finally, simulations have also been done for electron-cloud densities below the threshold of the fast (strong head-tail) instability, at different values of chromaticity (see Fig. 14). The rise time in this slow-growth regime depends on the electron-cloud density via a power law, T ∝ ρ_e^-a with a ≈ 1.6-1.7, with only a weak dependence on the chromaticity.
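A power-law exponent of this kind can be extracted from simulated rise times with a simple log-log least-squares fit; the sketch below is our illustration, takes user-supplied arrays and makes no assumption about the actual simulation output format.

```python
import numpy as np

def fit_power_law(density, rise_time):
    """Least-squares fit of T = A * rho**(-a) on a log-log scale.
    `density` and `rise_time` are 1-D arrays of simulated points."""
    x = np.log(np.asarray(density, dtype=float))
    y = np.log(np.asarray(rise_time, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return -slope, np.exp(intercept)        # exponent a and prefactor A
```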
VI. BETA FUNCTION
In the original HEADTAIL code and in the simulations presented so far, the beta function was assumed to be constant over the whole ring and equal to the average value. Recent modifications allow us to consider different values of β at the different IPs, thus crudely modeling the effect of the variation of the beta function around the ring (pictures of the LHC optics can be found in Ref. [10]). Figure 15 shows the effect in the simulations, comparing different cases, both above the fast instability threshold (ρ_e = 6 × 10^11 m^-3) and below it (ρ_e = 3 × 10^11 m^-3).
Above the threshold, for 3 IPs the beta-function variation affects the results, especially when it is large, but, as already shown in the previous paragraphs, with a small number of IPs the simulations are not accurate. Using 10 IPs, with different patterns, the curves differ only slightly. In particular, it seems that changing the value of β at different locations makes the curves smoother.
In the case below the density threshold (3 × 10^11 m^-3, right picture), the growth rate is modified for high numbers of IPs and is larger when we consider the variation of the beta functions along the ring. Just like the diffusion introduced by space charge in intense beams because of beta modulation [22], the increase of the growth rate can in this specific case be a physical effect, and its convergence for different sets of numerical parameters therefore needs further investigation. A collaboration between CERN and the University of Southern California (USC) plans to investigate the effect of the real lattice with the code QUICKPIC [23,24], which thanks to its parallel capabilities allows the use of more than 2000 IPs per turn.
VII. BROADBAND IMPEDANCE MODEL FOR THE ELECTRON CLOUD
The electron-cloud transverse wakefield responsible for single-bunch instabilities can be approximated by that of a broadband resonator [5], where α = ω_r/(2Q) and ω̄ = sqrt(ω_r² − α²). The longitudinal coordinate z, assuming negative values, refers to the position of the test charge with respect to the driving charge. Q is the quality factor, λ_c the cloud line density, c the light velocity, k a coupling parameter, taken to be equal to 2, and H_enh an enhancement factor due to the cloud size and the pinching of the electrons during the bunch passage. The quality factor Q has a finite value in the range 3-6, arising from the nonlinear force acting on the electrons and the resulting frequency spread. The longitudinal beam profile and the variation of the beam size around the ring (if varying beta functions are considered) both introduce additional spreads of the electron oscillation frequency, which would further lower the effective quality factor.
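For orientation, a standard transverse broadband-resonator dipole wake built from the α and ω̄ defined above can be coded in a few lines. This sketch uses the conventional resonator-wake form and the Sec. VII parameters; it is not the paper's exact Eq. (5), which additionally contains the cloud line density, the coupling parameter k and the enhancement factor H_enh, and sign conventions differ between codes.

```python
import numpy as np

def resonator_dipole_wake(z, omega_r, Q, Rs):
    """Conventional transverse broadband-resonator dipole wake for a trailing
    position z < 0 (metres), with alpha = omega_r/(2Q) and
    omega_bar = sqrt(omega_r^2 - alpha^2); Rs is the transverse shunt impedance."""
    c = 299792458.0
    alpha = omega_r / (2.0 * Q)
    omega_bar = np.sqrt(omega_r**2 - alpha**2)
    z = np.asarray(z, dtype=float)
    w = (c * Rs * omega_r / (Q * omega_bar)) * np.exp(alpha * z / c) * np.sin(omega_bar * z / c)
    return np.where(z < 0, w, 0.0)          # causal: no wake ahead of the source

# parameters used for the LHC-at-injection comparison in Sec. VII
omega_r = 2 * np.pi * 1.199e9               # rad/s
wake = resonator_dipole_wake(np.linspace(-0.5, 0.0, 200), omega_r, Q=3.0, Rs=115.3e6)
```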
For the present study, aiming to understand the instabilities induced in the LHC at injection energy, we have chosen Q = 3 and H_enh = 9. These values were obtained by fitting the analytical formula (5) to the wakefield from a dedicated HEADTAIL simulation for ρ_e = 6 × 10^11 m^-3. Figure 16 shows the simulated wakefield and analytical curves for different combinations of Q and H_enh values, with a constant product Q·H_enh.
In the HEADTAIL code we can model the effect of a broadband resonator [3]. Given the resonant frequency and the shunt impedance, we have directly simulated the emittance growth using one of the fitted analytical resonator wakefields of Fig. 16, instead of performing an electron-cloud PIC simulation. Figure 17 shows that, contrary to what is expected from a threshold calculation in the coasting-beam approximation [see Eq. (12) in [25]], it is not only the product Q·H_enh which matters for the development of the instability: the two variables Q and H_enh enter independently.
Figures 18 and 19 compare results of electron-cloud PIC simulations for various electron densities with those obtained using the corresponding broadband-resonator model. The assumed correspondence, from Fig. 16, is as follows. PIC simulations for an electron cloud of 6 × 10^11 m^-3 in the LHC at injection are compared with a resonator characterized by ω_r = 2π × 1.199 GHz, Q = 3, and Z_t = 115.3 MΩ/m, with Z_t/Q [Ω/m] = (c/ω_r) · (cR_s/Q) [m^-2] · Z_0/(4π) and Z_0 = 377 Ω. For other densities, the resonator shunt impedance is varied in proportion to the change in electron density, whereas the resonator frequency, Q value and enhancement factor stay constant.
Concluding this comparison, the resonator model gives initial growth rates similar to the full electron-cloud simulation over a large range of electron-cloud densities. At large amplitudes the finite size of the field grid and the nonlinear force between beam and electrons slow down the growth.
VIII. HEADTAIL SIMULATION FOR SPS
Simulations have also been performed for an LHC-type beam in SPS.The parameters of this beam are listed in Table III.The aim of these simulations is benchmarking the code against observations.
In the SPS, the electron cloud is mainly concentrated inside the bending magnets [26]. For this reason, in the simulations we have assumed the presence of a constant vertical magnetic field, which causes the electron motion to be frozen in the horizontal plane (strong-field approximation). A feedback system has also been implemented in the code. It damps the transverse position of the bunch centroid, according to a specified gain. The damping time is presently assumed to be about 10 turns. The noise of the feedback system is also taken into account in the model and is about 10^-5 m. The damper is found to have little effect on the single-bunch emittance growth. In fact, its main operational purpose is to cure coupled-bunch instabilities, and its 20-MHz bandwidth is too low to damp head-tail motion inside a bunch.
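A minimal sketch of such a bunch-by-bunch resistive damper, with a gain corresponding to a ~10-turn damping time and a pickup noise of ~10^-5 m, is shown below; it is our illustration and not the feedback module implemented in HEADTAIL.

```python
import numpy as np

def damper_kick(y_centroid, beta_y, gain, noise_rms=1e-5, rng=None):
    """Angle kick applied once per turn to the bunch centroid.
    For a simple resistive feedback, gain ~ 2/damping_turns; the centroid
    reading is corrupted by pickup noise with rms `noise_rms` (metres)."""
    rng = rng or np.random.default_rng()
    y_measured = y_centroid + rng.normal(0.0, noise_rms)
    return -gain * y_measured / beta_y       # rad

# example: ~10-turn damping time
kick = damper_kick(y_centroid=2e-4, beta_y=40.0, gain=2.0 / 10)
```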
The scan in chromaticity for an electron-cloud density of 10^12 m^-3 (Fig. 20) reveals that increasing the chromaticity only helps up to a certain value, Q' ≈ 13. For larger values the emittance growth increases again. Including space-charge effects in the simulation drastically changes the results (Fig. 21). Now chromaticity is much more efficient in damping the instability; see also [3]. Figure 22 shows that for a lower electron-cloud density (6 × 10^11 m^-3), even without space charge the chromaticity significantly reduces the instability growth rate.
IX. CONCLUSIONS AND OUTLOOK
The code HEADTAIL with new conducting boundary conditions has been used to simulate single-bunch instabilities and emittance growth due to an electron cloud in the LHC and SPS rings.The sensitivity to several numerical parameters has been explored.In particular we discussed the choice of the number and position of the interaction points between the bunch and the electron cloud, which in the code are concentrated at a finite number of locations around the ring.
Simulations for LHC at injection show that chromaticity is a cure for the strong head-tail instability, but that it may not be efficient for suppressing a slow, long-term emittance growth which persists below the threshold and seems to scale with the electron density via a power law. Likely, both numerical noise and real physics contribute to this slow emittance growth. By increasing the number of macroprotons, the growth rate is reduced, but it does not approach zero in the limit of an infinite number of macroprotons. Changing the longitudinal extent of the bunch in the simulations also affects the results, but this dependence is attributed to the extremely small number of macroparticles representing the tails of the Gaussian bunch, which can be a source of large numerical noise. At chromaticity Q' = 2 in the LHC we stay below the threshold of the TMCI-type instability up to half the nominal bunch intensity for an electron density of 6 × 10^11 m^-3. With nominal beam parameters, however, an electron density of 3 × 10^11 m^-3 or less must be achieved to stay below the threshold. The dependence on chromaticity has also been studied for the SPS, where we assume the electron cloud to be concentrated in the dipole field regions. For the SPS, the space-charge effect changes the beam response to the electron cloud and renders higher chromaticity a more efficient cure.
The broadband-resonator model for the electron cloud, and the PIC simulation seem to agree at the onset of the instability for a wide range of electron densities; later the nonlinear effects, which are not taken into account in the resonator model, and the finite size of the cloud and of the grid, used for the PIC computation, become important.This leads to a different behavior at large amplitudes, which is more optimistic in the case of the real field calculation with the PIC module.
Including a variation of the β-function smooths the evolution above the TMC threshold and changes the growth rate below the threshold, which may indicate that the long-term emittance growth seen in the latter case has a physical origin.
In the near-term future we are planning to compare SPS simulation results with ongoing experiments.Studying the behavior of the beam below the threshold of the strong head-tail instability, both via numerical and analytical approaches, is in our plans.Finally, the ongoing collaboration with USC will aim to benchmark HEADTAIL with the continuous plasma code QUICKPIC and, in more detail, investigate the effect of the real lattice on the simulation results.
FIG. 1. (Color) Schematic of the physical model for the cloud-beam interaction in the HEADTAIL code.
FIG. 2. (Color) Vertical electric field as a function of the horizontal position along the axis y = b/2 of a square (top) and of a rectangular chamber with a = 2b (bottom), computed with and without conducting boundary conditions, for a beam centered in the chamber, with a transverse rms size σ = a/10.
FIG. 4. (Color) Horizontal (top) and vertical (bottom) emittance as a function of time with different numbers of IPs, for the LHC at injection and an electron density of 6 × 10^11 m^-3.
FIG. 7. (Color) Vertical emittance as a function of time for different values of the electron-cloud density, at Q' = 2.
FIG. 6. (Color) Emittance growth for a turn-by-turn random phase advance between IPs; horizontal (top) and vertical (bottom) emittance as a function of time with different numbers of IPs, for LHC at injection and an electron density of 6 × 10^11 m^-3.
FIG. 14. (Color) Double logarithmic plot of the vertical emittance-growth rate as a function of the cloud density, for different chromaticities.
FIG. 17. (Color) Vertical emittance vs time in LHC at injection for ρ_e = 6 × 10^11 m^-3 from a HEADTAIL PIC simulation (red line) and from a HEADTAIL simulation with the broadband-resonator model. For the latter, different combinations of H_enh and Q are plotted, with a constant product Q·H_enh.
FIG. 16. (Color) Wakefield induced by an electron cloud (ρ_e = 6 × 10^11 m^-3) in LHC at injection. The red curve is from a HEADTAIL simulation, while the other lines represent the analytical expression (5) of the wakefield.
FIG. 21. (Color) Vertical emittance as a function of time for the SPS, comparing different values of the vertical chromaticity Q' at ρ_e = 10^12 m^-3; space charge is included here.
FIG. 19. (Color) Rise times of the emittance growth as a function of the electron-cloud density obtained by the HEADTAIL code with the electron-cloud PIC simulation and for an equivalent broadband resonator (H_enh = 9 and Q = 3); T1 denotes the time during which the emittance increases from 7.82 × 10^-9 m (initial value) to 8 × 10^-9 m, ΔT the interval in which the emittance rises from 8 × 10^-9 m to 8.2 × 10^-9 m (+2.5%).
FIG. 20. (Color) Vertical emittance as a function of time for the SPS, comparing different values of the vertical chromaticity Q' at ρ_e = 10^12 m^-3, without space charge.
TABLE II. Computational parameters.
TABLE I. Parameters used for LHC at injection.
FIG. 3. (Color) Emittance as a function of time for different numbers of macroelectrons.
TABLE III. Parameters used in the simulations for LHC-type beam in SPS at injection.
| 5,959.8 | 2005-12-21T00:00:00.000 | [
"Physics"
] |
Initial Bacterial Adhesion on Different Yttria-Stabilized Tetragonal Zirconia Implant Surfaces in Vitro
Bacterial adhesion to implant biomaterials constitutes a virulence factor leading to biofilm formation, infection and treatment failure. The aim of this study was to examine the initial bacterial adhesion on different implant materials in vitro. Four implant biomaterials were incubated with Enterococcus faecalis, Staphylococcus aureus and Candida albicans for 2 h: a 3 mol% yttria-stabilized tetragonal zirconia polycrystal surface (B1a), B1a with a zirconium oxide (ZrO2) coating (B2a), B1a with a zirconia-based composite coating (B1b) and B1a with zirconia-based composite and ZrO2 coatings (B2b). Bovine enamel slabs (BES) served as control. The adherent microorganisms were quantified and visualized using scanning electron microscopy (SEM), DAPI and live/dead staining. The lowest bacterial count of E. faecalis was detected on BES and the highest on B1a. The fewest vital C. albicans strains (42.22%) were detected on B2a surfaces, while most E. faecalis and S. aureus strains (approximately 80%) were vital overall. Compared to BES, the coated and uncoated zirconia substrata exhibited no anti-adhesive properties. Further improvement of the material surface characteristics is essential.
Introduction
Due to its considerably higher strength with corresponding fracture resistance, yttria-stabilized tetragonal zirconia (Y-TZP) has been recognized as a favorable ceramic biomaterial the last two decades [1]. Y-TZP has been widely used in various biomedical applications, especially for the construction of femoral heads for total hip replacements and dental implants [2,3]. Nevertheless, Y-TZP and 3 mol% yttria-stabilized tetragonal zirconia (3Y-TZP) failed to remain stable over time. More specifically, in 2001 a total of 400 femoral heads had to be removed shortly after their implantation as a result of accelerated ageing in two implant batches fabricated with a new furnace technology [4]. Their predilection to low temperature degradation (LTD) in the presence of water, known as ageing, constitutes their Achilles heel [5,6]. A mechanism named tetragonal to monoclinic transformation (t-m transformation), responsible for converting the metastable tetragonal grains to monoclinic, seems to cause this disadvantageous feature [7]. Nowadays, the development of innovative structural ceramic biomaterials that can withstand high pressure over time remains a challenge in the field of implant research.
Oral biofilms are dynamic microbial structures that can adhere to various surfaces in the oral cavity [8,9]. These specialized bacterial communities can tolerate the harsh environmental conditions associated with gingival tissues, tooth and implant surfaces [10][11][12]. The discovery and comprehension of the biofilm formation mechanisms have been in the focus of the biofilm research community over the past few years [13]. Oral biofilms have been shown to play a causative role in biofilm-mediated diseases such as caries, periodontitis and peri-implantitis. This has triggered interest in the development of material surfaces with antibacterial properties [14,15]. As far as dental materials are concerned, possible discrepancies in the initial rates of bacterial colonization on different implant surfaces need to be investigated [16]. Material surface characteristics such as average surface roughness (Sa), root mean square surface roughness (Sq), ten-point average roughness (Sz), skewness (Ssk), summit density (Sds), developed area ratio (Sdr) and texture aspect ratio (Str) can influence the initial microbial colonization rate as well as the strength and structural properties of biofilms [17]. Moreover, the interplay of oral biofilms with other mechanical, physical and chemical factors relating to material substrata, microorganisms and adsorbed macromolecules complicates their methodological examination [18].
The aim of this study was to investigate the initial bacterial adhesion on four novel implant material surfaces in vitro utilizing bovine enamel slabs (BES) as a control. The implant material surfaces utilized for the examination of the initial bacterial colonization in vitro were: 3 mol% yttria-stabilized tetragonal zirconia polycrystal surface (B1a), 3 mol% yttria-stabilized tetragonal zirconia polycrystal surface with zirconium oxide (ZrO 2 ) coating (B2a), 3 mol% yttria-stabilized tetragonal zirconia polycrystal surface with zirconia-based composite coating (B1b) and 3 mol% yttria-stabilized tetragonal zirconia polycrystal surface with zirconia-based composite and ZrO 2 coatings (B2b). The state-of-the-art implant material Y-TZP served as a benchmark. The composite coating is a material designed to withstand ageing. Therefore, it is a potential material for future implants and as such, its behavior with respect to microbial adhesion is of great interest. As the employed plasma-deposited ZrO 2 coating is very hydrophilic, a good ingrowth of coated implants is expected. A possible influence of the ZrO 2 -coating on microbial adhesion was investigated in the present work.
Due to the structural similarity to human enamel, BES were considered to be appropriate for this purpose [19]. Staining with 4',6-diamidino-2-phenylindole (DAPI) aided the quantitative analysis of all adherent microorganisms, while their vitality was determined using live/dead staining. Additionally, the initial adherent microorganisms as well as the implant material surfaces were visualized by scanning electron microscopy (SEM). Table 1 and Figure 1 summarize the five different implant material surfaces (diameter, 15 mm; surface area, 176.62 mm 2 ; height, 1.5 mm) used for the examination of the initial bacterial colonization in vitro: 3 mol% yttria-stabilized tetragonal zirconia polycrystal surface (B1a), 3 mol% yttria-stabilized tetragonal zirconia polycrystal surface with zirconium oxide (ZrO 2 ) coating (B2a), 3 mol% yttria-stabilized tetragonal zirconia polycrystal surface with zirconia-based composite coating (B1b) and 3 mol% yttria-stabilized tetragonal zirconia polycrystal surface with zirconia-based composite and ZrO 2 coatings (B2b). All experimental ceramic materials were fabricated at Swerea IVF (Mölndal, Sweden) and at NTTF Coatings GmbH (Rheinbreitbach, Germany). After the acquirement of the implant material surfaces, control surfaces were prepared as well. For the obtainment of the control specimens, the buccal surfaces of 140 bovine incisors of 2 year old cattle were removed and modified into cylindrical enamel slabs (diameter, 5 mm; surface area, 19.63 mm 2 ; height, 1.5 mm). The IDEXX Laboratories bovine spongiform encephalopathy (BSE) diagnostic kit (Ludwigsburg, Germany) confirmed their BSE-free status. Wet grinding with abrasive paper (400 to 4000 grit) was then used to polish the enamel surfaces of all BES samples. The protocol for disinfection of the enamel plates involved dislodgement of the superficial smear layer by ultrasonication in NaOCl (3%) for 3 min, air drying, and ultrasonication in 70% ethanol for another 3 min. The disinfected samples were ultrasonicated twice again in double-distilled water for 10 min and, subsequently kept in distilled water for 24 h to hydrate. The visualization of the implant surface morphology was performed by SEM. Moreover, various amplitude-, hybrid-and spatial parameters concerning the tested biomaterial surfaces were measured by interferometry (Table 2). In Table 2, the surface roughness parameters Sa, Sq and Sz increase with each coating step. After coating no polishing or any other further treatment was conducted. The total bacterial count on the investigated implant materials after 2 h is presented by boxplots in Figure 2. The same figure also shows representative DAPI-stained microorganisms, which have adhered to the material surfaces after 2 h of incubation.
Results and Discussion
The differences between the materials were statistically significant. The lowest bacterial count of E. faecalis was detected on BES (p = 0.004) in comparison to the highest bacterial count on B1a. Three implant material surfaces, B1a (p = 0.047), B2a (p = 0.013) and B1b (p = 0.001) presented significantly higher bacterial count of S. aureus against the control BES. B2b showed no significant differences to the control (p = 0.065). However, significantly higher C. albicans count was detected on control (BES) surfaces compared to the groups B1a (p ≤ 0.001), B1b (p ≤ 0.001) and B2b (p ≤ 0.001).
The surfaces belonging to group B2a harbored more C. albicans than B1a (p ≤ 0.001) and B1b (p ≤ 0.001). Different implant materials with different surface characteristics were tested in this study (Table 2). The results indicated that the values of the amplitude, hybrid and spatial parameters seem to play a major role with regard to initial adhesion. The smoother BES, characterized by a lower surface roughness (Sa = 0.041 µm), behaved contrary to the rougher and micro-textured implant materials (Sa ≥ 0.2 µm) as far as the initial bacterial colonization is concerned. That is the reason why BES harbored fewer bacteria when compared to the tested material surfaces B1a, B1b, B2a and B2b. The fact that more C. albicans was recovered from B2a can presumably be attributed to anti-adhesive properties of the zirconia-based composite coatings in B1a and B1b against fungi. Numerous studies have already highlighted the influence of surface topography on initial bacterial adhesion [20,21]. Al-Ahmad et al. [22] reported a positive correlation between high average surface roughness (Sa) and early microbial colonization after examining titanium and ceramic implants in vivo. This was confirmed in our study. According to the estimated bacterial counts, the majority of microorganisms accumulated on the zirconia surfaces with high surface roughness (Sa ≥ 0.2 µm), while fewer microorganisms were detected on the smoothest surfaces of BES. The amount of attached microorganisms can be attributed to various parameters such as surface characteristics, bacterial concentration, exposure time and temperature [23,24]. High surface roughness provides bacteria with broad adhesion areas with irregularities to fit into, avoiding shear forces and desorption thanks to maximum bacteria-surface contact during the initial adhesion phase [25,26]. In the presence of shear forces, especially in situ, surface characteristics such as amplitude parameters have a great impact on early bacterial adhesion [27]. Average surface roughness (Sa) lower than 0.2 µm did not influence the microbial adhesion, whereas Sa values higher than 0.2 µm were proposed to allow for higher initial bacterial adhesion in previous reports [28]. Sa corresponds to the root mean square surface roughness (Sq), the deviation from the mean line/plane [29]; for an ideal sinusoidal profile, Ra and Rq differ only by a constant factor [30]. The skewness (Ssk), representing the asymmetry of the height distribution, differs between high, narrow hills with flat valleys (positive) and large, flat hills with narrow valleys (negative) [30].
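For readers who wish to reproduce such amplitude parameters from interferometry or profilometry data, the basic definitions can be evaluated directly from a height map; the sketch below is a generic illustration and not the software used in this study.

```python
import numpy as np

def roughness_parameters(z):
    """Areal roughness parameters from a 2-D height map z (heights in micrometres).
    Sa: arithmetic mean deviation; Sq: root-mean-square deviation; Ssk: skewness."""
    h = z - z.mean()                  # deviations from the mean plane
    Sa = np.mean(np.abs(h))
    Sq = np.sqrt(np.mean(h**2))
    Ssk = np.mean(h**3) / Sq**3
    return Sa, Sq, Ssk
```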
The spatial roughness variations are defined by spatial parameters such as summit density (Sds) as well as developed area ratio (Sdr). Sds expresses the number of summits per unit area, while Sdr shows the percentage of additional area contributed by the surface texture [31]. Sdr correlates to the hydrophobicity of the surface. Okada et al. [32] reported that the inhibition of biofilm development resulted from hydrophilization of the substratum. The texture aspect ratio (Str) equals less than 1 and represents the relation of the shortest to the longest repeating surface pattern. All the aforementioned parameters describe the substratum contact area provided for microorganisms and play important roles in the analysis of initial microbial adhesion.
The selected implant materials as well as the tested bacterial strains seem to influence the bacterial accumulation on different substrata [33]. Rimondini et al. [34] compared the initial colonization of specific oral bacteria on Y-TZP in vitro and noted that Streptococcus mutans adhered significantly more to rectified Y-TZP than to titanium (Ti) disks. Streptococcus sanguis was mostly detected on Ti samples, whereas Actinomyces spp. and Porphyromonas gingivalis showed no differences.
To further prevent bacterial adhesion antimicrobial coatings on the implant surfaces were introduced [35,36]. Zirconium oxide-based composites in terms of implant surface modification have a low surface free energy and were suggested for coating zirconia to increase osteosynthesis by reducing bioinactivity [1]. The latter results from deficient intercommunication between Y-TZP substrata and cells [37]. Zirconia coatings have been reported to stimulate surface bioactivity due to their beneficial mechanical properties [38]. In the present report, no statistically significant differences were found between coated and uncoated specimens concerning E. faecalis and S. aureus. This is in accordance with a previous report, in which the antibacterial properties of silver-polysaccharide coatings on porous fiber-reinforced composites for bone implants were tested [39]. Only dense and porous samples differed statistically concerning the initial adhesion of S. aureus independent from the presence of coatings. Figure 3 illustrates in box-plots and representative images the amount of vital adherent bacteria detected with the aid of live/dead staining on the investigated implant materials after 2 h. The vitality differences between the surfaces were statistically significant only for C. albicans. The smallest amount of vital C. albicans strains (42.22%) was detected on B2a surfaces in comparison to the group BES (98.56%, p ≤ 0.001) with the maximum percentage of living microorganisms. The surfaces belonging to group B1b sheltered significantly more vital C. albicans (78.21%) than B2a (p ≤ 0.001) and significantly less than BES (p ≤ 0.001). The investigated implant material surfaces showed no significant discrepancies regarding the detection of E. faecalis and S. aureus. The majority of E. faecalis strains were found alive on all implant substrata: 80.56% (BES), 84.39% (B1a), 95.01% (B2a), 86.88% (B1b) and 92.57% (B2b). The vitality percentages of S. aureus were similar: 75.44% (BES), 78.64% (B1a), 74.42% (B2a), 78.90% (B1b) and 87.87% (B2b). All the aforementioned percentages represent the mean values of the recovered vital microorganisms.
It is already known that microbial adhesion to surfaces is a result of specific molecular interactions in the presence of physicochemical adhesion forces such as electrostatic and Lifshitz-Van der Waals forces [40]. Recently, it has been shown that microorganisms react variously to different adhesion forces originating from distinct surface materials [41]. Positive-charged substrata, for instance, quaternary ammonium-coated surfaces, develop strong attachment forces capable of eliminating negative-charged microorganisms (stained red by live/dead) due to stress deactivation [42]. The lethal positive charge surface density varies among the numerous bacterial species [43]. However, other substrata including ceramic materials interact differently with bacteria whose response progresses with increasing adhesion forces. These bacteria remain alive (stained green by live/dead) and are capable of producing extracellular polymeric substances (EPS) over time [8]. That explains why most E. faecalis, S. aureus and C. albicans strains were found vital (green fluorescence) when stained with Baclight live/dead dye in our report. The fact that less vital C. albicans strains were recovered from B2a when compared to B1b could be related to the lethal adhesion forces of zirconium oxide (ZrO 2 ) coatings against fungi.
The microbial adherence to all examined implant materials was visualized by SEM. Figure 4 demonstrates representative implant materials with adherent microorganisms.
The chains, pair-wise or single cocci of E. faecalis and S. aureus on all substrata are shown in Figure 4A-J. A significant increase in bacterial colonization could be shown for the implant groups B1a, B2a, B1b and B2b against the control BES surfaces. SEM images of candidal initial adhesion ( Figure 4K-O) exhibited significantly more oval-shaped cells with characteristic cyto-plasmic evaginations on the BES than on the implant samples. The purpose of this report was to examine initial bacterial adhesion to different implant material surfaces in vitro. BES were suitable as a representative control group for the study of bacterial colonization on the implant surfaces due to their common physicochemical properties with human enamel [44]. In addition, homogenous BES can be easily gained, sterilized and are consistent in quality. Since the tested materials can be used for the fabrication of femoral heads for total hip replacements, oral and spinal implants, the microorganisms chosen can reside in the oral cavity and in other parts of the human body as well. The selection of the microorganisms was based on the fact that they represent characteristic organisms of persistent infections. The presence of microorganisms has been shown to be the main reason for implant failure [45]. Enterococcus faecalis, for example, is a Gram-positive strain, isolated from root-filled teeth with persistent secondary periapical lesions and endocarditis patients. S. aureus is an important nosocomial bacterium which can contaminate biomaterials. Moreover, S. aureus was isolated from periimplantitis patients and was shown to adhere to titanium implant materials [46]. C. albicans has been shown to constitute part of the oral biofilm. Furthermore, the present report aimed at testing the antimicrobial properties of the implant materials in terms of initial bacterial adhesion after 2 h. According to similar studies, this time period is considered satisfactory to confirm an anti-adhesive behavior of the surfaces. Their capability to allow for biofilm formation could be investigated in future studies. To date, no study has examined the initial adherence of the selected microorganisms to these novel implant biomaterials. For studying initial bacterial adhesion in vitro, visualization methods such as SEM and DAPI staining were used. SEM was also appropriate for the observation of the biomaterial surfaces ( Figure 1). Finally, live/dead staining added up some useful information about the vitality of the adherent microorganisms.
Total Bacterial Count (DAPI)
Upon binding to double-stranded DNA, the DAPI molecule (4′,6-diamidino-2-phenylindole) fluoresces intensely, with maximum fluorescence observed at a wavelength of 461 nm. To count all adherent microorganisms, microscopic analysis was performed according to Schwartz et al. [47].
For staining, three specimens of each examined implant material were inserted into the wells of multi-well plates (12-well plate; Greiner bio-one, Frickenhausen, Germany) and incubated with microorganisms for 2 h (37 °C, 5% CO2). Afterwards, the samples were covered with 2 mL DAPI solution (Merck, Darmstadt, Germany; 1 µg/mL in distilled water). After an incubation time of 10 min in a dark chamber, the DAPI solution was washed off with distilled water. The samples were then dried at room temperature and covered with Citifluor (Citifluor, Ltd, London, UK) on a slide. To quantify the total bacterial counts, the adherent microorganisms were analyzed using the Keyence BZ-9000 fluorescence microscope (Keyence Germany; Neu-Isenburg, Germany). The average fluorescence intensity of the detected microorganisms was determined with the aid of the "BZ image analysis application", an image acquisition software utilizing a single-cell tracing algorithm. The number of cells observed in 10 randomized microscopic ocular grid fields per sample was counted. The area of the ocular grid (0.043 mm²) allowed the number of cells per cm² to be estimated. The experiment was conducted in duplicate and repeated twice. Representative images were taken to illustrate the results.
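The conversion from grid-field counts to a surface density is simple scaling. A minimal sketch of this arithmetic, with hypothetical counts (the field area of 0.043 mm² is the only value taken from the protocol above), could look as follows:

```python
# Minimal sketch (not from the study): converting DAPI counts per ocular grid
# field (0.043 mm^2) into an estimated cell density per cm^2, averaged over
# the 10 randomized fields counted per sample.

GRID_FIELD_AREA_MM2 = 0.043          # area of one ocular grid field
MM2_PER_CM2 = 100.0                  # 1 cm^2 = 100 mm^2

def cells_per_cm2(field_counts):
    """Estimate cells/cm^2 from a list of per-field counts."""
    mean_per_field = sum(field_counts) / len(field_counts)
    return mean_per_field * MM2_PER_CM2 / GRID_FIELD_AREA_MM2

# Hypothetical counts from 10 grid fields of one specimen:
example_counts = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]
print(f"Estimated density: {cells_per_cm2(example_counts):.3e} cells/cm^2")
```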
Live/Dead Staining
For the main experiments, SYTO® 9 stain and propidium iodide (PI) (Live/Dead® BacLight™ Bacterial Viability Kit, Life Technologies GmbH, Darmstadt, Germany) were selected [48]. The fluorescent agent was dissolved in a 0.9% saline (NaCl) solution to a final concentration of 0.1 nmol·mL⁻¹. For staining, three specimens of each examined implant material were inserted into the wells of multi-well plates (12-well plate; Greiner bio-one, Frickenhausen, Germany). After incubation with microorganisms for 2 h (37 °C, 5% CO2), the samples were stained with 2 mL SYTO® 9/PI in 0.9% NaCl in a dark chamber for 10 min at room temperature and mounted with superglue (Loctite 401, Loctite Deutschland GmbH, Munich, Germany) on slides. Microscopic analysis using the Keyence BZ-9000 fluorescence microscope (Keyence Germany; Neu-Isenburg, Germany) followed directly afterwards. The average fluorescence intensity of the detected microorganisms was measured with the aid of the "BZ image analysis application", an image acquisition software based on single-cell tracing algorithms. The number of microorganisms per cm² detected in 10 randomized microscopic ocular grid fields (0.043 mm² each) per sample was determined. The measurement was performed in duplicate and repeated twice. Representative images were acquired to demonstrate the outcomes.
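The live/dead images are summarized as the fraction of vital (green) versus membrane-damaged (red) cells per material. A small sketch of this bookkeeping, using entirely hypothetical counts, is given below:

```python
# Sketch (hypothetical counts): summarizing live/dead staining as the fraction
# of vital (SYTO 9, green) versus membrane-damaged (PI, red) cells per material.
live_dead_counts = {           # material: (green cells, red cells) per sample
    "BES": (410, 35), "B1a": (510, 40), "B1b": (620, 30),
    "B2a": (380, 95), "B2b": (450, 60),
}
for material, (green, red) in live_dead_counts.items():
    vitality = 100 * green / (green + red)
    print(f"{material}: {vitality:.1f}% vital cells")
```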
Statistical Analysis
For descriptive presentation of the data, boxplots were generated and graphically displayed, stratified by bacterial count/cm² and material. The presence of significant differences among the measured parameters was analyzed by one-way ANOVA and Tukey's test. The significance level was p ≤ 0.05. Statistical analysis was performed with IBM SPSS Statistics 19.0.
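The study performed these tests in SPSS; as an illustration only, the same analysis (one-way ANOVA followed by Tukey's post-hoc test at α = 0.05) could be sketched in Python on hypothetical counts as follows:

```python
# Sketch of the reported statistics (the study used SPSS 19.0): one-way ANOVA
# across materials followed by Tukey's post-hoc test, on hypothetical counts.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
materials = ["BES", "B1a", "B1b", "B2a", "B2b"]
# Hypothetical bacterial counts/cm^2 for each material (n = 4 replicates each).
counts = {m: rng.normal(loc=2e4 * (i + 1), scale=3e3, size=4)
          for i, m in enumerate(materials)}

# One-way ANOVA: is there any difference between material groups?
f_stat, p_value = f_oneway(*counts.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD: which pairs of materials differ (alpha = 0.05)?
values = np.concatenate(list(counts.values()))
groups = np.repeat(materials, 4)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```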
Conclusions
The results of this in vitro study confirmed the impact of various surface characteristics on initial bacterial adhesion. Compared to bovine enamel slabs (BES), the coated and uncoated zirconia biomaterial substrata exhibited no anti-adhesive properties when contaminated with E. faecalis, S. aureus and C. albicans for 2 h. A further improvement of the zirconia-based surface parameters towards lower surface roughness values can be recommended to reduce the risk of bacterial adhesion leading to implant failure. | 4,579.4 | 2013-12-01T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Resonant-state expansion applied to three-dimensional open optical systems: A complete set of static modes
We present two alternative complete sets of static modes of a homogeneous dielectric sphere, for their use in the resonant-state expansion (RSE), a rigorous perturbative method in electrodynamics. Physically, these modes are needed to correctly describe the static electric field of a charge redistribution within the optical system due to a perturbation of the permittivity. We demonstrate the convergence of the RSE towards the exact result for a perturbation describing a size reduction of the basis sphere. We then revisit the quarter-sphere perturbation treated in [Doost et al., Phys. Rev. A 90, 013834 (2014)], where only a single static mode per angular momentum was introduced, and show that using a complete set of static modes leads to a small, though non-negligible, correction of the RSE result, improving the agreement with finite-element simulations. As another example of applying the RSE with a complete set of static modes, we calculate the resonant states of a dielectric cylinder, also comparing the result with a finite-element simulation.
I. INTRODUCTION
The electromagnetic spectrum of an open optical system is characterized by its resonances, which is evident for optical cavities such as dielectric toroid [1] or microsphere resonators [2]. Resonances are characterized by their spectral positions and linewidths, corresponding to, respectively, the real and imaginary part of the complex eigenfrequencies of the system. Finite linewidths of resonances are typical for open systems and are due to energy leakage from the system to the outside. Objects in close proximity of the cavity modify the electromagnetic susceptibility and perturb the cavity resonances, changing both their position and linewidth, most noticeably for the high-quality (i.e. narrow-linewidth) resonances. This effect is the basis for resonant optical biosensors [3][4][5] in which the changes in the spectral properties of resonators in the presence of perturbations can be used to characterize the size and shape of attached nanoparticles [6]. The whispering gallery mode (WGM) resonances in microdisks and spherical microcavities have been used in sensors for the characterization of nanolayers [7], protein [8] and DNA molecules [9], as well as for single atom [10] and nanoparticle detection [11,12]. Furthermore, the long photon lifetime of WGMs can result in their strong coupling to atoms [13]. Recently, optical resonances have become the core element of a more accurate modeling of multimode and random lasers [14,15] and of light propagation through random media [16]. In nanoplasmonics, the resonances of metal nanoparticles are used to locally enhance the electromagnetic field [17].
Due to the lack of a suitable theory, the electromagnetic properties of such open systems have up to now been modeled using finite element method (FEM) and finite-difference time-domain (FDTD) solvers. Only recently, approximate approaches using resonance modes have been reported [18][19][20][21][22]. While the eigenmodes of resonators for a few highly symmetric geometries can be calculated exactly, determining the effect of perturbations which break the symmetry presents a significant challenge, as the popular computational techniques in electrodynamics, such as FDTD [23] or FEM [24], need large computational resources [25] to model high-quality WGMs.
To treat such perturbations more efficiently, we have developed [26] a rigorous perturbation theory called resonant state expansion (RSE) and applied it to spherical resonators reducible to effective one-dimensional (1D) systems. We have demonstrated on exactly solvable examples in 1D that the RSE is a reliable tool for calculation of wavenumbers and electromagnetic fields of resonant states (RSs) [27], as well as transmission and scattering properties of open optical systems. We have recently developed the RSE also for effectively two-dimensional (2D) systems [28], and planar waveguides [29].
In this paper we extend the RSE formulation to arbitrary three-dimensional (3D) open optical systems, compare its performance with FDTD and FEM, and introduce a local perturbation approach. The paper is organized as follows. In Sec. II we give the general formulation of the RSE for an arbitrary 3D system. In Sec. III we treat the homogeneous dielectric sphere as unperturbed system and introduce the basis for the RSE, which consists of normalized transverse electric (TE) and transverse magnetic (TM) modes and is complemented by longitudinal zero frequency modes. This is followed by examples given in Sec. IV A-C illustrating the method and comparing results with existing analytic solutions, as well as numerical solutions provided by using available commercial software. In Sec. IV D we demonstrate the performance of the RSE as a local perturbation method for a chosen group of modes by introducing a way to select a suitable subset of basis states. Some details of the general formulation of the method including mode normalization and calculation of the matrix elements are given in Appendices A and B.
II. RESONANT STATE EXPANSION
Resonant states of an open optical system with a local time-independent dielectric susceptibility tensor ε̂(r) and permeability µ = 1 are defined as the eigensolutions of Maxwell's wave equation

∇ × ∇ × E_n(r) = k_n² ε̂(r) E_n(r),  (1)

satisfying the outgoing wave boundary conditions. Here, k_n is the wave-vector eigenvalue of the RS numbered by the index n, and E_n(r) is its electric field eigenfunction in 3D space. The time-dependent part of the RS wave function is given by exp(−iω_n t) with the complex eigenfrequency ω_n = ck_n, where c is the speed of light in vacuum. As follows from Eq. (1) and the divergence theorem, the RSs are orthogonal according to Eq. (2), in which the first integral is taken over an arbitrary simply connected volume V which includes all system inhomogeneities of ε̂(r), while the second integral is taken over the closed surface S_V, the boundary of V, and contains the gradients ∂/∂s normal to this surface. The RSs of an open system form a complete set of functions. This allows us to use RSs for the expansion of the Green's function (GF) Ĝ_k(r, r′) satisfying the same outgoing wave boundary conditions and Maxwell's wave equation with a delta function source term, where 1̂ is the unit tensor and k = ω/c is the wave vector of the electromagnetic field in vacuum determined by the frequency ω, which is in general complex. The GF expansion in terms of the direct (dyadic) product of the RS vector fields, Eq. (4), is given in Ref. [28]. This expansion requires that the RSs are normalized according to Eq. (5), where E(k, r) is an analytic continuation of the RS wave function E_n(r) around the point k_n in the complex k-plane and δ_{k_n,0} is the Kronecker delta accounting for a factor of two in the normalization of k_n = 0 modes. For any spherical surface S_R of radius R, the limit in Eq. (5) can be taken explicitly, leading for k_n ≠ 0 modes to Eq. (6), where r = |r|, with the origin at the center of the chosen sphere. Static k_n = 0 modes, if they exist in the GF spectrum, are normalized according to

2 = ∫ dr E_n · ε̂ E_n.  (7)
Their wave functions decay at large distances as 1/r² or faster, and the volume of integration in Eq. (5) can be extended to the full space, for which the surface integral vanishes. The proofs of Eqs. (5) and (6) are given in Appendix A. The completeness of the RSs allows us to treat exactly a modified (perturbed) problem in which the RS wave vector κ_ν and the electric field E_ν are modified as compared to k_n and E_n, respectively, due to a perturbation ∆ε̂(r) with compact support. We treat this problem by (i) solving Eq. (8) with the help of the GF, (ii) using in Eq. (9) the spectral representation Eq. (4), and (iii) expanding the perturbed wave functions into the unperturbed ones, Eq. (11). This is the RSE method. The use of the unperturbed GF is an essential element of the RSE, as Eq. (9) guarantees that the perturbed wave functions satisfy the outgoing boundary condition. The result of using Eq. (11) in Eq. (10) is a linear matrix eigenvalue problem, Eq. (12), which is reduced, using the substitution b_{nν} = c_{nν} κ_ν / k_n, to the matrix equation (13) [26]. This allows us to find the wave vectors κ_ν and the expansion coefficients c_{nν} of the perturbed RSs by diagonalizing a complex symmetric matrix. The matrix elements of the perturbation are given by Eq. (14). In our previous works on the RSE [26,28] we derived the intermediate result Eq. (10) using Dyson's equation for the perturbed GF. The present way to obtain Eq. (10) is equivalent, but simplifies the treatment by not dealing explicitly with the perturbed GF. We note that in 2D systems the set of RSs of a system is complemented by a continuum of states on the cut of the GF [28]. In this case, all summations in the above equations include states on the cut, which are discretized in numerics to produce a limited subset of isolated poles.
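Since Eqs. (12)-(14) are not reproduced in this text, the following sketch illustrates the diagonalization step only, assuming the matrix form familiar from earlier RSE papers [26,28], in which the inverse eigenvalues of the complex symmetric matrix δ_{nn'}/k_n + V_{nn'}/(2√(k_n k_{n'})) give the perturbed wavenumbers κ_ν; this form is an assumption here and should be checked against Eq. (13) of the original paper:

```python
# Sketch of the RSE diagonalization step, assuming the matrix form used in
# earlier RSE papers (not necessarily identical to Eq. (13) of this paper):
#   sum_n' [ delta_{nn'}/k_n + V_{nn'}/(2 sqrt(k_n k_n')) ] b_{n'} = b_n / kappa,
# so the inverse eigenvalues of a complex symmetric matrix give the perturbed
# wavenumbers kappa_nu.  k and V below are placeholders, not data from the paper.
import numpy as np

def rse_perturbed_wavenumbers(k, V):
    """k: unperturbed RS wavenumbers (complex, length N);
       V: perturbation matrix elements (complex symmetric, N x N)."""
    sqrt_k = np.sqrt(k.astype(complex))
    H = np.diag(1.0 / k) + V / (2.0 * np.outer(sqrt_k, sqrt_k))
    eigvals, eigvecs = np.linalg.eig(H)      # complex symmetric, not Hermitian
    kappa = 1.0 / eigvals                    # perturbed RS wavenumbers
    order = np.argsort(kappa.real)
    return kappa[order], eigvecs[:, order]

# Toy usage with fabricated numbers:
k = np.array([1.0 - 0.1j, 2.0 - 0.05j, 3.0 - 0.02j])
V = 0.1 * np.ones((3, 3), dtype=complex)
kappa, _ = rse_perturbed_wavenumbers(k, V)
print(kappa)
```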
III. EIGENMODES OF A DIELECTRIC SPHERE AS BASIS FOR THE RSE
To apply the RSE to 3D systems we need a known basis of RSs. We choose here the RSs of a dielectric sphere of radius R and refractive index n_R, surrounded by vacuum, since they are analytically known. For any spherically symmetric system, the solutions of Maxwell's equations split into four groups: TE, TM, and longitudinal electric (LE) and longitudinal magnetic (LM) modes [30]. TE (TM) modes have no radial components of the electric (magnetic) field, respectively. Longitudinal modes are curl-free static modes satisfying Maxwell's wave equation for k_n = 0. Longitudinal magnetic modes have zero electric field, and since we limit ourselves in this work to perturbations of the dielectric susceptibility only, they are not mixed by the perturbation with other types of modes and are thus ignored in the following. Furthermore, owing to the spherical symmetry, the azimuthal index m and the orbital index l are good quantum numbers of the angular momentum operator and take integer values corresponding to the number of field oscillations around the sphere. For each l value there are 2l + 1 degenerate modes with m = −l, ..., l.
Splitting off the time dependence ∝ e^{−iωt} of the electric fields E and D and the magnetic field H, the first pair of Maxwell's equations can be written in the form of Eq. (15), where k = ω/c and D(r) = ε̂(r)E(r). Combining them leads to Eq. (1) for the RSs and to the divergence condition Eq. (16). For k ≠ 0, Eq. (16) is automatically satisfied, since the divergence of a curl vanishes. However, if k = 0, it is not guaranteed that solutions of Eq. (15) also satisfy Eq. (16). The modes in the GF spectrum (TE, TM, and LE) are expressed in terms of a scalar function f(r) satisfying the Helmholtz equation (18), with the permittivity of the dielectric sphere in vacuum given by Eq. (19). Owing to the spherical symmetry of the system, the solution of Eq. (18) splits in spherical coordinates r = (r, θ, ϕ) into radial and angular components, where Ω = (θ, ϕ) with the angle ranges 0 ≤ θ ≤ π and 0 ≤ ϕ ≤ 2π. The angular component is given by the spherical harmonics, which are the eigenfunctions of the angular part of the Laplacian, Λ̂, where P_l^m(x) are the associated Legendre polynomials. Note that the azimuthal functions are defined here in such a way as to satisfy the orthogonality condition without using the complex conjugate, as required by Eq. (2). The radial components R_l(r, k) satisfy the spherical Bessel equation (24) and have the form of Eq. (25), in which j_l(z) and h_l(z) ≡ h_l^(1)(z) are, respectively, the spherical Bessel and Hankel functions of the first kind.
In spherical coordinates, a vector field E(r) can be written in terms of its components along the unit vectors e_r, e_θ, and e_ϕ. The electric field of the RSs then takes a corresponding explicit form for TE modes, for TM modes, and for LE modes. All the wave functions are normalized according to Eqs. (5)-(7), leading to the corresponding normalization constants. The Maxwell boundary conditions following from Eq. (15), namely the continuity of the tangential components of E and H across the spherical dielectric-vacuum interface, lead to the secular equations (30) for TE modes and (31) for TM modes, determining the RS wavenumbers k_n, where z = k_n R and j_l′(z) and h_l′(z) are the derivatives of j_l(z) and h_l(z), respectively. While the LE modes are the RSs easiest to calculate, due to the simple power-law form of their radial functions, it is convenient to treat them in the RSE as part of the TM family of RSs. Indeed, for r ≤ R they coincide with the TM modes taken in the limit k_n → 0, Eq. (33). Note that k_n = 0 is not a solution of the secular equation (31) for TM modes. However, using the analytic dependence of the wave functions of TM modes on k_n [see Eqs. (25), (27), and (29)], the limit Eq. (33) can be taken in the calculation of the matrix elements containing LE modes. The same limit k_n → 0 has to be approached carefully in the matrix eigenvalue problem Eq. (13) of the RSE, as the matrix elements are divergent due to the 1/√k_n factor introduced in the expansion coefficients. We found that adding a finite negative imaginary part to the static poles, k_n R = −iδ, with δ typically of order 10⁻⁷ (determined by the numerical accuracy), is suitable for the numerical results presented in the following section. We have verified this by comparing the results with those of the RSE in the form of the generalized linear eigenvalue problem Eq. (12), which has no such divergence, but whose numerical solution is a factor of 2-3 slower in the NAG library implementation.
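As an illustration of how the basis wavenumbers k_n can be generated numerically, the sketch below solves a TE secular equation written in the standard dielectric-sphere form n_R j_l′(n_R z) h_l(z) − j_l(n_R z) h_l′(z) = 0 with z = k_n R (the paper's Eq. (30) is not reproduced in this text, so this form is an assumption); spherical Bessel and Hankel functions of complex argument are built from scipy's cylindrical functions:

```python
# Sketch (not the paper's code): locating complex RS wavenumbers z = k_n R of a
# dielectric sphere from a TE secular equation in the standard form
#   n_R j_l'(n_R z) h_l(z) - j_l(n_R z) h_l'(z) = 0,
# using Bessel functions of complex argument and a secant iteration.
import numpy as np
from scipy.special import jv, hankel1

def sph_jn(l, z):
    """Spherical Bessel j_l for complex z."""
    return np.sqrt(np.pi / (2 * z)) * jv(l + 0.5, z)

def sph_h1(l, z):
    """Spherical Hankel h_l^(1) for complex z."""
    return np.sqrt(np.pi / (2 * z)) * hankel1(l + 0.5, z)

def d(f, l, z, h=1e-6):
    """Central-difference derivative with respect to the argument z."""
    return (f(l, z + h) - f(l, z - h)) / (2 * h)

def te_secular(z, l, n_r):
    return n_r * d(sph_jn, l, n_r * z) * sph_h1(l, z) - sph_jn(l, n_r * z) * d(sph_h1, l, z)

def find_root(z0, l, n_r, steps=50):
    """Secant iteration in the complex z-plane, starting from a guess z0."""
    z1 = z0 * (1 + 1e-3)
    f0, f1 = te_secular(z0, l, n_r), te_secular(z1, l, n_r)
    for _ in range(steps):
        if f1 == f0:
            break
        z0, z1 = z1, z1 - f1 * (z1 - z0) / (f1 - f0)
        f0, f1 = f1, te_secular(z1, l, n_r)
        if abs(f1) < 1e-12:
            break
    return z1

# Example: an l = 7 TE mode of the n_R = 2 sphere, from a rough initial guess.
print(find_root(5.0 - 0.1j, l=7, n_r=2.0))
```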
IV. APPLICATION TO 3D SYSTEMS WITH SCALAR DIELECTRIC SUSCEPTIBILITY
In this section we discuss the application of the RSE to 3D systems described by a scalar dielectric function, ε̂(r) + ∆ε̂(r) = 1̂ [ε(r) + ∆ε(r)], where 1̂ is the unit tensor. As unperturbed system we use the homogeneous dielectric sphere of radius R with ε(r) given by Eq. (19), having the analytical modes discussed in Sec. III. We use the refractive index n_R = 2 of the unperturbed sphere throughout this section and consider several types of perturbations, namely, a homogeneous perturbation of the whole sphere in Sec. IV A, a half-sphere perturbation in Sec. IV B, and a quarter-sphere perturbation in Sec. IV C. We demonstrate in Sec. IV D the performance of the RSE as a local perturbation method for a chosen group of modes by introducing a way to select a suitable subset of basis states. Explicit forms of the matrix elements used in these calculations are given in Appendix B.
A. Homogeneous sphere perturbation
The perturbation we consider here is a homogeneous change of ε over the whole sphere, given by Eq. (34), ∆ε(r) = ∆ǫ Θ(R − r), where Θ is the Heaviside function; the strength ∆ǫ = 5 is used in the numerical calculation. The perturbed system is again a homogeneous dielectric sphere, so that the perturbed modes obey the same secular equations Eq. (30) and Eq. (31) with the refractive index n_R of the sphere changed to √(n_R² + ∆ǫ), and the perturbed wavenumbers κ_ν calculated using the RSE can be compared with the exact values κ_ν^(exact) obtained from the secular equations.
We choose the basis of RSs for the RSE in such a way that for given orbital and azimuthal numbers l and m we select all RSs with |k_n| < k_max(N), using a maximum wave vector k_max(N) chosen to result in N RSs. We find that as we increase N, the relative error κ_ν/κ_ν^(exact) − 1 decreases as N⁻³. Following the procedure described in Ref. [27] we can extrapolate the perturbed wavenumbers. The resulting perturbed wavenumbers for N = 1000 (corresponding to k_max R = 800) are shown in Fig. 1 for the TM RSs and Fig. 2 for the TE RSs. The perturbation is strong, leading to WGMs with up to 2 orders of magnitude narrower linewidths. The RSE reproduces the wavenumbers of about 100 RSs to a relative error in the 10⁻⁷ range, which improves further by one to two orders of magnitude after extrapolation. The homogeneous perturbation does not couple LE modes to TE modes, as LE modes have the symmetry of TM modes [see Eq. (33)], leading to vanishing overlap integrals with TE RSs. The contribution of the LE-mode RSs in the TM polarization is significant, as is shown in Fig. 1 by the large decrease of the relative error, by up to 8 orders of magnitude, when adding them to the basis. This validates the analytical treatment of the LE-mode RSs in the RSE developed in this work. We have verified that taking a finite imaginary value of δ = 10⁻⁷ in Eq. (13) for the LE modes instead of using strict k_n = 0 poles in Eq. (12), as done throughout this work, changes the relative error of the TM mode calculation by less than 10% and within the range of 10⁻⁹ only. For practical applications, this limitation should not be relevant, as the error in the measured geometry will typically be significantly larger.
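The extrapolation procedure of Ref. [27] is not reproduced in this text; the sketch below shows the simple power-law elimination consistent with the stated N⁻³ scaling of the error, using hypothetical wavenumbers for two basis sizes:

```python
# Sketch of a power-law extrapolation consistent with the stated N^-3 error
# scaling (the exact procedure of Ref. [27] is not reproduced in this text).
# If kappa(N) ~ kappa_true + c * N**-3, two basis sizes eliminate c:
def extrapolate(kappa_small, n_small, kappa_large, n_large, power=3):
    w_small, w_large = n_small**power, n_large**power
    return (kappa_large * w_large - kappa_small * w_small) / (w_large - w_small)

# Hypothetical RSE results for one mode at N = 500 and N = 1000:
kappa_500 = 5.1234 - 0.0105j
kappa_1000 = 5.1230 - 0.0101j
print(extrapolate(kappa_500, 500, kappa_1000, 1000))
```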
B. Hemisphere Perturbation
We consider here a hemisphere perturbation, as sketched in Fig. 3, which mixes TE, TM, and LE modes with different l while conserving m. The perturbation increases ε in the northern hemisphere by ∆ǫ, while leaving the southern hemisphere unchanged. In our numerical simulation, we use ∆ǫ = 0.2. The calculation of the matrix elements is done using Eqs. (B7)-(B12) of Appendix B, which require numerical integration. Owing to the symmetry of the perturbation, matrix elements between TM and TE RSs can only be non-zero when the RSs have m of opposite sign and equal magnitude, i.e. they are sine and cosine states of equal |m|. Similarly, matrix elements between two TE RSs or two TM RSs can only be non-zero if both states have the same m. We can therefore restrict the basis to m = 3 TM states and m = −3 TE states for the numerical calculations of this section. We treat the LE RSs as TM modes with k_n R = −i·10⁻⁷ and a normalization factor modified according to Eq. (33). The resulting RS wavenumbers are shown in Fig. 3. Due to the smaller perturbation compared to that considered in Sec. IV A, the mode positions in the spectrum do not change as much. The imaginary part of most of the WGMs decreases due to the higher dielectric constant in the perturbed hemisphere. However, some of the modes also have an increased imaginary part due to the scattering at the edge of the perturbation.
To the best of our knowledge, an analytic solution for this perturbation is not available, and thus we cannot calculate the relative error of the RSE result with respect to the exact solution. However, we can investigate the convergence of the method in order to demonstrate how the RSE works in this case, for a perturbation not reducible to an effective one-dimensional problem. We accordingly show in Fig. 3(a) the perturbed modes for two different values of the basis size N, and in Fig. 3(b) a comparison with the RS wavenumbers calculated for a basis size N₁ ≈ N/2. We see that the perturbed resonances converge with increasing basis size, approximately following a power law with an exponent between −2 and −3.
C. Quarter-Sphere Perturbation
We consider here a perturbation which breaks both continuous rotation symmetries of the sphere and is thus not reducible to an effective one- or two-dimensional system. The perturbation corresponds physically to a uniform increase of the dielectric constant in a quarter-sphere area, as sketched in Fig. 4. In our numerical simulation, we take ∆ǫ = 1. Again, the calculation of the matrix elements requires numerical integration. Owing to the reduced symmetry of the perturbation as compared to that treated in the previous section, we now have mixing of modes of different l, m, and polarization, although TE sine (TM cosine) and TE cosine (TM sine) modes are decoupled, owing to the mirror symmetry of the system. This allows us to split the simulation of all modes into two separate simulations called A and B, respectively, each of size N. The lifting of the m-degeneracy of the unperturbed modes can be seen as splitting off resonances in Fig. 4(a) and (b). In most cases the splitting in the real part of the resonant wavenumber is greater than the linewidth of the modes. The convergence of the RSE is well seen in Fig. 4(a) and (b), showing the perturbed RS wavenumbers for two different basis sizes N. An analytic solution for this perturbation is not available, so that we use the method described in Sec. IV B to estimate the error, and show in Fig. 4(c) the resulting absolute errors M_ν for several values of N. A convergence with a power-law exponent between −2 and −3 is again observed, resulting in relative errors in the 10⁻⁴ to 10⁻⁵ range for N = 8000.
To verify the RSE results, we have simulated the system using the commercial solver ComSol (http://www.comsol.com), which uses the finite element method and Galerkin's method, approximating the openness of the system with an absorbing perfectly matched layer (PML). We have surrounded the sphere with a vacuum shell followed by a PML shell of equal thickness D. The results are shown in Fig. 4(b) using D = R/2, and a "physics controlled" mesh with N_G = 25k, 50k, 100k and 200k finite elements. We used the nearest unperturbed RS wave vector as the linearization point (i.e. the input value) for the ComSol solver, and requested the determination of 40 eigenfrequencies, which we found to be the minimum number reliably returning all 15 nondegenerate modes deriving from the l = 7 unperturbed fundamental WGM. With increasing N_G, the ComSol RS wavenumbers tend towards the RSE poles, with an error scaling approximately as N_G⁻¹. This verifies the validity of the RSE results.
To make a comparison between the RSE and ComSol in terms of numerical complexity, we use the poles computed by an N = 16000 RSE simulation as the "exact solution" to calculate the average relative errors of the poles shown in Fig. 4(b) versus effective processing time on an Intel E8500 CPU. The result is shown in Fig. 5, including ComSol data for different shell thicknesses D of R/2, R/4, and R/8, revealing that D = R/4 provides the best performance. This comparison shows that the RSE is 2-3 orders of magnitude faster than ComSol for the present example of Fig. 4, and at the same time determines significantly more RSs. The RSE computing time includes the calculation of the matrix elements, which was done by evaluating the one-dimensional integrals (see Appendix B 2) using 10000 equidistant grid points. The computing time of the matrix elements is significant only for N ≲ 2000, while for larger N the matrix diagonalization time, scaling as N³, dominates. We have verified that the accuracy of the matrix element calculation is sufficient to not influence the relative errors shown.
We also include in Fig. 5 the performance of FDTD calculations using the commercial software Lumerical (http://www.lumerical.com). They were undertaken using a simulation cube size from 2.5R to 4R, exploiting the reflection symmetry, and for grid steps between R/8 and R/80, with a sub-sampling of 32. The simulation area was surrounded by a PML of a size chosen automatically by the software. The excitation pulse had a center wavenumber of kR = 5.1 and a relative bandwidth of 10% to excite the relevant modes, and the simulation was run for 360 oscillation periods. The calculated time-dependent electric field after the excitation pulse was transformed into a spectrum and the peaks were fitted with a Lorentzian to determine the real and imaginary parts of the mode. The parameters used were chosen to optimize the performance, and in the plot the results with the shortest computation time for a given relative error are given.
We can conclude that the RSE is about two orders of magnitude faster than both FEM and FDTD for this specific problem, showing its potential to supersede presently used methods. A general analysis of the performance of RSE relative to FEM and FDTD is beyond the scope of this work and will be presented elsewhere.
To illustrate how a particular perturbed RS is created as a superposition of unperturbed RSs, we show in Fig. 6 the contributions of the unperturbed RSs to the perturbed WGM indicated by the arrow in Fig. 4(b), with index ν and wavenumber κ_ν, given by the open star in Fig. 6. The contributions of the basis states to this mode are visualized by circles of a radius proportional to Σ|c_{nν}|², where the sum is taken over the 2l + 1 degenerate basis RSs of a given eigenfrequency, centered at the positions of the RS wavenumbers in the complex k-plane. The expansion coefficients c_{nν} decrease quickly with the distance between the unperturbed and perturbed RS wavenumbers, with the dominant contribution coming from the nearest unperturbed RS, a typical feature of perturbation theory in closed systems. The unperturbed RS nearest to the perturbed one in Fig. 6 has the largest contribution, and is an l = 7 TE WGM with the lowest radial quantum number. Other WGMs giving significant contributions have the same radial quantum number and angular quantum numbers ranging between l = 6 and l = 9; see the small stars in Fig. 6 corresponding to l = 7 basis states. This is a manifestation of a quasi-conservation of the angular momentum l for bulky perturbations like the quarter-sphere perturbation considered here.
Generally, we see that a significant number of unperturbed RSs contribute to the perturbed RS, indicating that previous perturbation theories for open systems would yield large errors for the strong perturbations treated in this work, since they are limited to low orders [31,32] or to degenerate modes only [33].
D. Local Perturbation
The weights of the RSs shown in Fig. 6 indicate that a perturbed mode can be approximately described by a subset of the unperturbed modes, which typically have wavenumbers in close proximity to that of the perturbed mode. It is therefore expected that a local perturbation approach based on the RSE is possible. We formulate here such an approach.
We commence with a small subset S of modes of the unperturbed system which are of particular interest, for example because they are used for sensing. To calculate the perturbation of these modes approximately, we consider a global basis B as used in the previous sections, with a size N providing a sufficiently small relative error. We then choose a subset S + ⊂ B with N ′ < N elements containing S, i.e. S ⊂ S + , and solve the RSE Eq. (13) restricted to S + . The important step in this approach is to find a numerically efficient method to choose the additional modes in S + which provide the smallest relative error of the perturbed states deriving from S for a given N ′ . Specifically, the method should be significantly faster than the matrix diagonalization Eq. (13).
To develop such a method, we consider here the Rayleigh-Schrödinger perturbation theory based on the RSE and expand the RS wave vector κ up to second order, Eq. (38), as directly follows from Eq. (13). Note that the second-order result in Eq. (38) is different from that given in Ref. [31].
We expect that the second-order correction given by Eq. (38) is a suitable candidate for estimating the importance of modes. We therefore sort the modes in B according to the weight W_n given by Eq. (39), where D is the set of modes degenerate with the mode n in B. The summation over all degenerate modes is motivated by their comparable contribution to the perturbed mode, as known from degenerate perturbation theory. We add modes of B to S+ in order of decreasing W_n. Groups of degenerate modes D are added in one step, as they have equal W_n. A special case are the LE modes in the basis of the dielectric sphere, which are all degenerate, having k_n = 0. They are added in groups of equal l in the order of decreasing weight.
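Equation (39) is not reproduced in this text, so the sketch below uses a generic second-order-style weight, |V_{nν}|²/|κ − k_n| summed over each degenerate group, purely as a placeholder; the point of the sketch is the grouping and ordering logic described above (degenerate groups added as a whole, in order of decreasing weight):

```python
# Sketch of the basis-selection bookkeeping described above.  Eq. (39) is not
# reproduced in this text; the weight used here,
#   W_n = sum over the degenerate group D of |V_{n,nu}|^2 / |k_target - k_n|,
# is a generic second-order-style placeholder, not the paper's definition.
import numpy as np
from itertools import groupby

def select_local_basis(k, V_col, k_target, degeneracy_key, n_prime):
    """k: unperturbed wavenumbers; V_col: matrix elements coupling each basis
    state to the target mode; degeneracy_key[n]: label of the degenerate group
    of state n; n_prime: requested local-basis size."""
    weight = np.abs(V_col) ** 2 / np.abs(k_target - k)
    # Group states by degeneracy label and add whole groups at once.
    order = sorted(range(len(k)), key=lambda n: degeneracy_key[n])
    groups = [(key, list(idx)) for key, idx in
              groupby(order, key=lambda n: degeneracy_key[n])]
    groups.sort(key=lambda g: -sum(weight[n] for n in g[1]))
    basis = []
    for _, members in groups:
        if len(basis) >= n_prime:
            break
        basis.extend(members)
    return basis

# Hypothetical usage: 6 basis states in three degenerate pairs.
k = np.array([5.0 - 0.1j, 5.0 - 0.1j, 5.5 - 0.2j, 5.5 - 0.2j, 6.0 - 0.3j, 6.0 - 0.3j])
V_col = np.array([0.3, 0.3, 0.1, 0.1, 0.05, 0.05], dtype=complex)
print(select_local_basis(k, V_col, 5.05 - 0.12j, [0, 0, 1, 1, 2, 2], n_prime=4))
```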
To exemplify the local perturbation method, we use the quarter-sphere perturbation with two different perturbation strengths, ∆ǫ = 1 and ∆ǫ = 0.2, and choose the degenerate l = 7 modes shown in Fig. 4(b) as S. The perturbed RSs deriving from S are shown in Fig. 7(a) and (b), as calculated by the RSE using either a global basis B with N = 16000, or a minimum local basis S+ = S with N′ ∼ 10, or a larger S+ with N′ ∼ 100. As in the previous section, we show the results separately for each class of RSs (A and B) decoupled by symmetry. We find that for ∆ǫ = 0.2 (∆ǫ = 1) the perturbation lifts the degeneracy of S by a relative wavenumber change of about 1% (5%), and that the minimum local basis S+ = S of only degenerate modes reproduces the wavenumbers with a relative error of about 10⁻⁴ (10⁻³), i.e. the perturbation effect is reproduced with an error of a few %. Increasing the local basis size to N′ ∼ 100, the error is reduced by a factor of three, by similar absolute amounts in the real and the imaginary part of the wavenumber [see insets of Fig. 7(a) and (b)].
The relative error of the local-basis RSE generally decreases with increasing basis size, as shown in Fig. 7(c). It can, however, be non-monotonic on the scale of individual sets of degenerate modes. This is clearly seen for ∆ǫ = 0.2 and small N′, where adding the second group increases the error, which is reversed when the third group is added. These groups are the l = 6 and l = 8 fundamental WGMs, as expected from Fig. 6(b), which are on opposite sides of S (l = 7 WGMs) in the complex frequency plane. Adding only one of them therefore imbalances the result, leading to an increase of the relative error. Comparing results in Fig. 7(c) for two different values of ∆ǫ, we see that the second-order correction dominates the relative error, as over a wide range of N′ the error scales approximately like the square of the perturbation strength. The global-basis RSE, also shown in Fig. 7(b), has significantly larger errors for a given basis size. Furthermore, a minimum basis size is required for the basis to actually contain S, in the present case N ≈ 500. The local basis thus provides a method to calculate the perturbation of arbitrary modes with a small basis size.
The local perturbation method described in this section enables the calculation of high frequency perturbed modes which have previously been numerically inaccessible to FDTD and FEM due to the necessity of the corresponding high number of elements needed to resolve the short wavelengths involved and inaccessible to the RSE with a global basis due to the prohibitively large N required. The example we used for the illustration shows that a basis of ∼ 100 RSs in the local RSE can be sufficient to achieve the same accuracy as provided by FDTD and FEM in a reasonable computational time [see Figs. 5 and 7(c)]. For this basis size, solving the RSE Eq. (13) is 6 orders of magnitude faster than FDTD and FEM, and the computational time in our numerical implementation is dominated by the matrix element calculation which can be further optimized. A detailed evaluation of the performance of the local basis RSE and a comparison of selection criteria different from Eq. (39) will be given in a forthcoming work.
V. SUMMARY
We have applied the resonant state expansion (RSE) to general three-dimensional (3D) open optical systems. This required including in the basis both types of transverse polarization states, TE and TM modes, as well as longitudinal electric field modes at zero frequency. Furthermore, a general proof of the mode normalization used in the RSE is given. Using the analytically known basis of resonant states (RSs) of a dielectric sphere - a complete set of eigenmodes satisfying outgoing wave boundary conditions - we have applied the RSE to perturbations of full-, half- and quarter-sphere shapes. The latter does not have any rotational or translational symmetry and is thus not reducible to lower dimensions, so that its treatment demonstrates the applicability of the RSE to general 3D perturbations.
We have compared the performance of the RSE with commercially available solvers, using both the finite element method (FEM) and the finite-difference time-domain method (FDTD), and showed that for the geometries considered here, the RSE is several orders of magnitude more computationally efficient, showing its potential to supersede presently used computational methods in electrodynamics. We have furthermore introduced a local perturbation method for the RSE, which restricts the basis in order to treat a small subset of modes of interest. This further reduces computational efforts and improves on previous local perturbation methods.
We prove in this section that the spectral representation Eq. (4) leads to the RS normalization condition Eq. (5) and further to Eq. (6). To do so, we consider an analytic continuation E(k, r) of the wave function E_n(r) around the point k = k_n in the complex k-plane (k_n is the wavenumber of the given RS). We choose the analytic continuation such that it satisfies the outgoing wave boundary condition and Maxwell's wave equation

∇ × ∇ × E(k, r) − k² ε̂(r) E(k, r) = (k² − k_n²) σ(r),  (A1)

with an arbitrary source term corresponding to the current density j(r) = σ(r) ic(k² − k_n²)/(4πk). The source σ(r) has to be zero outside the volume V of the inhomogeneity of ε̂(r) for the electric field E(k, r) to satisfy the outgoing wave boundary condition. It also has to be non-zero somewhere inside V, as otherwise E(k, r) would be identical to E_n(r). We further require that σ(r) is normalized according to

∫_V E_n(r) · σ(r) dr = 1 + δ_{k_n,0},  (A2)

with the Kronecker delta δ_{k_n,0} = 1 for k_n = 0 and δ_{k_n,0} = 0 for k_n ≠ 0. This ensures that the analytic continuation reproduces E_n(r) in the limit k → k_n. Indeed, solving Eq. (A1) with the help of the GF and using the GF spectral representation Eq. (10), we find Eq. (A3), and using Eq. (A2) obtain lim_{k→k_n} E(k, r) = E_n(r).
We now consider the integral (A4) and evaluate it in two ways. First, using Maxwell's wave equations (1) and (A1) for E_n and E, respectively, together with the source term normalization Eq. (A2), we obtain Eq. (A5). On the other hand, rearranging the integrand in Eq. (A4) and using the divergence theorem, we obtain Eq. (A6), with S_V being the boundary of V. Here, we used a standard vector identity valid for two arbitrary vector fields a(r) and b(r). The divergence theorem therefore allows us to convert all volume integrals in Eq. (A4) into surface integrals over the closed surface S_V, the boundary of V, taken with an infinitesimal extension to the outside area where ε̂(r) is homogeneous, so that both ∇ · E and ∇ · E_n vanish on that surface, leaving only the integral shown in Eq. (A6). Finally, using Eq. (A5) in Eq. (A6) and taking the limit k → k_n, we obtain the normalization condition Eq. (5). The limit in Eq. (5) can be taken explicitly for any spherical surface [26]. In fact, outside the system, where ε̂(r) = 1̂ (or a constant), the wave function of any k_n ≠ 0 mode is given by E_n(r) = F_n(k_n r), where F_n(q) is a vector function satisfying the corresponding wave equation and the proper boundary conditions at system interfaces and at q → ∞. The analytic continuation of E_n(r) can therefore be taken in the form E(k, r) = F_n(kr).
We use a Taylor expansion at k = k_n to obtain

E(k, r) ≈ F_n(k_n r) + (k − k_n) r ∂F_n(kr)/∂(kr)|_{k=k_n} = E_n(r) + [(k − k_n)/k_n] r ∂E_n(r)/∂r  (A9)

and

∂E(k, r)/∂r ≈ ∂E_n(r)/∂r + [(k − k_n)/k_n] ∂/∂r [ r ∂E_n(r)/∂r ],  (A10)

where r = |r| is the radius in spherical coordinates.
Choosing the origin to coincide with the center of the sphere of integration, S_V = S_R, we note that ∂/∂s = ∂/∂r in Eq. (5). Substituting Eqs. (A9) and (A10) into Eq. (5) and taking the limit k → k_n, we obtain Eq. (6).
the form of a piece of a homogeneous spherical shell layer. The latter is suitable for treating an arbitrary symmetric or asymmetric perturbation of the sphere and is used in particular for half-and quarter-sphere perturbations considered in Sec. IV B and IV C, respectively.
Homogeneous sphere perturbation
The homogeneous perturbation Eq. (34) does not mix different m or l values, nor does it mix TE modes with TM or LE modes. Using the definition Eq. (14), we calculate the matrix elements between TE RSs by performing the angular integration, which leads to the lm-orthogonality. The radial integration can also be done analytically, so that the matrix elements take a closed form for identical basis states (n = n′) and for different basis states (n ≠ n′), where x = n_R k_n R and y = n_R k_{n′} R. Similarly, for TM RSs we find matrix elements containing the radial integral

∫_0^R { l(l + 1) R_l(r, k_n) R_l(r, k_{n′}) + ∂[r R_l(r, k_n)]/∂r · ∂[r R_l(r, k_{n′})]/∂r } dr,

and after analytic integration we obtain Eq. (B3) for identical basis states (n = n′) and Eq. (B4) for different basis states (n ≠ n′), with x = n_R k_n R and y = n_R k_{n′} R. Note that LE and TM modes are mixed by the perturbation, and the non-vanishing matrix elements between them are calculated using Eqs. (B3) and (B4), treating the LE modes as TM modes with k_n = 0 and the normalization constants multiplied by l(n_R² − 1), in agreement with Eq. (33).
Factorizing the radial and angular integrals and using the fact that χ ′ m (ϕ) = mχ −m (ϕ), the matrix elements of the perturbation Eq. (B6) become between TE modes, | 9,174.2 | 2014-03-06T00:00:00.000 | [
"Physics"
] |
Model of the influence of expense item ‘energy’ on excellence of school operation
Abstract This paper aims to investigate the influence of expense item ‘Energy’ on excellence of school operation. The study was conducted on a sample of 33 primary and 13 secondary schools established by the County of Varaždin, in the area of financing decentralised functions. The following study task was set: ‘Determine the Model of influence of expense item “Energy” on excellence of school operation’. Based on research that incorporated the Pareto Principle and descriptive statistics in the area of quartile, correlation and regression analysis, a model for determining the influence of expense item ‘Energy’ on excellence of school operation in primary and secondary education was provided for the area covered by local government. Based on the results and the generated model, suggestions were made for the improvement of the primary and secondary education system at local government level of operation.
Budgetary Funds Intended to Local/Regional Governments for Decentralised Functions of Local and Regional Government for , Croatian Official Gazette, 2015.
All of these expenses comprise decentralised functions. Decentralisation of the functions from the state level to regional self-government units is not only a transfer of resources, it also represents democratisation of decision-making (Lukeš-Petrović, 2002).
The method of managing operating expenses reflects the attitude of the County, as a founder, towards the system for which it is responsible, and the overall collaborating community in the area of education. Implementing the principle of excellence into management of expenses means that the educational system of the County is based on the traditional culture of work and professionalism of all its stakeholders (Oslić, 2008), (Belak, 2014).
Description of the study subject
The County, being the founder of primary and secondary schools in its territory, has managed the education system in compliance with the Law (Primary and Secondary Education Act - consolidated text, Croatian Official Gazette, 2012) and the County Development Strategy (County Development Strategy, Official Journal of the Varaždin County, 2010) for more than 10 years now, and has a large amount of operational management data for these schools. The majority of these data have been archived using information technology. In addition, new data are created daily, both in the County IT system and in educational systems (Varga, 2012). In a situation where there are insufficient funds to finance education, a question arises in the area of education financing: how can data containing financial information be used to make business decisions?
Considering that the expense item 'Energy' is an extremely important item in financing education and also influences excellence of school operation, the subject of this study is to research the influence of the expense item 'Energy' on school operational excellence. The research conducted for the purposes of this paper was carried out in the Varaždin County school system. Varaždin County is the founder of 13 secondary and 33 primary schools attended by a total of 19,000 pupils, and employs 2,800 teachers in 108 facilities.
Research objective and tasks
The objective of this research is to create a model of influence of the expense item 'Energy' on excellence of school operation. The model will be used to rank schools according to four levels of school operational excellence and to determine threshold values for every level of excellence.
Creating a model of influence of the expense item 'Energy' on the excellence of school operation will also enable implementation of a decision support system to support the necessary actions which have to be taken to improve and develop operations.
Based on this objective, the following research tasks have been defined:
• To investigate the expenses financed in education;
• To determine a model of influence of expense item 'Energy' on excellence of school operation; and
• To determine a model of ranking schools by levels of the excellence of school operation.
Fundamental hypotheses of the research
H0 - Creation of a model of influence of expense item 'Energy' on excellence of school operation is possible;
H1 - There is a large quantity of operative data stored in a digital form; and
H2 - Data processing as a quick response to operative changes is possible.
The validity of H0 is to be proven by creating a model of influence of expense item 'Energy' on excellence of school operation.
The validity of H1 is to be proven during the analysis of school financing and during implementation of the model.
The validity of H2 is to be proven or denied by the results of the implemented model.
Research methodology
The proposed research is the first scientific research conducted in Croatian practice that deals with the creation of a model of influence of the expense item 'Energy' on the excellence of school operation. From studying the available literature, it is clear that this area has not been researched so far. The most relevant research in this area is found in Stoiljković & Stoiljković (2006), Erić, Stefanović, & Stevanović (2006) and Gelo (2010). In Gelo (2010), energy, or more precisely energy indicators, is used as an indicator of country development, while Erić et al. (2006) and Stoiljković & Stoiljković (2006) explain reengineering and process improvement.
This study incorporated the Pareto principle and quantitative research methodology. The Pareto principle, also known as the 80/20 principle, asserts that a minority of causes, inputs or effort usually leads to a majority of the results, outputs or rewards (Koch, 1998).
The quantitative approach relies on a theory or hypothesis focused on a certain form of measurement or classification of variables, and is part of the quantitative (explanatory) paradigm which, together with the qualitative paradigm (the paradigm of understanding), constitutes the scientific paradigms in education research (Mužić, 2004; Tkalac Verčić, Sinčić Ćorić, & Pološki Vokić, 2010).
In addition to the Pareto principle, the research will use descriptive statistics in quartile, correlation and regression analysis, using MS Excel 2010 (Grčić, 2004), (Papić, 2014).
The analysis
School expense financial plans can be presented at the following three levels: group, subgroup and position (Rulebook on Budgetary Accounting and Account Plan, Croatian Official Gazette, 2010). Group and sub-group details are aggregate and are not operative. Details of position are detailed; they serve in operative management and are shown as a 4-digit number (Rulebook on Budgetary Accounting and Account Plan, Croatian Official Gazette, 2010).
Group-level data show the structure of the financial plan and the main types of expenses, which include group 32, Material Expenses, and group 42, Expenses for the Procurement of Non-financial Assets. This study aims to recognise the influence and significance of budget expenses shown under position and presented as a 4-digit number (The Varaždin County Budget, Official Journal of the Varaždin County, 2014), including their description, using the Pareto principle, whose analysis demonstrates which expenses, in amounts and percentages, participate in school operation.
The Pareto principle is a tool that identifies the relative importance of the data of the execution of the financial plan. Execution of a financial plan is monitored at the section level (The Varaždin County Budget, Official Journal of the Varaždin County, 2014). Education financing plans for secondary education and primary education show expenses which can be influenced by good business practice and subsequently improved, but there are also those whose purpose is specifically defined by the law, collective agreements and other applicable legislation and their improvement is not possible (Huđek, 2014). These expenses, which cannot be influenced, are excluded from further analysis of financing education.
There are 19 expenses in secondary education whose improvement can be influenced (Table 1); they are analysed according to the Pareto principle.
Vilfredo Pareto introduced the concept of distribution (the 80-20 rule), according to which 20% of the sample causes 80% of the consequences, and established the principle of progress, according to which progress as a distribution causes improvement for one party without at the same time causing any harm or damage to the other (Grosfeld-Nir, Ronen, & Kozlovsky, 2007). Craft and Leake performed a study to determine whether the heuristic approach of the Pareto rule is applicable in a decision-making process (Craft & Leake, 2002). The survey outlined that a decision-making process based on an accepted management heuristic allows easy and speedy decisions on complex issues, increases the probability of value being returned to the organisation, and is consistent with upper management's perceptions of the value returned by individual funded projects (Craft & Leake, 2002). The results of the Pareto analysis for secondary education, presented in Table 1, show that expense '3223 Energy' in secondary school operation accounts for 51.4% of financial assets, and that, together with '3234 Utility Expenses', '3221 Office Stationery', '3222 Material and Raw Materials' and '3231 Telephone, Post Office and Transport Services', it accounts for 80.2% of financial assets. These expenses represent 20% of total expense items, and giving them priority for corrective action, by activating the principle of progress, will result in improvements. The analysis confirms that the expense item '3223 Energy' is of the utmost importance. The financial resources shown in Table 1 are used for financing decentralised functions in secondary schools. As the criterion for distribution/financing of secondary schools the number of classes was selected, and the scale is defined using a polynomial model where x is the number of classes and y is the annual financial amount for the relevant school (Huđek, 2014).
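A sketch of the Pareto bookkeeping is given below; the amounts are illustrative only (chosen to reproduce the reported shares of 51.4% and 80.2% for secondary education), while the position names are those listed above:

```python
# Sketch (hypothetical figures): ranking expense positions by share of total
# cost and flagging the ~20% of items that cover ~80% of spending, as in the
# Pareto analysis of Tables 1 and 2.
expenses = {             # position: annual amount (illustrative values only)
    "3223 Energy": 514_000,
    "3234 Utility Expenses": 120_000,
    "3221 Office Stationery": 90_000,
    "3222 Material and Raw Materials": 50_000,
    "3231 Telephone, Post Office and Transport Services": 28_000,
    "other positions (combined)": 198_000,
}
total = sum(expenses.values())
cumulative = 0.0
for name, amount in sorted(expenses.items(), key=lambda kv: -kv[1]):
    cumulative += amount
    share = 100 * amount / total
    cum_share = 100 * cumulative / total
    flag = "  <= priority (within first ~80%)" if cum_share <= 80.5 else ""
    print(f"{name:<55s} {share:5.1f}%  cum {cum_share:5.1f}%{flag}")
```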
Primary education expenses whose improvement can be influenced are shown in Table 2; there are 21 of them, and they are analysed according to the Pareto principle.
The results of the Pareto analysis for primary education, presented in Table 2, show that expense '3223 Energy' in primary school operation accounts for 60.52% of financial assets, and that, together with '3221 Office Stationery', '3234 Utility Expenses' and '3231 Telephone, Post Office and Transport Services', it accounts for 80.43% of financial assets. These expenses are a priority for corrective action to improve operation in primary education. The analysis confirms that the expense item '3223 Energy', just as in the case of secondary education, is of the utmost importance. The financial resources shown in Table 2 are used for financing decentralised functions in primary schools. As the criterion for distribution/financing of primary schools the number of classes was selected, and the scale is defined using the polynomial model y = −361.09x² + 58,005x + 7,736, where x is the number of classes and y is the annual financial amount for the related school (Huđek, 2014).
Implementation
The implementation of a model of influence of expense item 'Energy' on excellence of school operation will use descriptive statistics in quartile, correlation and regression analysis by using MS Excel 2010.
The previous section has shown that financing of primary and secondary education includes expenses which can be influenced by the operation of schools themselves. The Pareto chart has shown that there are five expenses for secondary schools and four for primary schools which are subject to the 80-20 Pareto rule. Thus, excellence of business operation will be defined according to the ratio of managing these expenses. In this way, monitoring and improvements applied to these expenses result in more efficiency and excellence in operation, because the remaining '80%' of expenses represent '20%' of costs. Monitoring and improvements applied to over 20% expenses would require the same procedures and activities, but the efficiency would be much lower.
Also, it was determined that, amongst all of them, expense '3223 Energy' represents the highest financial amount and is of the utmost importance.
As proven, in addition to expense item '3223 Energy' in primary education, amounting to 60.52% of costs, there are also '3221 Office Stationery', '3234 Utility Expenses' and '3231 Telephone and Post Office Services'. These total 19.91% of overall costs. In secondary education, in addition to expense item '3223 Energy', amounting to 51.4% of costs, there are also '3234 Utility Expenses', '3221 Office Stationery', '3222 Material and Raw Materials' and '3231 Telephone and Post Office Services'. These items total 28.9% of overall costs. These expenses represent the criteria used to determine the excellence of school operation. The measuring unit is the monthly financial amount of the expenses stated earlier relative to the number of classes, which is the criterion for financing schools (Huđek, 2014). Table 3 shows, for secondary schools, the monthly amounts per class for the expenses subject to the Pareto rule, in particular for expense item '3223 Energy'.
A measure of spread is used to describe the variability in a sample or population and gives an idea of how well the mean represents the data. In our case, with a population of 33 primary and 13 secondary schools, the optimum is to use quartiles, because the spread of data within each quartile is small and the mean is representative. This means that improvement is managed by quartile, with priority given to the schools in the 4th quartile. The same principle applies to the schools in the 1st quartile, except that these schools represent the excellence of school operation.
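A sketch of the quartile grouping described above, using hypothetical monthly energy costs per class, is given below; the actual figures are those of Tables 3 and 4:

```python
# Sketch: splitting schools into four quartile groups by the monthly '3223
# Energy' cost per class, as in Tables 3 and 4 (values below are hypothetical).
import numpy as np

energy_per_class = {     # school: monthly energy cost per class (illustrative)
    "School A": 820, "School B": 1040, "School C": 1310, "School D": 990,
    "School E": 1520, "School F": 760, "School G": 1190, "School H": 1730,
}
values = np.array(list(energy_per_class.values()))
q1, q2, q3 = np.percentile(values, [25, 50, 75])

def quartile(v):
    return 1 if v <= q1 else 2 if v <= q2 else 3 if v <= q3 else 4

for school, v in sorted(energy_per_class.items(), key=lambda kv: kv[1]):
    print(f"{school}: {v} -> quartile {quartile(v)}")
# Quartile 1 = lowest cost per class (highest operational excellence),
# quartile 4 = priority for improvement.
```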
Expenses for schools have been distributed into four groups (quartiles) (Papić, 2014). The groups are graphically presented so that green represents the 1st quartile, yellow the 2nd, yellow-red the 3rd and red the 4th.

y = 329.94x² + 3,850.6x + 341,704

The regression analysis shows that the coefficient of correlation between the financial value per class and the quartile for expense item '3223 Energy' amounts to 0.95064967, which indicates a strong correlation. It also shows that the coefficient of determination, as the indicator of regression model representativeness, amounts to 0.90373479, meaning that 90.37% of the relationship between financial values and quartiles can be demonstrated by a linear model.
where variable x is the financial value and variable y is the quartile used to determine excellence. Figure 1 shows a polynomial model which demonstrates that 96.71% of the relationship between the financial expense item '3223 Energy' and the quartiles determining excellence of business operation and the rank of the school can be represented by the polynomial model shown in the figure, where variable x is the financial value of expense item '3223 Energy' per class for each secondary school, and variable y is the quartile used to determine the excellence and rank of business operation of each secondary school. Figure 2 shows a polynomial model which demonstrates that 85.62% of the relationship between the financial expense item '3223 Energy' and the Pareto quartile determining excellence of business operation can be represented by the polynomial model shown in the figure, where variable x is the financial value of expense item '3223 Energy' per class for each secondary school, and variable y is the quartile used to determine the 'Pareto' excellence of each secondary school. Such a high level of model representativeness indicates the significant impact of expense item 'Energy' on the excellence of secondary school operation. Table 4 shows, for primary schools, the monthly amounts per class for the expenses subject to the Pareto rule, in particular for expense item '3223 Energy'. Expenses for schools have been distributed into four groups (quartiles) (Papić, 2014). The groups are graphically presented so that green represents the 1st quartile, yellow the 2nd, yellow-red the 3rd and red the 4th.
The regression analysis shows that the correlation coefficient between the financial value per class and the quartile for expense item '3223 Energy' amounts to 0.939577406, which indicates a strong correlation. The coefficient of determination, as the indicator of regression model representativeness, amounts to 0.882805701, meaning that 88.28% of the relationship between financial values and quartiles can be explained by a linear model, where variable x is the financial value and variable y is the quartile used to determine the excellence and rank of a school.
Figure 3 shows a polynomial model demonstrating that 90.09% of the relationship between expense item '3223 Energy' and the quartiles determining excellence of business operation can be represented by that model, in which variable x is the financial value of expense item '3223 Energy' per class for each primary school, and variable y is the quartile used to determine the excellence and rank of business operation of each primary school.

Figure 4 shows a polynomial model demonstrating that 83.21% of the relationship between expense item '3223 Energy' and the Pareto quartile determining excellence of business operation can be represented by that model, in which variable x is the financial value of expense item '3223 Energy' per class for each primary school, and variable y is the quartile used to determine the 'Pareto' excellence of each primary school.
Such a high level of model representativeness, 85.72%, indicates a significant impact of expense item 'Energy' on the excellence of primary school operation.
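The models in Figures 1 to 4 are quadratic (second-degree polynomial) fits whose representativeness is the coefficient of determination R². The snippet below is a minimal sketch of how such a fit and its R² can be computed; the data pairs are hypothetical placeholders, not the published figures.

```python
# Minimal sketch: fit y = a*x^2 + b*x + c by least squares and report R².
# Hypothetical data pairs (energy cost per class vs. quartile rank).
import numpy as np

x = np.array([1180.0, 1420.0, 1650.0, 1895.0, 2110.0, 2480.0, 2730.0, 3050.0])
y = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)

a, b, c = np.polyfit(x, y, deg=2)          # quadratic coefficients, highest degree first
y_hat = a * x**2 + b * x + c               # model predictions

ss_res = np.sum((y - y_hat) ** 2)          # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
r_squared = 1.0 - ss_res / ss_tot          # model representativeness

print(f"y = {a:.3e}*x^2 + {b:.3e}*x + {c:.3f}   (R^2 = {r_squared:.4f})")
```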
Conclusion
There is no other research such as that proposed in this study, and the study has resulted in the following contributions:

1. Research and a comprehensive study of expenses at the level of position, which are to be financed in education, were conducted on a sample comprising 33 primary and 13 secondary schools funded by Varaždin County. The research demonstrated that there are 27 expense items in both secondary and primary education. The Pareto principle was used to analyse the percentage of each expense in school operation. The conducted research and completed analysis have proven the validity of hypothesis H1, that there is a large quantity of operative data stored in digital form.

2. Research and a comprehensive study of the expenses allowing for improvements were conducted. In secondary education improvements are possible for 19 expense items, and in primary education for 21. The Pareto principle was used to analyse the percentage of each expense in school operation, and it was determined that, when it comes to financing, expense item '3223 Energy' is of utmost importance in both primary and secondary education. The conducted research and completed analysis have proven the validity of hypothesis H1, that there is a large quantity of operative data stored in digital form.

y = −9E−07x² + 0.0054x − 3.1122

3. A scale for financing secondary schools was determined by implementing a polynomial model, where x represents the number of classes in a secondary school and y the annual financial amount for school operation (Huđek, 2014). Model representativeness is 85.72%. A scale for financing primary schools was likewise determined by implementing a polynomial model, where x represents the number of classes in a primary school and y the annual financial amount for school operation (Huđek, 2014). Model representativeness is 85.67%.

4. A model of the influence of expense item 'Energy' on the excellence of school operation was determined and created. For secondary schools it was determined by implementing a polynomial model with a representativeness of 96.71% (Figure 1); the model for primary schools was determined by implementing a polynomial model with a representativeness of 90.09% (Figure 3). The regression analysis of the samples confirmed the validity of the samples in Tables 3 and 4 for secondary and primary schools, respectively. The determination and creation of these models have proven the validity of hypothesis H0, that the creation of a model of the influence of expense item 'Energy' on the excellence of school operation is possible.

5. A model of ranking schools by levels of operational excellence was determined and created. The ranking was conducted by implementing quartile analysis and a polynomial model. The ranking of secondary schools is shown in Table 3 and the related model in Figure 2; the ranking of primary schools is shown in Table 4 and the related model in Figure 4. The determination and creation of these models, and the resulting ranking of schools by levels of excellence, have proven the validity of hypothesis H2, that data processing as a quick response to operative changes is possible, because any change relating to the scope of activities depending on the number of classes (the criterion for financing), or relating to energy consumption, may influence the results attributed to the excellence of school operation.

From a practical aspect, the model and its results define priorities for improvement.
The priorities are the schools of the 4th quartile, which are to be included in the operative programmes of improvement in the areas of energy efficiency and renewable energy sources. The same approach to expense item 'Energy' and to the determination of operational excellence is also used to identify the schools that are set as examples of excellence.

y = −361.09x² + 58,005x + 7,736
y = 329.94x² + 3,850.6x + 34,704
y = 3E−06x² − 0.0056x + 3.3122
y = −1E−06x² + 0.0059x − 3.4781

Table 5 shows the threshold values per quartile for the entire primary and secondary school system. Figure 5 demonstrates a model of the impact of expense item 'Energy' on the excellence of business operation for both primary and secondary schools.
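As a minimal sketch of how such per-quartile threshold values can be applied in practice, the snippet below classifies a school by its monthly energy amount per class. The threshold figures are hypothetical, not those published in Table 5.

```python
# Minimal sketch: rank a school by comparing its monthly '3223 Energy' amount
# per class against per-quartile thresholds (hypothetical values, in HRK).
THRESHOLDS = [1500.0, 2000.0, 2600.0]   # upper bounds of quartiles 1-3

def excellence_quartile(energy_per_class: float) -> int:
    """Map a monthly energy amount per class to its excellence quartile (1-4)."""
    for rank, bound in enumerate(THRESHOLDS, start=1):
        if energy_per_class <= bound:
            return rank
    return 4   # above all thresholds: 4th quartile, first priority for improvement

print(excellence_quartile(1320.0))   # -> 1, an example of excellence
print(excellence_quartile(2840.0))   # -> 4, priority for energy-efficiency measures
```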
Disclosure statement
No potential conflict of interest was reported by the authors.