Optimally Distributed Kalman Filtering with Data-Driven Communication

For multisensor data fusion, distributed state estimation techniques that enable local processing of sensor data are the means of choice for minimizing storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to every node in order to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering the consistency of the fusion result. In addition, two data-driven algorithms are introduced that allow for even lower transmission rates, and bounds are derived that guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter, which requires each measurement to be sent to the center node.

Introduction

The efficient processing of sensor data is a central topic in a wide variety of research areas, which is underlined by advances in sensor technology and capabilities, e.g., for odor [1] and taste recognition [2], by advances in visual information processing [3], and by applications in robotics [4] and sensor networks [5,6]. In particular, to process data from multiple sensors, the well-known Kalman filter [7] has evolved into a key component of data fusion algorithms. Multisensor data can be transmitted directly to a data sink that employs a centralized Kalman filter to process all the accrued sensor readings.
Such a simple filter design stands in stark contrast to the communication costs expended to transmit the data. The idea behind distributed Kalman filter implementations is to use local processing power to combine and condense sensor data locally so that the processing results can be transmitted more efficiently and less frequently. As the local processing results comprise the information from all past observations, the results can be sent to the data sink after arbitrarily many time steps without losing information from past measurements. As compared to a centralized Kalman filter, distributed implementations reduce the communication load, but the optimally distributed Kalman filter (ODKF) suffers from two limitations: 1. all nodes have to send their data at the same time, and 2. the central node cannot infer any information about the state between the sending times. Hence, the standard implementation of the ODKF implies transmissions of either all or none of the nodes. As a further consequence, full-rate communication of the nodes is required if the data sink needs an estimate at every time step.

In this article, extensions of the ODKF are proposed that can operate under lower communication rates. This is achieved by introducing data-driven transmission strategies [29,30]. In particular, the local estimates can be transmitted asynchronously to the data sink. In order to guarantee consistency, bounds on the estimation errors are provided. These bounds are only required in situations when not every local estimate is available at the data sink; the optimal estimate as provided by a central Kalman filter is still obtained each time all local estimates have been sent to the data sink. As compared to the standard formulation of the ODKF, the advantage of the proposed extension is that the data sink can now compute an estimate of the state based on a subset of local estimates. This article continues the work in [31] by introducing an additional criterion for the data-driven transmission strategy, providing more details, and extending the evaluation and discussion.
The paper is structured as follows. Section 2 provides a description of the centralized and the optimally distributed Kalman filter as well as a problem formulation. In Section 3, the first extension is introduced, which enables the data sink to treat missing estimates. In Sections 4 and 5, we describe the second and the third new distributed algorithm, respectively, which implement data-driven transmission schemes and allow for estimates omitted at the fusion center over multiple time steps. The results of an experimental evaluation are discussed in Section 6. Finally, the article is concluded in Section 7.

Centralized and Optimally Distributed Kalman Filtering

We consider a sensor network with N local sensor nodes and a central node, which serves as a data sink and computes an estimate of the state. The true state of the system at time step k is denoted by x_k, which evolves according to the discrete-time linear dynamic system

x_{k+1} = A_k x_k + w_k,

where A_k is the system matrix and w_k denotes the process noise, which is assumed to be zero-mean Gaussian noise, w_k ∼ N(0, C^w_k), with covariance matrix C^w_k. At each time step k, each sensor i ∈ {1, ..., N} observes the state through the model

z^i_k = H^i_k x_k + v^i_k,

where H^i_k is the measurement matrix and v^i_k the measurement noise, which is assumed to be Gaussian noise with zero mean, v^i_k ∼ N(0, C^{z,i}_k), with covariance matrix C^{z,i}_k. The measurement noise terms of different local sensors are assumed to be mutually uncorrelated. Also, the process and measurement noise terms for different time steps are uncorrelated.
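The system and measurement models above translate directly into a short simulation. This is an illustrative sketch only; the matrices below are made-up examples, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D state (position, velocity) and two scalar sensors.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # system matrix A_k
C_w = 0.1 * np.eye(2)               # process noise covariance C^w_k
H = [np.array([[1.0, 0.0]]),        # sensor 1 measures the position
     np.array([[0.0, 1.0]])]        # sensor 2 measures the velocity
C_z = [np.array([[0.5]]), np.array([[0.5]])]

x = np.zeros(2)                     # true state x_k
for k in range(10):
    # state transition: x_{k+1} = A_k x_k + w_k
    x = A @ x + rng.multivariate_normal(np.zeros(2), C_w)
    # each sensor i observes z^i_k = H^i_k x_k + v^i_k
    z = [Hi @ x + rng.multivariate_normal(np.zeros(1), Ci)
         for Hi, Ci in zip(H, C_z)]
```

Each node keeps its own measurement sequence; in the centralized setting every `z[i]` would be shipped to the data sink at every step.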
For the centralized Kalman filter (CKF), each measurement is sent to the data sink and fused by means of the formulas

(C^{e,c}_k)^{-1} x̂^{e,c}_k = (C^{p,c}_k)^{-1} x̂^{p,c}_k + Σ_{i=1}^{N} (H^i_k)^T (C^{z,i}_k)^{-1} z^i_k,    (1)
(C^{e,c}_k)^{-1} = (C^{p,c}_k)^{-1} + Σ_{i=1}^{N} (H^i_k)^T (C^{z,i}_k)^{-1} H^i_k.    (2)

These equations correspond to the information form [32,33] of the measurement update of the standard Kalman filter. x̂^{e,c}_k and C^{e,c}_k denote the state estimate and the corresponding error covariance matrix after the fusion step, respectively. x̂^{p,c}_k and C^{p,c}_k denote the state estimate and the corresponding error covariance matrix computed by the prediction step of the Kalman filter. The prediction step is carried out at the center node by

x̂^{p,c}_k = A_{k-1} x̂^{e,c}_{k-1},    C^{p,c}_k = A_{k-1} C^{e,c}_{k-1} A^T_{k-1} + C^w_{k-1},

where these formulas can also be given in the information form [26,32]. Since these equations correspond to the standard Kalman filter, the CKF is unbiased and optimal with respect to the minimum mean squared error. In particular, the computed error covariance matrix is equal to the actual estimation error covariance, i.e.,

C^{e,c}_k = E[(x̂^{e,c}_k − x_k)(x̂^{e,c}_k − x_k)^T].    (3)

In [20][21][22][23], a distributed implementation of the Kalman filter algorithm has been derived that is algebraically equivalent to the centralized scheme, i.e., it is also unbiased, minimizes the mean squared error, and fulfills (3). This is achieved by defining a local filtering scheme such that the fusion result is equal to the results (1) and (2) of the CKF. We will describe this algorithm, the optimally distributed Kalman filter (ODKF), in the following.

The local sensor nodes run modified versions of the Kalman filtering algorithm. They use so-called globalized local state estimates x̄^{e,i}_k and error covariance matrices C̄^e_k (although, strictly speaking, the local estimate and covariance matrix do not represent consistent estimates of the state, we denote them as local estimates). To initialize the ODKF, the local initial estimates and covariance matrices (x̂^{e,i}_0, C^{e,i}_0) at the sensor nodes i ∈ {1, ..., N}, which are usually identical, have to be replaced by the globalized estimates x̄^{e,i}_0 and the globalized covariance matrix C̄^e_0 = N C^{e,i}_0. Since the globalized error covariance matrix is equal for each sensor, the sensor index i is omitted.
This equality also applies to all future time steps. The local prediction step is replaced by the globalized prediction equations

x̄^{p,i}_k = A_{k-1} x̄^{e,i}_{k-1},    C̄^p_k = A_{k-1} C̄^e_{k-1} A^T_{k-1} + N C^w_{k-1}.    (4)

The local filtering steps are globalized by

(C̄^e_k)^{-1} = (C̄^p_k)^{-1} + (1/N) Σ_{j=1}^{N} (H^j_k)^T (C^{z,j}_k)^{-1} H^j_k,
x̄^{e,i}_k = C̄^e_k ( (C̄^p_k)^{-1} x̄^{p,i}_k + (H^i_k)^T (C^{z,i}_k)^{-1} z^i_k ).    (5)

The processing steps (4) and (5) are computed locally on each sensor node. Hence, measurements are not directly transmitted to the central node; instead, the local estimates are sent to the central node. As the globalized covariance matrix is equal for each node, it can also be computed in the central node. In order to compute an estimate at an arbitrary time step k, the central node receives the globalized estimates (x̄^{e,i}_k, C̄^e_k) from each sensor i and fuses them according to

x̂^{e,d}_k = C^{e,d}_k Σ_{i=1}^{N} (C̄^e_k)^{-1} x̄^{e,i}_k,    (6)
(C^{e,d}_k)^{-1} = Σ_{i=1}^{N} (C̄^e_k)^{-1}.    (7)

x̂^{e,d}_k denotes the state estimate after the fusion step at the data sink with corresponding error covariance matrix C^{e,d}_k. From Equations (6) and (7), we can readily see that

x̂^{e,d}_k = (1/N) Σ_{i=1}^{N} x̄^{e,i}_k,    C^{e,d}_k = (1/N) C̄^e_k.

The same equations apply to the fusion of the predicted estimates and error covariance matrices in (4). In [20], it has been shown that the results are optimal, i.e., x̂^{e,d}_k = x̂^{e,c}_k, where x̂^{e,c}_k and C^{e,c}_k are computed by the CKF, i.e., by (1) and (2). The disadvantage of the centralized Kalman filter is that each sensor node has to transmit the measurements of every time step to the central node. For the ODKF, we observe that communication in past time steps does not influence x̂^{e,d}_k and C^{e,d}_k, i.e., the equalities

x̂^{e,d}_k = x̂^{e,c}_k,    C^{e,d}_k = C^{e,c}_k    (8), (9)

hold independently of the past communication pattern in the distributed network. As a consequence of (8) and (9), we can see that

C^{e,d}_k = E[(x̂^{e,d}_k − x_k)(x̂^{e,d}_k − x_k)^T],    (10)

which is equal to (3); the ODKF is optimal. A significant drawback of the ODKF implementation becomes apparent in the following situation.
If only m < N sensors transmit their estimates to the fusion center at time step k, Equations (6) and (7) become

x̂^{e,d}_k = C^{e,d}_k Σ_{i=1}^{m} (C̄^e_k)^{-1} x̄^{e,i}_k,
(C^{e,d}_k)^{-1} = Σ_{i=1}^{m} (C̄^e_k)^{-1}.    (11)

The resulting ODKF estimate x̂^{e,d}_k and error covariance matrix C^{e,d}_k are different from the centralized estimate x̂^{e,c}_k and error covariance matrix C^{e,c}_k, which are

(C^{e,c}_k)^{-1} x̂^{e,c}_k = (C^{p,c}_k)^{-1} x̂^{p,c}_k + Σ_{i=1}^{m} (H^i_k)^T (C^{z,i}_k)^{-1} z^i_k,    (12)
(C^{e,c}_k)^{-1} = (C^{p,c}_k)^{-1} + Σ_{i=1}^{m} (H^i_k)^T (C^{z,i}_k)^{-1} H^i_k.    (13)

In particular, we notice that the covariance matrix (13) differs from the ODKF covariance matrix in (11). A consequence of this mismatch is a possible bias in the fused estimate as discussed, e.g., in [24]; hence, the ODKF may provide inconsistent estimates in case of missing transmissions. This issue will be addressed in the following sections. Although the CKF does not suffer from inconsistency, (12) and (13) unveil a critical downside of the CKF: missing measurements at time step k are lost for all future time steps. By contrast, the local estimates of the ODKF incorporate past measurements, which is the reason why the ODKF may outperform the CKF if we can solve the inconsistency problem of the ODKF.

In this section, the ODKF has been revisited; it provides the same results as the CKF but offers the advantage that transmissions can take place at arbitrary instants of time. However, the ODKF still requires that all nodes send their local estimates at the transmission times to compute (6) and (7). As a consequence, the data sink typically operates at a lower rate than the local nodes as it is idle between transmission times. In the following sections, extensions are provided that enable the nodes to transmit their local estimates asynchronously. The data sink can then operate at the same rate as the sensor nodes, i.e., it is able to provide an estimate at every time step k. By employing data-driven strategies, the communication rate of each node can be significantly lower than 1, where the value 1 corresponds to transmissions at every time step k.
In Section 3, we develop a consistent ODKF extension that can cope with situations where sensor nodes send their estimates only every second time step. This algorithm still provides results equal to the CKF. With this algorithm, we are able to reduce the communication rate by half. Sections 4 and 5 introduce a second and a third algorithm that can reach even lower communication rates by applying bounds on the missing pieces of information.

Distributed Kalman Filtering with Omitted Estimates

We consider the ODKF algorithm as described in the previous section. At time step k, only sensor nodes 1, ..., m send their estimates to the fusion center, while the estimates of sensor nodes m + 1, ..., N are not transmitted. In this section, we assume that the data from nodes m + 1, ..., N were available at the fusion center at time step k − 1. Thus, the fusion center can compute the predicted estimates x̄^{p,m+1}_k, ..., x̄^{p,N}_k for time step k by using (4). In place of the ODKF fusion Equations (6) and (7), the fusion result is now computed by

(C^{e,d}_k)^{-1} = N (C̄^p_k)^{-1} + Σ_{i=1}^{m} (H^i_k)^T (C^{z,i}_k)^{-1} H^i_k,    (14)
x̂^{e,d}_k = C^{e,d}_k ( Σ_{i=1}^{m} (C̄^e_k)^{-1} x̄^{e,i}_k + Σ_{i=m+1}^{N} (C̄^p_k)^{-1} x̄^{p,i}_k ).    (15)

The resulting estimate x̂^{e,d}_k and error covariance matrix C^{e,d}_k are equal to the estimate x̂^{e,c}_k and error covariance matrix C^{e,c}_k computed by a centralized Kalman filter according to (12) and (13). A proof of this equality is provided in Appendix A. Since (14) and (15) are equivalent to the CKF, unbiasedness, optimality, and (10) are accordingly inherited from the CKF. We have generalized the original ODKF algorithm such that full-rate communication is no longer required. The novel fusion algorithm merely requires that if a particular sensor does not communicate with the center at time k, it has sent its data at time k − 1, i.e., each sensor has to communicate with the center at least every second time step. Hence, the required communication rate can be reduced by half to 0.5.
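The algebraic equivalence between the distributed fusion and the centralized information-form update can be checked numerically for a single time step in which all nodes transmit. This sketch assumes the globalization scheme described in Section 2 (globalized prior covariance N·C^p, measurement information scaled by 1/N); all matrices and measurements are made-up examples, not the article's setup.

```python
import numpy as np

N = 3                                    # number of sensor nodes
x_p = np.array([1.0, -1.0])              # common predicted estimate
C_p = np.array([[2.0, 0.3], [0.3, 1.0]])

# hypothetical sensor models and measurements
Hs  = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]), np.array([[1.0, 1.0]])]
Czs = [np.array([[0.5]])] * 3
zs  = [np.array([1.2]), np.array([-0.8]), np.array([0.1])]

# --- globalized local filtering (one scheme consistent with the text) ---
Cp_g = N * C_p                           # globalized prior covariance
info_g = np.linalg.inv(Cp_g) + sum(
    H.T @ np.linalg.inv(Cz) @ H for H, Cz in zip(Hs, Czs)) / N
Ce_g = np.linalg.inv(info_g)             # identical at every node
x_locals = [Ce_g @ (np.linalg.inv(Cp_g) @ x_p + H.T @ np.linalg.inv(Cz) @ z)
            for H, Cz, z in zip(Hs, Czs, zs)]

# --- fusion at the data sink reduces to simple averaging ---
x_fused = sum(x_locals) / N
C_fused = Ce_g / N

# --- centralized reference (information form) ---
info_c = np.linalg.inv(C_p) + sum(
    H.T @ np.linalg.inv(Cz) @ H for H, Cz in zip(Hs, Czs))
C_c = np.linalg.inv(info_c)
x_c = C_c @ (np.linalg.inv(C_p) @ x_p + sum(
    H.T @ np.linalg.inv(Cz) @ z for H, Cz, z in zip(Hs, Czs, zs)))

assert np.allclose(x_fused, x_c) and np.allclose(C_fused, C_c)
```

The fused result matches the centralized one to machine precision, which is exactly the optimality property the full-rate ODKF relies on.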
However, a higher communication rate, and thus the incorporation of the information contained in additional measurements, will always result in a lower mean squared error (MSE). Thus, we have to deal with the trade-off between a low communication rate and a low MSE. Nevertheless, it is possible to achieve a smaller MSE while keeping the same communication rate by using a data-driven communication strategy, i.e., by scheduling the data according to the information it contains. Valuable results have already been achieved using data-driven communication in distributed sensor networks [34][35][36][37][38][39][40][41][42]. The idea is that each local sensor evaluates the distance between the predicted estimate x̂^{p,i}_k and the filtered estimate x̂^{e,i}_k. If the distance is large, the measurement z^i_k adds much new information to the prediction. Only in this case should the sensor send its current estimate to the center node. It is important to emphasize that the globalized parameters x̄^{p,i}_k and x̄^{e,i}_k are not unbiased estimates of the actual state. It can be shown [24] that, in contrast to the difference between the standard Kalman filter estimates, the difference x̄^{e,i}_k − x̄^{p,i}_k is not zero on average, but may even diverge. Therefore, in order to evaluate the influence of a measurement z^i_k, we study the difference between the predicted and updated estimates of the standard Kalman filter, which is related to the weighted difference between the measurement and the prediction, i.e.,

x̂^{e,i}_k − x̂^{p,i}_k = K^i_k (z^i_k − H^i_k x̂^{p,i}_k),

where K^i_k denotes the standard Kalman gain. For this purpose, the standard Kalman filtering algorithm has to run in parallel to the globalized version of the Kalman filter at each sensor node. The following data-driven communication strategy can be applied:

if ‖x̂^{e,i}_k − x̂^{p,i}_k‖ ≤ α: do not send the estimate to the fusion center,
else: send the estimate to the fusion center,    (16)

where α is a user-defined threshold. Still, experiments (see Section 6) will show that by using the data-driven communication strategy instead of random communication, an improvement of the MSE of the fused estimate x̂^{e,d}_k can be achieved for a fixed communication rate.
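The send decision of the data-driven strategy reduces to a threshold test on the distance between the locally filtered and predicted estimates. A minimal sketch (the function name and threshold value are illustrative):

```python
import numpy as np

def should_transmit(x_e, x_p, alpha):
    """Data-driven send decision: transmit only if the filtered estimate
    deviates from the prediction by more than the user-chosen threshold
    alpha, i.e., if the new measurement carries enough new information."""
    return np.linalg.norm(x_e - x_p) > alpha
```

A node would evaluate this test at every time step with the estimates of its parallel standard Kalman filter; a small alpha yields frequent transmissions, a large alpha a low communication rate.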
However, a drawback of the algorithm is the assumption that if a particular sensor does not communicate with the center at time k, it has communicated at time k − 1, i.e., each sensor communicates with the center at least every other time step. Thus, communication rates lower than 0.5 cannot be achieved. This will be addressed by the following extensions.

Data-Driven Distributed Kalman Filtering with Omitted Estimates over Multiple Time Steps - Version 1

If we want to achieve communication rates lower than 0.5 in the sensor network, we have to allow that a particular sensor does not send its estimates to the fusion center over multiple time steps. In this case, the fusion center has to perform multiple consecutive predictions. Let us assume that the last communication of sensor i with the center occurred at time k − l. The predicted estimate for time step k is then computed by applying the prediction Equation (4) l times in succession. Note that the predicted estimates are now marked with "pp" instead of "p" to emphasize that possibly multiple prediction steps were applied consecutively. In case prediction has been performed only once, we have x̄^{pp,i}_k = x̄^{p,i}_k. The new estimate x̂^{e,d}_k can be expressed in terms of the estimate x̂^{e,d}_k from (15) by replacing the one-step predictions in the second sum with the multi-step predictions:

x̂^{e,d}_k = C^{e,d}_k ( Σ_{i=1}^{m} (C̄^e_k)^{-1} x̄^{e,i}_k + Σ_{i=m+1}^{N} (C̄^{pp,i}_k)^{-1} x̄^{pp,i}_k ).    (17)

For the yet to be defined triggering criterion, we consider the distance d^i_k between the multi-step prediction x̄^{pp,i}_k, which each node can reproduce locally, and its current local estimate. The expected estimation error covariance matrix then contains additional terms in d^i_k. Obviously, the expected estimation error cannot be computed exactly at the fusion center, since the difference d^i_k is not available there. Nevertheless, it is possible to obtain an upper bound on the estimation error if we alter the communication test (16): we ensure that, whenever no transmission takes place, the matrix d^i_k (d^i_k)^T is bounded by a user-defined symmetric positive definite matrix B, i.e., d^i_k (d^i_k)^T ≤ B. An alternative possibility to avoid the divergence of the difference x̄^{pp,i}_k − x̄^{p,i}_k is to debias the local globalized estimates. A strategy to debias the estimates using debiasing matrices has been proposed in [24,25].
In each prediction and filtering step, each local node computes a new debiasing matrix, which is initialized by ∆^{p,i}_0 = I and updated in every filtering and prediction step [24,25]. The matrix ∆^{pp,i}_k is computed by applying the prediction update of the debiasing matrix multiple times until the next communication with the center node occurs. By multiplying the globalized estimates by the inverse of the debiasing matrix, we can debias the estimates [24,25]; the same applies to the predicted estimate over multiple time steps. We now define x̄^{pp,i}_k as the debiased multi-step prediction; thus, in general, the difference x̄^{pp,i}_k − x̄^{p,i}_k does not vanish. We can now define the new fusion equations analogously to (14) and (15), where m is the number of sensors that communicate with the center at time k and l is the number of sensors that do not communicate with the center at time k but for which x̄^{pp,i}_k − x̄^{p,i}_k = 0 holds. Note that the fusion formulas are equal to (14) and (15) for N = m + l, i.e., for the case that each sensor sends its estimate to the center at least every other time step. The resulting estimate is consistent, i.e.,

C^{e,d}_k ≥ E[(x̂^{e,d}_k − x_k)(x̂^{e,d}_k − x_k)^T].    (22)

A proof of the consistency condition (22) is provided in Appendix C. The drawback of the presented algorithm is that it needs two parameters, B and α, to perform the communication test. Both parameters influence the communication rate. Thus, it is difficult to find the parameters that ensure the desired balance between a small communication rate and a small estimation error. Experiments with the particular dynamic system are needed to find the best combination of both parameters. Therefore, we now present another algorithm that uses only one parameter B for the communication strategy.

Data-Driven Distributed Kalman Filtering with Omitted Estimates over Multiple Time Steps - Version 2

For the second data-driven algorithm, fusion Equation (15) is now generalized by

x̂^{e,d}_k = C^{e,d}_k ( Σ_{i=1}^{m} (C̄^e_k)^{-1} x̄^{e,i}_k + Σ_{i=m+1}^{N} (C̄^e_k)^{-1} x̄^{pp,i}_k ).

The difference to the previous Equation (17) is the covariance matrix (C̄^e_k)^{-1} in the second sum.
As before, the new estimate x̂^{e,d}_k can be expressed in terms of the estimate x̂^{e,d}_k from (15). In order to define a communication strategy, we consider the difference d^i_k, which is compared against the matrix B by the test

if d^i_k (d^i_k)^T ≤ B: do not send the estimate to the fusion center,
else: send the estimate to the fusion center.

B denotes a user-defined symmetric positive definite matrix. This time, we do not need the Euclidean distance test ‖x̄^{pp,i}_k − x̄^{e,i}_k‖ ≤ α, since the distance between the predicted and the filtered estimate is already included in d^i_k. We can now define the new fusion equations accordingly, where m is the number of sensors that communicate with the center at time k. With the same arguments as in Section 4, it can be shown that the resulting estimate is consistent, i.e., (22) holds. The experimental evaluation of the algorithms will show that although this version of fusing the estimates has the advantage that only one parameter has to be chosen, the estimate of the fused error covariance matrix is not as good as in the previous version.

Simulations and Evaluation

We apply the CKF algorithm as well as the three data-driven ODKF algorithms to a single-target tracking problem. The system state x_k is a six-dimensional vector with two dimensions for the position, two for the velocity, and two for the acceleration. A near-constant acceleration model is used. The system matrix is given by

A_k = [ 1  ∆  ∆²/2 ; 0  1  ∆ ; 0  0  1 ] ⊗ I_2,

with the sampling interval ∆ = 0.25 s, where I_2 denotes the 2×2 identity matrix and ⊗ the Kronecker product. The process noise covariance matrix C^w_k is chosen according to the near-constant acceleration model. We have a sensor network consisting of six sensor nodes and one fusion node. Two sensors measure the position, two measure the velocity, and two measure the acceleration. The measurement noise covariance matrices are given by C^{z,i}_k = I_2 for i ∈ {1, ..., 6}. Monte Carlo simulations with 500 independent runs over 100 time steps are performed. Since (A_k, C^w_k) is stabilizable and (A_k, H_k) is detectable, the error covariance matrix and the MSE converge to unique values [43].
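For illustration, the near-constant acceleration system matrix with ∆ = 0.25 s can be assembled via a Kronecker product; the state ordering used here (position, velocity, acceleration, interleaved over the two axes) is an assumption of this sketch, not stated in the article.

```python
import numpy as np

dt = 0.25  # sampling interval in seconds

# Per-axis constant-acceleration block; the full 6-D system matrix
# repeats it for both coordinates via a Kronecker product.
A_axis = np.array([[1.0, dt, 0.5 * dt**2],
                   [0.0, 1.0, dt],
                   [0.0, 0.0, 1.0]])
A = np.kron(A_axis, np.eye(2))  # state: (pos_x, pos_y, vel_x, vel_y, acc_x, acc_y)

# Sanity check: propagating a pure-acceleration state for one step
x = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 0.0])  # unit acceleration on the first axis
x_next = A @ x
assert np.isclose(x_next[0], 0.5 * dt**2)  # position advances by dt^2/2
assert np.isclose(x_next[2], dt)           # velocity advances by dt
```

The same construction extends to the process noise covariance of the model, which is omitted here because its exact parametrization is not given in the text.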
Based on the simulations, actual error covariance matrices and MSE values are computed for each algorithm. Monte Carlo simulations are performed for different average communication rates for each of the three algorithms. For the CKF, communication is performed randomly, but with different average rates. Note that only current measurements are communicated: if the measurement z^i_k is not sent to the fusion center at time k, the information will not be available at the center at any future time. The first ODKF algorithm (Algorithm 1) from Section 3 is run with random as well as with data-driven communication. In the latter case, the parameter α is varied to achieve different rates. The second algorithm (Algorithm 2) from Section 4 and the third algorithm (Algorithm 3) from Section 5 are run with data-driven communication. For Algorithm 2, both parameters α and B are varied; for Algorithm 3, the parameter B is varied. The compared methods are:

Algorithm 1: ODKF algorithm from Section 3 with random and data-driven communication.
Algorithm 2: ODKF algorithm from Section 4 with data-driven communication and parameters α, B.
Algorithm 3: ODKF algorithm from Section 5 with data-driven communication and parameter B.

The simulation results are shown in Figure 1. The MSEs and the traces of the error covariance matrices are depicted over the average communication rate in the network. Since different parameter combinations lead to different results for Algorithm 2, we have only included the results with the smallest error covariance matrices in the plot. Only the centralized Kalman filter and Algorithms 2 and 3 attain communication rates lower than 0.5. We can observe that, for Algorithm 1, data-driven communication leads to an improved estimate compared to random communication. However, it also leads to a larger trace of the error covariance matrix and thus to a larger uncertainty reported with the estimate.
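The consistency check used in the evaluation (comparing the empirical MSE with the reported error covariance) can be reproduced in miniature for a scalar Kalman filter; the model parameters below are hypothetical and the run counts are smaller than in the article.

```python
import numpy as np

rng = np.random.default_rng(42)
a, q, h, r = 0.9, 0.5, 1.0, 1.0   # hypothetical scalar model parameters
runs, steps = 2000, 50

sq_err = np.zeros(steps)
for _ in range(runs):
    x = rng.normal(0.0, 1.0)       # true state, x_0 ~ N(0, 1)
    x_e, P = 0.0, 1.0              # filter initialization
    for k in range(steps):
        # simulate the system and the measurement
        x = a * x + rng.normal(0.0, np.sqrt(q))
        z = h * x + rng.normal(0.0, np.sqrt(r))
        # Kalman filter prediction and update
        x_p, P_p = a * x_e, a * P * a + q
        K = P_p * h / (h * P_p * h + r)
        x_e = x_p + K * (z - h * x_p)
        P = (1 - K * h) * P_p
        sq_err[k] += (x_e - x) ** 2

mse = sq_err[-1] / runs
# for a consistent (here: exact) filter, the reported variance P
# matches the empirical MSE up to Monte Carlo noise
assert abs(mse - P) / P < 0.2
```

For the consistent but conservative data-driven variants described above, the analogous check would show MSE ≤ trace of the reported covariance rather than equality.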
We can also observe that for communication rates in the range [0.5, 1] the results of Algorithms 2 and 3 with data-driven communication are almost equal. This can be explained by the fact that Algorithms 2 and 3 share a common triggering criterion, and the fusion formulas of both algorithms are equal if each sensor communicates with the center at least every other time step. Figure 1 shows that for each of the algorithms the MSE is always smaller than or equal to the trace of the error covariance matrix. This illustrates that the estimators provide consistent results. The traces are good estimates of the MSEs except for low communication rates in Algorithm 3 and very low communication rates in Algorithm 2. Thus, the trace of the error covariance matrix, i.e., the uncertainty reported by the estimators, is not significantly larger than the actual uncertainty in most cases. Each of the distributed fusion algorithms performs better in terms of the MSE than the centralized algorithm. This can be explained by the fact that in the distributed network the fused estimates contain the information of all past measurements, while in the centralized network only the current measurements are fused.

Conclusions

In this article, the optimally distributed Kalman filter (ODKF) has been extended by data-driven communication strategies in order to bypass the need for full communication that is usually required by the ODKF to compute an estimate. Since the ODKF may provide inconsistent results if data transmissions are omitted, the missing estimates are replaced by predictions from previous time steps, and consistent bounds on the error covariance matrix are computed. The first proposed technique allows for communication rates in the range [0.5, 1], while the second and the third algorithm allow for any communication rate in the range [0, 1].
In a centralized Kalman filter (CKF), where measurements are directly sent to the central node, missing or lost transmissions to the center node need to be repeated in order to avoid the loss of measurement data. In this regard, the proposed extensions of the ODKF can significantly outperform the CKF: the local estimates of each sensor node comprise the entire history of local measurements and hence, suspended transmissions do not lead to a loss of information in the network.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

We show that the resulting estimate x̂^{e,d}_k and the error covariance matrix C^{e,d}_k of (14) and (15) are equal to the estimate x̂^{e,c}_k and the error covariance matrix C^{e,c}_k computed by the CKF according to (12) and (13). The equality C^{e,d}_k = C^{e,c}_k follows by expanding (14); accordingly, x̂^{e,d}_k = x̂^{e,c}_k follows by expanding (15).

Appendix B

In the following, relationship (20), i.e., d^i_k (d^i_k)^T ≤ B ⟺ (d^i_k)^T B^{-1} d^i_k ≤ 1, is proven.

Proof. As a symmetric positive definite matrix, B can be written as a product B = A A^T by means of the Cholesky decomposition, where A is a lower triangular matrix with positive diagonal entries; A^T is then an upper triangular matrix with positive diagonal entries. As triangular matrices with positive diagonal entries, A and A^T are invertible. We define b := A^{-1} d^i_k. We then have

d^i_k (d^i_k)^T ≤ B ⟺ A b b^T A^T ≤ A A^T ⟺ b b^T ≤ I ⟺ I − b b^T ≥ 0.

We also have

(d^i_k)^T B^{-1} d^i_k = b^T A^T (A A^T)^{-1} A b = b^T b.

It remains to show that I − b b^T ≥ 0 ⟺ b^T b ≤ 1. First, we show "⇒". From I − b b^T ≥ 0 we have a^T a − a^T b b^T a = a^T (I − b b^T) a ≥ 0 for all a ≠ 0. With a = b, we obtain b^T b − (b^T b)^2 ≥ 0 and hence b^T b ≤ 1. We now show "⇐". From b^T b ≤ 1, i.e., ‖b‖ ≤ 1, we have |a^T b| ≤ ‖a‖ ‖b‖ ≤ ‖a‖ for all a. Then a^T b b^T a ≤ a^T a for all a, and it follows that a^T (I − b b^T) a = a^T a − a^T b b^T a ≥ 0 for all a ≠ 0 and thus I − b b^T ≥ 0.

Appendix C

We show that the resulting estimate is consistent, i.e., that relationship (22) holds. Due to the orthogonality principle [44], the estimation error of the fused estimate can be decomposed into the error of the exact fusion and the additional terms caused by the differences d^i_k. To complete the proof, we still have to show that
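The equivalence d^i_k (d^i_k)^T ≤ B ⟺ (d^i_k)^T B^{-1} d^i_k ≤ 1 established in Appendix B can be verified numerically; the matrix B and the vectors d below are randomly generated examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def psd_leq(X, Y, tol=1e-10):
    """Check X <= Y in the positive-semidefinite (Loewner) ordering."""
    return bool(np.all(np.linalg.eigvalsh(Y - X) >= -tol))

# random symmetric positive definite bound matrix B
M = rng.normal(size=(3, 3))
B = M @ M.T + 3 * np.eye(3)

for _ in range(200):
    d = rng.normal(size=3)
    # the two conditions from Appendix B must always agree:
    #   d d^T <= B   <=>   d^T B^{-1} d <= 1
    lhs = psd_leq(np.outer(d, d), B)
    rhs = bool(d @ np.linalg.solve(B, d) <= 1.0)
    assert lhs == rhs
```

The second form is the one a node would actually evaluate in the communication test, since it only requires a quadratic form instead of an eigenvalue check.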
Effect of Proanthocyanidin, Fluoride and Casein Phosphopeptide Amorphous Calcium Phosphate Remineralizing Agents on Microhardness of Demineralized Dentin

Objectives: The aim of this study was to evaluate the effect of dentin remineralization using proanthocyanidin (PA), fluoride varnish and casein phosphopeptide amorphous calcium phosphate (CPP-ACP) paste and their various combinations on the microhardness of demineralized root dentin.

Materials and Methods: One hundred and twenty freshly extracted sound human premolars were selected and randomly divided into eight groups for dentin treatment as follows. C: deionized water (control); PA: 6.5% PA solution; F: fluoride varnish (5% NaF, 22,600 ppm fluoride); CP: CPP-ACP; PAF: 6.5% PA + fluoride varnish; PACP: 6.5% PA + CPP-ACP; FCP: fluoride varnish + CPP-ACP; and PAFCP: 6.5% PA + fluoride varnish + CPP-ACP. All specimens were subjected to the Vickers microhardness test (500 g, 10 seconds, 3 points). Data were analyzed using one-way ANOVA and Tukey's post hoc test. The significance level was set at 0.05.

Results: The mean and standard deviation (SD) values of the Vickers hardness number (VHN) in groups C, PA, F, CP, PAF, PACP, FCP and PAFCP were 37.39±4.97, 38.68±4.62, 48.28±2.68, 41.91±3.32, 48.59±2.55, 53.34±2.57, 48.413±4.00 and 55.20±1.82, respectively. Pairwise comparisons revealed no significant difference between groups C and PA, PA and CP, F and PAF, F and FCP, PAF and FCP, and PACP and PAFCP (P>0.05); significant differences were observed between the other groups (P<0.05).

Conclusions: The results of this study showed that the tested dentin treatments increased the microhardness of demineralized root dentin, except for PA application alone.

INTRODUCTION

With an increase in the number of individuals preserving their natural teeth until old age, the challenge of oral healthcare provision for the aging population is becoming more significant.
An increase in exposed root surface area in individuals older than 65 years increases their risk of root caries compared to younger populations [1]. Several studies have been performed on enamel remineralization; however, dentin remineralization is more challenging because dentin has a lower mineral content, and consequently lower hardness and a lower modulus of elasticity, than enamel, which affect its properties [2,3]. Considering the microstructure of dentin, collagen fibers serve as a scaffold for the mineral crystals that strengthen the matrix. The mineralized dentin matrix plays an important role in preventing crack propagation and thus maintaining the function of the tooth. Therefore, it can be stated that remineralization of carious dentin restores the functionality of dentin [4]. Nowadays, preventive and minimally invasive dentistry offers various techniques to detect and restore minimal changes in tooth structure [5]. Fluoride is a well-known remineralizing agent, which interacts with oral fluids on the enamel surface and subsurface and bonds to calcium and phosphate ions to form fluorapatite [6]. It has been reported to efficiently prevent enamel and dentin caries and to arrest initial carious lesions in children and adolescents [7]. Casein phosphopeptide amorphous calcium phosphate (CPP-ACP) is a recently introduced remineralizing agent. This nanocomplex, derived from milk protein, was introduced as a supplemental source of calcium and phosphate ions [8]. The favorable effects of CPP-ACP are related to its ability to maintain supersaturated levels of calcium and phosphate ions in the oral cavity when applied to the tooth surface [9,10]. The effect of natural products with antibacterial and remineralizing properties on root dentin caries has been investigated previously [11,12]. Grape seed (Vitis vinifera) extract (GSE) contains flavonoids; its other active ingredients include proanthocyanidin (PA), flavan-3-ols and catechin [13].
Studies have shown that glutaraldehyde and extracts rich in PA can be efficient for improving the mechanical stability of dentin and preventing collagen degradation [14,15]. In addition, PA increases the synthesis of collagen, promotes the conversion of insoluble collagen to soluble collagen during development, and decreases the rate of enzymatic degradation of the collagen matrix [16]. Given the lack of comprehensive studies comparing preventive approaches for root caries, modifying the dentin surface in order to confer resistance against dental caries may serve as a novel modality in this respect. The objective of this study was to evaluate the effects of dentin biomodification using PA-rich GSE, fluoride varnish and CPP-ACP, individually and in combination, on demineralized root dentin in vitro using the microhardness test. The null hypothesis was that there would be no significant difference between the microhardness of the control group and the groups treated with remineralizing agents.

MATERIALS AND METHODS

This study was approved by the ethics committee of the Vice Chancellor of Research, Hamadan University of Medical Sciences.

Specimen preparation: One hundred and twenty freshly extracted sound human premolars without stains, morphological changes or cracks were used in this study. They were completely cleaned of organic debris, stored in 0.5% chloramine solution for 24 hours and then immersed in distilled water (grade 3, ISO 3696). The roots were cut from the crowns at the cementoenamel junction with a high-speed handpiece and diamond bur (Isomet 1000, Buehler, Lake Bluff, IL, USA). Four-millimeter-thick slices were cut out of the cervical third of the roots. The root surfaces were polished under running water with 1,000-grit silicon carbide paper to remove the cementum. A total of 120 slices were obtained and sealed with acid-resistant nail varnish (Revlon Corp., New York, NY, USA) except for a 3×4 mm window.
The root sections of each tooth were horizontally mounted in self-cure acrylic resin (Aropars, Marlic Dental, Tehran, Iran) in prefabricated molds such that the buccal root surface remained exposed. The pH cycling and treatment: The specimens were then subjected to demineralization by pH cycling. Each group was individually immersed in 500 mL of demineralizing solution (2.2 mM CaCl2, 2.2 mM KH2PO4, 50 mM acetic acid, pH 4.3) for eight hours, immersed in distilled water for one hour, dried with absorbent paper and immersed in 500 mL of remineralizing solution (20 mM HEPES, 2.25 mM CaCl2, 1.35 mM KH2PO4, 130 mM KCl, pH 7.0) for 15 hours. The treatment/pH cycling was continued for 15 days, and all the solutions were made fresh and changed daily [17]. After 15 days, the root sections were rinsed with distilled water and dried. The specimens were randomly assigned to eight groups (n=15) for the following treatments: group C (no treatment), group PA, group F, group CP, group PAF, group PACP, group FCP and group PAFCP. In group C, the teeth were rinsed with deionized water, blotted dry with absorbent paper and received no dentin treatment [18]. For PA solution preparation, powdered GSE was added to deionized water to a concentration of 6.5%. In group PA, the PA solution was applied with a microbrush to the surfaces of the specimens for two minutes. The PA was rinsed off with deionized water for 10 seconds, and the teeth were blotted dry with absorbent paper [19,20]. In group F, 5% (22,600 ppm) fluoride varnish (Duraphat; Colgate-Palmolive, Piscataway, NJ, USA) was applied with a microbrush; the varnish was left on the surface of the samples for one minute according to the manufacturer's instructions [21]. In group CP, specimens were covered with CPP-ACP paste (GC Tooth Mousse, GC, Tokyo, Japan) for five minutes according to the manufacturer's instructions [17]. In group PAF, the PA solution was applied with a microbrush to the surface of the specimens for two minutes. 
PA was rinsed off and the teeth were blotted dry with absorbent paper. After that, fluoride varnish was applied with a microbrush; the varnish was left on the surface of the samples for one minute. In group PACP, the PA solution was applied with a microbrush to the surface of the specimens for two minutes. PA was rinsed off and the teeth were blotted dry with absorbent paper [19,20]. After that, specimens were covered with CPP-ACP paste for five minutes; the paste was then rinsed off and the teeth were blotted dry with absorbent paper. In group FCP, specimens were covered with CPP-ACP paste for five minutes; the paste was then rinsed off and the teeth were blotted dry with absorbent paper. After that, fluoride varnish was applied with a microbrush; the varnish was left on the surface of the samples for one minute. In group PAFCP, the PA solution was applied with a microbrush to the surfaces of the specimens for two minutes. PA was rinsed off and the teeth were blotted dry with absorbent paper. After that, specimens were covered with CPP-ACP paste for five minutes; the paste was then rinsed off and the teeth were blotted dry with absorbent paper. Fluoride varnish was then applied with a microbrush and left on the surface of the samples for one minute. The specimens were finally placed in a container of artificial saliva (Hypozalix, Biocodex, France). The main ingredients of the artificial saliva were potassium chloride, sodium chloride, magnesium chloride, calcium chloride, dipotassium phosphate and monopotassium phosphate. Microhardness test: All specimens were subjected to microhardness testing with a Vickers hardness tester (Micrometer, Buehler, Lake Bluff, IL, USA). Measurements were made at three different points on each sample under a 500 g load applied for 10 seconds at room temperature. The mean surface microhardness over the three points of each specimen was recorded as the Vickers hardness number (VHN) in kgf/mm². 
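The microhardness measurement described above can be sketched numerically. The standard Vickers relation VHN = 1.8544·F/d² (F in kgf, d the mean indentation diagonal in mm) with the study's 500 g (0.5 kgf) load is used here; the diagonal values are hypothetical, purely for illustration.

```python
# Sketch of the Vickers microhardness calculation (indentation values hypothetical).
# VHN = 1.8544 * F / d^2, with F the load in kgf and d the mean indentation
# diagonal in mm; the study used a 500 g (0.5 kgf) load for 10 seconds.

def vickers_hardness(load_kgf: float, diagonal_mm: float) -> float:
    """Vickers hardness number (kgf/mm^2) for one indentation."""
    return 1.8544 * load_kgf / diagonal_mm ** 2

def mean_vhn(load_kgf: float, diagonals_mm: list[float]) -> float:
    """Average VHN over several indentation points, as done per specimen."""
    values = [vickers_hardness(load_kgf, d) for d in diagonals_mm]
    return sum(values) / len(values)

# Hypothetical diagonals (mm) for the three points measured on one specimen.
print(round(mean_vhn(0.5, [0.152, 0.150, 0.154]), 2))
```

Averaging three indentations per specimen, as in the protocol above, smooths out local variation in the dentin surface before the group statistics are computed.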
Statistical analysis: The means and standard deviations (SDs) of microhardness were calculated. SPSS software version 18 (SPSS Inc., IL, USA) was used to analyze the data via one-way ANOVA and Tukey's test. The confidence level was set at 95% (α=0.05). RESULTS The lowest mean VHN was observed in group C (37.39±4.97) and the highest in group PAFCP (55.20±1.82). Table 1 shows the mean and SD values of VHN in the eight groups. One-way ANOVA showed significant differences among the groups (P<0.001). Pairwise comparisons of the groups revealed no significant differences between groups C and PA (P=0.972), PA and CP (P=0.189), F and PAF (P=1.000), F and FCP (P=1.000), PAF and FCP (P=1.000), and PACP and PAFCP (P=0.826); significant differences were found between the other groups (P<0.05; Table 1). DISCUSSION The prevalence of root caries has increased in recent years due to increased life expectancy. Also, the number of older people retaining their natural teeth has increased due to improved dental care. However, these teeth often have exposed root dentin due to gingival recession in older individuals [22]. Dentin remineralization is more complex and less effective than enamel remineralization because enamel has a higher content of mineral crystals than dentin [23]. The concept of minimally invasive dentistry via remineralization of demineralized tooth structure is of great significance for preserving the remaining tooth structure [24,25]. Although numerous studies have shown the remineralizing potential of different agents, there have been limited comparative studies on the efficacy of fluoride varnish, CPP-ACP and PA solution as remineralizing agents. It should be noted that PA has a chelating mechanism with calcium ions, which enhances mineral deposition on the surface of dentin [26]. When a higher concentration is used, PA needs to be delivered in a controlled manner with an additional external source of calcium and phosphate ions. 
In the current study, an in vitro model was used to evaluate and compare the remineralizing potential of fluoride varnish, CPP-ACP paste and 6.5% PA solution. In vitro pH-cycling models are used to simulate the dynamic deposition of minerals occurring in the natural process of development of carious lesions [12]. Panich and Poolthong [27] reported that the beneficial effects of CPP-ACP on microhardness of demineralized enamel increased in artificial saliva. Thus, to simulate the oral environment, the specimens were kept in artificial saliva in our study. The GSE used in this study is mainly composed of PA, a powerful antioxidant with known vasodilatory, anti-inflammatory, antibacterial and anti-cancer effects [28]. Several studies have shown that GSE has remineralizing properties as well [12,14,15]. GSE has a high PA content. PA-treated collagen matrices are non-toxic, and PA inhibits the enzymatic activity of glucosyltransferase, F-ATPase and amylase. Glucosyltransferases, which are produced by Streptococcus mutans, polymerize the glucosyl moiety from sucrose and starch carbohydrates into glucans. This is the basis of the sucrose-dependent pathway of Streptococcus mutans and is critical for plaque formation and the development of caries. Also, the adherent glucan contributes to the formation of dental plaque, accumulation of acids and subsequently localized decalcification of the enamel surface by facilitating bacterial adherence to tooth surfaces, inter-bacterial adhesion and accumulation of biofilm. Therefore, inhibition of glucosyltransferases by PA can prevent dental caries [12,29,30]. In the current study, the microhardness values of group F versus C and CP, CP versus FCP and PAF, and also PA versus PAF and F were significantly different, while the difference between groups CP and PA was not significant. These results were in agreement with those of Chokshi et al [6]. 
They showed that among the groups tested, fluoride varnish was the most effective remineralizing agent, followed by CPP-ACP paste and FTCP. TCP is a new hybrid material produced with a milling technique that fuses beta-tricalcium phosphate (β-TCP) and sodium lauryl sulfate or fumaric acid. This blending results in "functionalized" calcium and "free" phosphate, designed to increase the efficacy of fluoride remineralization [31,32]. Also, Shirahatti et al. [33] concluded that fluoridated dentifrices have a substantial protective effect against caries formation; however, CPP-ACP paste did not have any additional influence on reducing the progression of lesions, and its effect was similar to that of non-fluoridated dentifrices. Fluoride is believed to prevent dental caries through several mechanisms. One of them is the formation of a calcium fluoride (CaF2)-like layer on the tooth surface, enhancing deposition of minerals such as fluorapatite or fluorohydroxyapatite [34]. It also decreases acid production by microorganisms, inhibits intracellular and extracellular enzymes, and replaces hydroxide ions [21]. It should be noted that the microhardness values of group F versus PAF and FCP were not significantly different, and no considerable increase was observed. One explanation is that fluoride varnish application had a dominant effect on increasing microhardness in these groups. Fluoride varnish was applied as the final remineralizing agent; it was not rinsed off and was allowed to dry. Thus, fluoride formed a superficial layer on the surface. On the other hand, the microhardness values of groups PACP and PAFCP were not significantly different in our study. Although fluoride ions increase tooth surface resistance to demineralization, the resulting remineralization, despite its advantages, is a self-limiting phenomenon that prevents penetration of calcium and phosphate ions into deeper layers [35]. 
Nevertheless, fluorosis and toxicity at high doses are side effects of fluoride. Thus, efforts to find effective cariostatic compounds with minimal adverse effects are ongoing [36]. In the present study, the degree of remineralization (VHN) of group CP versus C, PACP versus PA and PAFCP versus PAF was significantly different. This finding was similar to the results of studies performed by Lata et al [35] and Pulido et al [37], suggesting that treatment with CPP-ACP significantly increases tooth surface microhardness. CPP consists of peptides derived from milk protein (casein) that form complexes with calcium and phosphate. CPP contains a cluster of phosphoseryl residues that stabilize nanoclusters of ACP in metastable solution. CPP binds to surfaces such as plaque, bacteria, soft tissue and dentin (owing to its sticky nature), providing a reservoir of bioavailable calcium and phosphate in the saliva and on the surface of teeth [6]. It can diffuse into porous lesions and penetrate deep into demineralized lesions [35]. CPP-ACP has an action similar to that of fluoride in inhibiting cariogenic bacteria and demineralization and enhancing remineralization. It has been further reported that, in contrast to fluoride, which mostly remineralizes superficial areas of the lesion, CPP-ACP can remineralize deeper areas of the lesion due to its smaller molecular size [38,39]. In the present study, the mean microhardness values of group CP versus PACP and FCP versus PAFCP were significantly different. PA is capable of increasing the number of collagen cross-links in dentin, resulting in improved mechanical properties of the tooth [14,40]. Shi et al. [41] showed that PA positively affects the remineralization of artificial dentinal carious lesions and may be a promising natural agent for remineralization therapy instead of fluoride. This finding was confirmed by Benjamin et al [42]. 
Additionally, PA is acidic, and the pH of CPP-ACP paste may be reduced by the addition of PA [26]. This would enhance remineralization by releasing more amorphous calcium phosphate ions into the carious lesion. On the other hand, the calcium-binding effect of PA may also allow greater mineral deposition within the carious lesion. Despite the slight increase in microhardness following the application of PA, there was no statistically significant difference between group PA versus group C and F versus PAF in this regard. The remineralizing effect of PA appears to be distinct from the action of fluoride [28]. One possibility is that the behavior of PA molecules played a role: due to the high molecular weight of PA molecules, the effect of PA may be restricted to the superficial layer, and PA may not penetrate deep into the underlying layers [43]. The current findings were in accord with those of Arumugum et al [44] and Broyles et al [19]. In fact, PA contributed to mineral deposition on the lesion surface only, which inhibited further mineral deposition in the deeper part of the lesion [12]. Similar to the current study, Shi et al. [41] showed no statistically significant difference between groups NaF versus GSE + NaF. Further studies are needed to identify the active constituents of GSE and maximize its effect on the substrate. Combined use of PA, CPP-ACP and fluoride varnish in the PAFCP group had a synergistic effect on remineralization of root caries, as well as an optimal interaction of the minerals with the collagen matrix, because of regaining part of the mechanical characteristics of the collagen matrix [45]. The microhardness value in this group was higher than that in the other groups. This finding was in agreement with the results of Epasinghe et al [46]. In the combined groups of this study, the significant difference between the microhardness values of groups FCP and PAFCP may be due to the presence of PA, which leads to a small increase in microhardness. 
The suppressing effect of PA on enzymes, such as collagenase, is an added benefit in stabilizing the collagen matrix. Apart from the collagen matrix, PA may also bond to noncollagenous proteins. Some proteins may play a role in mineral deposition in dentin structure [47]. However, future clinical studies are recommended since remineralization in vitro may be largely different from the dynamic biological system in vivo. On the other hand, nanomechanical testing is necessary to understand the mechanical recovery of remineralized dentin at the nanostructural level. It should be noted that improved remineralizing methods are required to arrest the process of dental caries, particularly in individuals at high risk of caries, and future studies are recommended in this field. CONCLUSION Within the limitations of this study, it can be concluded that all remineralizing agents successfully caused remineralization of artificial carious lesions after treatment, except for PA solution in group PA. Also, group PAFCP showed the highest remineralization potential followed by PACP.
www.mdpi.com/journal/jlpea/ Review Energy Efficient Design for Body Sensor Nodes This paper describes the hardware requirements and design constraints that derive from unique features of body sensor networks (BSNs). Based on the BSN requirements, we examine the tradeoff between custom hardware and commercial off-the-shelf (COTS) designs for BSNs. The broad range of BSN applications includes situations where either custom chips or COTS design is optimal. For both types of nodes, we survey key techniques to improve energy efficiency in BSNs and identify general approaches to energy efficiency in this space. system design and implementation [5]. Although BSNs share many of these challenges and opportunities with general wireless sensor networks (WSNs) - and can therefore build off the body of knowledge associated with them - many BSN-specific research and design questions have emerged that require new lines of inquiry. Unlike generic WSNs that have many nodes doing the same thing, BSNs are likely to have a small number (<10) of nodes, with each node dedicated to a specific task. For example, a sensor node monitoring acceleration at the ankle for gait analysis clearly cannot also measure brainwaves using an EEG, since both the location and sensing hardware are so different. To achieve widespread adoption, BSN nodes must be extremely noninvasive, which means that the nodes must have a small form factor that is not overly inconvenient to use. Smaller nodes imply smaller batteries, creating strict tradeoffs between BSN node energy consumption and the fidelity, throughput, and latency requirements of BSN applications. Therefore, while the diverse BSN application space results in wide-ranging system requirements, all BSN applications - whether real-time or delay insensitive, continuous high-data-rate streaming or infrequent small packet bursts, etc. - demand energy efficiency while meeting data fidelity requirements. The battery size versus battery life tradeoff plays a major role in 
defining any BSN system, and applying design techniques to reduce energy consumption can improve both size and lifetime. If energy consumption can be reduced far enough, perpetual operation on harvested energy becomes a possibility. Thus, BSN node sensing, processing, storage, and wireless transmission must all be done in a way that reliably delivers the important data but with the lowest possible energy consumption, thus minimizing battery size (which dominates BSN node form factor) and maximizing time between battery recharges (which is a key factor in wearability), both of which can impact the performance and practicality of possible applications. The best approach for optimizing the tradeoff between energy consumption and other requirements varies depending on the specific BSN application. To illustrate this, we can consider the tradeoff between battery lifetime (e.g., the application requirement for how long the system must work between recharges) and effective wireless communication data rate (e.g., the average rate at which data from the node must reach the base station) across different applications. Figure 2 estimates how different applications map to this tradeoff space. Some applications like pulse oximetry (measuring saturation of peripheral oxygen, SpO2), ambulatory blood pressure (ABP), or electromyography (EMG) for muscle activity require monitoring lifetimes between an hour (e.g., in the clinic) and a day or two (assuming sensors can be recharged at night), but the quantity of data that must be transferred varies dramatically. Continuous glucose monitoring (CGM) sensors may need to have lifetimes approaching a month, but they do not need to send much data on average. Some RFID-like sensors may only need to work for a second after being queried by a base station acting as a reader, but some long-term sensors implanted in the body or incorporated into clothing may need to last for years. This great variety in requirements defies a single solution to solve 
the energy constraint problem. For life-critical applications that require continuous high-fidelity sensed data for real-time assessment and intervention (e.g., fall detection, heart arrhythmia detection, etc.), which would be very costly to transmit wirelessly, reduction or elimination of wireless transmission may be necessary to meet longer battery life and wearability requirements. Such applications may need to make intervention/actuation decisions on-node and only employ wireless transmission when events of interest are detected. This system-level design decision will help to reduce node power consumption sufficiently to satisfy the other system requirements. BSNs for delay-insensitive applications, such as those employed by clinicians to gather information in large volume, may alternatively leverage lower-power on-node storage, rather than wireless transmission, to increase battery life. Such store-and-forward use cases, including Holter monitoring and activity logging, are capable of acquiring high-fidelity data for later assessment off-node. In such cases, on-node processing is limited, as more resource-rich or expert assessments are made off-node. Finally, real-time applications involving wireless transmission and high-fidelity data (e.g., gait analysis, activity monitoring, gaming, etc.) combine on-node signal processing with radio management to meet battery life demands of hours to days. Value to the user will ultimately determine each technology's success. BSNs must effectively transmit and transform sensed phenomena into valuable information and do so while meeting other system requirements, such as energy efficiency. The value of a BSN therefore rests in large part on its ability to selectively process and deliver information at fidelity levels and rates appropriate to the data's destination, whether that is to a runner curious about her heart rate or a physician needing a patient's electrocardiogram. These disparate application requirements require the ability 
to aggregate hierarchical information and integrate BSN systems into the existing information technology infrastructure. Increased value of the BSN to the user will also increase user tolerance of non-ideal wearability or other technological difficulties. In this paper, we describe methods for developing efficient hardware within the unique set of requirements of BSNs for different parts of the BSN application design space. Due to the ubiquitous and strict energy constraint on all BSNs, we focus on energy efficiency. In addition, the approaches for achieving energy efficiency in BSNs designed with COTS components sometimes differ from those designed with custom hardware, and this paper explores both paradigms. Finally, this analysis is done within the context of current and projected BSN applications and use cases. The final decision between a custom design versus a COTS design must account for the previous points combined with the economics of the intended application. Designing a COTS system is orders of magnitude faster and cheaper than building a custom IC based node, and COTS nodes provide excellent solutions in many lower-lifetime BSN scenarios. For example, low-volume research platforms or nodes intended for short-term clinical monitoring applications may be more economically produced via a COTS design. In such applications, the final device operational characteristics are much less well defined, and engineering costs are ongoing. In this case, the economies of reducing such costs via the employment of a flexible platform outweigh the benefits of extra efficiency that an ASIC solution would offer. For example, the TEMPO3 system mentioned in Section 1 may be reprogrammed to operate in a clinical environment in which continuous data streaming is a requirement, or in a more longitudinal study in which data may be stored on node and offloaded after an extended measurement session. Additionally, COTS devices have steadily been improving in computing performance. When TEMPO1 
was introduced in 2006, the processor employed had 48 kB of flash memory, 2 kB of RAM, and operated at a maximum clock frequency of 8 MHz. There are now pin-compatible drop-in devices available from the same family that have over 100 kB of flash, 8 kB of RAM, and are capable of operating at 20 MHz within similar power budgets. This clearly leads to an expanded application space for a given sensing technology, with little or no non-recurring engineering (NRE) cost for hardware design. High-volume, single-purpose, mass-market devices favor ASIC approaches in which the NRE costs are amortized over many units. Even if COTS systems provide a weaker solution (e.g., by limiting lifetime) than ASICs, simple economics will make COTS the better choice for many BSN applications that cannot provide the volume required to justify an ASIC solution. General Strategies for Energy Efficient BSN Hardware We have emphasized that many design decisions depend on the specific BSN application in question, but we can also identify general strategies that should influence any BSN design. In this section, we examine several key tradeoffs that affect BSN design and that provide important opportunities for saving energy regardless of the specific BSN application. Specifically, we examine balances between on-node computation and communication, flexibility and efficiency, and data fidelity and energy consumption. Before describing these tradeoffs, we introduce supply voltage management as a means of energy minimization in circuits, which provides an important foundation for custom energy-efficient circuit design of energy-constrained systems like BSNs. Supply Voltage Management Lowering the supply voltage to a circuit is a well-known approach for reducing energy. In this section, we first discuss the limit of lowering voltage to reduce energy consumption, and then describe how dynamic voltage scaling can allow us to trade off energy and performance. 
For digital circuits, the energy of computation varies as the square of the supply voltage (V_DD), which makes it desirable to operate at the lowest possible voltage while preserving functionality and meeting timing constraints. Taking this principle to the extreme, we observe that sub-threshold (sub-V_T) operation of digital integrated circuits provides one important option for energy-efficient processing. Sub-V_T circuits use a V_DD that is below the threshold voltage, V_T, of the transistors. This makes the transistors "off" by conventional definitions, but the change in transistor gate-to-source voltage (V_GS) produces a difference in sub-V_T conduction current that allows static digital circuits to operate robustly, although more slowly than they would at higher voltage. The lower speeds are still more than sufficient for many BSN operations (up to 10's of MHz). Both the off-current and the on-current of the transistors vary exponentially with V_DD in the sub-V_T region (V_GS < V_T). Nevertheless, the on-current in sub-V_T remains larger than the off-current by enough (1000× or so) to enable proper functionality of the digital gates. Due to the quadratic relationship between energy and V_DD, the main advantage of sub-V_T operation is a reduction in energy consumption of over 10× compared to traditional circuit implementations. In fact, sub-V_T operation has been shown to minimize energy per operation in conventional CMOS circuits [6]. For this reason, sub-threshold operation will play an important role in custom hardware for BSNs. 
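The quadratic relation above can be made concrete with a short numeric sketch. The voltages here are illustrative only (roughly 1.1 V nominal versus 0.35 V sub-threshold), not values from a specific process.

```python
# Numeric sketch of the quadratic energy relation: E ~ V_DD^2.
# Voltages are illustrative: ~1.1 V nominal vs ~0.35 V sub-threshold.

def dynamic_energy_ratio(v_nominal: float, v_subvt: float) -> float:
    """Energy-per-operation ratio implied by E ~ V_DD^2."""
    return (v_nominal / v_subvt) ** 2

print(round(dynamic_energy_ratio(1.1, 0.35), 1))  # 9.9
```

A roughly 3× drop in supply voltage thus yields a roughly 10× drop in energy per operation, in line with the savings cited for sub-threshold operation.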
There are some challenges to making sub-V_T digital circuits work. Most notably, the reduced I_on/I_off ratio combines with process variations in the threshold voltage to increase the potential for circuit failure. Sub-V_T circuits also must be level-converted to interface with super-V_T designs, such as radios or sensors. Additionally, design of sub-V_T circuits is not yet commonplace. Standard cells used in designs are rarely designed for this voltage of operation, in which transistor strengths change. Nevertheless, sub-V_T operation is an emerging approach that is very useful for BSN nodes [7]. Operating at a low voltage all of the time may not be a viable option for all BSNs, because lower voltages slow down circuit speed. Given that a BSN's processing latency and throughput requirements may change during execution in response to real-time data and mode changes, dynamic voltage scaling (DVS) can be employed to minimize V_DD given those requirements. When high performance is necessary to meet system-level requirements, the circuits can operate at the energy-costly higher voltage level. By reducing the circuit's V_DD, quadratic energy savings can be achieved instead of just the linear savings obtained through power gating (Figure 4). Different DVS schemes propose different approaches to scaling in terms of the circuit topology and the interval at which the voltage is changed, and the overhead of most schemes is minimal compared to the energy savings accomplished, especially when that scaling includes dropping to sub-V_T levels when permissible [8]. 
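A DVS policy of the kind described above can be sketched as follows. The voltage/frequency table and the effective capacitance are hypothetical illustrations, not datasheet values for any particular chip.

```python
# Hypothetical DVS policy sketch: pick the lowest supply voltage whose
# operating frequency still meets the current throughput requirement,
# then pay quadratic energy per operation at that voltage.
# The (voltage, max frequency) table below is illustrative only.

OPERATING_POINTS = [  # (V_DD in volts, max clock in MHz), sorted by voltage
    (0.3, 0.5),   # sub-threshold: very slow, very low energy
    (0.6, 4.0),
    (0.9, 12.0),
    (1.2, 20.0),
]

def select_vdd(required_mhz: float) -> float:
    """Return the lowest V_DD whose max frequency meets the requirement."""
    for vdd, fmax in OPERATING_POINTS:
        if fmax >= required_mhz:
            return vdd
    raise ValueError("requirement exceeds the fastest operating point")

def energy_per_op(vdd: float, c_eff: float = 1e-12) -> float:
    """Dynamic energy per operation, E = C_eff * V_DD^2 (C_eff hypothetical)."""
    return c_eff * vdd ** 2

# A mode change from 10 MHz streaming to 0.2 MHz background sampling
# lets the node drop from 0.9 V to 0.3 V, a (0.9/0.3)^2 = 9x energy saving.
print(select_vdd(10.0), select_vdd(0.2))  # 0.9 0.3
```

The key design choice this illustrates is that, unlike power gating, the savings come from lowering the voltage of the operations themselves, so every executed instruction becomes cheaper, not just the idle time.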
Some COTS chips provide built-in DVS capabilities or allow for development of DVS schemes. For instance, the TI MSP430 and other similar microcontrollers (MCUs) have on-board clock generation hardware that allows the MCU to programmatically change the operating clock frequency. This is accomplished in the MSP430 through the use of a Digitally Controlled Oscillator (DCO) that may be calibrated using a low-frequency (32 kHz) watch crystal as a reference. Frequency agility is accomplished by switching different programmable constants into two clock control registers. The actual change in clock frequency occurs within approximately 10 µs. Furthermore, this microcontroller operates over a wide range of voltages. The clock oscillator may be varied over a 16-to-1 range and the supply voltage over a 2-to-1 range. Within this envelope, a combination of processing rate and power requirements exists, making COTS embedded processors of this type ideal candidates for inclusion in a DVS scheme for BSNs. Figure 5 shows potential DVS operating points measured for the MSP430F2131 processor as explored in [9]. For longer-lifetime BSN applications where the savings from Figure 5 are still inadequate, a similar DVS scheme can apply to a custom chip. Figure 6 compares a custom MCU design [7] to the MSP430. The custom design offers a 100× improvement in energy per instruction. However, this does not come free of tradeoffs. In this case, the custom-built MCU does not have its own clock generation hardware, and frequency agility is not as straightforward since there is only one, single-frequency main clock. Though this custom-designed MCU also operates over a wide range of voltages and is capable of supporting DVS, additional design effort is required to build in these operating modes. What's more, custom-designed hardware does not enjoy the complete suite of mature and compatible peripherals as COTS components do, which degrades custom hardware's flexibility. 
Communication versus Computation As is the case in most WSNs, wireless transmission of sensed data is the largest power consumer in most current BSNs [9]. This problem is particularly acute in medical BSN applications, in which sensor data rates may be high relative to many WSN applications. Figure 7 illustrates this relationship with the COTS TEMPO platform as an example, where the high power consumption of the Bluetooth transceiver swamps the low power consumption of the TI MSP430 microcontroller during raw data transmission. We could improve this situation by using a lower-power radio (e.g., a COTS radio implementing a different protocol, or a custom design), by duty cycling and sending data in bursts, or by other strategies. In this section, however, we focus on the strategy of using computation on the node to reduce the cost of communication, which can influence all types of BSN design regardless of hardware choice. Significant power reduction can be achieved through the development of on-node signal processing and data management, which can dramatically reduce the number of bits to be transmitted. By reducing the number of bits to transmit, we effectively allow more substantial duty cycling of the radio (e.g., leaving it off for a larger fraction of the time). Methods to reduce communication data include traditional compression along with advanced signal processing techniques such as pattern classification and feature detection algorithms. Low-power signal processing therefore becomes increasingly important to BSN power efficiency. We can quantify the impact of this tradeoff on the overall node energy using a simple energy model. Assume that E_r is the ratio of the average energy to transmit one bit (E_tx) to the average energy to process one bit (E_proc). This ratio is typically large (i.e., E_r >> 1) and is determined by a number of factors, including processor energy per operation, the signal processing algorithm and implementation, the packet organization and coding, the 
networking protocol, transmit power, etc. Also assume that the compression ratio (CR) achieved by on-node signal processing is the ratio of the number of raw bits to the number of transmitted bits. The ratio of average processing energy (E_proc) to average total energy (E_total) is therefore: E_proc / E_total = E_proc / (E_proc + E_tx / CR) = CR / (CR + E_r). Figure 8 plots this ratio as a function of CR for different values of E_r. It is clear that the importance of low-power signal processing increases with more effective pre-transmission compression techniques, even at high E_r ratios. As a point of reference, 25 nJ/bit is typical for state-of-the-art custom Bluetooth radios targeting 1 Mb/s [10,11], and an MSP430 consumes roughly 1 nJ/bit. If the system had no other processing costs (e.g., ignoring memory, etc.), it would have an E_r of only 25, indicating that processing energy becomes quite important if the CR is even 10. Applying simple generic compression schemes could compress raw data streams by this amount. Even more substantial compression is possible by extracting important features on chip and only transmitting those instead of the raw data. This motivates the need for reducing the hardware energy costs of on-node computation, especially with custom radio solutions. As we described above, sub-threshold operation is one excellent method for decreasing E_proc by over 10× compared to operation at the nominal V_DD. Figure 8. Percentage of total energy contributed by on-node signal processing for different E_r = E_tx/E_proc ratios as a function of the bit compression ratio (CR) [9]. 
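The per-bit energy model above is easy to evaluate directly. Since each raw bit costs E_proc to process and only 1/CR of the raw bits are transmitted at E_tx each, the processing share of total energy is CR/(CR + E_r); the E_r = 25 reference point comes from the 25 nJ/bit radio versus ~1 nJ/bit MSP430 figures quoted in the text.

```python
# Per-bit energy model from the text: each raw bit costs E_proc to process,
# and 1/CR of the raw bits are transmitted at E_tx each, so
# E_proc / E_total = E_proc / (E_proc + E_tx / CR) = CR / (CR + E_r).

def processing_energy_fraction(e_r: float, cr: float) -> float:
    """Fraction of total node energy spent on on-node signal processing."""
    return cr / (cr + e_r)

# The text's point of reference: E_r = 25 (25 nJ/bit radio, ~1 nJ/bit MCU).
for cr in (1, 10, 100):
    print(cr, round(processing_energy_fraction(25, cr), 3))
```

With no compression (CR = 1) processing is under 4% of the budget, but at CR = 10 it is already almost 29%, which is why the text calls processing energy "quite important" at even modest compression ratios.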
The power required to transmit data wirelessly can be high even with a reduced duty cycle, so the radio remains a critical component even with on-die data compression. Therefore, it is worthwhile to understand the requirements for BSN radios. BSNs have slightly different radio requirements compared to radios in typical WSNs with low data rate communication for monitoring environmental conditions. For BSN applications, the network is probably arranged as a star-hub topology with the hub acting as a base station [1]. All communication from the nodes is to the hub, which can be assumed to have substantially more resources than the nodes (e.g., the hub may be a smart phone). Therefore, the nodes need a small, low power, and short range (1-2 m) radio. This means that a BSN radio could be optimized to operate at a much lower transmission power compared with radios designed for WSN applications. Judging from Figure 2, the ability to accommodate variable communication data rates will be important. Also, since we have observed that processing to reduce the communication data rate allows us to save power by using the radio less often, the radio should save energy when operating at lower data rates or should allow energy efficient transitions to and from active mode. We note that turning off the radio for longer times creates the need to re-synchronize the radio when it turns back on. Since the hub has more resources than the nodes, it can remain active permanently to listen for communication from the nodes. Most radio traffic in a BSN system is from the nodes to the hub, alleviating the need for the nodes to run a receiver continuously looking for messages from the hub. However, the cost of synchronization may still be significant depending on the specific BSN application and communication protocol.
BSN applications can span a large range of data rates, from a few bits per minute to almost 1 Mbps, depending on the application [5]. Currently, there are a few low power radios and radio protocols, such as ANT and Zigbee, that are commonly used in wireless sensor networks. However, these radios and protocols can only operate with data rates of 10 kbps and 150 kbps, respectively. This severely limits their usefulness for the upper range of BSN applications such as motion assessment, ECG (electrocardiogram), EMG (electromyogram), and EEG (electroencephalogram). Conversely, the high data rate COTS radios and protocols, such as Wi-Fi, have data rates which easily cover the entire span of BSN applications. However, these radios and protocols consume so much energy that they are impractical for use on BSNs with longer battery life requirements. Bluetooth is a radio and protocol that sits somewhere in between high data rate and low data rate radios. The protocol uses a large amount of energy due to the fact that it was designed as a very general purpose radio for applications spanning outside of the area of BSNs. Bluetooth is convenient for BSN development platform purposes because of its widespread adoption, but its relatively poor energy efficiency leaves room for optimization with custom radios and protocols. Nevertheless, Bluetooth is a convenient and viable option for shorter lifetime BSN applications. Due to inefficiencies in existing radios and protocols, other protocols are being developed to accommodate the area of BSNs, such as 802.15.6. This protocol specifically targets body sensor devices and the medical applications that can span a wide range of data rates [12]. This new protocol supports data rates greater than 850 kbps and allows flexibility in the PHY layer, supporting ultra wide band, narrow band, medical implant communication bands, and human body communication PHYs. While this new protocol is not a standard yet and is still in a working group, it is being
designed to be more efficient for BSNs compared to existing options, and its main challenges will be providing efficient access for the broad range of BSN applications (see Figure 2) and achieving the pervasiveness that Bluetooth and Wi-Fi have achieved in smart phones and other personal computing devices.

While there are many COTS radios and protocols available today that are serviceable for BSN applications, there is still a large opportunity for custom radios that provide better energy efficiency for the applications of the BSN community. These radios can take advantage of the small transmission distance and asymmetry of the channel and provide a substantial benefit to the power consumption of the devices. For example, [13] presents an 830 pJ/bit 2.4 GHz radio that can transmit with a data rate of 500 kbps. [14] presents a 2 Mbps low power receiver that consumes 0.18 nJ/b, and [15] shows a 0.65 nJ/b 100 kbps receiver at 1.9 GHz. For sub-1 GHz transmission, [16] shows a 1 Mbps OOK transceiver that operates at 10 nJ/bit with a very fast startup time of 2.5 µs to allow for efficient duty cycling. All the previously mentioned low power transmitters and receivers take advantage of the short range requirements of BSNs and consume much less energy compared to common COTS radios such as Bluetooth and Zigbee. With these improvements in energy consumption, BSNs can run much longer on a single battery charge, or the device can be made smaller by allowing the same runtime with a smaller battery. It is worth noting that since standards for BSNs are still under development, a concrete guideline for low power radios is not readily available.
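The impact of radio energy per bit on battery life can be sketched with a back-of-the-envelope calculation. The 25 nJ/bit and 0.83 nJ/bit figures come from the discussion above; the battery capacity and streaming rate are assumed values for illustration, and other system power draws are deliberately ignored:

```python
def node_lifetime_hours(battery_mwh, data_rate_bps, radio_nj_per_bit):
    """Battery life when radio energy per bit dominates (all other costs ignored)."""
    avg_power_mw = data_rate_bps * radio_nj_per_bit * 1e-6  # nJ/s (nW) -> mW
    return battery_mwh / avg_power_mw

BATTERY_MWH = 370.0  # hypothetical 100 mAh cell at 3.7 V
RATE_BPS = 100e3     # hypothetical continuous 100 kbps stream

bluetooth = node_lifetime_hours(BATTERY_MWH, RATE_BPS, 25.0)  # ~25 nJ/bit class
custom = node_lifetime_hours(BATTERY_MWH, RATE_BPS, 0.83)     # [13]-class radio
print(f"{bluetooth:.0f} h vs {custom:.0f} h")  # radio choice alone buys ~30x
```

Under these simplifying assumptions, swapping the Bluetooth-class radio for the custom 830 pJ/bit transmitter extends runtime by roughly the 25/0.83 ratio, about 30×.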
Flexibility versus Efficiency

The tradeoff between flexibility and efficiency in hardware is well known and very prominent in a comparison of conventional hardware paradigms [17,18]. The most flexible category of hardware is general purpose processors (GPPs). GPPs exhibit poor energy efficiency due to the overhead of fetching and decoding the instructions that are required to perform a given operation in the datapath. For low power embedded applications like BSNs, general purpose computation is generally performed in fairly simple microcontrollers [7,19-21]. Sophisticated operations like a fast Fourier transform (FFT) or a data processing algorithm will thus require numerous instructions in the simple core. For example, several sub-threshold processors provide energy per instruction nearing 1 pJ per operation, but they also tend to use small instruction sets and thus require more instructions to complete an operation [7,19-21].

The most efficient hardware is hardwired to do its specific task or tasks (e.g., an ASIC). ASICs achieve very efficient operation, but they can only perform the function for which they were originally defined. Examples of hardwired implementations in sub-threshold circuits include [22-25]. Different types of hardware in sub-threshold systems reveal a similar trend as their above-threshold counterparts. Microcontrollers like the one in [19] consume as little as 2.6 pJ/instruction and provide excellent flexibility since they can be reprogrammed for arbitrary tasks. The ASIC implementation of a JPEG co-processor in [24] consumes 1.3 pJ/frame for VGA JPEG encoding. The numbers for energy/operation are similar, but the individual operations on the microcontroller (e.g., instructions) are simple integer computations like addition. Executing a complete JPEG encoding would take many (100s or 1000s of) instructions on such a lightweight processor, making the total energy per frame much higher than on the ASIC. Of course, the GPP can perform a much broader
range of tasks than the JPEG encoder, so this comparison exemplifies the tradeoff between energy efficiency and flexibility. Some BSN nodes may be implemented as complete ASICs like the JPEG processor, but more commonly, ASICs may appear in BSNs as auxiliary hardware accelerator modules, performing commonly occurring functions in the context of a larger system on chip (SoC). Good examples of hardware acceleration are multipliers, floating point units, and FIR filters. These operations can take several instructions over many clock cycles to complete using a GPP, consuming a large amount of energy and time. A hardware accelerator can process data quickly and efficiently. These commonly used components take advantage of the energy and computational efficiencies of the accelerators, while their designs need not change. Hardware accelerators provide an opportunity to process data in very specific ways more efficiently than on the accompanying programmable hardware.

Microprocessor operations are largely inefficient, as we described above. Field Programmable Gate Arrays (FPGAs) are reprogrammable hardware that provide an intermediate choice between ASICs and processors in terms of flexibility and efficiency. An FPGA is configured to act like specific hardware, similar to an ASIC, but the configuration can be changed an arbitrary number of times. The cost of this flexibility is that FPGAs consume 10-100 times more energy than an ASIC due to energy overhead from interconnects, which may account for 85% of the total energy consumption. Most commercial FPGAs target high performance applications to compete with processors, but a sub-threshold FPGA [26] demonstrates that custom FPGA implementations can offer a good tradeoff between flexibility and energy efficiency for energy constrained applications like BSNs.
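The GPP-versus-ASIC gap can be made concrete with the per-operation figures quoted earlier. The 2.6 pJ/instruction and 1.3 pJ/frame values come from the text; the instruction count per frame is a made-up illustration, not a measured number:

```python
# Figures quoted in the text: a sub-threshold microcontroller at
# 2.6 pJ/instruction [19] and a sub-threshold JPEG co-processor at
# 1.3 pJ/frame [24].
GPP_PJ_PER_INSTR = 2.6
ASIC_PJ_PER_FRAME = 1.3

def gpp_pj_per_frame(instructions_per_frame):
    """Total GPP energy for one frame, given a hypothetical instruction count."""
    return GPP_PJ_PER_INSTR * instructions_per_frame

# Even a modest 1,000-instruction encode costs ~2000x the ASIC figure.
advantage = gpp_pj_per_frame(1000) / ASIC_PJ_PER_FRAME
print(f"ASIC advantage at 1000 instructions/frame: {advantage:.0f}x")
```

The point is that similar energy-per-operation numbers hide very different amounts of work per operation: one ASIC "operation" is an entire frame.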
To demonstrate the performance of different hardware platforms in the context of BSN applications, we simulated a typical heart rate (R-R) extraction algorithm that calculates the heart rate of a user based on the raw data of an ECG. The algorithm was run on three different platforms designed in the same technology, operating at the same voltage (0.4 V), and targeting the same data rate. The results are shown in Table 1.

We make two observations from Table 1. First, not only is ASIC > FPGA > GPP with respect to energy efficiency, but also ASIC > FPGA > GPP in terms of potential speed and performance capacity. The second observation is that there is a drastic improvement in efficiency (>100×) between GPPs and FPGAs/ASICs. Therefore, it makes sense to assign on-node processing to FPGA and ASIC platforms, while using GPPs strictly for control or rarely occurring operations. Given the large space of BSN nodes and their applications, there is no obvious optimal platform for all nodes. Though ASICs are extremely efficient in terms of energy minimization and computational capability, they are highly inflexible, as their functionality is set. Thus, they must be revisited and redesigned whenever the functionality changes. This is a major drawback, as it leads to increased design time and design cost. Furthermore, ASICs are limited to a certain application space. Therefore, flexibility is another requirement for BSNs that must be examined during the design of a node for a specific application or set of applications.
On the other end of the spectrum, GPPs offer a highly flexible option for on-node processing. Along with popular peripherals, such as the aforementioned floating point unit or multiplier, GPPs are able to perform almost any job and run any processing algorithm for the BSN node. Thus, they are useful in building most nodes, serving as a central controller for the node. The flexibility advantage is most noticeable in generic nodes, where the specific algorithm or signal processing requirements are not pre-determined but coded into instruction memory. However, this advantage comes at the cost of energy efficiency. GPPs are highly inefficient because of unused logic components and resources within the GPP for each instruction executed. Also, given the instruction-per-cycle limitations of GPPs, programs cannot fully take advantage of instruction parallelism, resulting in greater latency and energy consumption per block of data processed. State-of-the-art low power COTS GPPs can meet energy and speed requirements for many BSN applications. For example, TI's MSP430 supports a wide range of applications, running at clock frequencies up to 25 MHz while consuming 165 μA/MHz [27]. Custom GPPs will be even more efficient but will incur the development costs of an ASIC.

In summary, increasing the flexibility of processing to cover more scenarios will sacrifice energy efficiency. This means that platforms encircling larger regions of Figure 2 will necessarily be less efficient than more targeted solutions, resulting in shorter lifetimes and/or larger form factors.
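The MSP430 current specification quoted above translates directly into an active power budget. A quick sketch: the 165 µA/MHz figure is from the text [27], while the 3.0 V supply voltage is an assumed value, not a number from the text:

```python
def active_power_mw(freq_mhz, ua_per_mhz=165.0, vdd=3.0):
    """Active power from a datasheet-style current spec.

    165 uA/MHz is the figure quoted for the MSP430 [27];
    the 3.0 V supply is an assumption for illustration.
    """
    current_ma = ua_per_mhz * freq_mhz / 1000.0
    return current_ma * vdd

p25 = active_power_mw(25.0)  # full speed
p1 = active_power_mw(1.0)    # throttled clock
print(f"{p25:.3f} mW at 25 MHz, {p1:.3f} mW at 1 MHz")
```

This kind of estimate, combined with the radio energy model from the earlier section, is what lets a designer decide whether a COTS GPP fits a node's total power budget.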
Data Fidelity versus Energy

The last key tradeoff we will explore involves how much processing and communication is actually needed and relevant in an application. Previous work has shown the existence of an energy-fidelity tradeoff in BSNs with digital signal processing employed to examine tremor in a Parkinson's patient [28]. This research used Haar wavelet compression and rate-resolution scaling as an example lossy data reduction scheme for use in exploring the tradeoff space, since it met the following three criteria:

• capable of being implemented on resource-constrained BSN embedded processors;
• capable of executing in low-latency and soft real-time applications;
• adjustable by key knobs to alter expected data reduction rates.

Mean Squared Error (MSE) was used to assess fidelity, as is commonly done in the signal processing community. The results indicated there is a large energy-fidelity exploration space possible in BSNs. Figure 9 shows a small portion of this space using the Haar wavelet transform and run length encoding for data compression and highlights another interesting fact: the input signal characteristics change the possible energy-fidelity operating points.
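The rate-fidelity knob described above can be illustrated with a minimal one-level Haar transform: thresholding the detail coefficients trades reconstruction MSE against the number of coefficients (bits) kept. This sketch is ours, not the implementation from [28]; the test signal and threshold values are invented:

```python
import numpy as np

def haar_step(x):
    """One-level orthonormal Haar transform: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def inverse_haar_step(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def lossy_roundtrip(x, threshold):
    """Zero small detail coefficients, reconstruct, and report MSE."""
    a, d = haar_step(x)
    d_kept = np.where(np.abs(d) >= threshold, d, 0.0)
    x_hat = inverse_haar_step(a, d_kept)
    mse = float(np.mean((x - x_hat) ** 2))
    kept_fraction = (len(a) + np.count_nonzero(d_kept)) / len(x)
    return mse, kept_fraction

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
x = np.sin(2 * np.pi * 4 * t) + 0.05 * rng.standard_normal(256)

mse_lo, kept_lo = lossy_roundtrip(x, threshold=0.01)
mse_hi, kept_hi = lossy_roundtrip(x, threshold=0.2)
# A larger threshold keeps fewer coefficients (fewer bits) at higher MSE.
```

Because the transform is orthonormal, the MSE equals the mean squared magnitude of the discarded coefficients, so raising the threshold monotonically trades fidelity for rate, which is exactly the knob the energy-fidelity curves of Figure 9 sweep.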
Moreover, it is interesting to note that the data shown in Figure 9 is from a single patient over the course of a single clinical visit. The amount of "information" present in the sensor signals changes over time along with the rate-distortion curve, pointing to the need for dynamic management of energy-fidelity tradeoffs in these embedded environments. To illustrate further, Figure 10 depicts a time domain distortion plot for fixed data compression, yielding a compression ratio (CR) of approximately 18, for a 40 minute tremor dataset. Thus, merely choosing a static operating point on a curve of Figure 9 is not sufficient for application fidelity regulation or energy efficiency. Instead, runtime adjustment of processing methods should be performed for more optimal, data-centric operation. BSN devices must therefore possess energy awareness (knowledge of how much energy has been consumed), data awareness (knowledge of how compression affects current data), and computational resource awareness (knowledge of how algorithm execution affects processing and memory resources) to effectively trade off runtime and output fidelity in a way that is executable on resource constrained platforms and that meets real-time requirements. These tradeoff decisions can be made based on efficiently meeting requirements (e.g., maximum lifetime for a given minimum fidelity, maximum fidelity for a given minimum lifetime, etc.) or minimizing bounded cost functions (e.g., minimizing lifetime^(−α) · fidelity^(−β) given minimum lifetime and fidelity requirements, where α and β are determined based on metric priorities).
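The "meet a fidelity requirement at minimum rate" policy discussed above reduces to a simple selection rule over a rate-distortion curve. A sketch, with an entirely invented curve (the numbers are not data from Figure 9):

```python
def pick_operating_point(rd_curve, mse_limit):
    """Choose the lowest-rate point that still meets the fidelity requirement;
    fall back to the best available fidelity if none do.

    rd_curve: list of (bits_per_sample, expected_mse) pairs. The values
    used below are invented for illustration.
    """
    feasible = [p for p in rd_curve if p[1] <= mse_limit]
    if not feasible:
        return min(rd_curve, key=lambda p: p[1])  # best fidelity available
    return min(feasible, key=lambda p: p[0])      # cheapest feasible rate

curve = [(12, 5.0), (8, 40.0), (6, 95.0), (4, 210.0)]
print(pick_operating_point(curve, mse_limit=100.0))  # -> (6, 95.0)
print(pick_operating_point(curve, mse_limit=1.0))    # -> (12, 5.0)
```

Since the curve itself drifts with the input signal, a node would re-run this selection periodically as new rate-distortion estimates are gathered, which is the runtime adjustment the text calls for.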
BSN devices also need adaptable and efficient data rate scaling mechanisms to fully exploit energy-fidelity tradeoffs at the node level in real time. For instance, if an MSE ≤ 100 were required for application fidelity to remain acceptable, then any distortion below this level would be considered energy inefficient (marked as the lower region in Figure 10), because the data rate could be further reduced while still meeting the application requirement; and data above this level would not have high enough fidelity to meet the requirement (marked as the upper region in Figure 10). Only by adjusting a data rate knob at runtime would the node operate in an application-specific energy-fidelity optimized range (marked as the middle region in the shaded box of Figure 10).

Case Study of COTS System: TEMPO

TEMPO 3.2 is the latest version of the TEMPO platform, which has been designed for a range of BSN applications and illustrates the aforementioned tradeoffs between flexibility and efficiency in a COTS based system platform. Specifically, TEMPO was designed to meet requirements for human motion analysis: a broad category that can contain many specific applications. With this application area in mind, a low power non-invasive device is needed that is still flexible enough to address applications that vary from gait analysis to tremor assessment and activity detection. TEMPO 3.2 uses MEMS accelerometers and gyroscopes to perform inertial sensing to measure and study human motion and wirelessly transmits this data to a central aggregator such as a smart phone or PDA. Accelerometers and gyroscopes were chosen for inertial measurement because they are small in size, self-contained, and inexpensive when compared with other technologies like optical motion capture or magnetic localization. In order to enable communication to PDAs and smart phones while still keeping power consumption as low as possible, Bluetooth was selected as the communication protocol. While Bluetooth and other standard protocols allow for
interoperability, they are not the optimal choice for BSNs, and a networking protocol tailored to these types of systems may be necessary if systems are to be effective and energy efficient. However, standard protocols such as Bluetooth enable quick prototyping for initial data collection, which can be beneficial for showing the value of new and emerging BSN technologies.

Also, since the devices need to have a small form factor and be wearable, TEMPO 3.2 was created in the form of a large wristwatch. This makes it wearable and flexible from a design perspective, but puts other significant limitations on the system. The size of the device not only puts restrictions on the size of the electronics, but also has a significant impact on the size and capacity of the battery that powers the device. Therefore, efficiency becomes a large concern for TEMPO 3.2, as it is required to have a runtime of several hours up to several days.

In order to meet the runtime constraints mentioned above, TEMPO 3.2 uses the ultra-low power TI MSP430 microcontroller, which still provides the ability to program and load a wide range of functions and digital signal processing techniques. The microprocessor gives the ability to optimize the power consumption of the system by compressing data or performing the digital signal processing techniques that let us transmit less data over the radio (the main consumer of power).
However, as is common with many systems, there is a desire to be able to adapt as new technology emerges and as application requirements change. TEMPO 3.2 remains flexible by including a daughter board connector that allows the addition of another sensor for addressing a wider range of applications. Likewise, radio technology is a constantly changing field, as newer and lower power technologies are being developed. So TEMPO 3.2 includes the option of taking out the Bluetooth radio and replacing it with a different radio that communicates over the UART or SPI protocols. This leaves TEMPO 3.2 as a general platform that can address a wide range of applications, but it does not address a specific application as power and size efficiently as custom hardware could. In summary, this platform utilizes the advantages of COTS based systems, such as hardware expandability and interoperability with other commercial devices. Hardware capabilities may be swapped out with pin-compatible ICs and daughter boards to facilitate application reuse along with changing technology standards, and software functionality is easily modified and tested, which can be beneficial for applications in which the processing requirements are still unknown. These advantages are typical of COTS platforms. However, careful consideration must be given to ensure the form factor is kept relatively small, even when modifications are made, to make a fully attractive BSN node, which is exemplified in [30-32]. The Mica mote platform presented in [30] sits atop two AA batteries side by side, resulting in a form factor difficult to place on the human body. It contains an 802.15.4 radio with a maximum data rate of 250 kbps and a small 8-bit Atmel processor. No sensors exist on the main circuit board, but a 51-pin expansion connector allows for easy expandability at the expense of wearability. The Telos mote platform presented in [31] provides similar functionality to the Mica mote and comes with a commercial Texas Instruments
microcontroller, 16-pin expansion, and optional light, temperature, and humidity sensors on the main board. However, the Telos platform still sits atop two AA batteries with a similar form factor, which makes it undesirable for many BSN applications. The BSN node presented in [32] contains the same processor and radio as Telos, but focuses on form factor more extensively. Sensors must be added via a daughter board and the expansion connector with 6 analog channels and two serial ports, which adds size. The main board, however, is only 26 mm × 26 mm, which promotes better wearability.

Case Study of Custom IC System

One example of a custom built BSN is the ECG chip presented in [7]. It is a 0.13-μm bulk CMOS sub-threshold (sub-VT) mixed-signal system-on-chip (SoC) that acquires and processes an ECG signal for wireless ECG monitoring. The die photo is shown in Figure 11. The system consists of an adjustable gain instrumentation amplifier (IA), an 8-bit analog to digital converter (A/D), a microprocessor that operates in the sub-threshold (sub-VT) region, and a universal asynchronous receiver/transmitter (UART) to communicate with an external radio. The SoC uses a sub-threshold digital microcontroller (μC) for adaptive control of the sub-VT biased analog components and for processing the ECG data. The microcontroller core is a customized variant of the Microchip PIC 16C5X [33]. This base unit has 33 instructions and memory sizes of 24 to 73 bytes of RAM. A simple differential IA topology was chosen for the ECG amplifier. Because the amplitude of an ECG signal varies depending on the placement of the recording electrodes and physiological variations between individuals, the IA has a digitally adjustable gain. The 8-bit A/D digitizes the amplified ECG signal at a 1 kHz sampling rate. The A/D uses a dual-slope, integrating architecture. This architecture was chosen for its simplicity, low power consumption, and its insensitivity to device variation. By adjusting the A/D supply voltage, we
can trade off power consumption with resolution. This allows for lower system power, since the A/D power can be reduced when the system does not require the full fidelity capabilities of the A/D. The microcontroller, based on a PIC architecture, operates from 0.24 V to 1.2 V and consumes as little as 1.51 pJ per instruction at its minimum energy voltage of 0.28 V. The entire SoC (analog front end, ADC, and digital processor) consumes only 2.6 μW while providing raw ECG data or processed heart rate data. This level of energy efficiency far exceeds the abilities of COTS implementations and makes the idea of an energy harvesting ECG sensor feasible.

When only heart rate information is required, the onboard computation of heart rate reduces the wireless channel data rate by a ratio of 500:1, which allows complete beat-by-beat heart rate information to be communicated with much less energy expended in the radio. This chip exemplifies how low energy processing can be used to increase the effective CR in a BSN by extracting the important information from raw data prior to communication.
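The style of data reduction described above, sending beat intervals instead of raw samples, can be illustrated with a toy detector. This is not the algorithm on the SoC in [7]; the threshold, refractory period, and synthetic signal are all invented for illustration (practical detectors such as Pan-Tompkins add filtering):

```python
def rr_intervals(samples, fs_hz=1000, threshold=0.6, refractory_s=0.25):
    """Naive R-peak detector: amplitude threshold plus a refractory period.

    Returns beat-to-beat (R-R) intervals in seconds. All parameter
    values are illustrative, not taken from the chip in [7].
    """
    refractory = int(refractory_s * fs_hz)
    peaks, last = [], -refractory
    for i, v in enumerate(samples):
        if v >= threshold and i - last >= refractory:
            peaks.append(i)
            last = i
    return [(b - a) / fs_hz for a, b in zip(peaks, peaks[1:])]

# Synthetic "ECG" at fs = 1 kHz: one spike per second for 5 s (60 bpm).
ecg = [1.0 if i % 1000 == 0 else 0.0 for i in range(5000)]
rr = rr_intervals(ecg)
print(rr)  # -> [1.0, 1.0, 1.0, 1.0]: 5000 samples reduced to 4 numbers
```

Here 5000 raw samples collapse to four interval values, which is the mechanism behind the chip's 500:1 channel data rate reduction (the exact ratio depends on sample width and encoding).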
This custom SoC platform takes advantage of the fact that the application is well defined, and therefore platform flexibility can be traded off for optimization in energy efficiency and form factor. This platform utilizes the general strategies of communication versus computation and fidelity versus energy to obtain better energy efficiency. Other examples of custom platforms include [34-37]. [34] is a 0.5 × 1.5 × 2 mm³, 5.3 nW intraocular pressure sensor with a microprocessor and transmitter that is used to detect glaucoma. It achieves low power by duty cycling, the use of low power clocks, and on-board processing. [35] integrates a glucose sensor with a wireless transmitter in a contact lens for diabetes monitoring, consuming 3 µW of power that is delivered wirelessly. It utilizes a sub-µW low-power regulator and bandgap reference to achieve its low power profile. [36] is a fully integrated platform that performs heart rate detection and ECG processing at 445 nW and 895 nW, respectively. It lowers its power profile through the ability to utilize a low power, less precise clock or a higher power, more accurate clock, as well as power-efficient biasing of analog components. [37] is designed to be used in fabric to monitor vital signs, utilizing only 12 µW. The designers of [37] carefully selected the topologies of the low-dropout regulator, analog front end, and A/D to remain under their power budget. As can be seen, design efforts for energy efficiency and very small form factor are two features common to custom platforms.
Conclusions

In this paper, we have explored strategies and methodologies for energy efficient design of BSN nodes. Starting from the characteristics of BSNs that arise from their application space and make them unique (including significant differences from traditional WSNs), we have identified the tradeoff metrics available for design optimization. We then elaborated on general strategies for designing energy efficient hardware, focusing on the tradeoffs of computation versus communication, flexibility versus efficiency, and data fidelity versus energy. We examined key tradeoffs in the BSN space that ultimately may lead to the decision between a COTS based platform or a custom IC design. Finally, we presented two cases of previous work to show examples of a COTS based node and a custom designed hardware node. As the field of BSNs continues to grow, we anticipate that a rich selection of design techniques will lead to creative solutions leveraging both types of hardware design and resulting in numerous successful BSN deployments.

Figure 2. Broad design space for BSN, but size limits energy for all applications.
Figure 4. Energy-workload curve of normal operation and dynamic voltage scaling (DVS).
Figure 6. Energy-delay curves for DVS in a COTS microcontroller and a custom design.
Figure 11. Die photograph of the ECG SoC. The analog front end (instrumentation amp (IA) and A/D) and microcontroller (μC) comprise only 0.0633 mm² of active area [7].
Table 1. Comparison of different hardware platforms.
Einstein-Yang-Mills-Lorentz black holes

Different black hole solutions of the coupled Einstein-Yang-Mills equations have been well known for a long time. They have attracted much attention from mathematicians and physicists since their discovery. In this work, we analyze black holes associated with the gauge Lorentz group. In particular, we study solutions which identify the gauge connection with the spin connection. This ansatz allows one to find exact solutions to the complete system of equations. By using this procedure, we show the equivalence between the Yang-Mills-Lorentz model in curved space-time and a particular set of extended gravitational theories.

I. INTRODUCTION

The dynamical interacting system of equations related to non-abelian gauge theories defined on a curved space-time is known as Einstein-Yang-Mills (EYM) theory. Thus, this theory describes the phenomenology of Yang-Mills fields [1] interacting with the gravitational attraction, such as the electro-weak model or the strong nuclear force associated with quantum chromodynamics. The EYM model constitutes a paradigmatic example of the non-linear interactions related to gravitational phenomenology. Indeed, the evolution of a spherically symmetric system obeying these equations can be very rich. Its dynamics is opposite to the one predicted by other models, such as the ones provided by the Einstein-Maxwell (EM) equations, whose static behaviour is enforced by a version of Birkhoff's theorem. For instance, in the four-dimensional space-time, the EYM equations associated with the gauge group SU(2) support a discrete family of static self-gravitating solitonic solutions, found by Bartnik and McKinnon [2]. There are also hairy black hole (BH) solutions, as was shown by Bizon [3,4]. They are known as colored black holes and can be labeled by the number of nodes of the exterior Yang-Mills field configuration.
Although the Yang-Mills fields do not vanish completely outside the horizon, these solutions are characterized by the absence of a global charge. This feature is opposite to the one predicted by the standard BH uniqueness theorems related to the EM equations, whose solutions can be classified solely by the values of the mass, (electric and/or magnetic) charge and angular momentum evaluated at infinity. In any case, the EYM model also supports the Reissner-Nordström BH as an embedded abelian solution with global magnetic charge [5]. It is also interesting to mention that there is a larger variety of solutions associated with different generalizations of the EYM equations extended with dilaton fields, higher curvature corrections, Higgs fields or cosmological constants (see [6] and references therein for more details). In this work, we are interested in finding solutions of the EYM equations associated with the Lorentz group as gauge group. The main motivation for considering such a gauge symmetry is offered by the spin connection dynamics. This connection is introduced for a consistent description of spinor fields defined on curved space-times. Although general coordinate transformations do not have spinor representations [7], they can be described by the representations associated with the Lorentz group. In addition, they can be used to define alternative theories of gravity [8]. We shall impose that the spin connection is dynamical and that its evolution is determined by the Yang-Mills action related to the SO(1, n − 1) symmetry, where n is the number of dimensions of the space-time. In order to complete the EYM equations, we shall assume that gravitation is described by the metric of a Lorentzian manifold. We shall find vacuum analytic solutions to the EYM system by choosing a particular ansatz that relates the spin connection with the gauge connection. This work is organized in the following way.
First, in Section II, we present basic features of the EYM model. In Section III, we show the general results based on the Lorentz group, taking as a starting point the spin connection, which yields exact solutions to the EYM equations in vacuum. The expressions of the field for the Schwarzschild-de Sitter metric in a four-dimensional space-time are shown in Section IV, where we also remark on some properties of the particular solutions in higher dimensional space-times. Finally, we classify the Yang-Mills field configurations through the Carmeli method in Section V, and we present the conclusions obtained from our analysis in Section VI.

II. EYM EQUATIONS ASSOCIATED WITH THE LORENTZ GROUP

The dynamics of a non-abelian gauge theory defined on a four-dimensional Lorentzian manifold is described by the following EYM action: where Unless otherwise specified, we will use Planck units throughout this work (G = c = ħ = 1), the signature (+, −, −, −) is used for the metric tensor, and Greek letters denote covariant indices, whereas Latin letters stand for Lorentzian indices. The above action is called pure EYM, since it is related to its simplest form, without any additional field or matter content (see [6] for more complex systems). The EYM equations can be derived from this action by performing variations with respect to the gauge connection: and the metric tensor: where the energy-momentum tensor associated with the Yang-Mills field configuration is given by: As we have commented, the first non-abelian solution with matter fields was found numerically by Bartnik and McKinnon for the case of a four-dimensional space-time and an SU(2) gauge group [2]. We are interested in solving the above system of equations when the gauge symmetry is associated with the Lorentz group SO(1, 3). In this case, we can write the gauge connection as where the generators of the gauge group, J_ab, can be written in terms of the Dirac gamma matrices: J_ab = i[γ_a, γ_b]/8.
In such a case, it is straightforward to deduce the commutation relations of the Lorentz generators: III. EYM-LORENTZ ANSATZ The above set of equations constitutes a complicated system involving a large number of degrees of freedom, which interact not only through the gravitational attraction but also through the non-abelian gauge interaction. It is not simple to find its solutions. We propose the following ansatz, which identifies the gauge connection with the spin connection: with e a λ the tetrad field [9,10], which is defined through the metric tensor g µν = e a µ e b ν η ab ; and Γ λ ρµ the affine connection of a semi-Riemannian manifold V 4 . By using the antisymmetry of the gauge connection with respect to the Lorentz indices: (A ab ) µ = − (A ba ) µ , we can write the field strength tensor as Then, by taking into account the orthogonality property of the tetrad field e λ a e a ρ = δ λ ρ , the field strength tensor takes the form [11,12]: where R λ ρµν are the components of the Riemann tensor. Thus, F µν = e a λ e ρ b R λ ρµν J b a represents a gauge curvature and we can express the pure EYM Equations (2) and (3) in terms of geometrical quantities. On the one hand, Eq. (2) can be written as: whereas on the other hand, the standard Einstein equations given by Eq. (3) have the following energy-momentum tensor as source: which replaces Eq. (4). IV. SOLUTIONS OF THE EYM-LORENTZ ANSATZ The EYM-Lorentz ansatz described above reduces the problem to a pure gravitational system and simplifies the search for particular solutions. According to the second Bianchi identity for a semi-Riemannian manifold, the components of the Riemann tensor satisfy: By contracting this expression with the metric tensor: By using the symmetries of the Riemann tensor: with R ν λ the components of the Ricci tensor.
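The ansatz referred to above corresponds to the standard expression of the spin connection in terms of the tetrad and the affine connection. In the index conventions of the text it would read (our reconstruction; the equation itself is elided in this extraction):

```latex
% EYM-Lorentz ansatz: gauge connection identified with the spin
% connection (standard expression, reconstructed):
(A^{ab})_{\mu} = e^{a}{}_{\lambda}\left(\partial_{\mu} e^{b\lambda}
  + \Gamma^{\lambda}{}_{\rho\mu}\, e^{b\rho}\right),
% so that, using the orthogonality e^{\lambda}{}_{a} e^{a}{}_{\rho}
% = \delta^{\lambda}{}_{\rho}, the field strength becomes
F_{\mu\nu} = e^{a}{}_{\lambda}\, e_{b}{}^{\rho}\,
  R^{\lambda}{}_{\rho\mu\nu}\, J^{b}{}_{a}.
```

The second line is the relation quoted explicitly in the text, so the first line is fixed by consistency up to index placement conventions.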
Then, taking into account (9), we finally obtain: The integrability condition R σ [µν|λ| R ρ]σ = 0 for this expression is known to admit only the solutions [13]: where b is a constant. First, we shall analyze the case of a space-time characterized by four dimensions. In such a case, T µν is trace-free and the solutions are scalar-flat. From the expression of the stress-energy tensor in terms of the Weyl and Ricci tensors, the Einstein equations are: where Therefore, by using (15) and the condition C λ µλν = 0, the only solutions are vacuum solutions defined by R µν = 0 [14]. Hence, for empty space, T µν = 0 and all the equations are satisfied by well-known solutions [15] such as the Schwarzschild or Kerr metric. Note that these solutions are generally supported by a large variety of different field models and gravitational theories [16]. We can also add a cosmological constant to the Lagrangian and generalize the standard solutions to de Sitter or anti-de Sitter asymptotic space-times, depending on the sign of such a constant. Once the metric solution is fixed by the particular boundary conditions, the EYM-Lorentz ansatz defined by Eq. (6) determines the solution of the Yang-Mills field configuration. In order to characterize such a configuration, it is interesting to establish the form of the electric field E µ = F µν u ν and the magnetic field B µ = * F µν u ν , as measured by an observer moving with four-velocity u ν . In particular, for the Schwarzschild-de Sitter solution, we find the following electric and magnetic projections of the Yang-Mills field strength tensor in the rest frame of reference: It is straightforward to check that the above solution verifies: and tr( E · B) = 0 . It is also interesting to remark that the family of solutions provided by the EYM-Lorentz ansatz is not restricted to the signature (+, −, −, −). It is also valid for the Euclidean case (+, +, +, +).
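The chain of identities invoked in the preceding passage is standard differential geometry. Written out explicitly (our reconstruction, consistent with the symbols used in the text), the second Bianchi identity and its metric contraction read:

```latex
% Second Bianchi identity and its contraction (standard identities,
% reconstructed to match the elided equations):
\nabla_{[\sigma} R^{\lambda}{}_{|\rho|\mu\nu]} = 0
\quad\Longrightarrow\quad
\nabla_{\lambda} R^{\lambda}{}_{\rho\mu\nu}
  = \nabla_{\mu} R_{\rho\nu} - \nabla_{\nu} R_{\rho\mu},
```

with R_{ρν} the Ricci tensor components, which is the step that turns the Yang-Mills equation for the curvature-valued field strength into a purely gravitational condition.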
For this latter signature, the corresponding gauge group is SO(4) and the associated generators satisfy the following commutation relations: The above solutions can also be generalized to a space-time with an arbitrarily higher number of dimensions. For the n-dimensional case, the assumption of the ansatz (6) in the EYM Equations (2) and (3) is equivalent to working with the following gravitational action in the Palatini formalism: where ñ = n for even n and ñ = n − 1 for odd n. In such a case, the Yang-Mills stress-energy tensor takes the form of the one associated with a cosmological constant, in a similar way to certain solutions of modified gravity theories, such as the Boulware-Deser solution in Gauss-Bonnet gravity [17]. For instance, for a de Sitter geometry, the Riemann curvature tensor is given by In this case, the energy-momentum tensor associated with the Yang-Mills configuration given by Eq. (10) takes the form Therefore, T µν = 0 is a particular result associated with the four-dimensional space-time. On the other hand, the equivalence between the Yang-Mills-Lorentz model in curved space-time and a pure gravitational theory is not restricted to Einstein gravity. For example, in the five-dimensional case, we can study the gravitational model defined by the following action in the Palatini formalism: The above expression includes not only the cosmological constant (proportional to α 0 ) and the Einstein-Hilbert term (proportional to α 1 ), but also quadratic contributions of the curvature tensor (proportional to α 2 , α 3 and α 4 ). In this case, the addition of the Yang-Mills action under the restriction of the Lorentz ansatz (6) is equivalent to working with the same gravitational model given by Eq. (29) with the following redefinition of α 4 : It is particularly interesting to consider the model with . In such a case, the higher order contribution in the equivalent gravitational system is proportional to the Gauss-Bonnet term.
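For a maximally symmetric (de Sitter) geometry in n dimensions, the Riemann tensor quoted above takes the standard constant-curvature form; identifying the constant b of the integrability condition with the curvature prefactor is our assumption:

```latex
% Constant-curvature (de Sitter) Riemann tensor in n dimensions
% (standard form; the identification of b is an assumption):
R_{\lambda\rho\mu\nu}
  = b \left(g_{\lambda\mu}\, g_{\rho\nu} - g_{\lambda\nu}\, g_{\rho\mu}\right),
\qquad
b = \frac{2\Lambda}{(n-1)(n-2)}.
```

With this form, the Yang-Mills stress-energy tensor built from the curvature-valued field strength becomes proportional to g_{µν}, i.e., an effective cosmological-constant contribution, which vanishes precisely for n = 4 as stated in the text.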
As is well known, this latter term reduces to a topological surface contribution for n = 4, but it is dynamical for n ≥ 5. In particular, according to the Boulware-Deser solution, the metric associated with the corresponding equations takes the simple form: where dΩ 2 3 is the metric of a unitary three-sphere, and A 2 (r) is given by: with α 0 /α 1 = −2Λ, α 2 /α 1 = Υ, and σ = 1 or σ = −1. Therefore, from the EYM point of view, the Yang-Mills field contribution modifies the metric solution in a very non-trivial way. We can study the limit Υ → 0 in the Boulware-Deser metric. It is interesting to note that this limit does not necessarily mean a weak coupling regime of the EYM interaction, since α YM 4 → 0 does not imply α → 0. It is convenient to distinguish between the branch σ = −1 and the branch σ = 1. The first choice recovers the Schwarzschild-de Sitter solution for Υ = 0: When this metric is deduced from the equations corresponding to a pure gravitational theory, the new contributions from finite values of Υ are usually interpreted as short-distance corrections of high-curvature terms in the Einstein-Hilbert action. From the EYM model point of view, these corrections originate from the Yang-Mills contribution interacting with the gravitational attraction. On the other hand, the metric solution takes the following form in the EYM weak coupling limit for the value σ = 1: The corresponding geometry does not recover the Schwarzschild-de Sitter limit when Υ → 0, and it presents ghost instabilities. V. CARMELI CLASSIFICATION OF THE YANG-MILLS FIELD CONFIGURATIONS In the same way that the Petrov classification of the gravitational field describes the possible algebraic symmetries of the Weyl tensor through the problem of finding their eigenvalues and eigenbivectors [18], the Carmeli classification analyzes the symmetries of Yang-Mills field configurations [19].
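For reference, the Υ → 0 limit of the σ = −1 branch mentioned above is the five-dimensional Schwarzschild-de Sitter geometry. With a mass parameter µ in a convenient normalization (our choice; the text's normalization is elided), the metric function in the signature (+, −, −, −) used throughout reads:

```latex
% Upsilon -> 0 limit of the sigma = -1 Boulware-Deser branch:
% five-dimensional Schwarzschild-de Sitter (mu is a mass parameter
% in an assumed normalization):
ds^2 = A^2(r)\, dt^2 - A^{-2}(r)\, dr^2 - r^2\, d\Omega_3^2,
\qquad
A^2(r) = 1 - \frac{\mu}{r^2} - \frac{\Lambda\, r^2}{6}.
```

Note the characteristic 1/r² falloff of the mass term in five dimensions, as opposed to the 1/r falloff of the four-dimensional Schwarzschild solution.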
Let ξ ABCD be the gauge invariant spinor defined by ξ ABCD = 1 4 ǫĖḞ ǫĠḢ tr(f AĖBḞ f CĠDḢ ), with f AḂCḊ = τ µ AḂ τ ν CḊ F µν the spinor equivalent of the Yang-Mills field strength tensor written in terms of the generalizations of the unit and Pauli matrices, which establish the correspondence between spinors and tensors. Let φ AB be a symmetric spinor. Then, by studying the eigenspinor equation ξ CD AB φ CD = λ φ AB , we can classify Yang-Mills field configurations in a systematic way. This analysis can be applied to any of the EYM-Lorentz solutions but, for simplicity, we will illustrate the computation for the EYM solution related to the Schwarzschild metric in four dimensions. We find the following invariants of the Yang-Mills field: where η ABCD is the totally symmetric spinor ξ (ABCD) , and ξ ABCD satisfies the equalities ξ ABCD = ξ BACD = ξ ABDC = ξ CDAB . Then, the characteristic polynomial p(λ ′ ) = λ ′3 − Gλ ′ /2 − H/3 associated with the eigenspinor equation of η ABCD directly provides the eigenvalues of the corresponding ξ ABCD . By taking λ = λ ′ + P/3, we obtain the following results: Thus, there are two different eigenvalues: the first one is simple, whereas the second one is double. There are three distinct eigenspinors and the corresponding Yang-Mills field is of type D P , which is associated with the Yang-Mills configurations of isolated massive objects. VI. CONCLUSIONS In this work, we have studied the EYM theory associated with a SO(1, n − 1) gauge symmetry, where n is the number of dimensions associated with the space-time. In particular, we have derived analytical expressions for a large variety of black hole solutions. For this analysis, we have used an ansatz that identifies the gauge connection with the spin connection. We have shown that this ansatz allows us to interpret different known metric solutions corresponding to pure gravitational systems in terms of equivalent EYM models.
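The eigenvalue structure behind the type-D classification can be checked numerically. For the depressed cubic p(λ') = λ'³ − Gλ'/2 − H/3 quoted above, a repeated root (one simple plus one double eigenvalue) occurs exactly when the discriminant vanishes, i.e., G³ = 6H². The sketch below uses illustrative values of the invariants G and H satisfying this condition, not the values computed in the paper:

```python
import numpy as np

# Characteristic polynomial of the symmetrized spinor eta_ABCD, as quoted
# in the text: p(l) = l**3 - G*l/2 - H/3.  G = 6, H = 6 are illustrative
# values chosen to satisfy the double-root condition G**3 == 6*H**2;
# they are NOT the invariants of the paper.
G, H = 6.0, 6.0
assert abs(G**3 - 6.0 * H**2) < 1e-12  # vanishing discriminant

# Roots of l**3 - (G/2) l - (H/3) = l**3 - 3 l - 2 = (l + 1)**2 (l - 2):
roots = np.roots([1.0, 0.0, -G / 2.0, -H / 3.0])
roots = np.sort_complex(roots).real  # all roots are real here

# One double eigenvalue (-1) and one simple eigenvalue (2):
# the signature of a type D_P Yang-Mills configuration.
print(np.round(roots, 6).tolist())  # → [-1.0, -1.0, 2.0]
```

Rounding is used because `np.roots` resolves a double root only to roughly square-root machine precision.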
We have demonstrated that this analytical method can also be applied successfully to the study of fundamental black hole configurations. For the analysis of the corresponding Yang-Mills model with Lorentz gauge symmetry in curved space-time, we have used the appropriate procedure to solve the equivalent gravitational equations, which govern the dynamics of the pure gravitational systems associated with the proper gravitational theory. In particular, we have derived the solutions for the Schwarzschild-de Sitter geometry in a four-dimensional space-time and for the Boulware-Deser metric in the five-dimensional case. For these solutions, we have specified the corresponding pure gravitational theories. The algebraic symmetries associated with the Yang-Mills configuration related to a given solution can be classified by following the Carmeli method. We have explicitly shown the equivalence with the Petrov classification for the Schwarzschild metric in four dimensions. In addition, numerical results obtained for these gravitational systems can be extrapolated to the EYM-Lorentz model by following our prescription. Through the gravitational analogy, one can also deduce the stability properties of the EYM solutions or the gravitational collapse associated with such a system. Here, we have limited the EYM-Lorentz ansatz to the analysis of spherical and static black hole configurations, but it can be used to study other types of solutions. For example, by using the same ansatz, gravitational plane waves in modified theories of gravity may be interpreted as EYM-Lorentz waves. We consider that all these ideas deserve further investigation in future works.
Catalytic subunits of the phosphatase calcineurin interact with NF-κB-inducing kinase (NIK) and attenuate NIK-dependent gene expression Nuclear factor (NF)-κB-inducing kinase (NIK) is a serine/threonine kinase that activates NF-κB pathways, thereby regulating a wide variety of immune responses. Aberrant NIK activation causes tumor malignancy, suggesting a requirement for precise regulation of NIK activity. To explore novel interacting proteins of NIK, we performed in vitro virus screening and identified the catalytic subunit Aα isoform of the serine/threonine phosphatase calcineurin (CnAα) as a novel NIK-interacting protein. The interaction of NIK with CnAα in living cells was confirmed by co-immunoprecipitation. The calcineurin catalytic subunit Aβ isoform (CnAβ) also bound to NIK. Experiments using domain deletion mutants suggested that CnAα and CnAβ interact with both the kinase domain and C-terminal region of NIK. Moreover, the phosphatase domain of CnAα is responsible for the interaction with NIK. Intriguingly, we found that TRAF3, a critical regulator of NIK activity, also binds to CnAα and CnAβ. Depletion of CnAα and CnAβ significantly enhanced lymphotoxin-β receptor (LtβR)-mediated expression of the NIK-dependent gene Spi-B and activation of RelA and RelB, suggesting that CnAα and CnAβ attenuate NF-κB activation mediated by LtβR-NIK signaling. Overall, these findings suggest a possible role of CnAα and CnAβ in modifying NIK functions. two distinct intracellular signaling pathways, the canonical and non-canonical NF-κB pathways 4 . The canonical NF-κB pathway requires the IκB kinase (IKK) complex including IKKα, IKKβ, and IKKγ and results in nuclear translocation of NF-κB dimers typically consisting of RelA and p50, which in turn up-regulate genes required for innate immune responses and cell survival.
In contrast to the canonical NF-κB pathway, the non-canonical NF-κB pathway does not require IKKβ and IKKγ, while IKKα is essential for mediation of the signaling pathway. IKKα phosphorylates the inhibitory protein p100, which preferentially binds to RelB. Phosphorylation of p100 is followed by partial degradation of p100 to p52. Consequently, the p52-RelB heterodimer complex is translocated into the nucleus for transcriptional activation 5 . NF-κB-inducing kinase (NIK) was originally identified as a serine/threonine kinase that activates the canonical NF-κB pathway 6 . However, later studies revealed an essential role of NIK in non-canonical NF-κB activation. NIK-deficient mice and alymphoplasia (aly) mice, which have a dysfunctional point mutation in the Nik gene, lack lymph nodes, Peyer's patches, and organized structures of the spleen and thymus [7][8][9] . These phenotypes are similar to those of RelB-deficient mice 10 . Moreover, ligand-dependent phosphorylation of IKKα and processing of p100 are abolished by the absence of functional NIK in mouse embryonic fibroblasts (MEFs) 11 . These data suggest that NIK is a critical activator of the non-canonical NF-κB pathway that activates RelB via phosphorylation of IKKα and subsequent partial degradation of p100. In addition to its physiological significance, deregulation of NIK activation is reportedly associated with the onset of multiple myeloma and inflammatory diseases [12][13][14] . Under these pathological conditions, the canonical and non-canonical NF-κB pathways are constitutively activated by NIK. These findings suggest a biological significance of the precise regulation of NIK-dependent NF-κB activation. Activation of NIK is controlled by its phosphorylation and proteasome-dependent degradation 15 .
In unstimulated cells, NIK is recruited to a complex consisting of TNF receptor-associated factor (TRAF) 3, TRAF2, and cellular inhibitor of apoptosis 1 or 2 (cIAP1/2) ubiquitin ligase through binding to TRAF3. The TRAF3-TRAF2-cIAP1/2 complex induces polyubiquitination and subsequent proteasomal degradation of NIK in unstimulated cells 16 . As a result, this constitutive degradation limits the amount of NIK protein to a biochemically undetectable level in unstimulated cells. Ligand stimulation of receptors triggers self-degradation of the TRAF3-TRAF2-cIAP1/2 complex, thereby leading to stabilization and accumulation of NIK. Accumulated NIK induces autophosphorylation of Thr-559, which is required for phosphorylation of downstream IKKα for signal transduction 17 . In addition, a recent study has revealed novel feedback inhibition of NIK activity by IKKα-mediated phosphorylation of NIK at Ser-809, Ser-812, and Ser-815, leading to destabilization of the NIK protein 18 . Calcineurin is a serine/threonine protein phosphatase comprising a catalytic subunit (CnA) and a regulatory subunit (CnB), which participates in calcium ion-dependent signal transduction pathways 19 . Calcineurin activates nuclear factor of activated T cells (NFAT) by dephosphorylation. Previous studies have elucidated the roles of calcineurin in NF-κB activation. Calcineurin enhances T-cell antigen receptor (TCR)-mediated NF-κB activation by regulating formation of the Carma1-Bcl10-Malt1 complex 20,21 . In contrast, inhibition of calcineurin in murine macrophages enhances the nuclear localization of RelA induced by Toll-like receptor (TLR) signaling. Thus, calcineurin is a positive regulator of TCR signaling and a negative regulator of TLR signaling. These findings suggest the involvement of calcineurin in the canonical NF-κB pathway. However, the role of calcineurin remains to be determined in the non-canonical NF-κB pathway.
In this study, we identified the calcineurin catalytic subunit Aα and Aβ isoforms (CnAα and CnAβ, respectively) as novel NIK-interacting proteins. Small interfering (si)RNA-mediated depletion of CnAα and CnAβ (CnAα/β) enhanced nuclear translocation of RelA and RelB and expression of a NIK-dependent target gene, Spi-B. Thus, our data suggest that CnAα/β are negative regulators of NIK-mediated signaling. NIK binds to the catalytic subunits of calcineurin. To identify novel NIK-binding proteins, we performed in vitro selection of NIK-binding proteins using the combination of cell-free co-translation and an "in vitro virus" (IVV) technology [22][23][24] . This selection consisted of several steps: in vitro transcription and cell-free co-translation of bait NIK and prey cDNAs, IVV selection, and amplification of the selected IVVs by RT-PCR (see Methods for details). Relatively weak interactions between NIK and NIK-binding peptides were detected by multiple rounds of this procedure. We screened a cDNA expression library from mouse embryonic thymus and obtained 29 candidates as novel NIK-binding proteins (Table 1). Because the function of NIK is positively or negatively controlled by phosphorylation and proteasome-dependent degradation 15 , respectively, we focused on possible regulators of these biochemical reactions (e.g., kinases, phosphatases, and ubiquitin ligases). Among the 29 candidates, we further validated CnAα as a possible regulator of NIK by co-immunoprecipitation studies (validation of some other candidates is shown in Table 1). To verify the interaction between CnAα and NIK in living cells, Flag-tagged NIK and Myc-tagged CnAα were transiently co-expressed in human embryonic kidney (HEK) 293T cells. A co-immunoprecipitation assay revealed that CnAα bound to NIK in HEK293T cells (Fig. 1A). The CnA family consists of three isoforms encoded by different genes: CnAα, CnAβ, and the calcineurin catalytic subunit Aγ isoform (CnAγ).
CnAα/β are expressed ubiquitously and usually function in a redundant manner, whereas expression of CnAγ is testis specific 25 . Despite the similarity in structure, the NIK-CnAβ interaction was not detected in the first screening, possibly due to technical reasons (e.g., biased amplification during the multiple rounds of selection and PCR). Therefore, we tested binding of CnAβ to NIK in a co-immunoprecipitation assay. Indeed, co-immunoprecipitation indicated that CnAβ also interacted with NIK in HEK293T cells (Fig. 1A). These data suggested a common binding activity of CnAα/β for NIK. To gain some insight into the function of CnAα/β in NIK-dependent signaling, we next determined the domains in NIK responsible for its binding to CnAα/β. NIK has a serine/threonine kinase domain that is essential for activation of NIK itself and of downstream signal-transducing molecules 15 . The serine/threonine kinase region intervenes between the N-terminal and C-terminal regions (Fig. 1A). The N-terminal region contains the binding site for TRAF3, which is critical for degradation of NIK. The C-terminal region includes the binding site for IKKα, which is phosphorylated by NIK and subsequently mediates downstream activation of the NF-κB pathway. To determine the CnAα/β-binding region in NIK, we analyzed various deletion mutants of NIK co-expressed with CnAα in HEK293T cells (Fig. 1A; left). A co-immunoprecipitation assay showed that deletion of both the C-terminal region and the kinase domain (ΔKC mutant in Fig. 1B) abolished binding to CnAα, whereas the deletion mutant lacking only the C-terminal region still bound to CnAα (ΔC mutant in Fig. 1B). This finding suggests that the kinase domain binds to CnAα. Furthermore, the mutant lacking both the N-terminal region and the kinase domain bound to CnAα (ΔNK in Fig. 1B), indicating that the C-terminal region also binds to CnAα. Thus, either the C-terminal region or the kinase domain (ΔNK and ΔNC in Fig.
1B, respectively) is sufficient for interacting with CnAα (Fig. 1A; right). As expected from their similarity, the binding regions of CnAβ in NIK were similar to those of CnAα (Fig. 1A), although the interaction of CnAβ with the ΔC mutant of NIK is relatively weaker than that of CnAα. These data suggest that NIK recruits CnAα/β via two distinct regions, the kinase domain and the C-terminal region. Schematics of NIK and its deletion mutants used in this study. "Kinase" indicates the kinase domain. "IKKα" indicates the determined binding region of IKKα. A TRAF3-binding sequence is located in the N-terminal region. The Flag tag (abbreviated in this figure) was connected to the N-terminus of the wild-type protein and mutants. The binding ability of each protein for CnAα/β, as determined in Fig. 1A, is indicated at the right of each structure. "+" indicates positive for binding, and "−" indicates negative for binding. We next examined the NIK-binding region in CnAα. CnAα consists of several domains: an N-terminal phosphatase catalytic domain, a regulatory subunit-binding domain, a calmodulin-binding domain, and an autoinhibitory domain (Fig. 2A) 24 . C- or N-terminal deletion mutants of CnAα (CnAαΔC and CnAαΔN in Fig. 2A) were co-expressed with NIK in HEK293T cells. A co-immunoprecipitation assay showed that NIK bound to the C-terminal deletion mutant (CnAαΔC), but not the N-terminal deletion mutant (CnAαΔN) (Fig. 2B). Thus, CnAα binds to NIK via its phosphatase domain. These data suggest that the phosphatase domain of CnAα/β interacts with the kinase domain and C-terminal domain of NIK. Because NIK is recruited to a protein complex consisting of TRAF2, TRAF3, and cIAPs in unstimulated cells, we next determined whether CnAα/β also interact with this protein complex. CnAα/β bind to TRAF3. The protein complex consisting of TRAF2, TRAF3, and cIAP1 or cIAP2 mediates polyubiquitination of NIK, thereby initiating its degradation in unstimulated cells 5 .
TRAF3 in this protein complex binds to NIK. Interestingly, a co-immunoprecipitation assay indicated that CnAα/β bound to TRAF3 in transfected HEK293T cells (Fig. 3). Thus, in addition to NIK, CnAα/β bind to TRAF3. These results support the idea that CnAα/β bind to a transient protein complex containing TRAF3 and NIK, which should be formed before proteasome-dependent constitutive degradation of NIK in unstimulated cells. Interestingly, the affinity of CnAβ for TRAF3 seemed to be higher than that of CnAα, implying a difference between these two homologues in their contribution to the function of the NIK-TRAF3 complex. Because CnAα/β interact with NIK and its regulator TRAF3, we next addressed the roles of CnAα/β in NIK-mediated gene expression induced by receptor ligation. The transcription factor Spi-B is a target gene of NIK-mediated signaling triggered by ligation of the lymphotoxin β receptor. TNF receptor family lymphotoxin β receptor (LTβR) signaling has been reported to activate NIK-mediated non-canonical NF-κB activation, thereby inducing the expression of numerous chemokines including Cxcl13, Ccl19, and Ccl21 in peripheral lymphoid tissues [26][27][28] . However, we failed to detect significant up-regulation of these genes in MEFs, which is consistent with Supplementary Figure 2. B. Schematics of CnAα and its deletion mutants used in this study. "Phosphatase" indicates the phosphatase domain containing the catalytic domain and regulatory subunit-binding domain. "CaM" indicates a potential calmodulin-binding domain. "AI" indicates the auto-inhibitory domain. The Flag tag (abbreviated in this figure) was connected to the N-terminus of the wild-type protein and mutants. The binding ability of each protein for NIK, as determined in Fig. 2A, is indicated at the right of each structure. "+" indicates positive for binding, and "−" indicates negative for binding.
We have recently found that NIK activation induces expression of a splice variant of Spi-B (hereafter referred to as Spi-B1) in TNF receptor family member RANK signaling 31 . That study suggested that Spi-B1 is a direct target gene of NIK-mediated activation of NF-κB signaling because overexpression of NIK and the RelB complex activates the proximal promoter of the Spi-B1 gene 31 . Because LTβR signaling activates NIK-dependent NF-κB pathways similarly to RANK signaling 32 , we first tested whether LtβR signaling induces Spi-B1. MEF cells were stimulated with an agonistic anti-LtβR antibody. Quantitative PCR (qPCR) analysis indicated that LtβR signaling efficiently up-regulated Spi-B1 (Fig. 4A,B). We next confirmed that LtβR signaling-mediated expression of Spi-B1 is dependent on NIK activity. The aly/aly mouse line has a point mutation in the coding region of the Nik gene 8 . Because the aly/aly mutation abrogates binding of NIK to IKKα 33 , there is a severe impairment in NF-κB activation mediated by NIK-IKKα. We isolated MEFs from aly/aly mice and determined whether LtβR signaling-mediated Spi-B1 expression is dependent on the NIK-IKKα axis by qPCR analysis. In fact, up-regulation of Spi-B1 induced by LtβR stimulation was abolished in aly/aly MEFs (Fig. 4A). Thus, the NIK-IKKα interaction is essential for LtβR signaling-dependent expression of Spi-B1 in MEFs. Because the LtβR-NIK-IKKα signaling axis was confirmed to induce Spi-B1 expression in MEFs, we next addressed the function of CnAα/β in LtβR signaling-dependent Spi-B1 expression in MEFs. CnAα/β attenuate expression of Spi-B and nuclear translocation of RelA and RelB induced by NIK-mediated signaling. Protein expression of CnAα/β was suppressed by siRNA-mediated knockdown in MEFs (Fig. 4B). We found that siRNA-mediated knockdown of CnAα/β resulted in a significant increase in the expression of Spi-B induced by LTβR ligation (Fig. 4B, right).
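The paper does not state its qPCR quantification formula; a common way to obtain the relative expression values that such experiments report is the 2^(−ΔΔCt) method. The sketch below uses invented Ct values purely for illustration (the gene names, reference-gene choice, and numbers are our assumptions, not the paper's data):

```python
# Relative quantification of a target transcript (e.g., Spi-B1) by the
# 2**(-ddCt) method.  All Ct values here are invented for illustration;
# the paper reports only the resulting expression levels.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Fold change of the target gene (treated vs. control), normalized
    to a reference gene, computed as 2**(-ddCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: stimulation lowers the target Ct by 3 cycles
# relative to an unchanged reference gene -> ~8-fold induction.
print(fold_change(24.0, 18.0, 27.0, 18.0))  # → 8.0
```

Each cycle of difference in Ct corresponds to a factor of two in template abundance, which is why a 3-cycle shift reads out as an 8-fold change.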
The effect of CnAβ depletion was more prominent than that of CnAα depletion, which is consistent with the observation that the affinity of CnAβ for TRAF3 was higher than that of CnAα (Fig. 3). Double knockdown of CnAα/β led to remarkable up-regulation of LtβR-mediated Spi-B expression, suggesting partial redundancy of these two isoforms. The enhancement of Spi-B expression by CnAα/β knockdown was not observed in aly/aly MEFs (Fig. 4A). This result is consistent with the idea that CnAα/β-dependent regulation of Spi-B expression is mediated by NIK. The basal level of Spi-B expression (without anti-LtβR antibody stimulation) also seemed to be elevated by CnAα/β depletion (Fig. 4A,B). NIK-mediated activation of the canonical and non-canonical NF-κB pathways leads to activation of RelA and RelB complexes, respectively, thereby enhancing gene expression 15 . Because CnAα/β negatively regulated NIK-mediated Spi-B expression, we next determined the role of CnAα/β in NF-κB activation induced by LtβR-NIK signaling. Because nuclear translocation is a critical hallmark of NF-κB activation, we examined whether CnAα/β negatively regulate LtβR signaling-mediated nuclear translocation of RelA and RelB. As reported previously 34 , nuclear RelA and RelB levels were increased by stimulation with the agonistic anti-LtβR antibody in MEFs. Depletion of both CnAα/β increased the amount of nuclear RelA and RelB induced by LtβR signaling (Fig. 4C), whereas the total amount of RelA and RelB was not significantly influenced by LtβR stimulation (Fig. 5A). These data suggest that CnAα/β cooperatively attenuate NIK-mediated NF-κB activation, thereby negatively regulating expression of the NIK-dependent gene Spi-B. Therefore, we next determined whether CnAα/β are involved in the NIK-mediated signaling pathway of non-canonical NF-κB activation.
CnAα/β negatively regulate processing of p100 to p52 induced by LtβR and tumor necrosis factor-like weak inducer of apoptosis (TWEAK) signaling. It is known that LtβR-NIK signaling induces processing of p100 to p52 5 . Indeed, stimulation with the agonistic anti-LtβR antibody led to a reduction of p100 and a concomitant increase of p52 in MEFs (Fig. 5A). CnAβ depletion slightly increased the amount of p52 induced by stimulation with the anti-LtβR antibody (Fig. 5A). However, the effects of CnAα/β depletion were marginal. Therefore, we used recombinant TWEAK protein as a ligand to confirm the effect of CnAα/β depletion on p100 processing. Binding of TWEAK to its receptor, Fn14, effectively induced processing of p100 to p52 in MEFs (Fig. 5B), which is consistent with previous studies 35,36 . Depletion of CnAα or CnAβ caused an increase in the amount of processed p52. Interestingly, the level of total NF-κB2 protein (i.e., both p52 and p100) in cells was also increased in CnAα/β knockdown MEFs stimulated with TWEAK (Fig. 5B). Thus, CnAα/β inhibit the expression and processing of p100 induced by the TWEAK-Fn14 axis. Because canonical NF-κB activation reportedly up-regulates p100 expression 34 , these data are consistent with the idea that CnAα/β attenuate both canonical and non-canonical NF-κB activation. Our data suggest that CnAα/β negatively regulate processing of p100 to p52 induced by ligand signaling. Discussion Calcium ions play a critical role in a variety of signal transduction pathways as a second messenger 37 . Calcineurin mediates certain calcium signaling pathways by dephosphorylation of NFAT 24 . Several studies have reported that intracellular calcium ions modulate NF-κB activity.
Calcineurin enhances activation of the canonical NF-κB pathway in T cells by promoting Carma1-Bcl10-Malt1 complex formation 20,21 , while it attenuates TLR-dependent activation of the canonical NF-κB pathway by inhibiting the essential adaptors MyD88 and TRIF 38 . Here, we propose that CnAα/β negatively regulate the non-canonical NF-κB pathway mediated by NIK. Thus, our data suggest the possibility of novel cross-talk between calcium signaling and the non-canonical NF-κB pathway induced by TNF family signaling. An important aspect is the mechanism by which CnAα/β control NIK activity. Deletion mutant experiments suggest that CnAα/β interact with NIK via the phosphatase domain. Because NIK mediates downstream signaling by autophosphorylation and phosphorylation of downstream target molecules, it is possible that NIK-interacting CnAα/β dephosphorylate substrates of NIK, thereby inhibiting the function of NIK as a signal transducer. Further in-depth structural and biochemical studies are necessary to determine the molecular mechanism of CnAα/β-mediated regulation of NIK activity. Single knockdown of CnAα or CnAβ enhanced processing of p100 to p52 induced by TWEAK signaling, whereas an additive effect was not observed with double knockdown of CnAα/β (Fig. 5). Assuming that the roles of CnAα/β in regulation of NIK functions are redundant, NIK-dependent p100 processing may already be maximized by elimination of either CnAα or CnAβ. Conversely, nuclear localization of RelA and RelB was not clearly enhanced by single knockdown of CnAα or CnAβ, but it was increased by double knockdown of CnAα/β. Moreover, expression of the target Spi-B gene was more efficiently up-regulated in double knockdown cells compared with that in single knockdown cells. One possible explanation for these observations is that CnAs negatively regulate the NIK-mediated NF-κB activation pathway via two independent mechanisms.
Thus, one mechanism influences processing of p100 to p52 and may be relatively sensitive to reductions in the amounts of CnAs in cells, while the other affects nuclear localization of the NF-κB complex and may be less sensitive to CnA depletion. This idea may be consistent with the fact that CnAs bind to NIK at two distinct regions (Fig. 1). Thus, CnAs may inhibit the function of NIK via two mechanisms, by interacting with either the kinase domain or the C-terminal region of NIK. Deregulation of NF-κB induces tumorigenesis and inflammatory diseases [15,39]. Therefore, NF-κB activity needs to be finely tuned and terminated appropriately at the end of stimulation. Previous studies have indicated that deregulation of NIK leads to activation of the canonical and non-canonical NF-κB pathways, which is associated with the pathogenesis of multiple myeloma [12,13]. Our data imply that CnAα/β may be novel modulators of NIK activity. Although it is unknown whether CnAα/β-mediated inhibition of NIK activity also operates in other cell types such as B cells or plasma cells, it would be interesting to investigate whether abolition or attenuation of calcineurin-mediated NIK inhibition can initiate or promote malignant B-cell tumors or other types of tumors. Because proper regulation of NIK activation is essential to prevent the onset of cancer and inflammatory diseases, further studies on calcineurin-mediated inhibition of NIK activity might provide important insights into the development of anti-tumor or anti-inflammatory drugs in the future. In vitro virus selection. First, randomly primed reverse transcription products of fetal thymus poly(A)+ mRNAs were subjected to ligation-mediated amplification and multi-step PCRs to create cDNA constructs for in vitro expression.
The resulting PCR products (SP6-Ω-T7-Fragment-KpnI-FLAG) were purified with a QIAquick PCR Purification Kit (Qiagen, Germany) and transcribed into mRNA with a RiboMAX Large Scale RNA Production System-SP6 (Promega, WI, USA) and an m7G(5')ppp(5')G RNA Cap Structure Analog (Ambion, Life Technologies, CA, USA). After purification of the transcribed mRNAs using an RNeasy 96 BioRobot 8000 Kit (Qiagen), a PEG-Puro spacer was ligated to the 3' ends of the mRNAs using T4 RNA ligase (Promega), and the RNA was purified again. A cDNA for the bait (NIK) was prepared similarly. In vitro virus selection was performed as previously reported. Briefly, mRNA templates used as bait and prey were co-translated in a wheat germ extract (Zoegene Corporation, now Molecuence Corporation) for 1 h at 26 °C in 96-well plates using a Qiagen Biorobot 8000. At the same time, in vitro virus molecules were formed by covalently attaching the 3' end of the prey mRNA to the C-terminus of its encoded protein via puromycin. After each round of selection, prey mRNA was amplified by RT-PCR, followed by in vitro transcription and translation reactions that prepared the library for the next round of selection. After four rounds of selection, interaction sequence tags obtained by in vitro virus selection were identified by Takara Bio Inc. (Otsu, Japan) and the Shimadzu Corporation Genomic Research Center (Kyoto, Japan). A mock experiment without bait protein was performed as a negative control to eliminate technical false positives. Plasmids. Expression vectors encoding full-length and truncated forms of NIK and CnAα were generated by PCR amplification of NIK and CnAα cDNAs (provided by RIKEN), followed by subcloning of the amplified DNA fragments into vectors. In vitro virus selection was performed as reported previously [22]. Briefly, a cDNA library was prepared from mouse fetal thymus RNA (embryonic day 18.5).
NIK mRNA was used as bait, and prey mRNAs were co-translated in a wheat germ extract (Molecuence, Yokohama, Japan) using a Qiagen Biorobot 8000. After four rounds of selection, we identified interaction sequence tags obtained by in vitro virus selection and verified them as reported previously [23,40]. Immunoprecipitation and immunoblotting. Lysates of HEK293T cells and MEFs were prepared in TNE buffer (50 mM Tris, pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% Nonidet P-40, 1 mM sodium orthovanadate, and a protease inhibitor mixture). The lysates were precleared on a protein G-sepharose column (GE Healthcare, Chalfont St Giles, UK) and immunoprecipitated with the indicated antibodies, followed by incubation with protein G-sepharose. For endogenous immunoprecipitation of TRAF3, MEFs were pretreated with 10 mM MG132 for 2 h before harvesting. For immunoblot analysis, immunoprecipitates or cell extracts were eluted with SDS loading buffer (67.5 mM Tris-HCl, pH 6.8, 2.25% SDS, 10% glycerol, 5% β-mercaptoethanol, and bromophenol blue) and resolved by SDS-polyacrylamide gel electrophoresis. The proteins were transferred to polyvinylidene fluoride membranes (Immobilon P, Millipore) and incubated with the indicated antibodies. Immunoreactive proteins were visualized with anti-rabbit or anti-mouse IgG conjugated to horseradish peroxidase (GE Healthcare), followed by processing with an ECL detection system (GE Healthcare) and imaging with a ChemiDoc system (Bio-Rad, Richmond, CA). Band intensities were quantitated using ImageJ software. Nuclear protein extraction. Cells were washed with PBS and collected by centrifugation at 1,300 × g for 3 min. The cell pellet was lysed in hypotonic cytosol extraction buffer (10 mM HEPES, pH 7.9, 1.5 mM
Research Progress on the Etiology and Pathogenesis of Alzheimer's Disease from the Perspective of Chronic Stress Due to its extremely complex pathogenesis, no effective drugs exist at present to prevent, delay the progression of, or cure Alzheimer's disease (AD). The main pathological features of AD are senile plaques composed of β-amyloid, neurofibrillary tangles formed by hyperphosphorylation of the tau protein, and degeneration or loss of neurons in the brain. Many risk factors are associated with the onset of AD, including gene mutations, aging, traumatic brain injury, endocrine and cardiovascular diseases, education level, and obesity. Growing evidence points to chronic stress as one of the major risk factors for AD, as it can promote the onset and development of AD-related pathologies via mechanisms that are not well understood. The use of murine stress models, including restraint, social isolation, noise, and unpredictable stress, has contributed to improving our understanding of the relationship between chronic stress and AD. This review summarizes the evidence derived from murine models on the pathological features associated with AD and the related molecular mechanisms induced by chronic stress. These results not only provide a retrospective interpretation of the pathogenesis of AD, but also open a window of opportunity for more effective preventive and therapeutic strategies for stress-induced AD. Introduction Alzheimer's disease (AD), a progressive degenerative disease of the central nervous system (CNS) characterized by learning and memory impairment and a progressive decline in cognitive function, was first described by the German psychiatrist Alois Alzheimer in 1906 [1]. With the gradual increase in human life expectancy, as well as improvements in diagnostic methods, the incidence and total number of AD cases are increasing every year.
AD has become one of the most important threats to the quality of life of the elderly. Available treatments can only relieve the symptoms of AD and cannot reverse the course of the disease. The etiology and pathogenesis of AD are still unclear, and it is therefore difficult to find an accurate and effective treatment. Consequently, there is an urgent need to explore alternative strategies to prevent and treat AD. In addition to genetic mutations, there are many risk factors related to the onset of AD, including sex, aging, stress, traumatic brain injury, obesity, education level, and endocrine and cardiovascular factors [3,4] (Fig. 1). Undoubtedly, advanced age is the most important factor. Stress includes both acute and chronic stress. Acute stress is generally believed to be the body's adaptation to unfavorable environments or stimulation, and it always involves a protective response. However, in recent years, chronic stress has become an area of great interest in the study of AD etiology. Relevant studies have shown that environmental factors, especially exposure to chronic stress, can induce the onset of AD-related pathology in wild-type mice and worsen it in AD transgenic models [5-7]. Due to the complexity of the modern living environment and the variety of pressures we face daily, stress has become an inevitable component of everyday life. In-depth research on the mechanism by which chronic stress promotes the onset and development of AD-related pathologies could provide a theoretical basis for the development of effective clinical interventions. To study the relationship between stress and AD, researchers have developed a large number of chronic stress animal models, including restraint stress, social isolation stress, noise stress, and unpredictable stress, which have been shown to induce AD-related symptoms. The development of these stress models provides a promising opportunity to study the relationship between stress and AD pathogenesis.
However, the AD-like pathological features often differ between models, suggesting that the mechanisms involved may be different. This review summarizes the most common models of chronic stress currently used in research, describes the different AD-like pathological features induced by each of them, and explores the putative underlying mechanisms in order to provide a theoretical basis for the potential prevention and treatment of AD by regulating stress levels. Figure 1. Chronic stress is not only a risk factor for Alzheimer's disease, but also for other age-related diseases. The left side of the illustration depicts common factors known to influence the onset of Alzheimer's disease, including stress, cardiovascular disease, aging, diabetes, gene mutation, obesity, education, traumatic brain injury, and endocrine function. The right side lists age-related diseases for which stress is a risk factor, including diabetes, cancer, cardiovascular disease, Parkinson's disease, depression, atherosclerosis, and coronary heart disease. The history of stress Stress is a syndrome caused by the body's adaptive response to various internal and external environmental stimuli, as proposed by Hans Selye in 1936 [8]. In other words, stress describes the response to an experience that is emotionally and physiologically challenging [9]. It has been proposed that constant stress could dismantle biochemical protection mechanisms, thus making individuals vulnerable to attack by diseases. It is generally believed that the negative impact of excessive stress on health manifests in two ways: one is to aggravate existing diseases by weakening protection mechanisms, and the other is to enhance the vulnerability of certain organs to disease. Owing to excessive work pressure and a complex living environment, individuals are susceptible to continuous stress. In the 1970s, Mason proposed a theory of psychological stress to explain the mechanism underlying some mental illnesses.
Abundant clinical data show that stress is closely related to the onset of many age-related diseases, including diabetes, cancer, cardiovascular disease, Parkinson's disease, depression, atherosclerosis, and coronary heart disease, as well as Alzheimer's disease [10-14] (Fig. 1). As the pace of life accelerates and more stressful events must be faced daily, the prevalence and number of diseases caused by stress, and their associated economic burden, are also increasing every year. The human brain is susceptible to stressful events, and a growing number of studies on AD-specific neuropathological changes elicited by stress have recently attracted intense attention. Chronic stress may precede and trigger subjective cognitive decline (SCD), which precedes mild cognitive impairment (MCI), which in turn often occurs before the early clinical symptoms of AD [15] (Fig. 2). Chronic stress can change neuronal properties in the brain and disturb learning, memory, and cognitive processes, suggesting that it may function as a trigger for AD pathology. Changes in dynamic biomarkers (Aβ, tau-mediated neuronal injury, brain structure, cognitive and memory deficits, clinical function deterioration) during the AD pathological cascade are shown. Chronic stress precedes and acts as a trigger for subjective cognitive decline (SCD), which may precede mild cognitive impairment (MCI), which in turn manifests before the early clinical symptoms of dementia. Aβ, amyloid β. However, due to the inherent difficulty of performing a large-scale study of the relationship between stress and AD pathogenesis in human subjects, choosing a suitable animal model is crucial. This review describes several models of chronic stress commonly used in current research, and the insight into the mechanisms by which stress promotes the onset and development of AD that has been gained from these models.
Chronic restraint stress The chronic restraint stress (CRS) animal model is one of the techniques used to monitor physiological and psychological changes caused by stress. In this model, the animals are repeatedly placed in conical tubes with holes for ventilation for half an hour to several hours per day over several weeks or months, without physical pressure or pain [16]. It has been reported that CRS can promote AD-type pathology in wild-type rats and mice, and in transgenic models of AD, including aggregation and deposition of Aβ, hyperphosphorylation of tau protein, and degeneration and massive loss of neurons, as well as decline in learning and memory ability (Table 1) [7,17-20]. HPA axis Studies have found that the hypothalamic-pituitary-adrenal (HPA) axis is dysfunctional in AD, and the basal corticosterone level is significantly increased [21]. After stress stimulation, corticotrophin-releasing hormone (CRH) secreted by the hypothalamus acts on the pituitary gland to induce the release of adrenocorticotropic hormone (ACTH), which further acts on the adrenal gland to promote the release of glucocorticoids (corticosterone (CORT) in mice and rats, cortisol in primates) [22]. The increase in the level of CORT is associated with damage to synaptic function in AD (Fig. 3). Chronic stress or administration of CORT can accelerate the degradation of hippocampal function and cause AD-like lesions, including neuronal loss, increased Aβ deposition, tau phosphorylation, and loss of cognitive and memory function [23]. However, it has been reported that transgenic mice expressing a mutated form of tau (PS19) that were implanted with slow-release subcutaneous CORT pellets failed to show an increase in tau phosphorylation, suggesting that CORT may not be the key hormone mediating tau phosphorylation in response to stress [7].
CRF system The expression of corticotropin-releasing factor (CRF) and its receptor, CRFR, is significantly increased in the brains of AD patients [24]. Studies have shown that CRF may play a crucial role in the stress-induced increase in Aβ and tau phosphorylation, which can accelerate the pathogenesis of AD (Fig. 3). These results were confirmed not only in wild-type mice but also in AD transgenic mice, indicating that CRF plays a propelling role in the progression of AD [7,25]. The CRF receptor comprises two subtypes: CRFR1 and CRFR2. Rissman et al. found that CRFR2 can inhibit tau phosphorylation induced by stress [26]. However, most studies concluded that CRFR1 plays a positive regulatory role in the stress response [7,27]. Tau phosphorylation and learning and memory impairment triggered by stress were reversed in AD transgenic mice receiving a CRFR1 antagonist [7], in CRFR1 knockout mice, and in wild-type C57BL/6 mice treated with a CRFR1 antagonist [25]. The mechanism involves CRFR1-mediated upregulation of the active (Tyr-216-phosphorylated) form of glycogen synthase kinase 3β (GSK3β) and alteration of p35 expression levels [26]. It is well known that GSK3β and cyclin-dependent kinase 5 (CDK5) are the two most important kinases regulating tau protein phosphorylation in the brain, and CDK5 activity requires the p35 or p25 regulatory subunit. Increased tau phosphorylation and Aβ levels lead to synaptic degeneration, which in turn impairs learning and memory behavior in mice [28]. Aβ is derived from the cleavage of the amyloid precursor protein (APP) by β- and γ-secretases [29]. β-secretase (BACE-1) cleaves APP to generate the C99 fragment (also known as APP CTF-β), which comprises the N-terminus of Aβ [30]. When γ-secretase cleaves CTF-β, Aβ is released as fragments of several lengths, the most abundant being Aβ38, Aβ40, and Aβ42.
Aβ40 and Aβ42 peptide monomers are flexible hydrophobic peptides that can rapidly self-aggregate into pro-inflammatory dimers, oligomers, and fibrils to drive AD pathogenesis [31]. The removal and elimination of excessive Aβ40 and Aβ42 peptide monomers represents a promising strategy for preventing AD. APP CTF-β and Aβ peptides were significantly reduced in the brains of mice treated with CRFR1 antagonists, and in CRFR1 knockout mice [32,33], indicating that CRFR1 regulates β- or γ-secretase expression, activity, and/or trafficking (Fig. 3). Glutamate system Glutamate is the most abundant and most important excitatory neurotransmitter in the CNS. As a neurotransmitter, glutamate must be eliminated immediately after it is released into the synaptic cleft to prevent excitotoxicity. Glutamate is eliminated mainly through reuptake, relying on the high-affinity excitatory amino acid transporters (EAAT) 1-5 expressed on the presynaptic and glial cell membranes [34]. Among them, EAAT1 is mainly present in the cerebellum, hippocampus, cortex, and striatum, whereas EAAT2 is mainly distributed in the cerebral cortex, hippocampus, and striatum. Both belong to the glial glutamate transporter family and bind glutamate with high affinity. Therefore, any reduction in EAAT1 and EAAT2 protein levels directly affects extracellular glutamate concentration. EAAT2 accounts for 95% of the uptake and transport of glutamate, and EAAT2 deficiency shows overlap with human aging and AD at the transcriptomic level [35]. Shan et al. found that after CRS, the expression of EAAT1 and EAAT2 in the brains of wild-type mice decreased in both total protein and membrane-bound forms, and the decrease in the membrane-bound fraction was more pronounced in aged mice than in younger subjects [6]. NMDARs are glutamate-gated ion channels that are critical for neuronal communication and play a vital role in the dysfunction of glutamatergic transmission [36].
Our previous study found that CRS increased the expression levels of the glutamate NMDAR subunits GluN2A and GluN2B in the hippocampus and prefrontal cortex of wild-type mice [37]. GluN2A and GluN2B can be detected in both synaptic and extrasynaptic NMDARs in the adult brain, with GluN2A mainly located in synaptic NMDARs and GluN2B mainly in extrasynaptic NMDARs [38]. In addition, increased expression of extrasynaptic GluN2B was found in the brain of APP/PS1 transgenic mice, and xanthoceraside significantly improved the learning and memory behavior of APP/PS1 transgenic mice by inhibiting the overexpression of extrasynaptic GluN2B [39]. Stimulation of NMDARs at synaptic sites with physiological concentrations of glutamate promotes cell survival, learning, and memory via the Ca2+-ERK-CREB pathway. However, prolonged activation of extrasynaptic NMDARs with excitotoxic glutamate stimulation causes calcium overload and neuronal apoptosis. The initial calcium influx following excitotoxic glutamate stimulation triggers a secondary intracellular calcium overload, and this secondary response strongly correlates with neuronal injury and death. The main reason is the involvement of mitochondria in the maintenance of cellular calcium homeostasis. Mitochondria can restore intracellular calcium concentrations by absorbing large amounts of calcium. In response to excitotoxic glutamate stimulation, mitochondrial calcium uptake results in the generation of an excessive amount of reactive oxygen species (ROS), leading to mitochondrial depolarization and excitotoxic neuronal death [40]. In addition, the activation of extrasynaptic NMDARs can also inhibit the ERK signaling pathway, which mediates the physiological function of synaptic NMDARs [38,41].
Memantine is a voltage-dependent noncompetitive antagonist of NMDAR channels that preferentially binds to extrasynaptic NMDARs and blocks their hyperactivation without affecting the synaptic NMDAR signaling underlying various physiological functions [42,43]. Compared with other ionotropic glutamate receptors, NMDARs are more permeable to Ca2+. Under normal circumstances, Ca2+ influx regulates physiological processes such as synaptic plasticity [44]. However, excessive intracellular Ca2+ levels can also cause excitotoxicity. Chronic stress induces the influx of large amounts of Ca2+ into neurons through synaptic and extrasynaptic NMDARs, and persistent Ca2+ overload in neurons leads to progressive dysregulation of synaptic function, and ultimately to neuronal cell death. This involves activation of the Ca2+/calpain pathway by Ca2+ overload in neuronal cells, which in turn activates CDK5 through the calpain/p25/CDK5 pathway to promote the phosphorylation of tau protein. This provides theoretical support for the clinical use of memantine in the prevention and treatment of stress-induced AD. The above results also suggest that an abnormal glutamate signaling system is involved in the AD-related pathology induced by CRS (Fig. 3). Neuroinflammation Neuroinflammation is a common pathological feature of AD. To reduce the damage caused by stress to the body, microglia and astrocytes secrete inflammatory factors that promote immune cell recruitment, thereby clearing infected or damaged tissues and cells. Microglia are the main immune cells in the brain and play a crucial role in the maintenance of brain homeostasis and immune surveillance [45]. When Aβ is abnormally deposited, microglia can engulf the aggregated proteins and secrete pro-inflammatory cytokines to recruit additional microglia and immune cells, enhancing Aβ uptake and clearance [46,47].
However, although this initial immune response has a beneficial effect by removing accumulated Aβ, the persistent inflammatory response caused by CRS is harmful to neuronal survival and synaptic function, and eventually also impairs the phagocytosis of Aβ by microglia [48]. Furthermore, the excessive release of inflammatory factors may even promote Aβ generation and tau hyperphosphorylation [49,50]. Therefore, neuroinflammation plays a dual role in AD pathogenesis: under physiological conditions, it contributes to the clearance of excess Aβ secreted by nerve cells, while a continuous and excessive inflammatory response directly promotes AD-related pathology (Fig. 4). Increased inflammation is a common event in brains affected by chronic stress and may represent one mechanism by which stress promotes AD pathology. Synaptic dysfunction Synaptic dysfunction is closely related to learning and memory impairment. Studies have found that CRS can not only reduce the number and density of hippocampal dendritic spines but also reduce the expression levels of the presynaptic membrane marker synaptophysin, the postsynaptic membrane marker post-synaptic density-95 (PSD-95), and neurotrophic factors such as brain-derived neurotrophic factor (BDNF), vascular endothelial growth factor (VEGF), and nerve growth factor (NGF) [51-53] (Fig. 4). BDNF plays an important role in cognition by promoting synaptic plasticity and synaptogenesis [54]. VEGF can improve cognition by reducing Aβ production [55]. In addition, studies have shown that a decline in serum VEGF levels is directly associated with the onset of AD [56]. Finally, NGF is essential for neuronal survival. (1) Under physiological conditions, neuroinflammation contributes to Aβ clearance, whereas a continuous and excessive inflammatory response induced by CRS compromises Aβ uptake by microglia. This promotes AD-related pathology, namely, an increase in Aβ production and tau phosphorylation.
Ultimately, synaptic function is damaged and neuronal apoptosis is induced. (2) CRS inhibits the expression of neurotrophic factors (BDNF, VEGF, NGF) and synapse-associated proteins (PSD-95, synaptophysin), reduces the expression of PP2A, and promotes the dephosphorylation of cofilin1 at the Ser3 site, leading to degeneration and loss of synapses, which contribute to the onset of AD. Cofilin1 and PP2A Cofilin1 is essential for the growth and remodeling of dendrites and dendritic spines. Downregulation of cofilin activity can increase dendritic spine density. Cofilin1 is activated by dephosphorylation at Ser3, and CRS can promote dephosphorylation at this particular site, increasing cofilin activity. This leads to a reduction in the number of dendritic spines and has an impact on learning and memory in mice [57] (Fig. 4). Zhang et al. showed that CRS inhibits the expression of the phosphatase PP2A, which is a crucial enzyme for the dephosphorylation of tau. Approximately 70% of phosphorylated tau is dephosphorylated by PP2A, and a reduction in PP2A expression leads to the aggregation of hyperphosphorylated tau and to a disruption of memory function in mice [58] (Fig. 4). Microbiota-gut-brain axis "All disease begins in the gut" was purportedly said more than 2,000 years ago by Hippocrates, a Greek physician known as the father of modern medicine [59]. The gut microbiota in humans contains approximately 10^14 microbes, which outnumber the host's cells by approximately ten to one [60,61]. The function of the gut microbiota was previously thought to be limited to maintaining normal gastrointestinal function, but it is now known to regulate several additional processes, including vitamin and glucose metabolism, immune and inflammatory responses, and central and peripheral neurotransmission [62,63].
Growing evidence has indicated that the gut microbiota plays an active role in obesity, addiction, type 2 diabetes mellitus, cancer, aging, pain, stroke, and even neurodegenerative diseases [59]. In 2018, Lu et al. observed significant memory deficits in germ-free mice, suggesting an essential role of the gut microbiota in memory maintenance [64]. Indeed, numerous studies have provided evidence that Lactobacillus and Bifidobacterium probiotic supplements have a positive effect on mouse memory, as assessed by object recognition and fear conditioning tests [65,66]. Moreover, alterations in the biodiversity and composition of the intestinal microbiota have been observed in AD patients as well as in mouse models of AD [67,68]. Liu et al. studied 97 participants from Hangzhou (China), including 32 healthy controls, 32 patients with amnestic mild cognitive impairment, and 33 patients with AD, and found marked differences in microbiota composition between the groups. Moreover, these alterations were tightly correlated with AD severity [69]. Kim found that transplantation of fecal microbiota from wild-type mice mitigated amyloid and tau pathology, memory deficits, and reactive gliosis in an AD mouse model [70]. Probiotic (Lactobacillus and Bifidobacterium) ingestion by AD patients and murine AD models reduced various pathological markers, such as brain atrophy, Aβ accumulation, learning and memory deficits, and oxidative stress [71,72]. More recently, mounting evidence has suggested that gut microbiota dysbiosis induced by CRS affects brain structure, function, and individual behavior through the microbiota-gut-brain axis, leading to the onset and development of AD [73,74]. The microbiota-gut-brain axis is a bidirectional communication system comprising several routes, including the neural and immune systems, microbial metabolites, and endocrine signals (Fig. 5). Here, we discuss the mechanisms by which dysbiosis of the gut microbiota may participate in AD onset.
It is known that CRS can accelerate aging, and Lee et al. found that transfer of gut microbiota from aged mice to younger mice was sufficient to reproduce the cognitive decline associated with aging [75]. Multiple reports have confirmed that the gut microbiota naturally secretes massive amounts of Aβ, lipopolysaccharide (LPS), and related microbial secretory products. Considering the huge number of microbes that comprise the human gut microbiota, it is apparent that we need an in-built tolerance to life-long exposure to LPS, Aβ, and other related pro-inflammatory pathogenic signals. This exposure is nevertheless likely to increase the burden of amyloid protein and LPS in the CNS and to activate microglia-mediated innate immune and inflammatory responses. In individuals suffering from chronic stress, this may further contribute to AD development. As mentioned for neuroinflammation, CRS can attenuate amyloid plaque sensing, phagocytosis, and clearance by microglia. Moreover, CRS is known to induce gut microbiota dysbiosis characterized by increased intestinal permeability and changes in gastrointestinal motility, leading to a "leaky gut". As a result, bacteria, pathogens, amyloid protein, and LPS can freely cross the epithelial barrier [66]. Extracellular Aβ deposition can cause secondary pathological changes such as tau hyperphosphorylation, oxidative stress, neuroinflammation, synaptic degeneration, and neuronal death, eventually leading to AD [76]. Gram-negative bacteria are predominant in the gut microbiota, and LPS is the major component of their cell wall. The secretory products of the gut microbiota are powerful immune activators and proinflammatory factors that affect the host, accelerating free radical production and upregulating ROS and/or reactive nitrogen species. This subsequently increases vascular and blood-brain barrier (BBB) permeability, immunogenicity, and aberrant activation of the immune system [77].
The increased permeability of the gut, vasculature, and BBB results in large amounts of Aβ protein and LPS leaking into the CNS and the peripheral circulation, which contributes to the accumulation of amyloids and the production of pro-inflammatory cytokines [interleukin-6 (IL-6), CXCL2, NLRP3, tumor necrosis factor-alpha (TNF-α), and interleukin-1 beta (IL-1β)]. Gut microbes can also regulate cortisol release by affecting the activation state of the HPA axis. This in turn affects microglial activation, cytokine release, and monocyte recruitment. Bostanciklioğlu et al. found that gut metabolites can affect learning and memory through vagal afferent fibers that control the secretion of bioactive molecules (peptide YY, cholecystokinin, glucagon-like peptide-1) by enteroendocrine cells [78]. Therefore, AD pathology may result from misregulated HPA axis and vagus nerve signaling induced by gut dysbiosis. In addition, dysbiosis of the gut microbiota can inhibit the expression of BDNF, a gene associated with neurogenesis and neuronal growth [72]. Dysbiosis induced by CRS translates into a decrease in the proportion of beneficial microorganisms (Lactobacillus and Bifidobacterium) and an increase in that of more harmful ones (Escherichia, Shigella, Proteus, Klebsiella), leading to a decrease in the production of short-chain fatty acids (SCFAs) [72,79,80]. SCFAs have a beneficial effect on the CNS and peripheral circulation and have been shown to play a key role in microbiota-gut-brain communication. SCFAs can interfere with various forms of Aβ peptides, effectively inhibiting aggregation of Aβ fibrils and reducing the accumulation of neurotoxic oligomers in the brain [81]. Dysbiosis of the gut microbiota therefore further decreases intestinal barrier integrity due to this reduction in SCFA synthesis [73,82]. SCFAs can also influence neuroinflammation by affecting microglial cell morphology and function as well as immune cells and immune modulators.
Therefore, downregulation of SCFAs may potentially induce impairments in cognition, memory, and emotional response [83]. Recent investigations of microbial endocrinology have also demonstrated that neuroactive molecules, such as neurotransmitters produced by gut microbes, can directly contribute to the crosstalk between the gut and the brain [84]. Acetylcholine, γ-aminobutyric acid (GABA), dopamine, and 5-hydroxytryptophan (5-HT), produced by gut microbes from the Bifidobacterium and Lactobacillus genera, among others, can influence nerve physiology. During CRS-induced dysbiosis, the secretion of these neurotransmitters is decreased, which is a predisposing factor for AD. In addition, gasotransmitters of microbial origin, including nitric oxide (NO), hydrogen sulfide (H2S), ammonia, and methane, also play crucial functions in neurophysiology and may participate in AD pathogenesis [85,86]. For example, elevation of NO increases BBB permeability. Furthermore, NO reacts with superoxide to form peroxynitrite, a potent oxidizing agent that can cause neurotoxicity. Oxidative stress and mitochondrial dysfunction, two well-characterized pathological features of AD, can also be induced by elevated NO levels, leading to neuronal apoptosis. Moreover, oxidative stress can enhance Aβ production and deposition. Overproduction of H2S leads to decreased oxygen consumption by mitochondria and increased expression of pro-inflammatory factors such as IL-6 [87]. Based on the evidence discussed above, we may conclude that dysbiosis induced by CRS influences AD pathology in several ways. 
These include increased gut, vasculature, and BBB permeability; accelerated aging; increased amyloid burden; abnormal LPS secretion; misregulation of the HPA axis and vagus nerve signaling; neuroinflammation; oxidative stress; aberrant immune activity; alterations in the biodiversity and composition of the gut microbiota; downregulation of BDNF and SCFAs; decreased neurotransmitter secretion; and abnormal release of gasotransmitters (Fig. 5).
Social isolation stress
Social isolation stress (SIS) is a form of chronic stress that refers to a complete or almost complete lack of contact with conspecifics [88,89]. Since humans are highly social, SIS can affect people of all ages and is known to be a trigger for emotional problems and cognitive dysfunction in adolescents [90]. SIS is also associated with an increased risk of death in the elderly. Robert et al. found that the incidence of AD in individuals affected by SIS was more than double that of the control group [91]. The recent Lancet Commission on Dementia Prevention, Intervention, and Care estimated that if the risk of SIS in later life were eliminated, the prevalence of dementia would be reduced by 4%. This impact would be greater than that estimated for reducing physical inactivity in later life (2%) or hypertension in midlife (2%). Positron emission tomography (PET) imaging has shown that Aβ protein load is significantly correlated with increased loneliness [92]. Changes in some pathological markers of AD, such as increased amyloid beta plaques and neurofibrillary tangles, are not always equivalent to the degree of cognitive decline or clinical dementia. The closest neurobiological association with cognitive decline in AD is synaptic degeneration and/or loss. Therefore, many technologies that can directly measure biomarkers of synapse loss or damage have already been adopted in clinical settings. 
These include PET ligands that label synapses in vivo and biomarkers that detect synaptic degeneration in the cerebrospinal fluid [93][94][95]. The emergence of these technologies has provided new evidence indicating that SIS affects the onset of AD by inducing increased synaptic degeneration and/or loss [96]. It is well established that various SIS models can induce AD-type pathological features (Table 2). APP/PS1 mice showed normal hippocampal long-term potentiation (LTP) and situational fear conditioning at the age of 3 months, indicating that they had no defects in learning and memory [97]. However, when APP/PS1 mice were raised in social isolation, cognitive impairment was already evident at 3 months of age and was accompanied by a massive increase in Aβ. This latter effect was due to a significant increase in the activity of β- and γ-secretase, resulting in excessive production of Aβ40 and Aβ42 in the hippocampus [98]. In addition, the researchers found that SIS increased calpain activity and the p25/p35 ratio while reducing membrane-associated p35. The main reason for this was that the large amount of Aβ promoted calcium influx and calpain activation. Calpain-mediated proteolysis releases p25 from the N-terminus of p35, and an increased p25/p35 ratio promotes tau phosphorylation and induces neuronal death [99]. Furthermore, membrane-associated p35 interacts with the AMPA receptor subunit GluR1 and α-CaMKII to form the p35-GluR1-CaMKII complex. SIS reduces the p35-GluR1-CaMKII interaction. Since the p35-GluR1-CaMKII complex is important for synaptic plasticity, learning, and memory, this decrease in complex formation resulting from SIS leads to memory and cognitive impairment in APP/PS1 mice [100,101]. Cao et al. housed APP/PS1 transgenic AD mice singly for 8 weeks starting at the age of 1 month. 
They found that SIS increased hippocampal cell apoptosis, synaptic protein loss, and glial activation, and triggered inflammatory responses by increasing the expression of IL-1β, IL-6, and TNF-α [102]. Huang and colleagues housed seventeen-month-old APP/PS1 mice in isolation for 3 months and found that this exacerbated hippocampal atrophy, increased the accumulation of hippocampal Aβ plaques, and induced cognitive dysfunction. Expression of γ-secretase was increased, and that of neprilysin (NEP) was decreased. Synapse and myelin loss, as well as glial neuroinflammatory reactions, were exacerbated [88]. NEP and insulin-degrading enzyme (IDE) are the two major Aβ-degrading enzymes that play a vital role in maintaining Aβ homeostasis in the brain via Aβ degradation. NEP expression is negatively correlated with Aβ accumulation and cognitive impairment severity, while the expression levels of IDE do not seem to be correlated with these two events. In addition, similarly to what has been noted for CRS, the large amount of Aβ induced by SIS can promote intracellular calcium ion overload and tau protein hyperphosphorylation, disrupt mitochondrial energy metabolism, aggravate oxidative stress, and activate neuronal apoptosis and other pathways that negatively affect the normal structure and function of the hippocampus [103]. In another APP transgenic mouse model (Tg2576), the subjects developed normally until 9 months of age, and almost no β-amyloid deposits were detectable in the brain [104]. After these mice were individually housed, from weaning to 6 months, in special cages one-third the size of a standard mouse cage, a large number of senile plaques formed by the deposition of Aβ42 were observed in the brain, impairing the ability to generate new cells in the dentate gyrus of the hippocampus [105]. Neurogenesis in the hippocampal dentate gyrus is thought to be related to learning and memory [106]. 
Increased Aβ deposition can damage neurons by disrupting intracellular calcium ion homeostasis, inducing oxidative stress, and causing massive release of glutamate [107]. Studies in rats have found that SIS for 6 consecutive weeks can induce tau hyperphosphorylation and deficits in learning and spatial memory in middle-aged rats [89]. Research into the underlying mechanism showed that SIS inhibited the phosphorylation of GSK3β at Ser9, resulting in an increase in GSK3β activity. Since GSK3 kinase plays an essential role in regulating tau phosphorylation, this in turn led to tau hyperphosphorylation and deficits in spatial memory. In addition, the BDNF/PI3K/Akt/GSK3β signaling pathway plays important roles in synapse formation, neuronal differentiation and survival, and regulation of synaptic structure and function [108]. Gong et al. found that SIS reduced the expression of BDNF, serine 473-phosphorylated Akt, and serine 9-phosphorylated GSK3β [109]. Ali et al. found that SIS increased β-secretase, Aβ protein, Tyr-216-GSK3β, phosphorylated tau, malondialdehyde, IL-1β, and TNF-α gene expression levels [110].
Chronic noise stress
Noise stress is harmful, particularly to the CNS. According to previous reports, chronic noise exposure (CNE) can induce cognitive impairment and is also a predisposing factor for AD pathogenesis [111][112][113]. There is a compelling body of research on the effect of CNE on pathological features and mechanisms associated with AD (Table 2), which is discussed below. CNE can promote the secretion of CRF and glucocorticoids by activating the HPA axis and the CRF pathway, thereby promoting tau phosphorylation and other AD-related pathologies [114][115][116]. Hyperphosphorylation of tau reminiscent of what is observed in AD has been found in the brains of rats exposed to long-term noise. In this rat model, the increase in tau phosphorylation differs between chronic and acute stress, but both can cause cognitive impairment [117,118]. 
In the chronic stress model, the phosphorylation of tau is permanently increased, whereas in the acute stress model, tau begins to dephosphorylate 24 h after the stressor is removed [119]. The expression of PP2A in the rat model of CNE is increased, which is the opposite of what has been observed in the AD brain. It has been postulated that when the level of PP2A increases, an increase in GSK3β activity is responsible for the tau phosphorylation, thereby counteracting the dephosphorylation effect of PP2A [120]. Other studies have found that Aβ production and abnormal phosphorylation of tau were evident in the brains of Kunming and SAMP8 mice after CNE [117,121]. CNE can induce neuronal damage, particularly in the hippocampus, ultimately causing neuronal loss and memory impairments [114]. The hyperphosphorylation and aggregation of tau caused by CNE result in neurofibrillary tangles, and the massive production and deposition of Aβ can cause disorders of synaptic function and apoptosis of nerve cells, ultimately also inducing learning and memory impairments. The hyperphosphorylation of tau is mediated by the GluN2B subunit of NMDARs. Once this signaling pathway is overactivated, the tau-phosphorylation-associated kinases GSK3β and CDK5 are activated through the GluN2B-Fyn signaling pathway. GluN2B can also directly inhibit the activity of PP2A [122]. Other studies have reached similar conclusions, namely, that CNE-induced disruption of the NMDAR signaling pathway ultimately leads to hyperphosphorylation of tau. Use of the NMDAR antagonist MK-801 can reverse the activation of GSK3β and the increase in tau protein phosphorylation induced by noise stress [123]. In addition, CNE can increase the level of glutamate in the brain and thus promote the influx of Ca2+, which triggers ROS production and inhibits LTP [124]. Other researchers have determined that oxidative stress can promote the generation and deposition of Aβ as well as tau phosphorylation. 
Oxidative stress is increased in response to noise stress and could therefore mediate the occurrence of AD-type pathological features [125,126]. The main effects of oxidative stress are increased APP expression, decreased α-secretase activity, and increased expression and activation of β- and γ-secretase [127].
Chronic unpredictable mild stress
Chronic unpredictable mild stress (CUMS), also called chronic variable stress, refers to inconsistency in the exposure to stress or to the use of multiple different forms of stress in a single stress model to achieve unpredictable effects. As the stress-inducing procedure varies among studies, it is difficult to compare the results and draw any general conclusion (Table 3). For example, in a study by Han et al., the CUMS procedure did not produce any increase in Aβ40 and Aβ42 in the mouse hippocampus. In another study involving APP/PS1 mice, CUMS was introduced under the same conditions, and in this instance it not only promoted the expression of Aβ40 and Aβ42 but also induced neuronal injury and cognitive impairment [128]. Consistent with Han's report, Bing et al. found that 4 weeks of CUMS induced Aβ deposition and severe impairment of cognitive behavior in APP/PS1 mice but had no significant effect on wild-type C57 mice. The authors believe that the main reason behind these contradictory results is that six-month-old C57 mice have better tolerance to CUMS because they are at the peak of brain function development. In addition, these results clearly depend on the physical condition of the mice and the skills of the experimenters [129]. Hossein et al. used another model of CUMS and found that it significantly increased Aβ levels in the hippocampus of adult male rats [130], but the effect on tau phosphorylation was not reported. 
We previously exposed ten-week-old C57 mice to CRS for 4 weeks and detected tau hyperphosphorylation in the hippocampus and prefrontal cortex [37]. In addition, Carroll et al. found that exposure to CRS, rather than CUMS, for one month exacerbated Aβ levels in Tg2576 transgenic mice [7]. The results obtained in studies that adopted the unpredictable stress model are diverse. It has been reported that after 4 consecutive weeks of CUMS in wild-type rats, tau is abnormally phosphorylated in the hippocampus and prefrontal cortex, and behavioral tests showed a decline in learning and memory ability [131]. Another study reported that CUMS can induce the production of Aβ42 in the hippocampal CA1 region of rats [132]. Research by Peay et al. found that CUMS damaged the spatial memory of male rats only, with no significant effect on female rats [133]. Studies have confirmed that abnormal expression or activation of Fyn is closely related to tau phosphorylation, amyloid accumulation, and cognitive decline in patients with AD [134][135][136], and blocking the abnormal activation of Fyn can protect neurons from Aβ toxicity [137]. Lopes et al. utilized another model of CUMS that could induce tau phosphorylation, neuronal atrophy, dendritic spine shortening, and learning and memory disorders in 4- to 6-month-old wild-type mice and found that CUMS upregulated Fyn expression in the hippocampus [138]. Upregulated Fyn interacts with PSD-95 and GluN2B to form a GluN2B/PSD-95/Fyn complex, which regulates the activation of glutamate NMDARs. The activation of NMDARs in turn activates two key tau protein kinases, GSK3β and CDK5. Furthermore, Fyn also plays an important role in mediating tau-induced neuropathology [139]. Four-month-old Tg2576 transgenic mice were subjected to CUMS for 7 weeks. 
Almost no AD-type pathological features were detected in the brains of the control group, and their behaviors were normal, whereas the CUMS-exposed group showed Aβ deposition and tau phosphorylation in the brain. Cognitive impairment was also demonstrated in this study using the water maze test. The proposed mechanism was an increase in the activity of β-secretase and an inhibition of the expression of Ser9-GSK3β [140], which is similar to the mechanism underlying the pathological changes in response to noise stress.
Table 3. Rodent studies on the impact of chronic unpredictable mild stress (CUMS) on AD-related markers. [Table with columns: Stress paradigm, Procedure, Animal model, Relevance to AD, Ref. One listed CUMS procedure included: (1) long swim for 20 min in a 30 °C water bath, (2) cold swim for 2.5 min in a 15 °C water bath, (3) restraint for 15 min in a 50 ml conical tube, (4) housing in isolation for 24 h, (5) housing with soiled bedding for 24 h, and (6) …]
Conclusion
We are inevitably affected by stressful events. It is of great significance to investigate the pathological features and mechanisms responsible for AD in response to chronic stress, and a reliable animal stress model is essential for this purpose. Although not all animal stress models can fully replicate the AD-type pathological features, tau hyperphosphorylation, Aβ overproduction and deposition, and learning, memory, and cognitive dysfunction can be induced in most of them (Tables 1, 2, and 3). HPA axis dysfunction, abnormalities in the CRF and glutamate systems, neuroinflammation, aberrant immune activity, dysbiosis of the gut microbiota, downregulation of neurotrophic factors, synaptic degeneration, and changes in the activity and expression of GSK3β, CDK5, and PP2A in response to chronic stress are all of great significance in the pathogenesis of AD. 
This line of research provides a basis for the development of more effective prevention and treatment strategies aimed at improving the quality of life of affected individuals.
Easymatch: An Eye Localization Method for Frontal Face Images Using Facial Landmarks
Eye detection algorithms are used in many fields, such as camera applications for entertainment and commercial purposes, gaze detection, human-computer interaction, and eye recognition for security. Successful and fast eye detection is an essential step for all these applications in order to achieve good results. There are many eye detection methods in the literature, and most of them rely on the Viola-Jones method to detect the face before localizing the eyes. In this paper, a straightforward approach is proposed to detect eyes in images that contain a frontal face. The approach can be used for real-time eye detection with cheap web cameras or other cameras. First, facial landmarks are detected in the image, and by utilizing these landmarks the eye region is determined. The eye radius is estimated by utilizing the eye corners. Then, reduced input images are tested with a tailored matching algorithm, which does not need image reduction, to determine where the eye is.
INTRODUCTION
Eye detection is a fundamental step for many applications, including security, marketing, psychology, and human-computer interaction. After detecting a face in an image, the next logical step is to locate the elements of the face, such as the eyes, mouth, and chin. The necessity of locating the eyes comes from their importance: as much as the eyes are windows to the outside world, they are also windows to the inside. For example, the eyes show definitive signs of emotion or mental state. The pupil may dilate in distress, or the eyes may look upward while thinking, and such patterns can be used to create a psychological profile, which can then be used for marketing or for treating psychological conditions. One can also deduce where a person is looking from the positions of his/her eyes. 
After locating the eye positions, it is also possible to detect gaze. Gaze detection can be used in human-computer interaction or to answer questions such as where people tend to look first when entering a market, mall, or school building, and where they tend to look most. Also, just like fingerprints, eyes have a unique print, which is used in identification [1,2]. By counting the number of blinks per minute, it is possible to determine whether a person is tired or sleepy, which is used to detect driver fatigue [3]. All of these applications, and many more like them, rely heavily on locating the eyes, and their success is only as good as that of the eye detection methods. Current technologies can be classified into three categories. The first comprises invasive methods, such as electrooculography, which requires placing electrodes on the skin near the eyes [4], or contact-lens-based eye coil systems [5,6]. The second comprises semi-invasive methods, which use some type of special light to illuminate the eyes. The last category is non-invasive and uses only images to determine where the eye is [7][8][9][10][11][12][13][14][15][16][17]. Invasive methods generally have a good detection rate; however, in today's world it is not practical to use them in many applications because they tend to have a high cost and low mobility. Semi-invasive methods have general usage and a good detection rate; however, they also have a high cost. Non-invasive methods generally have a low cost; however, they have a lower detection rate because they can be affected by many factors such as lighting conditions, makeup, and eyeglasses. Non-invasive methods can be divided into two main categories. The first category contains methods that use A.I. (Artificial Intelligence) to determine the eye position. A.I. 
based methods are generally composed of three main steps: creating an eye pattern, using a classifier or statistical model, and finally utilizing these to find the precise eye positions [8,16,17]. The second category uses general characteristics of the eye, such as pixel distribution, pixel values, and histograms [7,10,12]. A detailed survey of eye detection and tracking techniques is presented in [18]. Usually, the first step in locating the eye is to detect the face within the image. For this purpose, many applications use the Viola-Jones method, a well-accepted method that works fast and has a satisfactory success rate. However, to determine the eye region it has to be run twice: once to detect the face and a second time to detect the eye region. Unlike images of static objects, the image of the eye changes dramatically depending on where the gaze is; this causes the Viola-Jones method to produce larger eye regions that even contain eyebrows. After detecting the face, it is possible to use an educated guess or other methods to determine the eye region. However, an educated guess also yields large eye regions; other methods may work, but they add extra workload to the algorithm. This study proposes an easy way to localize the eye position in frontal face images using a template-matching-based approach, with facial landmarks used to determine the region of interest.
LOCATING EYE
In the proposed method, the simplest approach to locating the eye is used. A few steps are common to many eye-locating methods: first, detecting a face with the Viola-Jones method [19]; then determining general eye areas with an educated guess or an eye cascade; and finally using a novel approach to locate the eye, as shown in Fig. 1. 
Finding Face
The purpose of detecting the face is to ensure that the search area has limited characteristics and appearances: there should not be objects or patterns resembling the eye in the search area. Also, reducing the area reduces the workload of the algorithm. Face detection is a decades-old problem, and there are many methods to solve it [20]. In the literature, many choose the Viola-Jones method with Haar-like cascades to detect faces. According to tests carried out on the BioID database, this method successfully detects faces in 1365 out of 1521 images, a success rate of over 89%. There are only eleven false positives out of 1521 images. These false positives are usually a smaller region within a face, so they can easily be cast off by choosing the biggest detected face. For the TalkingFace database, the success rate is 99.1%. The OpenCV (Open Computer Vision) library contains the necessary cascade files for this kind of detection, and in this study the frontal face file (haarcascade_frontalface_default.xml) is used to detect faces in images. The necessary files and more information about OpenCV can be found on its official web page [21]. Facial landmarks are a relatively new method to determine interest points on a person's face, and this method manages not only to locate the face with precision but also the interest points needed for determining many aspects of a human face [22]. In this study, a detector with 68 markups is used, as shown in Fig. 2. According to tests carried out on the BioID database, the facial landmarks successfully detect the faces in 1509 of the 1521 images, a success rate of more than 99%. According to tests carried out on the TalkingFace database, the facial landmarks successfully detect all faces. There are five false positives in the TalkingFace database. 
However, as with the Viola-Jones method, these false positives are usually a smaller region within a detected face and can thus easily be discarded by choosing the biggest detected face. The ColorFeret database success rate is also 100%. As seen in these results, facial landmarks have a much better success rate than the Viola-Jones method.
Determining the Eye Region and Reducing the Image
To determine a general eye region, one can take an educated guess, such as dividing the image into four quadrants and supposing that the upper-left quadrant contains the left eye and the upper-right quadrant the right eye [11]. However, this approach yields a much bigger region of interest than needed, although it ensures that the eye is within that area. Another option is using a Haar-like cascade or a similar detection method to determine the general eye region. However, Haar-like cascades give poor results here: because the eye itself moves within the eye region, it is very hard for cascades to detect the eye region with high accuracy. Also, a secondary detection method would increase the computational cost, even if it gives a smaller eye region. In this study, facial landmarks are used to determine the region of interest. At first, an attempt was made to determine the eye region using all landmark points, as seen in Fig. 3. However, these landmarks are not as accurate as desired and usually do not contain the entire eye region, so they cannot be used on their own to detect an eye region successfully. However, there is a natural ratio between the distance separating the two eye corners and the eye radius. Since the facial landmarks include the eye corners, these corners can be used to determine a small eye region, as seen in Fig. 4. In this study, it was seen that errors in the eye corner landmarks do not cause many problems in detecting the eye region. First, the distance between the eye corners is computed using the Eqs. 
(1) and (2), where l_right and l_left are the lengths between the eye corners of the right and left eyes, p_i is the i-th landmark as shown in Fig. 3, and (x_i, y_i) are the coordinates of landmark p_i. To determine the eye region of the right eye, the top left of the region is chosen as (x_1, y_1 − 1.2 × l_right) and the bottom right as (x_10, y_10 + l_right). For the left eye, (x_9, y_9 − 1.2 × l_left) is chosen as the top left and (x_6, y_6 + l_left) as the bottom right. In this study, a smaller and more efficient region of interest for the eyes is determined by considering the natural ratio between the eye-corner distance and the eye itself, as seen in Fig. 4. Such a small eye region helps to avoid dark appearances such as eyebrows and eyeglasses, which are one of the main problems for eye detection methods; it should therefore raise the success rate of almost all eye detection methods. The ratio between the eye radius and the distance between the eye corners is estimated at about 0.22, so the radius of each eye can be found from the corresponding corner distance using Eqs. (3) and (4).
Template Matching
Template matching is a process in which each pixel of an input image is compared against another image to find whether it contains a matching part. This is done in a windowed manner, and a similarity value is calculated for each window. The best similarity value determines which window is matched to the input image. There are many methods to calculate the similarity value, such as the square difference (5), the normalized square difference (6), correlation (7), and normalized correlation (8). In Eqs. (5) to (9), R is the similarity value, I is the input image, and S is the source image in which a match is sought; (x_s, y_s) is the starting position of the window, and (x', y') is a pixel position in the input image. The circular characteristic of the eye makes it easy to use in a matching algorithm. 
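The corner-distance, region, and radius computations described above can be sketched as follows. This is a minimal illustration, not the paper's code; the function names and tuple representation are hypothetical, while the offsets (1.2 × l above the corner, l below it) and the 0.22 radius ratio are taken from the text.

```python
import math

def corner_distance(pa, pb):
    # Euclidean distance between two eye-corner landmarks (cf. Eqs. 1-2)
    return math.hypot(pa[0] - pb[0], pa[1] - pb[1])

def eye_region(left_corner, right_corner, l):
    # Bounding box for one eye: the top-left y is lifted by 1.2*l and
    # the bottom-right y lowered by l, as stated in the text
    x0, y0 = left_corner[0], left_corner[1] - 1.2 * l
    x1, y1 = right_corner[0], right_corner[1] + l
    return (x0, y0, x1, y1)

def eye_radius(l, ratio=0.22):
    # Estimated eye radius: about 0.22 times the eye-corner distance
    return ratio * l
```

For the right eye, for instance, the two corner arguments would be landmarks p_1 and p_10 from the 68-point model.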
In this study, images are reduced to black and white, and a circle is then used as a template for matching. Even though the eye is not a full circle but an ellipse with an axis ratio of about 1.13, this difference is usually negligible because cameras do not have the resolution to capture it. However, when the camera does have sufficient resolution to capture that difference, an ellipse must be used as the template instead of a circle. A simple input image for template matching is created using the estimated radius value, as seen in Fig. 5. However, this approach does not provide satisfactory results. To improve the success rate and reduce the workload, a modified approach based on template matching is tried. Two pieces of prior knowledge about the eye are used: the eye is darker than its vicinity, and the eye has a circular shape. Because of this circular shape, if the sum of the points within a circular window is calculated in a sliding-window manner, the position of the window with the lowest sum can be accepted as the position of the eye. For this purpose, a list is created containing the x and y coordinates of each white point of the image, as seen in the right image of Fig. 5. The list is then used in sliding windows: in each window, the sum of the values at the listed points is calculated using Eq. (9), and the window achieving the best sum is determined. Using all points within the circle gives a good success rate; however, to further reduce the workload, reduced point sets are used in the experiments instead of all points within the circle, as seen in Fig. 6.
Figure 6: Reduced images.
Considering the characteristics of the eye, detecting the eye position in a windowed manner within an eye region may be unnecessarily time-consuming. Moreover, as the resolution of the eye region grows, the time needed to compute every window also grows. 
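The modified matching step — summing image values over the points of a circular template in a sliding window — can be sketched as below. This is an illustrative reimplementation, not the authors' code; it assumes the darker-pupil convention, so the window with the lowest circular sum is taken as the eye position.

```python
def circle_offsets(r):
    # Integer (dx, dy) offsets covering a disc of radius r (the template)
    return [(dx, dy)
            for dy in range(-r, r + 1)
            for dx in range(-r, r + 1)
            if dx * dx + dy * dy <= r * r]

def locate_eye(gray, r):
    # Slide the disc over a grayscale image (list of rows, values 0..255)
    # and return the center whose summed intensity is lowest, i.e. the
    # darkest disc-shaped region
    h, w = len(gray), len(gray[0])
    offs = circle_offsets(r)
    best_sum, best_center = None, None
    for cy in range(r, h - r):
        for cx in range(r, w - r):
            s = sum(gray[cy + dy][cx + dx] for dx, dy in offs)
            if best_sum is None or s < best_sum:
                best_sum, best_center = s, (cx, cy)
    return best_center
```

Reducing the offset list to a sparse subset of the disc, as the paper does, lowers the cost of each window without changing the structure of the search.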
To further reduce the time consumption, instead of checking all of the windows, a two-stage window movement is tried, as seen in Fig. 7. For the starting position, the middle of the y-axis of the eye region is chosen as the y-center of the window. After finding the best x-position for that y-position, the window is moved along the y-axis at that x-position to determine the best overall position.
Matter of Blink
Blink detection is also a necessity for many applications that use eye detection. If it is known when a person blinks, there is no need to try to detect eye positions, since there is no eye visible in the image. Also, by calculating the eye blink frequency it is possible to determine the tiredness of an individual. An eye blink can easily be detected by utilizing the eyelid landmark points (p2, p3, p11, p12, p4, p5, p7, and p8 in Fig. 3) and the eye radius calculated by Eq. (2), by computing the distance between the upper and lower eyelids. If this distance is close to one or two pixels, it can be assumed that the eye is shut. However, one must also decide under which condition a blink should be declared: only a moment where the eye is completely shut, or also moments where the eye is partially shut? As seen in Fig. 8, the act of blinking is composed of a few stages. From the perspective of extracting information from images, a partially shut eye and a completely shut eye are equivalent, because in both states the eye is not concentrating on looking. So, if the distance between the eyelids is smaller than half of the eye radius, it can be assumed that the eye is closed. Even though this is a practical approach, there are still some negligible issues. 
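The half-radius blink criterion above can be written as a small predicate. This is a sketch under the stated rule; the landmark lists for the upper and lower eyelids and the function name are hypothetical.

```python
def eye_is_closed(upper_lid, lower_lid, radius):
    # The eye is treated as closed when the mean vertical gap between
    # corresponding upper- and lower-eyelid landmarks falls below half
    # of the estimated eye radius
    gaps = [abs(u[1] - l[1]) for u, l in zip(upper_lid, lower_lid)]
    return sum(gaps) / len(gaps) < 0.5 * radius
```

In the 68-point model, the upper and lower lists would be drawn from the eyelid landmarks named in the text (p2, p3, p11, p12 and p4, p5, p7, p8).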
When a person laughs or looks at a bright light, they tend to partially close their eyes, which may be interpreted as a blink. It is not easy to tell the difference between a blink and a laugh or a bright-light reflex. It would be possible to use other landmark points corresponding to the mouth to detect a laugh, or to check the brightness using a histogram. However, it would take too much effort to detect the difference, it is an acceptable error, and it makes little difference in practice.
EXPERIMENTAL RESULTS
In the experiments, a computer with an Intel Core i7 2.8 GHz CPU and 32 GB RAM has been used. The method is coded in Python version 2.7. As the computer vision library, OpenCV version 2.4 has been chosen. The code takes 4.5 ms (milliseconds) to detect both eye centers on images from the BioID database, not including the face detection time.
Databases
The BioID database, TalkingFace database and ColorFeret database are used to evaluate the success rate of the proposed method. The BioID database consists of 1520 frontal face images with files that contain eye position annotations of the left and right eye. There is a single frontal face in each image, and these images belong to twenty-three different individuals. BioID is a challenging database because the images are gray level and have low resolution (384 × 286). Most of the images belong to persons who wear glasses, and there is intense reflection in their glasses. There are images taken in areas that are insufficiently illuminated or over-illuminated. There are images where an individual's eyes are fully or partially closed, which makes it impossible to determine where the eyes are since there is no eye in view. However, to determine the success of an algorithm, a challenging database is necessary. All of the images in the BioID database are used in the experiments.
The TalkingFace database consists of 5000 frontal face images of one individual, taken while the individual is talking in a sufficiently illuminated area. The images are in color and have 720 × 526 resolution. There is a single frontal face in each image, and for each image there is a file that contains the coordinates of 68 interest points of the face, including the eye positions. All of the TalkingFace images are used in the experiments. The ColorFeret database is another face database which contains color images to develop, test and evaluate face detection algorithms. It contains 11338 facial images of 994 subjects from various angles; the images are 512 × 768 pixels. Regular frontal images and alternative frontal images are used in this study. However, some of the images did not have eye position information, and the given eye positions are often not accurate. Image resolution by itself may not be adequate for understanding the databases; the image resolution and the resolution of the faces within the images are given in Tab. 1. The mean resolution of the faces in the BioID database is very low. The resolution of the TalkingFace database is about 3.4 times that of the BioID database; however, the resolution of the face area of the TalkingFace database is about 5.4 times that of the BioID database.
Evaluation
The error rate e is calculated using Eq. (10), where e is the error rate, L and R are the actual positions of the left and right pupil, L' and R' are the positions calculated by the method, and the difference between them is the Euclidean distance. If the error rate e is less than 0.25, the algorithm is considered good at locating the eye; however, for applications such as gaze detection, e should be less than 0.05. As seen in the images, this approach manages to create a minimal eye region. Also, these eye regions do not contain eyebrows.
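Eq. (10) is not reproduced in this text. The sketch below uses the common form of this normalized measure, the worse of the two pupil errors divided by the inter-pupil distance, which matches the description of L, R, L' and R' above; treat the exact formula as an assumption:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def error_rate(L, R, L_est, R_est):
    """Worst-eye localization error normalized by the inter-pupil
    distance, so e is comparable across face resolutions."""
    return max(dist(L, L_est), dist(R, R_est)) / dist(L, R)
```

With this form, e < 0.25 roughly means both estimates fall within the eye, and e < 0.05 within the pupil.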
As long as the eye corners are detected with an acceptable error rate, this eye region creation approach succeeds in creating a small region which contains only the eye area. In Fig. 9 and Fig. 10, successful results for the BioID and TalkingFace databases are given. As seen in the results, the method manages to draw a white circle on the eyes, and the center of the circle is where the eye pupil is. In Fig. 11 and Fig. 12, unsuccessful results for the BioID and TalkingFace databases are given. In some cases, as seen in Fig. 11 and Fig. 12, the eye corners are not accurate. In such a case the calculated eye radius may be smaller than the real value; even though the result will be within the eye, it may not be perfectly aligned with it. In some other cases, only a small part of the eye is visible, as seen in the top left image in Fig. 12; in such cases the eye may not be accurately localized. In Fig. 13 and Fig. 14, results marked as unsuccessful for the BioID and TalkingFace databases are given; however, as seen in the images, these results were actually successful. It has been noticed that some of the eye positions are not annotated accurately (Fig. 12: in some cases the eye positions of the BioID database are not accurate, and these images are marked as unsuccessful for e < 0.05; Fig. 13: successful localizations marked as unsuccessful for the TalkingFace database, e < 0.025). Our attempt to correct the inaccurate eye positions showed that even the same person may mark different locations as an eye in different trials, which is why it has been decided to use the given eye positions. However, it should be noted that for e < 0.025 some of the successful results will be marked as unsuccessful. The ColorFeret database has the best face resolution of the three databases; however, it also has the poorest eye position information. Usually the annotators merely marked somewhere within the eye, not the eye centers. In Fig. 15, some successful results from the ColorFeret database are given.
It can be seen that the method copes well as the resolution grows. Fig. 16 shows another example of failed eye corner detection for the ColorFeret database; as seen in the images, the eye corners are not accurate, which causes unsuccessful results. However, since its eye positions are not accurate, the ColorFeret database has not been used for success comparison. Experimental results are given in Tab. 2 and Tab. 3. For e < 0.1, at worst a 91.91% success rate is achieved in both databases. As seen in Fig. 6, (h) is the input list where all points within the eye circle are used; thus the best result is expected from this list, and as seen in Tab. 2, (h) has the best results for both databases. (b), (c) and (d) have the lowest scores in both databases, and all of these input lists contain the points from a circle with half the eye radius; reflection usually occurs within this circle, which we believe is the reason for the low success rates. (e), (f) and (g) are the lists with a low point count; sorted in descending order by the number of points they contain, they are (g), (e) and (f). The results of (f) are surprisingly good for the TalkingFace database, better than (h) for e < 0.025; however, for the BioID database the result of (f) is the lowest among them. When illumination is sufficient and the resolution is adequate, (f) is an excellent choice for locating the eye. (e) was not very successful in either database, in contrast to (g), which manages to achieve good results in both. (g) thus seems an acceptable choice for an application which contains faces of different resolutions.
Comparison with the State of the Art
This study is compared with state-of-the-art methods which use the BioID database and TalkingFace database to determine the success rate. The results are given in Tab. 4. The Easymatch method seems to be average against the state of the art for the BioID database.
However, it should be noted that even though all of these methods use the BioID database, some of them use it only partially, for example removing images of people who wear eyeglasses or images with closed eyes. Also, some of the studies [6,8,10,11] used the Viola-Jones face detector to detect the face, and according to our tests the successful face detection rate of Viola-Jones is 89%, which means the success rate of those methods should not be more than 89%. The Easymatch method has the best result for e < 0.025 against the state of the art for the TalkingFace database. The approach yields better results at decent resolutions than at low resolutions; however, this difference is also due to the poor environmental conditions of the BioID database. The approach is faster than other methods: the time consumption is close to or less than that of image smoothing operations. Methods which use AI-based approaches achieve better success rates; in fact, they achieve the best success rates overall. However, they also have some shortcomings. For AI-based approaches to work, images must be resized to a certain size, which means additional workload and additional quantization error, and if an image has a better resolution, their success rates tend to drop. For the BioID database, a 4 ms average time consumption is reported in [16], while the approach presented in this paper manages less than 1 ms.
CONCLUSION
Eye detection is a fundamental and essential step for many applications. This paper presents a simple matching method to localize eye positions in frontal face images. Three characteristics of the eyes have been used to determine the eye location: eyes are darker than the surface around them, there is a ratio between the eye radius and the distance between the two eye corners, and eyes are round. Using facial landmarks and a tailored matching method, a novel way to detect the eyes is presented.
A face detector with 68 points has been used in the experiments; however, out of the 68 points, only the 12 points around the eyes have been used in this study. We believe it would provide better results if an eye detector with only those 12 points were created for eye detection. The approach shown in this study provides successful results in a fast and easy way. Different reduced input images have been used to detect the eyes; each input image has a different computation speed and success rate, which makes it possible to choose between them according to the expectations of the application. Overall, the method yields promising results, considering it is a simple approach which does not involve any learning or model scheme. Also, the method does not need any complex algorithm to determine the eye location; it is easy to understand and easy to implement.
Enantioselective Metabolism of Quizalofop-Ethyl in Rat
The pharmacokinetics and distribution of the enantiomers of quizalofop-ethyl and its metabolite quizalofop-acid were studied in Sprague-Dawley male rats. The two pairs of enantiomers were determined using a validated chiral high-performance liquid chromatography method. Animals were administered quizalofop-ethyl at 10 mg kg⁻¹ orally and intravenously. High concentrations of quizalofop-acid were found in the blood and tissues after both intragastric and intravenous administration, while quizalofop-ethyl could not be detected throughout the whole study, which indicates a quick metabolism of quizalofop-ethyl to quizalofop-acid in vivo. In almost all the samples, the concentrations of (+)-quizalofop-acid exceeded those of (−)-quizalofop-acid. Quizalofop-acid could still be detected in the samples even at 120 h, except in the brain, owing to the function of the blood-brain barrier. Based on a rough calculation, about 8.77% and 2.16% of quizalofop-acid was excreted through urine and feces, respectively, after intragastric administration. The oral bioavailability of (+)-quizalofop-acid and (−)-quizalofop-acid was 72.8% and 83.6%, respectively.
Introduction
A pesticide is a double-edged sword: it plays a very important role in increasing crop production and income, but it also causes negative effects such as environmental pollution [1,2], homicidal and suicidal accidents [3], cancer and other diseases [4]. Of the total amount of pesticide used in China, more than 40% is chiral [5], and this ratio is increasing as more and more complex structures are developed. Chiral pesticides are composed of two or more enantiomers, which have the same physical and chemical properties and behave identically in an achiral environment. However, because the individual enantiomers can interact enantioselectively with enzymes or biological receptors in organisms [6], the biological and physiological properties of enantiomers are often different [7].
For example, (−)-o,p′-DDT is a more active estrogen-mimic in rat and human than (+)-o,p′-DDT [8]. The (R)-form of dichlorprop is herbicidally active while the other is totally inactive [9], but the inactive form still causes oxidative damage to non-target organisms [10]. Although the enantioselective ecotoxicities of some chiral pesticides to non-target animals, plants and human cancer cell lines have been reported [7], the different properties of the enantiomers are still poorly understood, and many chiral pesticides are still used and regulated as if they were achiral. Quizalofop-ethyl, ethyl (2RS)-2-[4-[(6-chloroquinoxalin-2-yl)oxy]phenoxy]propanoate (QE, Fig. 1), is intensively used to control both annual and perennial grass weeds in broadleaf crops such as alfalfa, bean, cabbage, canola, carrot, lettuce, potato, soybean, sugar beet, tobacco, tomato and turnip [11]. The half-life (T1/2) of quizalofop-ethyl on onion is about 0.8 day [12]. QE can be rapidly metabolized to its primary metabolite quizalofop-acid (QA) in soybean, cotton foliage and goat [11,13]. A study of the potential effects of QE on the development of rats showed a significant decrease in the number of fetuses alive and a significant increase in the number of rats with retained placenta [14]. QE exists in two enantiomeric forms, the (+)- and the (−)-form, but the (+)-form has higher herbicidal activity: the herbicidal mechanism of QE is the inhibition of acetyl-CoA carboxylase, and the (+)-form is the more potent inhibitor in chloroplasts [15]. However, the racemate of QE is still widely used owing to its low cost; the inactive enantiomer merely causes environmental problems and may influence non-target organisms after use on crops. In this work, a chiral HPLC method and an LC-MS/MS method were set up for the separation of the enantiomers and the identification of QE and QA, and the stereoselective metabolism of QE in rat in vivo was studied.
The data presented in this study may have some significance for risk assessment.
Ethics statement
This study and all animal experiments were approved by the local ethics committee (Beijing Association for Laboratory Animal Science), ethical permit number 30749, and carried out according to local institutional guidelines.
Chemicals and Reagents
Rac-quizalofop-ethyl (98%, technical grade) and rac-quizalofop-acid (99%) were obtained from the Institute for the Control of Agrichemicals, Ministry of Agriculture of China. Tween 80 and corn oil were obtained from Sigma-Aldrich (St. Louis, MO, USA). Dimethyl sulfoxide, trifluoroacetic acid (TFA), ethyl acetate, n-hexane, acetonitrile, methanol and 2-propanol were purchased from Beijing Chemicals (Beijing, China). Water was purified with a Milli-Q system (18 MΩ·cm). All other chemicals and solvents were of analytical grade and purchased from commercial sources.
Animal Experiments
Sprague-Dawley male rats weighing 180-220 g were procured from the Experimental Animal Research Institute of China Agriculture University and housed in well-ventilated cages with a 12:12 h light:dark photoperiod. The rats were provided a standard pellet diet and water ad libitum throughout the study. The experiments were started only after acclimatization of the animals to the laboratory conditions. Before the experiments, the rats were fasted for 12 h, with free access to drinking water at all times. All the samples were stored immediately at −20 °C until sample processing. A certain amount of QE dissolved in dimethyl sulfoxide was added to corn oil; after ultrasound treatment and shaking it turned into a suspension, which was then given to rats by intragastric administration at a dose of 10 mg kg⁻¹ b.w. (n = 6) [16]. Blood was sampled from rat tails at 1, 3, 7, 9, 10, 12, 15, 24, 48, 72 and 120 h after the intragastric administration. Control rats received an equal volume of corn oil only. Brain, liver, kidney and lung were collected at 12 h and 120 h, respectively.
Urine and feces were gathered throughout the study.
Sample Preparations
Kidney, lung, liver, brain and feces were homogenized for 3 min to prepare homogenized tissues. Rat blood (0.2 mL), urine (2 mL) or 0.2 g of homogenized tissue was transferred to a 15 mL plastic centrifuge tube with the addition of 5 mL of ethyl acetate. To obtain a better extraction, 100 μL of HCl (1 mol L⁻¹) was added. The tube was then vortexed for 5 min. After centrifugation at 3500 rpm for 5 min, the upper layer was transferred to a new test tube. The extraction was repeated with another 5 mL of ethyl acetate and the upper layers were combined. The extract was dried under a stream of nitrogen gas at 35 °C. The residue was then redissolved in 0.5 mL of 2-propanol or 5 mL of methanol, and finally filtered through a 0.22 μm syringe filter for HPLC and LC-MS/MS analysis.
Table 3. Extraction efficiency of (+)-QE, (−)-QE in blood, kidney, lung, liver, brain, urine and feces.
Table 1.
Method Validation
Blank tissues obtained from untreated rats were spiked with rac-QE and rac-QA working standard solutions to generate calibration samples ranging from 0.3 to 60 mg L⁻¹. Calibration curves were generated by plotting the peak area of each enantiomer versus the concentration of the enantiomer in the spiked samples. The standard deviation (SD) and the relative standard deviation (RSD = SD/mean) were calculated over the entire calibration range. The recoveries were estimated from the peak area ratio of the extracted analytes to an equivalent amount of the standard solution in pure solvent. The limit of detection (LOD) for each enantiomer was taken as the concentration that produced a signal-to-noise (S/N) ratio of 3. The limit of quantification (LOQ) was defined as the lowest concentration in the calibration curve with acceptable precision and accuracy.
Table 4. Extraction efficiency of (+)-QA, (−)-QA in blood, kidney, lung, liver, brain, urine and feces.
Data Analysis
The enantiomeric fraction (EF) was used to present the enantioselectivity, defined from peak areas as EF = (+)/[(+) + (−)]. EF = 0.5 indicates a racemic mixture, whereas preferential degradation of one of the enantiomers drives the EF below or above 0.5. The direct excretion rate (ER) of urine and feces was defined as ER = (C · m₁ / m₂) × 100%, where C is the concentration of QA in urine or feces (mg kg⁻¹), m₁ is the amount of urine or feces (g), and m₂ is the administered dose of QE. This equation reflects the excretion rate only approximately, based on the assumption that all QE is quickly metabolized to QA, in line with the results of this work and previous studies. Pharmacokinetic parameters such as the volume of distribution (Vd) and clearance rate (CL) were generated. The oral bioavailability was calculated as (AUC_oral/AUC_i.v.) × (dose_i.v./dose_oral). The area under the concentration-time curve (AUC) was determined up to the last quantifiable concentration using the linear trapezoidal rule and extrapolated to infinity using the terminal-phase rate constant. An analysis of variance (ANOVA) was used to determine statistical differences, and p < 0.05 was considered statistically significant. Data are presented as the mean ± SD of six parallel experiments.
Assay Validation
The chromatograms of the control and spiked samples and the mass spectra are shown in Fig. S1 and Fig. S2 in File S1. No endogenous peaks from the samples were found to interfere with the analytes. The linearities for all tissues are shown in Table 2: over the concentration range of 0.3-60 mg kg⁻¹, the correlation coefficients (R²) were all higher than 0.994. As shown in Table 3 and Table 4, the extraction efficiencies of (+)-QE, (−)-QE, (+)-QA and (−)-QA in samples at concentrations of 0.3, 6 and 60 mg kg⁻¹ (n = 3) ranged from 77% to 108%, with RSDs of 3%-10%. The LOD and LOQ were 0.1 and 0.3 mg kg⁻¹, respectively.
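The quantities defined in this section (EF, ER, the trapezoidal AUC and the oral bioavailability ratio) can be computed as follows; this is a minimal sketch with illustrative function names, not the authors' analysis code:

```python
def enantiomeric_fraction(area_plus, area_minus):
    """EF = (+) / [(+) + (-)]; 0.5 indicates a racemic mixture."""
    return area_plus / (area_plus + area_minus)

def excretion_rate(conc, amount, dose):
    """ER = (C * m1 / m2) * 100%: concentration in excreta, amount of
    excreta, administered dose."""
    return conc * amount / dose * 100.0

def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the linear
    trapezoidal rule (no terminal-phase extrapolation)."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times, concs),
                                             zip(times[1:], concs[1:])))

def oral_bioavailability(auc_oral, auc_iv, dose_iv, dose_oral):
    """F = (AUC_oral / AUC_iv) * (dose_iv / dose_oral)."""
    return (auc_oral / auc_iv) * (dose_iv / dose_oral)
```

With the reported AUC values for (+)-QA (1631.202 oral, 2239.105 i.v.) and equal 10 mg kg⁻¹ doses, `oral_bioavailability` reproduces the 72.8% figure.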
Degradation Kinetics in Rat in vivo
As shown in Fig. 2 and Fig. 3, QE could not be detected in blood after intragastric or intravenous administration of rac-QE, which indicates that QE is metabolized to QA quickly. However, QA could still be detected even at 120 h in all samples, which means QA is not easily metabolized by the animals. A great difference between the two enantiomers of QA was found in all samples (Fig. 4, Fig. 5): the maximum concentration (Cmax) of (+)-QA in blood was almost ten times higher than that of (−)-QA. The pharmacokinetic parameters and bioavailability of QA after intravenous and oral administration are shown in Table 5. The AUC of (+)-QA and (−)-QA were 1631.202 ± 241.038 mg/L/h and 246.571 ± 70.677 mg/L/h after intragastric administration, and 2239.105 ± 300.554 mg/L/h and 294.751 ± 85.377 mg/L/h after intravenous administration. The oral bioavailability of (+)-QA and (−)-QA was 72.8% and 83.6%, respectively. The results reveal a slow clearance of QA from blood. The reason for not detecting QE in blood after intragastric and intravenous administration could be the rapid de-esterification of QE in the small intestine and blood. The selective uptake, transport across tissues or protein binding, and elimination of the enantiomers may be responsible for the enrichment of (+)-QA [17,18]. The high AUC after both intragastric and intravenous administration means that QA is slowly eliminated from plasma and tissues, which may lead to chronic effects such as reproductive toxicity in rats [15]. QE was also not detected in the tissues. The residues of QA at 12 and 120 h in tissues are shown in Table 6, and the EF values in brain, kidney, lung, liver, urine and feces are shown in Fig. 6. Both enantiomers could be detected in brain, kidney, lung and liver at 12 h and 120 h, except (−)-QA in brain at 120 h. The concentrations of QA in the tissues were in the order liver > kidney > lung > brain at 12 h and kidney > liver > lung > brain at 120 h.
The relatively low concentration of (+)- and (−)-QA in brain is mainly due to the function of the blood-brain barrier [19]. QA was also found in urine and feces: as shown in Table 7, the rats excreted approximately 8.77% and 2.16% of the administered dose via urine and feces, respectively, based on the calculation. The relatively low amount of QA in urine and feces might be attributed to QA being degraded to further metabolites or transferred to other tissues.
Conclusions
The stereoselective metabolism of QE and its primary metabolite QA in rats was studied. QE was rapidly hydrolyzed to QA and could not be detected in any sample; however, QA could still be detected even at 120 h. The high AUC indicates that QA is likely to have chronic toxicity to animals and humans, especially in tissues that contain high concentrations of QA, such as liver and kidney. (+)-QA occupies a higher proportion than the (−)-isomer in the residues, and the faster degradation of (−)-QA might contribute to the enantioselectivity. It was also found that urine excretion is not the main pathway of QA elimination in rat (the pharmacokinetic parameters and bioavailability of QA after intravenous and oral administration, n = 6, are given in Table 5). The data are helpful for a full risk assessment of chiral pesticides.
Supporting Information
File S1. Figure S1: Representative HPLC chromatograms of QE and QA extracted from untreated and spiked samples. A1-G1 and A2-G2 represent chromatograms extracted from rat blood, urine, feces, liver, brain, kidney and lung (untreated and spiked with 10 mg L⁻¹ of rac-QE and rac-QA, respectively). H represents the standard of 10 mg L⁻¹ of QA and QE. Figure S2: Representative MS spectra of QE and QA extracted from untreated and spiked samples.
Table 7. Excretion rate of (+)-QA (ER₁) and (−)-QA (ER₂) via urine and feces.
Elementary operations: a novel concept for source-level timing estimation
ABSTRACT
Early application timing estimation is essential in decision making during design space exploration of heterogeneous embedded systems in terms of hardware platform dimensioning and component selection. The decisions, which have an impact on project duration and cost, must be made before a platform prototype is available and software code is ready to be linked, and thus timing estimation must be done using high-level models and simulators. Because of the ever increasing need to shorten the time to market, reducing the amount of time required to obtain the results is as important as achieving high estimation accuracy. In this paper, we propose a novel approach to source-level timing estimation with the aim to close the speed-accuracy gap by raising the level of abstraction and improving result reusability. We introduce a concept, elementary operations, as distinct parts of source code which enable capturing platform behaviour without having the exact model of the processor pipeline, cache etc. We also present a timing estimation method which relies on elementary operations to craft a hardware profiling benchmark and to build application and platform profiles. Experiments show an average estimation error of 5%, with a maximum below 16%.
Introduction
Systems on Chip (SoC), which are used to run modern complex applications, must have a heterogeneous structure of processing, memory and communication elements to meet high performance, energy efficiency and low price goals. Due to the exponential growth of heterogeneous system complexity, it is estimated that designers' productivity will have to increase up to ten times to successfully meet system requirements and constraints within similar time and cost limits [1]. The key to success is making good decisions in early design stages, before assembly of the first prototype.
Raising the abstraction level in all design phases enables separation of computation from communication and the use of separate application and platform models. This leads to a more efficient approach to design space exploration (DSE) [2]. Early timing estimation is one of the most important phases in DSE. In recent years, the traditional approach using a highly accurate Instruction Set Simulator (ISS) has been replaced by high-level timing estimation models which enable obtaining estimates in early design stages [3][4][5][6][7][8][9][10]. In this paper, we propose a source-level application execution time estimation method based on a concept named elementary operations, which enables capturing architectural effects and the influence of compiler optimizations. The estimation method consists of two phases: analysis and estimation. In the analysis phase, the application and the platform configurations considered for the design are profiled. The application profile is obtained by transforming application source code into a list of elementary operations structured in loops, branches and sequences. It is independent of the platform and compiler optimization level, and hence the same application profile can be used to estimate execution time on any platform. The platform profile is obtained by executing a benchmark entitled the "ELOPS benchmark" on every platform configuration and for each compiler optimization level separately. This benchmark was specially crafted as a part of this research to measure execution times of elementary operations on real platforms. The results of the benchmark run on each platform configuration make up the platform profile for that configuration. In the estimation phase, the proposed timing estimation algorithm combines the application and target platform profiles to provide a timing estimate.
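At its core, the combination step described above can be sketched as a weighted sum of operation counts (application profile) and measured per-operation times (platform profile). The profile contents below are hypothetical placeholders, not values from the paper:

```python
def estimate_time(app_profile, platform_profile):
    """Timing estimate = sum over elementary operations of
    (occurrence count in the application) x (measured time on the platform)."""
    return sum(count * platform_profile[op] for op, count in app_profile.items())

# Hypothetical application profile (operation counts from source analysis)
# and platform profile (per-operation times in ns from the benchmark run).
app = {"INT_loc_var_ADD": 1000, "MEM_ASSIGN": 200, "FP_MUL": 50}
platform = {"INT_loc_var_ADD": 1.0, "MEM_ASSIGN": 2.5, "FP_MUL": 4.0}
```

Because `app` is platform-independent and `platform` is application-independent, either profile can be reused across design-space points, which is where the speed advantage comes from.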
Accuracy of our approach is evaluated using the JPEG image compression algorithm and the Advanced Encryption Standard (AES) algorithm on several hardware configurations based on two RISC processors: ARM A9 and Microblaze, custom-built for the Xilinx Zynq-based ZC706 platform. The achieved accuracy (i.e. error rate) is similar to the most accurate state-of-the-art source-level timing estimation methods. The strong point is a significant reduction in the time and effort required to obtain results, due to the reusability of application and platform profiles. Also, the proposed method can easily be scaled for systems with a hundred or more elements of the same type, in a similar fashion to the method demonstrated in [11]. The rest of this paper is organized as follows. Section 2 provides an overview of the current state of the art in the area of high-level timing estimation. The proposed method for source-level timing estimation is presented in Section 3. The flow of application timing estimation is described in Section 4. The test cases used for evaluating the proposed method are given in the first part of Section 5; the results of the conducted experiments are presented and discussed in the rest of that section.
Related work
Authors in [12] propose a source-level simulation infrastructure that provides a full range of performance, energy, reliability, power and thermal estimation. For timing simulation, they build upon their previous work [3], which uses a simulation-based approach with back annotation on the intermediate representation (IR) level. Simulating pipeline effects on basic block boundaries requires additional pair-wise block simulation on a cycle-accurate reference for every possible block pair combination. They consider a high-level cache model by reconstructing target memory traces solely from IR and debugger information. Simulation of the entire application execution is done using SystemC and transaction-level modeling (TLM) [13], with an estimation error below 10%.
Other approaches use machine learning and mathematical models for early timing estimation. Authors in [8] use artificial neural networks (ANN): the ANN gives a timing estimate based on execution time and the total number of each instruction type. The estimation error is around 17%, but the method is much more flexible compared to simulation methods and provides a higher level of result reusability; after the initial training period, estimation results are obtained rapidly. Methods presented in [9] and [10] are based on linear regression and ANNs, with higher error rates of around 20%. Authors in [14] use a model tree-based regression technique as their machine learning method of choice. Authors in [11,15,16] propose hybrid methods: first, simulation is used to obtain the execution time of each procedure on each type of processing element, then analytical methods are used to account for cache and communication effects. In [17], authors use linear regression for calculating timings, but they use a set of specially crafted training programs to identify instruction costs of an abstract machine. They try to capture the effects of cache, pipeline and code optimization by crafting examples with longer instruction sequences and loops. However, since they rely on IR, they face challenges when introducing code optimizations, because the virtual instructions in the translation of the training program are not in close correspondence with the compiled version. The concept of elementary operations was first introduced in [18] in an attempt to characterize platform behaviour without having the exact hardware model. This preliminary method for early timing estimation lacked compiler optimization support and the ability to estimate input-dependent application tasks. In this paper, we extend our previous work and present an improved method.
Elementary operations approach
Our approach is based on decomposing a piece of source code written in the C programming language (standard C11) into elementary operations: distinct parts of source code which enable capturing platform behaviour without having the exact model of the processor pipeline, cache etc. The set of elementary operations is finite, with several subsets: integer, floating point, logic and memory operations. These subsets correspond to parts of the datapath of a RISC-like processor and memory architecture.
Classification of elementary operations
We propose a multi-level elementary operations classification scheme. The top level contains four operation classes: INTEGER, FLOATING POINT, LOGIC and MEMORY. The second level of classification is based on the origin of the operands (i.e. their location in memory space): local, global or procedure parameters. This stems from the difference in locality due to the way the compiler implements operands stored in different parts of the memory space. Each group is expected to show different timing behaviour: local variables, being heavily used, are almost always in cache, while global and parameter operands must be loaded from an arbitrary address and can cause a cache miss. The third level of classification is by operand type: (1) scalar variables and (2) arrays of one or more dimensions. Pointers are treated as scalar variables when the value of the pointer is given using a single variable, or as arrays when the value of the pointer is given using multiple variables. Operations which belong to the INTEGER and FLOATING POINT classes are: addition (ADD), multiplication (MUL) and division (DIV). The LOGIC class contains logic operations (LOG: and, or, xor and not) and shift operations (SHIFT), i.e. operations that perform bit-wise movement (e.g. rotation, shift, etc.). Operations in the MEMORY class are: single memory assign (ASSIGN), block transaction (BLOCK) and procedure call (PROC).
MEMORY BLOCK represents a transaction of a block of size 1000 and can only have array operands. MEMORY PROC represents a function call with one argument and a return value. Arguments can be variables and arrays, declared locally or given as parameters of the caller function, but never global. All of these operations are listed in Table 1. The abbreviations indicated in the table are used further on when referring to a specific class. The sample source code given in Figure 1 illustrates how code statements can be correlated to the elementary operations classification scheme. Each operation is denoted using the abbreviations from Table 1.

The accuracy of timing estimation using the proposed classification scheme was analysed on two RISC processors: ARM Cortex-A9 and MicroBlaze, implemented on a Xilinx Zynq-based ZC706 platform. First, the actual execution time of each elementary operation from Table 1 was measured for each processor. Each operation was repeated in a for-loop a thousand times to compensate for timer setup and to create a context that captures the effects of compiler optimizations, cache and pipeline. Then, test cases were crafted to contain constructs commonly found in real-world application code. For each test case, the elementary operations were identified and, using the previously obtained execution times, a timing estimate was calculated. Finally, each test case was executed on both target processors in order to obtain the actual execution times and compare them to the estimated ones.

Sequence of operations

The sample source code given in Figure 2 illustrates four examples of sequences of operations, among them: (1) five INT_loc_var ADD operations in a single statement; (2) a for-loop with five statements in a sequence, each containing one INT_loc_arr ADD operation. It must be noted that in our approach for-loops are considered an implicit part of operations with array-type operands.
This is because the execution time of each elementary operation is measured in a loop, so all the overhead added by the loop is already included in the obtained timings. According to the initial classification scheme proposal, each elementary operation is treated separately during code analysis and timing estimation. The estimated and actual execution times for the source code in Figure 2 are presented in Table 2. The estimation error is calculated as the relative difference between the estimated and the actual execution time. It can be observed that the execution time per elementary operation decreases exponentially with the increase in the total number of operations in a sequence. The same behaviour is observed for all other types of elementary operations on both processors, but for the sake of brevity it is not shown here. This leads to the conclusion that, due to pipelining and the decreasing loop overhead, sequence length plays an important role when estimating the timing of sequences of operations belonging to the same class. Experiments also show that by measuring the timings for only a few lengths, such as 2, 3, 4, 5, 10, 20 and 50, any other sequence length can be approximated with less than 10% error. This makes profiling a much faster and more efficient process.

Operations with mixed types and origin of operands

The source code in Figure 4 represents four cases of operations with mixed type and origin of operands. The initial elementary operations classification scheme proposal does not give explicit specifications for determining the elementary operation class in such cases. Thus, we additionally introduce origin priority.
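The approximation from a few measured sequence lengths can be sketched as simple piecewise-linear interpolation. The function below is our own minimal illustration of that idea (the paper does not specify the interpolation scheme, only the measured lengths); per-operation times for unmeasured lengths are interpolated between neighbouring samples and clamped beyond the last one, where the curve has flattened out:

```c
/* Sketch (assumed interpolation scheme, not the paper's exact one):
 * per-operation time for an arbitrary sequence length n, derived from
 * measurements at the lengths mentioned in the text. t[] holds the
 * measured per-operation times, one per entry of LEN[]. */
static const int LEN[] = {2, 3, 4, 5, 10, 20, 50};
#define NPTS 7

double interp_per_op_time(const double t[NPTS], int n) {
    if (n <= LEN[0])        return t[0];
    if (n >= LEN[NPTS - 1]) return t[NPTS - 1];   /* clamp long tails */
    for (int i = 1; i < NPTS; i++) {
        if (n <= LEN[i]) {
            double w = (double)(n - LEN[i - 1]) / (LEN[i] - LEN[i - 1]);
            return t[i - 1] + w * (t[i] - t[i - 1]);
        }
    }
    return t[NPTS - 1];  /* unreachable */
}
```

A sequence of n operations is then estimated as `n * interp_per_op_time(t, n)`.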
Priorities are defined based on the difference in locality due to the way the compiler implements each operand type (from highest to lowest), and the idea is to select the operation class based on the operand with the highest priority: (1) parameter array, (2) global array, (3) local array, (4) parameter variable, (5) global variable, (6) local variable. All operations in the given example are classified as operations with global operands, even though OP1, OP3 and OP4 contain local variables, while OP2 and OP4 contain local array operands. The comparison of estimated and actual execution times for these test cases is presented in Table 3. It shows that in the case of OP2 and OP3, where local operands are present, the estimation error goes over 20%. The error for OP4 is slightly lower, but this is probably because the proposed method underestimates in the case of local arrays (OP2) and overestimates in the case of local variables (OP3), so the deviations even out. These results indicate that it is necessary to modify the existing solution by expanding the origin priority-based approach and giving each operation additional attributes to denote the different types and origins of operands. The attribute mod will be added when an operation has operands of mixed types and origins. Its value will indicate, among other cases, the presence of constants; the full list of values is given in Table 5 in the column Values for the operation modifier attribute mod.

Array operands index

The index of an array operand can have more than one dimension and/or can be calculated using the values of more than one variable. The same applies to struct types in C source code. The sample source code in Figure 5 illustrates such examples. OP1 is an example of a MEM_par_arr ASSIGN operation with a 3-dimensional array, and OP2 is a similar example with a struct containing a 2-dimensional array. At this point, operations with structs are classified as operations with arrays.
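The origin-priority rule lends itself to a direct implementation. The sketch below is ours (the enumerator names are illustrative): enumerators are ordered so that a smaller value means a higher priority, and a mixed statement takes its class from the highest-priority operand present:

```c
/* Sketch of origin-priority class selection for a statement with
 * operands of mixed origin. Smaller enum value = higher priority,
 * matching the ordered list (1)-(6) in the text. */
typedef enum {
    PAR_ARR = 0,  /* (1) parameter array    */
    GLOB_ARR,     /* (2) global array       */
    LOC_ARR,      /* (3) local array        */
    PAR_VAR,      /* (4) parameter variable */
    GLOB_VAR,     /* (5) global variable    */
    LOC_VAR       /* (6) local variable     */
} origin_t;

origin_t classify_mixed(const origin_t *operands, int n) {
    origin_t best = LOC_VAR;          /* start at the lowest priority */
    for (int i = 0; i < n; i++)
        if (operands[i] < best)
            best = operands[i];
    return best;
}
```

For example, a statement mixing a local variable, a global array and a local array is classified by its global-array operand, consistent with the OP1-OP4 discussion above.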
Operations OP3 to OP7 are examples of arrays whose index is calculated from the values of several variables. Table 4 shows the results for the code in Figure 5. In the case of OP1 and OP2 the estimation error is above 60%. Almost the same error is observed for the 3-dimensional array and for the struct containing a 2-dimensional array (which is effectively also a 3-dimensional structure). In the case of OP3 to OP7 it can be observed that the results are severely underestimated, and that the underestimation grows as the number of operations required for index calculation increases. These results suggest that the initial classification scheme, which does not recognize multiple dimensions and index structure, should be extended even further. Several attributes will be added to operations with arrays to indicate specifics about the index type. Among them, these attributes will indicate the type of array index, which can be simple (given using a single variable) or complex (calculated based on two or more variables and a constant). The attributes and their values, all listed in Table 5, are:

- Sequence of operations, attribute seq: a positive integer.
- Operation modifier, attribute mod: "var" - at least one variable operand of the same origin is present; "glob_var" - at least one global variable operand is present; "glob_arr" - at least one global array operand is present; "loc_var" - at least one local variable operand is present; "loc_arr" - at least one local array operand is present; "const" - at least one constant operand is present.
- Index modifier, attribute type: "simple" - the index is given as a single variable; "complex" - the index must be calculated using more than one variable; "const" - the index is a constant value.

The list of these attributes is given in Table 5 under the index modifier attribute description. Finally, since arrays and structs show similar timings, they will continue to be treated equally.

Classification scheme attributes overview

Based on the previously discussed observations, the classification scheme is extended to incorporate the proposed modifications.
These extensions are included as attributes of each class of elementary operations. According to the three groups of possible cases affecting the execution duration of elementary operations defined in Table 1, three groups of attributes are listed in Table 5 under the column Attribute group. A sequence of operations is denoted with the attribute seq. The operation modifier group contains the attribute mod, which is present in operations with operands of mixed types and origins; its value is a space-separated list which can contain one or more of the elements listed in Table 5. Index modifiers are a group of four attributes, all listed in Table 5, which are added to elementary operations with arrays.

Application timing estimation

The proposed application execution time estimation method based on elementary operations consists of an analysis phase and an estimation phase, as indicated in Figure 6. The first step of the analysis phase is platform profiling. In this step a specially crafted ELOPS benchmark, described later, is compiled and run on every platform configuration and for each optimization level separately. The platform profile is created from the results of the benchmark runs and contains the timings of the elementary operations. The second step is application profiling. The application profile is a transformation of the original C source code into a list of elementary operations structured in loops, branches and sequences. Application profiling is done only once, on the original C source code. For this purpose, common compiler constructs such as the abstract syntax tree (AST) and the control and data flow graph (CDFG) are used. The application and platform profiles created during the analysis phase are permanently stored in a database. In the estimation phase, the platform and application profiles are first retrieved from the database. Then a timing estimation algorithm, described later in this section, combines the application and platform profiles to provide a timing estimate.
Platform profiling

Platform profiling starts with the execution of the ELOPS benchmark on every platform configuration considered for the final design. A platform configuration is a pair of a specific processor and a memory connected to it, used to store instructions and data. The ELOPS benchmark is designed based on the elementary operations classification scheme to measure the execution time of each operation from Table 1 and the timing effect of every possible attribute listed in Table 5. For each operation sub-class listed in Table 1 (e.g. INTEGER ADD, LOGIC SHIFT, etc.), three main groups of benchmark entries are defined: local, global and parameters. Each group contains two sub-groups: variable and array. The array sub-group branches further by two criteria: array index type and dimension. This means that for each operation from Table 1 and for each operand origin group, there are five base benchmark entries. All benchmark entries are systematized in Table 6. Each base benchmark entry has sub-variants in which different lengths of sequences of operations are measured. A distinction is made between multiple occurrences of the same operation in one statement, named sequential operations, and a sequence of statements belonging to the same elementary operations class, named sequential statements. In our current implementation, the benchmark contains entries for the following sequence lengths: 2, 3, 5 and 10. All measurements are done by executing an operation in a loop a thousand times, to compensate for timer setup effects and to create a context which better captures the effects of optimizations and of hardware features such as cache and pipeline. A special case are two elementary operation sub-classes: MEMORY BLOCK and MEMORY PROC. The MEMORY BLOCK class is measured as a single transaction of a block of size 1000 (using the memcpy function) and can have only array operands. The MEMORY PROC class is measured as a function call with one argument and a return value.
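One ELOPS-style benchmark entry can be sketched as follows. This is our own minimal version (the real benchmark's structure and timer are not shown in the paper): the elementary operation is repeated in a 1000-iteration loop, as described above, so loop overhead, cache and pipeline context become part of the measured figure; `clock()` stands in for whatever hardware timer the target provides.

```c
#include <time.h>

/* Sketch of a single benchmark entry: INT_loc_var ADD measured in a
 * 1000-iteration loop. volatile keeps the compiler from folding the
 * addition away at higher optimization levels. */
#define REPS 1000

double bench_int_loc_var_add(void) {
    volatile int a = 1, b = 2, c = 0;
    clock_t t0 = clock();
    for (int i = 0; i < REPS; i++)
        c = a + b;                    /* the elementary operation */
    clock_t t1 = clock();
    (void)c;
    /* average per-operation time in seconds */
    return (double)(t1 - t0) / CLOCKS_PER_SEC / REPS;
}
```

The real benchmark would emit one such entry per operation sub-class, origin group and sequence-length variant, and would be rebuilt for each optimization level.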
In our implementation the benchmarks do not contain entries for arrays with an index of dimension higher than 3, because so far no code in our test applications has contained structures of higher dimensions. In order to enable accurate estimation for code containing compiler optimizations, the benchmark has to be compiled and run separately for all optimization levels. This way, the same optimizations which will occur in, e.g., looped or sequential execution in the source code will also be present in the benchmark code. The measurements are then combined into a platform element profile in an XML document.

Application profiling

Application profiling uses the code analysis and profiling method described in [19], slightly adapted to be compatible with the elementary operations classification scheme. Application source code processing starts with the generation of call-tree statistics to produce profiling information at the procedure call-graph abstraction level. The compiler transformation flow starts with parsing the source code into the abstract syntax tree (AST). During recursive traversal of the tree, information about data structures, types of variables and procedure arguments is used to identify elementary operations according to the proposed classification scheme. The AST is further transformed into a control and data flow graph (CDFG) representation by recursive traversal of the tree with the introduction of temporary variables that form the three-address code statement notation. During that process, the key is recognizing the points where the uniform instruction flow is broken by condition testing in branch or loop jump conditions. The final application profile is obtained by unifying the procedure call statistics and the profiles obtained using the AST and CDFG for each procedure separately. The output of the entire process is the application profile, an abstract model written in an XML structure.
In it, the original application source code is transformed into a multi-level structure of elementary operations organized in loops, branches and sequences. Each application is composed of one or more procedures which directly correspond to procedures (functions) in the original C source code. Procedures can contain any number of loops, branches or operations. A loop represents a for or a while loop, and a branch represents an if-else or switch-case conditional construct. Loops and branches can have any number of loops, branches and operations as sub-elements. An operation represents a single statement or a sequence of operations that has been assigned an elementary operation class. Operations have attributes which cover the extensions to the classification scheme discussed in Section 3.1.1. All possible profile elements and attributes are listed in Table 7. For applications which have data-dependent behaviour, the precision of profiling can depend heavily on the input data at run-time. In such cases, loop iteration counts or branch condition evaluation results cannot be resolved without simulation and analysis of variable data values. Since these values define the number of possible execution paths through the application source code, both during the analysis of the hierarchical task graph and during the formation of the control and data flow graph, estimation must rely on one or more simulation runs to determine either the exact numbers or upper and lower bounds and statistical probabilities for these values. In our research so far, we have employed the commonly accepted approach of running instrumented code on a host PC (i.e. host-compiled) to determine data-dependent behaviour [3,4,7].

Timing estimation

After obtaining both the application and platform profiles, the final step is to combine the two to estimate the application execution time. The algorithm is described briefly using pseudo-code in Figure 7. Finally, it is important to accentuate the reusability of the proposed method.
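The estimation pass over such a profile tree can be sketched as a bottom-up sum: a loop multiplies its body's time by its iteration count, and a branch weighs its arms by a taken-probability. The structure and field names below are ours; the paper stores the profile as XML and gives the algorithm as pseudo-code in Figure 7.

```c
#include <stddef.h>

/* Sketch of the estimation phase over a profile tree (our data
 * layout, assumed for illustration). op_time comes from the platform
 * profile with the operation's attributes already applied. */
typedef enum { NODE_OP, NODE_LOOP, NODE_BRANCH } kind_t;

typedef struct node {
    kind_t kind;
    double op_time;        /* NODE_OP: time of this operation/sequence */
    int    iterations;     /* NODE_LOOP: resolved iteration count      */
    double p_taken;        /* NODE_BRANCH: probability of first arm    */
    struct node *child[2]; /* loop body / branch arms                  */
    struct node *next;     /* next sibling in a sequence               */
} node_t;

double estimate(const node_t *n) {
    double t = 0.0;
    for (; n != NULL; n = n->next) {
        switch (n->kind) {
        case NODE_OP:
            t += n->op_time;
            break;
        case NODE_LOOP:
            t += n->iterations * estimate(n->child[0]);
            break;
        case NODE_BRANCH:
            t += n->p_taken * estimate(n->child[0])
               + (1.0 - n->p_taken) * estimate(n->child[1]);
            break;
        }
    }
    return t;
}
```

For data-dependent code, `iterations` and `p_taken` are the values (or bounds/probabilities) obtained from the host-compiled simulation runs described above.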
Each application needs to be profiled only once, and the obtained profile can be used in the future, as is, for any platform configuration at hand. In the same manner, each type of platform element needs to be profiled only once, and the obtained data can be reused for the timing estimation of any other application. The reusability of profiling results also helps achieve better scalability when building platforms with multiple elements of the same type.

Experimental setup and results

Verification of the elementary operations approach has been done on commonly used real-world applications: (1) The Advanced Encryption Standard (AES) [20], used in two different implementation versions: (a) AES_G, the first version of the AES, where data is accessed via global variables; (b) AES_P, the second version of the AES, where data is accessed via procedure parameters. (2) The JPEG image compression algorithm, using the implementation described in [21]. This particular set of applications encompasses all types of elementary operations and represents well the key features of applications for which heterogeneous embedded systems are used most often: multimedia, compression and encryption. The Xilinx Zynq ZC706 reconfigurable evaluation board has been chosen as the target platform. Three configurations, each composed of one processor and one memory element, have been used: (1) MB1 - MicroBlaze, a 32-bit RISC Harvard architecture soft processor core in the following configuration: 5-stage pipeline with hardware multiplier, barrel shifter and floating-point unit, operating at 200 MHz. The processor is connected to 128 KB of FPGA-based BRAM memory, also operating at 200 MHz, via the local memory bus (LMB). This memory is used for storing both instructions and data.
(2) ARM1 - a single core of the ARM Cortex-A9 processor, used in the following configuration: operating frequency of 667 MHz, 32 KB L1 cache and 512 KB L2 cache, shared by both instructions and data. These configurations are a popular general-purpose choice for low-power or thermally constrained, cost-sensitive devices (e.g. smart-phones, digital TV, and both consumer and enterprise applications enabling the Internet of Things). The test cases AES_G and AES_P have been tested for an example input of 32 bytes of data, and JPEG has been tested on the Lenna image. Each test case and each platform configuration have been profiled, and the timing estimate has been calculated from these profiles using the method described in Section 4. Then, each test case has been executed on each platform configuration to obtain the actual timings. The total execution time for AES_G, AES_P and JPEG was measured as the time taken for the entire application to run. Parts of AES_G and AES_P (AddRoundKey, ShiftRows, etc.) were measured in 1000x loops, because of the very small time scale, to negate timer setup effects. Parts of the JPEG application were not measured in a loop because the time scale is orders of magnitude larger than the timer setup overhead. Tests were performed for optimization levels O0-O2. Optimization level O3 has not been considered, since level O2 is still the option recommended by most embedded systems manufacturers in order to avoid potentially incorrect execution if the source code is not written in exact accordance with the C standard [6]. Results for the second AES implementation, AES_P, for all three platform configurations are presented in Table 9. The achieved average error is around 6%, with the minimum below 1% and the maximum below 16%. Table 10 shows the results for the JPEG test case.
Timing estimation has been done for all three configurations. To summarize: for all three test applications and for all three target platform configurations the estimation accuracy remains at approximately the same level, with an average error around 5% and a maximum error below 17%. The estimation accuracy shows no significant degradation for any level of compiler optimization. Even for the ARM2 configuration there is no deviation in the error rate compared with the results on the other two configurations. This particular configuration is more sensitive to cache effects because the processor communicates with a very slow memory. If the method fails to accurately capture cache hits, it gives an overestimation; if it fails to capture cache misses, an underestimation. However, it must be noted that for all three test cases there was a much larger chance of a cache hit than of a cache miss, because the memory footprint of each of these applications remains within the range of 50 KB to 250 KB. This means they fit well within cache sizes typical for embedded processors like ARM, and the likelihood of cache hits is much larger. On the other hand, all three test applications represent well, in both size and structure, the common tasks for which embedded systems are used: signal processing, vector and matrix operations, numeric calculations, search and sorting [22]. Compared to the results achieved by analytical methods, which have an average error in the range from 17% [8] to 20% [9,10], our results are better. They are, however, slightly worse than those obtained using simulation methods, which achieve an estimation error below 10% in the worst cases [3-7]. But compared to simulation methods, the strong point of our method is the reusability of the profiling results, because both the application and platform profiles can be reused in the future. In that way, our method enables obtaining accurate source-level estimates in a shorter amount of time and helps close the gap between accuracy and speed.
Conclusion and future work

In this paper, we have proposed a method for source-level application execution time estimation in a heterogeneous computing environment, based on a concept named elementary operations. The method features a classification scheme used for identifying elementary operations in the source code. It enables profiling applications and platforms in a way which successfully handles compiler optimizations, pipeline and cache effects. This makes it possible to provide accurate application timing estimates while keeping the required effort within reasonable limits. Based on the classification scheme, the ELOPS benchmark is designed to measure the execution time of each elementary operation type, within contexts like loops and sequences of operations, on real platforms. Experimental results show the estimation error to be around 5%, with a maximum below 17%, which is comparable to the best state-of-the-art simulation methods. The strong point of this method is that platform profiling needs to be done only once for each hardware configuration, and the results are later reused for any other application executed on the same hardware. The same applies to application profiling: each application has to be profiled only once, and the obtained profile can be used as is for any platform configuration at hand. The reusability of profiling results helps achieve better scalability when building platforms with multiple elements of the same type. In the future, the emphasis will be on the full integration of the method into the design space exploration process for heterogeneous multi-processor and multi-memory environments, to eliminate the need to re-link and recompile source code using different development environments. Finally, the application analysis will be improved by automating the instrumentation process in data-dependent parts of the code.
Asymptotics for ultimate ruin probability in a by-claim risk model

Abstract. This paper considers a by-claim risk model with constant interest rate in which the main claim and by-claim random vectors form a sequence of independent and identically distributed random pairs, with each pair obeying some certain dependence or an arbitrary dependence structure. Under the assumption of heavy-tailed claims, we derive some asymptotic formulas for the ultimate ruin probability. Some simulation studies are also performed to check the accuracy of the obtained theoretical results via the crude Monte Carlo method.

Introduction

Consider a by-claim risk model in which every severe accident causes a main claim accompanied by a secondary claim occurring after a period of delay. In such a model, the claims {(X_i, Y_i); i ∈ N} form a sequence of independent and identically distributed (i.i.d.) nonnegative random vectors with a generic random vector (X, Y). Here, for each i ∈ N, X_i and Y_i represent the ith main claim (original claim) and its corresponding by-claim (secondary claim), respectively, and they are highly dependent due to being caused by the same accident. The main claims X_i arrive at times τ_i, i ∈ N, which constitute a renewal counting process N_t = sup{n ∈ N: τ_n ≤ t} for t ≥ 0 with mean function λ(t) = E N_t. Denote the inter-arrival times by θ_i = τ_i − τ_{i−1}, i ∈ N, which are i.i.d. nonnegative random variables, nondegenerate at zero. Let {D_i; i ∈ N} be the delay times of the by-claims, which also form a sequence of i.i.d. nonnegative, but possibly degenerate at zero, random variables with common distribution H. Assume, as usual, that the three sequences {(X_i, Y_i); i ∈ N}, {τ_i; i ∈ N}, and {D_i; i ∈ N} are mutually independent. Denote by x > 0 the initial capital of the insurer, by c ≥ 0 the constant premium rate, and by δ > 0 the constant interest rate.
In this setting, the discounted surplus process of the insurer at time t ≥ 0 is

U_δ(t) = x + c ∫_0^t e^{−δs} ds − Σ_{i=1}^∞ X_i e^{−δτ_i} 1_{τ_i ≤ t} − Σ_{i=1}^∞ Y_i e^{−δ(τ_i + D_i)} 1_{τ_i + D_i ≤ t}, (1)

where 1_A is the indicator function of a set A. In this way, the finite-time and ultimate ruin probabilities of model (1) can be defined, respectively, by

ψ(x; T) = P(inf_{0 ≤ t ≤ T} U_δ(t) < 0) and ψ(x; ∞) = P(inf_{t ≥ 0} U_δ(t) < 0). (2)

In insurance risk management, this kind of risk model may be of practical use. For instance, a serious motor accident may cause two different kinds of claims, such as car damage and passenger injuries or even death. The former can be dealt with immediately, while the latter needs an uncertain period of time to be settled. Hence, the claims for car damage can be regarded as the main claims, and the claims for passenger injuries as the by-claims. [20] considered a discrete-time risk model allowing for delay in claim settlements, called by-claims, and used martingale techniques to derive some upper bounds for ruin probabilities. Since then, many researchers have paid attention to by-claim risk models. To name a few, [21,22,30] investigated some independent by-claim risk models, that is, the main claim and by-claim sequences {X_i; i ∈ N} and {Y_i; i ∈ N} consist of i.i.d. random variables and are mutually independent, too. However, it is worth noting that the independence assumption between each main claim and its corresponding by-claim makes the model unrealistic. For example, in the above motor accident, the two corresponding claims for car damage and passenger injuries should be highly dependent. In this direction, [14] studied a by-claim risk model with no interest rate under the setting that each pair of main claim and by-claim follow an asymptotic independence structure or possess a bivariate regularly varying tail (hence, are asymptotically dependent). Further, [27] generalized Li's result by extending the distributions of the main claims and by-claims from regular variation to consistent variation in the case that the two types of claims are asymptotically independent.
They also complemented this with another case in which each pair of main claim and by-claim are arbitrarily dependent, but the former dominates the latter. In the study of dependent by-claim risk models with positive interest rate, [13] considered the case that all main claims and by-claims are pairwise quasi-asymptotically independent and established an asymptotic formula for the ultimate ruin probability. Based on [8], the paper [13] further studied a dependent renewal risk model with stochastic returns by allowing the insurer to invest its surplus into a portfolio consisting of risk-free and risky assets. For more recent advances in dependent (by-claim) risk models with interest rate, we refer the reader to [2,4,9,12,15,23-26,28], among others. Motivated by [13] and [27], in this paper we continue to study a dependent by-claim risk model with interest rate in which the main claim and by-claim vectors {(X_i, Y_i); i ∈ N} are i.i.d., but each pair possesses some certain strong dependence or an arbitrary dependence structure. In such a model, we aim to establish some asymptotic formulas for the ultimate ruin probability. In the rest of this paper, Section 2 presents the main results after preparing some preliminaries on heavy-tailed distributions and dependence structures, Section 3 proves our results, and Section 4 performs some simulation studies to check the accuracy of the obtained theoretical results.

Preliminaries and main results

Throughout this paper, all limit relationships hold as x → ∞ unless stated otherwise. For two positive functions f and g, we write f(x) ∼ g(x) if lim f(x)/g(x) = 1. For two real numbers x and y, denote x ∨ y = max{x, y}. When modeling extremal events, heavy-tailed risks (claims) play an important role in insurance and finance due to their ability to describe large claims efficiently. We now introduce some commonly used heavy-tailed distributions.
For any distribution V on R with tail V̄ = 1 − V, define V_*(y) = liminf_{x→∞} V̄(xy)/V̄(x) for y > 1. A distribution V on R is said to be consistently varying tailed, denoted by V ∈ C, if lim_{y↓1} V_*(y) = 1. Particularly, a distribution V on R is said to be regularly varying tailed with index −α, denoted by V ∈ R_{−α}, if

lim_{x→∞} V̄(xy)/V̄(x) = y^{−α}

for any y > 0 and some α > 0. It should be mentioned that many popular distributions, such as the Pareto, Burr, Loggamma, and t-distributions, are all regularly varying tailed. Let J_V^+ and J_V^− denote the upper and lower Matuszewska indices of V; if V ∈ R_{−α} for some α > 0, then J_V^+ = J_V^− = α. For more discussions on heavy-tailed distributions and their applications to insurance and finance, we refer the reader to [1] and [6]. Bivariate regular variation is a natural extension of the univariate notion to the two-dimensional case; it was first introduced by [5]. It provides an integrated framework for modelling extreme risks (claims) with both heavy tails and asymptotic (in)dependence. Recent works in this direction include [3,7,17,18], among others. A random vector (ξ, η) taking values in [0, ∞)^2 is said to follow a distribution with a bivariate regularly varying (BRV) tail if there exist a distribution V and a nondegenerate (i.e. not identically 0) limit measure ν such that

(1/V̄(x)) P(x^{−1}(ξ, η) ∈ ·) →v ν(·). (3)

In (3), the notation →v denotes vague convergence, meaning that the relation holds for every Borel set B ⊂ [0, ∞]^2 that is away from (0, 0) and ν-continuous (i.e. its boundary ∂B has ν-measure 0). Related discussions on vague convergence can be found in [16, Sect. 3.3.5]. Necessarily, the reference distribution V ∈ R_{−α} for some α > 0, and in this case we write (ξ, η) ∈ BRV_{−α}(ν, V). Recall the by-claim risk model (1), in which {(X_i, Y_i); i ∈ N} is a sequence of i.i.d. nonnegative random pairs with generic random vector (X, Y) having marginal distributions F and G, respectively; {D_i; i ∈ N} is a sequence of i.i.d. nonnegative random variables with generic random variable D and distribution H; N_t is a renewal counting process with mean function λ(t); and {(X_i, Y_i); i ∈ N}, {D_i; i ∈ N}, and {N_t; t ≥ 0} are mutually independent.
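As a quick worked example (ours, not from the paper), a Pareto-type tail satisfies the defining limit of regular variation:

```latex
% Worked example: the Pareto-type tail
%   \bar V(x) = \left(\frac{\sigma}{x+\sigma}\right)^{\alpha}, \quad x \ge 0,
% belongs to R_{-\alpha}, since for every y > 0
\frac{\bar V(xy)}{\bar V(x)}
  = \left(\frac{x+\sigma}{xy+\sigma}\right)^{\alpha}
  = \left(\frac{1+\sigma/x}{\,y+\sigma/x\,}\right)^{\alpha}
  \;\xrightarrow[x\to\infty]{}\; y^{-\alpha}.
```

This matches the list above (Pareto, Burr, Loggamma, t) and is the marginal distribution used in the simulation studies of Section 4.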
Now we are ready to state our main results regarding the ultimate ruin probability. The first result considers the case that each pair of a main claim and its corresponding by-claim follows a joint distribution with a bivariate regularly varying tail satisfying ν((1, ∞]^2) > 0; hence, the two claims are highly dependent on each other.

Theorem 1. Consider the by-claim risk model (1). If (X, Y) ∈ BRV_{−α}(ν, F_0) with ν((1, ∞]^2) > 0, then

ψ(x; ∞) ∼ ∫_0^∞ ν(A_s) H(ds) ∫_0^∞ e^{−αδt} dλ(t) F̄_0(x), (4)

where A_s = {(u, v) ∈ [0, ∞]^2: u + v e^{−δs} > 1} for any s ≥ 0.

http://www.journals.vu.lt/nonlinear-analysis

The second result relaxes the dependence between each pair of the two types of claims, as well as the common distribution of the main claims, but requires that the by-claims be dominated by the main claims.

Theorem 2. Consider the by-claim risk model (1). If X and Y are arbitrarily dependent, F ∈ C, and Ḡ(x) = o(F̄(x)), then

ψ(x; ∞) ∼ ∫_0^∞ F̄(x e^{δt}) dλ(t). (5)

In particular, if F ∈ R_{−α} for some α > 0, then

ψ(x; ∞) ∼ F̄(x) ∫_0^∞ e^{−αδt} dλ(t). (6)

Proofs of main results

We shall adopt the recent method on the asymptotic tail behavior of infinite randomly weighted sums to prove Theorems 1 and 2. The first lemma, which can be found in [10], considers infinite randomly weighted sums of consistently varying tailed random variables.

Proof of Theorem 1. On the one hand, since (X, Y) ∈ BRV_{−α}(ν, F_0) and (X, Y) is independent of D, we have

P(X + Y e^{−δD} > x) ∼ F̄_0(x) ∫_0^∞ ν(A_s) H(ds), (8)

where we used the dominated convergence theorem in the second step; indeed, for any s ≥ 0, by (X, Y) ∈ BRV_{−α}(ν, F_0), the integrand ν(A_s) is integrable with respect to H(ds). By (8), F_0 ∈ R_{−α}, and ν((1, ∞]^2) > 0, we have that {X_i + Y_i e^{−δD_i}; i ∈ N} constitutes a sequence of i.i.d. random variables with regularly varying tails. Then, by using Lemma 1 and (8), we obtain the upper bound in (4) from (7) and (9). On the other hand, by (2) and (9), and since F_0 ∈ R_{−α}, for any 0 < ε < 1 and sufficiently large x a corresponding lower estimate holds; this, by (11) and the arbitrariness of ε > 0, leads to the lower bound in (4). The following lemma, due to [29], will be used in the proof of Theorem 2.
Lemma 2. Let (X, Y ) be a nonnegative random vector with marginal distributions F and G, respectively. If F ∈ C and G(x) = o(F (x)), then, regardless of arbitrary dependence between X and Y , Proof of Theorem 2. The proof is much similar to that of Theorem 1 with some slight modification. Note that by Lemma 2, for any s 0, which is integrable with respect to H(ds). Then, by the dominated convergence theorem, relation (8) can be rewritten as implying that X + Y e −δD has also a consistently varying tail. Again by using Lemma 1 we have Thus, the upper bound in (5) is derived from (7) and (12). As for the lower bound of (5), for any ε > 0 and all x c/(δε), by (12) and F ∈ C, by letting firstly x → ∞ then ε ↓ 0. Therefore, the desired lower bound in (5) can be obtained from (10) and (13). If further F ∈ R −α , relation (6) follows from (5) immediately. Simulation studies In this section, we use some numerical simulations to verify the accuracy of the asymptotic results for ψ(x; ∞) in Theorems 1 and 2. To this end, we adopt the crude Monte Carlo (CMC) method to compare the simulated ruin probability ψ(x; ∞) in (2) with the asymptotic one on the right hand side of (4) or (6). Throughout this section, we specify the renewal counting process N t in (1) to a homogeneous Poisson process with intensity λ 1 > 0, and we suppose the delay time D also follows the exponential distribution with parameter λ 2 > 0. Although we estimate the ultimate ruin probability ψ(x; ∞), when simulating it, we choose ψ(x; T ) as the replacement for large T but fixed due to (2): ψ(x; ∞) = lim T →∞ ψ(x; T ). As for Theorem 1, assume that the random pair (X, Y ) possesses the Gumbel copula of the form with parameter γ 1. It can be verified from [19,Lemma 5.2] that if (X, Y ) possesses a bivariate Gumbel copula (14) with γ > 1 and the marginal distributions F = G ∈ R −α for some α > 0, then (X, Y ) ∈ BRV −α (ν, F 0 ) for some nondegenerate limit measure ν and some reference distribution F 0 . 
Furthermore, according to the discussions in [17, Sect. 4], the limit measure ν can be calculated for any Borel set B ⊂ [0, ∞]². Assume the common marginal distribution F = G is the Pareto distribution

F̄(x) = (σ/(x + σ))^α, x ≥ 0, (15)

with parameters α > 0 and σ > 0, which implies F_0 ∈ R_{−α}. The various parameters are set to:
• c = 1, δ = 0.005, T = 1000;
• λ_1 = 0.2, λ_2 = 0.25;
For the simulated estimate ψ̂(x; T) of the ultimate ruin probability ψ(x; ∞), we first divide the time interval [0, T] into n parts, and for the given t_k = kT/n, k = 1, . . . , n, we generate m samples of N_{t_k}. The asymptotic value on the right-hand side of (4) is computed by numerical integration with ∫_0^∞ ν(A_s) λ_2 e^{−λ_2 s} ds ≈ 3.190531 and ∫_0^∞ e^{−δαt} λ_1 dt = λ_1/(δα) ≈ 22.222. In Fig. 1, we compare the CMC estimate ψ̂(x; T) in (16) with the asymptotic value given by (4) on the left, and we show their ratio on the right. The CMC simulation is conducted with the sample size m = 5 × 10⁶, the time step size T/n = 10⁻⁴ with n = 10⁷, and the initial wealth x varying from 1000 to 3500. From Fig. 1(a) it can be seen that, as the initial wealth x increases, both estimates decrease gradually and the two lines get closer. In Fig. 1(b), the ratios of the simulated and asymptotic values are close to 1. The fluctuation is due to the limited accuracy of the CMC method, which requires a sufficiently large sample size to meet the demands of high precision. Next, we consider the situation of Theorem 2, in which we assume that X still follows the Pareto distribution of form (15) with parameters α > 0 and σ > 0, but Y follows the standard lognormal distribution. Clearly, F ∈ R_{−α} and Ḡ(x) = o(F̄(x)). The following simulation aims to check the accuracy of relation (6) and the influence of different dependence structures between X and Y. For this purpose, the Gumbel copula (14) and the Frank copula are used to model the dependence between X and Y. Recall that a random pair (X, Y) possesses the Frank copula of the form

C(u, v) = −(1/β) log(1 + (e^{−βu} − 1)(e^{−βv} − 1)/(e^{−β} − 1))

for some parameter β > 0.
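The CMC procedure described above can be sketched as follows. This is a simplified illustration, not the authors' code: the main claim and by-claim are drawn independently here (the paper uses a Gumbel copula), ruin is checked only at main-claim arrival times, and all helper names are invented; the parameter defaults follow this section.

```python
import math, random

def simulate_ruin(x, T=1000.0, c=1.0, delta=0.005,
                  lam1=0.2, lam2=0.25, alpha=1.8, sigma=1.0, seed=0):
    """One sample path: True if the discounted surplus ever goes negative on [0, T]."""
    rng = random.Random(seed)

    def pareto():
        # inverse-transform sampling from F-bar(x) = (sigma / (x + sigma)) ** alpha
        return sigma * (rng.random() ** (-1.0 / alpha) - 1.0)

    t = 0.0
    discounted_claims = 0.0
    while True:
        t += rng.expovariate(lam1)            # next main-claim arrival (Poisson)
        if t > T:
            return False                      # survived the horizon
        X, Y = pareto(), pareto()             # main claim and by-claim (independent here)
        D = rng.expovariate(lam2)             # exponential delay of the by-claim
        discounted_claims += X * math.exp(-delta * t)
        if t + D <= T:                        # by-claim settled within the horizon
            discounted_claims += Y * math.exp(-delta * (t + D))
        income = c * (1.0 - math.exp(-delta * t)) / delta   # discounted premiums
        if discounted_claims > x + income:
            return True                       # ruin in discounted terms

def cmc_estimate(x, m=2000, **kw):
    """Crude Monte Carlo estimate of psi(x; T) over m independent paths."""
    return sum(simulate_ruin(x, seed=i, **kw) for i in range(m)) / m
```

As in the figures, the estimate decreases as the initial wealth x grows; a serious replication would need the copula sampling and the fine time grid described in the text.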
In the terminology of [11], the Gumbel copula exhibits asymptotic dependence between two random variables, whereas the Frank copula exhibits asymptotic independence. Hence, the former reflects a type of strong dependence, while the latter is relatively weak. The various parameters are set to:
• c = 1, δ = 0.005, T = 1000;
• λ_1 = 0.2, λ_2 = 0.25;
• γ = 1.2, α = 1.6, σ = 1, β = 2.
We continue to simulate the ruin probability through the CMC method; the procedure is similar to the previous case. We compare the two simulated estimates under the Gumbel and Frank copulas with the asymptotic value in Fig. 2(a), and we present the two ratios in Fig. 2(b). The two simulated estimates are obtained with a sample of size m = 5 × 10⁶, the time step size T/n = 10⁻⁴ with n = 10⁷, and x varying from 700 to 3500. From Fig. 2(a) it can be seen that the two simulated lines under the different copulas almost coincide, so in the setting of Theorem 2 the ultimate ruin probability is insensitive to the dependence structure between each pair of main claim and by-claim. Figure 2(b) indicates that the convergence is robust under both dependence structures. This confirms Theorem 2.
The Why and How of adopting Zero Trust Model in Organizations As organizations move most of their workloads to the public cloud and remote work becomes more prevalent, enterprise networks become more exposed to threats both from inside and outside the organization. The traditional Perimeter Security Model assumes that threats always come from the outside. It assumes that firewalls, proxies, IDS, IPS and other state-of-the-art infrastructure and software solutions curb most cyberattacks. However, there are loopholes in this assumption, which the Zero Trust Model addresses. This paper discusses the Zero Trust Model and its mandates and evaluates the model based on the various implementations by leading industry players like Google and Microsoft. Overview of Cybersecurity in Organizations Most security leaders today are not very confident that the existing security solutions work as expected when most of the organization's workloads are in the public cloud. It is therefore critically important that the deployed cybersecurity model is effective and efficient in meeting the expectations of modern working conditions. Threats from cyberattacks and hacking should have a smaller window of opportunity to compromise the assets held by organizations. In the following sections of the paper, the currently popular perimeter security model, which most enterprises have deployed today, will be compared with the zero-trust model that the industry is moving towards. Cyberattacks are various types of malicious attacks that target computer networks, information infrastructures and private computer devices, using various methods to alter, steal and destroy data. An attack can be active or passive, depending on whether it aims at altering or destroying the system's resources or data, or at gathering information from the system without altering or destroying any resource.
Cyberattacks bring loss to the organization and may result in loss of business, reputation or money; the business is ultimately compromised under such circumstances. Types of Cybersecurity Threats and Attacks Some of the most common types of cyberattacks and threats that computer networks at organizations are prone to are: § Ransomware - This is a type of cyberattack where all files in the system get encrypted and the attacker demands payment from the victim to regain the files. Perimeter Security Model and loopholes The basic assumption of the perimeter security model is that cyberattacks always arrive from outside the network. This assumption led to securing the perimeter of the network. Security devices and software methods like firewalls, load balancers, VPNs, DMZs etc. formed the basis of perimeter security. In this scenario, the company's resources and data existed mostly within the physical walls of the organization. This worked well for protecting against malware, phishing, denial-of-service and zero-day attacks. Figure 1 represents a traditional network security architecture. The disadvantages of this model are a) lack of intrazone traffic inspection b) lack of flexibility in host placement c) single points of failure. If the network locality requirements are removed in the above-mentioned network, the need for a VPN is also removed. A virtual private network allows a user to authenticate to receive an IP address on the remote network; the traffic is then tunnelled from the device to the remote network. All these technologies (VPNs, routers, switches, firewalls, etc.) concentrate advanced capabilities at the network's edge. The core is never suspected, and no security measures are enforced there in the perimeter model.
The way an external attacker could penetrate perimeter security via trojan attacks can be explained by the concept of phoning home, a term used to refer to the behaviour of software that reports network location, username or other such data to another computer. NAT is configured to allow internal users to gain network access. While there is strict control on inbound traffic, outbound traffic through NATing can consume external resources freely. Internal hosts that communicate freely with untrusted internet resources in this way can thereby be abused during that communication. Figure 2 shows how "phoning home" happens in a typical attack launched from an internal host. The attacker may send emails to all employees of a company whose addresses can be found on the internet, masquerading as a discount offer from a restaurant near the office. If an employee out of curiosity clicks the link, malware is installed that phones home and provides the attacker with a session on the employee's machine. In this fashion, the attacker first compromises a low-security-zone host and moves through the network towards the high-security zones to which that host has access. The weak points in perimeter security are usually places where firewall exceptions are made for various reasons, such as a web developer needing SSH access to production web servers or an HR representative needing access to an HR software's database to perform audits. Once a privileged workstation is located by the attacker, a keylogger may be installed and the developer's password could be stolen, which could be used to elevate privileges on the production application host. Database credentials could be stolen from the application and the database contents could be exfiltrated. Zero Trust Model and its mandates The Zero Trust Model is one where there is no trusted perimeter: everything is untrusted by default.
A device, user or application would, by default, receive the least-privileged access to the architecture even after authentication and authorization. The mandates of zero trust are: a) never trust b) always verify c) enforce least privilege. The concept of zero trust was first introduced by Forrester Research and is implemented by enterprises that need to secure highly sensitive data from cyber threats. The purpose of a zero trust architecture is to address lateral threat movement within a network by leveraging micro-segmentation and granular perimeter enforcement, based on data, user and location. This is also known as the "never trust, always verify" principle. The point of infiltration is usually not the attacker's target location. How movement or access is defined depends on the users, their interactions and their behaviour. For example, a user from the marketing department typically has no access to sensitive financial files but would have access to CRM systems and marketing content. Hence, identifying who the users are, and verifying whether their actions during a session are appropriate, is very important. It is important to note which applications the users are trying to access and whether this fits their roles and responsibilities. Figure 3 shows the zero-trust architecture at a high level. The supporting system of the architecture is the control plane. All other components form the data plane, which the control plane coordinates and configures. Requests for access to protected resources are made through the control plane, where both the user and the device must be authenticated and authorized. Fine-grained policies are applied at this layer, based on the role in the organization, the time of day or the type of device. The more secure the resource, the tighter the authentication.
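The control-plane behaviour described above can be sketched as a per-request policy check. All names, roles and policy rules below are invented for illustration; a real deployment would back this with a policy engine and signals from device inventory:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool
    device_authenticated: bool
    role: str
    resource: str
    hour: int            # hour of day, 0-23

# Least-privilege policy: each role maps to the resources it may reach
# and the hours during which access is allowed (all values hypothetical).
POLICY = {
    "marketing": {"resources": {"crm", "content"}, "hours": range(7, 20)},
    "finance":   {"resources": {"ledger"},         "hours": range(8, 18)},
}

def authorize(req: Request) -> bool:
    """Never trust, always verify: both user and device must authenticate,
    and the fine-grained policy is evaluated on every request (default deny)."""
    if not (req.user_authenticated and req.device_authenticated):
        return False
    rule = POLICY.get(req.role)
    if rule is None:
        return False
    return req.resource in rule["resources"] and req.hour in rule["hours"]
```

For instance, a fully authenticated marketing user reaches the CRM but is denied the finance ledger, matching the example in the text.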
Architecting Zero Trust Networks Once the control plane decides that the user or device request is allowed, it dynamically configures the data plane to accept traffic from that client alone. It can also coordinate the details of an encrypted tunnel between the requestor and the resource. In summary, a trusted third party is granted the ability to authenticate, authorize and coordinate access in real time. In zero trust, one must assume that the attacker can use any arbitrary IP address; hence protecting resources by IP address no longer works. Hosts, even if they share "trusted zones", must provide proper identification. Since attackers can employ passive methods and sniff traffic, host identification is not enough; strong encryption is also needed. The three components of zero trust networks are a) user/application authentication b) device authentication c) trust. Apart from the user or application, device authentication is just as important. A trust score is computed, and the application, device and score are bound together to form an agent. Policy is then applied against the agent in order to authorize the request. With the authentication/authorization components and the aid of the control plane in coordinating encrypted channels, we can be sure that every single flow on the network is authenticated. Unlike the perimeter security model, where security ends as soon as the traffic reaches the VPN concentrator, in this model security is ingrained throughout the network. Implementing zero trust brings several benefits to the business. Foremost among them is that it reduces the threat surface. It also provides increased visibility into all user activities. The Internet Threat Model is defined in RFC 3552, and it is also the model zero trust networks use to plan their security stance. Zero trust networks expand on the Internet Threat Model by also considering compromises of the endpoints.
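The agent idea above (application, device and trust score bonded together, with policy evaluated against the agent) can be sketched as follows. The scoring signals and weights are invented for illustration, not taken from the paper:

```python
# Hypothetical device/user signals and weights for a trust score in [0, 1].
WEIGHTS = {"device_managed": 0.4, "patched": 0.3, "known_location": 0.2, "mfa": 0.1}

def trust_score(signals: dict) -> float:
    """Sum the weights of the signals that are present and true."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

def form_agent(user: str, device: str, app: str, signals: dict) -> dict:
    """Bond application, device and computed score into an 'agent'."""
    return {"user": user, "device": device, "app": app, "score": trust_score(signals)}

def authorize_agent(agent: dict, required_score: float) -> bool:
    """The more sensitive the resource, the higher the required score."""
    return agent["score"] >= required_score
```

A fully managed, patched device with MFA scores close to 1 and clears a strict threshold; an unmanaged BYOD device with only MFA does not.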
The response to these threats is to proactively harden systems against compromised peers and to facilitate detection of those compromises. Detection is done by scanning devices and by behavioural analysis of the activity from each device. Frequent upgrades to the software on the devices, frequent and automated credential rotation and, in some cases, frequent rotation of the devices themselves are employed to mitigate compromises at the endpoint. All zero trust networks use a Public Key Infrastructure (PKI), which defines the set of roles and responsibilities used to securely distribute and validate public keys in untrusted networks. Entities like devices, users and applications are authenticated using digital certificates, and this is done via automation. The public PKI system relies on publicly trusted authorities to validate digital certificates, and these authorities are costly, less flexible and not always fully trustworthy; hence, zero trust networks prefer private PKI. Zero Trust Implementations In this section we discuss two implementations of the zero trust model, one from Microsoft and the other from Google. Microsoft realized that, with growing cloud-based services and mobile computing, the technology landscape for enterprises would have a higher need for a zero-trust access architecture. Figure 4 shows the different steps espoused by Microsoft to mature an organization's approach to security: a) Follow least-privilege access principles for identities, whether they are people, services or IoT devices. b) Once identity has been granted, data flows from a variety of endpoints: from IoT devices to smartphones, BYOD to partner-managed devices, and on-premise devices to cloud infrastructure. It is important to monitor and enforce device health and compliance for secure access. c) Apply controls to the applications and APIs that provide the interfaces by which data is consumed. d) It is important to classify, label and encrypt data and to restrict access to it.
e) Whether one uses on-premise infrastructure, cloud infrastructure, container-based solutions or microservices, the medium represents a critical threat vector. It is important to use telemetry to detect and flag risky behaviours. f) Segmentation of networks (micro-segmentation) and deployment of real-time threat protection, end-to-end encryption, monitoring and analytics help secure networks. g) With increased visibility, an integrated capability is needed to manage the influx of data. Microsoft identifies four scenarios to achieve zero trust: a) employees can enrol their devices into device management to gain access to company resources b) device health checks can be enforced per application or service c) when not using a managed device, employees or business guests can use a secure method to access corporate resources d) employees can have user interface options (portals, desktop apps) to discover and launch applications and resources. Microsoft's structured approach to implementing the various zero trust stages is shown in Figure 5. Figure 6 shows the reference architecture used by Microsoft, using its own services, to implement zero trust. Google's implementation of zero trust is called BeyondCorp. It began as an internal Google initiative to allow every employee to work remotely without a VPN. BeyondCorp allows for single sign-on, access control policies, an access proxy, and user- and device-based authentication and authorization. The fundamental components of the BeyondCorp system include the Trust Inferer, Device Inventory Service, Access Control Engine, Access Policy, Gateways and Resources. The block architecture diagram is shown in Figure 7. The various components of the system are: a) Resources - an enumeration of all applications, services and infrastructure that are subject to access control. b) Trust Inferer - a system that continuously analyses and annotates device states. c) Access Policy - a programmatic representation of the resources.
d) Access Control Engine - a centralized policy enforcement service referenced by each gateway. e) Device Inventory Service - a service that continuously collects, processes and publishes changes about the state of known devices. f) Gateways - a medium by which resources are accessed, such as SSH servers, web proxies etc. Evaluation of Zero Trust and Conclusion Although zero trust makes the designed network highly secure, it can still be compromised by attackers in some unique cases. The following are some scenarios and pitfalls. a) Identity theft - All decisions in zero trust networks are made based on authenticated identities. If an identity is stolen, an attacker can masquerade their way through a zero-trust network. The identity, which is linked to a secret, should therefore be protected in different ways. Since zero trust networks require both the device and the user/application to authenticate, the bar is raised compared to ordinary networks. b) DDoS attacks - While zero trust networks are concerned with authentication, authorization and confidentiality, they do not provide good mitigation against DDoS attacks. c) Endpoint enumeration - It is easy for an adversary to observe which systems talk to which endpoints in a zero-trust network. Zero trust networks guarantee confidentiality but not privacy. Physical attacks against individuals are best mitigated by a consistent process of cycling both devices and credentials. g) Invalidation - This applies to long-running actions that were previously authorized but no longer are. An action could be an application-level request or a network session. How quickly and effectively ongoing actions can be invalidated deeply affects the security response. One mitigation is to perform more granular authorizations on actions that are short-lived. Another approach is to periodically reset network sessions. The best approach is to have enforcement components track ongoing actions and take ownership of the reset.
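The short-lived-authorization mitigation for invalidation can be sketched with a simple time-to-live grant; the class and its API are hypothetical, not from any product:

```python
import time

class ShortLivedGrant:
    """An authorization that expires after ttl_seconds and must be re-granted.

    The clock is injectable so an enforcement component (or a test) can
    control time; by default the monotonic clock is used.
    """
    def __init__(self, ttl_seconds: float, now=time.monotonic):
        self.ttl = ttl_seconds
        self.now = now
        self.granted_at = None

    def grant(self):
        self.granted_at = self.now()

    def is_valid(self) -> bool:
        return (self.granted_at is not None
                and (self.now() - self.granted_at) < self.ttl)

    def revoke(self):
        # The enforcement component takes ownership of the reset.
        self.granted_at = None
```

An ongoing action checks `is_valid()` before each step, so a revoked or expired grant invalidates it within one step rather than at session end.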
h) Control plane security - It is possible to completely thwart the zero-trust architecture if the control plane's security is compromised. For sensitive systems like the policy engine, rigorous controls should be applied from the beginning. Group authentication and authorization should be considered, and changes to the control plane should be made infrequently and should be broadly visible. Another good practice is to keep the control plane systems isolated from an administrative point of view, meaning they are kept in dedicated cloud provider networks or datacentres with rigorous access control. While zero trust systems introduce new consideration points for network security, they resolve many other security issues. By applying automation and tried-and-tested security primitives and protocols, zero trust models will be able to replace the perimeter model as a more effective, secure and scalable solution.
Hybrid Harris hawks optimization with cuckoo search for drug design and discovery in chemoinformatics One of the major challenges in cheminformatics is the large amount of information present in the datasets. In the majority of cases, this information contains redundant instances that affect the analysis of similarity measurements with respect to drug design and discovery. Therefore, classical methods such as the protein data bank and quantum mechanical calculations are insufficient owing to the dimensionality of the search spaces. In this paper, we introduce a hybrid metaheuristic algorithm called CHHO–CS, which combines the Harris hawks optimizer (HHO) with two operators: cuckoo search (CS) and chaotic maps. The role of CS is to control the main position vectors of the HHO algorithm to maintain the balance between the exploitation and exploration phases, while the chaotic maps are used to update the control energy parameters to avoid falling into local optima and premature convergence. Feature selection (FS) is a tool that reduces the dimensionality of a dataset by removing redundant and undesired information, so FS is very helpful in cheminformatics. FS methods employ a classifier to identify the best subset of features. Support vector machines (SVMs) are used by the proposed CHHO–CS as the objective function for the classification process in FS. The CHHO–CS-SVM is tested on the selection of appropriate chemical descriptors and compound activities. Various datasets are used to validate the efficiency of the proposed CHHO–CS-SVM approach, including ten from the UCI machine learning repository. Additionally, two chemical datasets (i.e., quantitative structure-activity relationship biodegradation and monoamine oxidase) were utilized for selecting the most significant chemical descriptors and chemical compound activities.
The extensive experimental and statistical analyses show that the suggested CHHO–CS method accomplished much-preferred trade-off solutions over competitor algorithms from the literature, including the HHO, CS, particle swarm optimization, moth-flame optimization, grey wolf optimizer, Salp swarm algorithm, and sine–cosine algorithm. The experimental results proved that the complexity associated with cheminformatics can be handled using chaotic maps and by hybridizing the meta-heuristic methods. Related work A previously conducted study has investigated drug design and discovery, exhibiting differences in efficiency 31 . The available tools used to identify chemical compounds, known as computer-aided drug design (CADD), allow the reduction of the different risks associated with the subsequent rejection of lead compounds. CADD plays an important role and exhibits high success rates in the identification of hit compounds 32 . The CADD methodology has two related concepts: ligand/hit optimization and ligand/hit identification. Hit identification/optimization methods are based on the efficiency of the virtual screening techniques used to reach the target binding sites. They dock huge libraries of small molecules, drawn from chemical information sources such as the ZINC database, and identify compounds based on pharmacophore modeling and docking tools to predict the optimal drug–protein interactions from the information obtained from the ligand. The PyMOL software 33 is useful in selecting the optimal ligand as the optimal drug, and the AutoDock software is employed to calculate the energy 5 . Thus, genetic algorithms (GAs) are applied in the AutoDock software and AutoDock Vina 34 . Also, in 35 , fuzzy systems have been introduced to address the optimization of chemical product design.
Another important method for drug design, called QSAR, is derived from CADD to extract the description of the correlation between the different structures of a set of molecules and the response to the target 36 . Drug design and discovery are the main aspects of cheminformatics 37 . Cheminformatics can be divided into two sub-processes. The first process considers three-dimensional information; this process is called encoding. The second process, which is called mapping, comprises building a model using machine learning (ML) techniques 38 . In the encoding process, the molecular structure is transformed based on the calculation of the descriptors 36 . Moreover, the mapping process aims to discover the mappings created between the feature vectors and their properties. In cheminformatics and drug discovery, the mapping can be performed using various machine learning techniques 2,39 . Chaotic maps are random-like deterministic methods that constitute dynamic systems. They have nonlinear distributions, indicating that chaos is a simple deterministic dynamic system and a source of randomness. Chaotic optimization uses chaotic variables instead of random variables, and searches can be performed at higher speeds when compared with stochastic search methods based mainly on probabilities. In a previous study 40 , chaotic maps have been considered to improve the performance of the whale optimization algorithm and balance the exploration and exploitation phases. Also, a grey wolf optimizer and a flower pollination algorithm have been enhanced using ten chaotic maps to extract the parameters of bio-impedance models 41 . Meanwhile, in 42 , the grasshopper optimization algorithm with chaos theory is employed to accelerate its global convergence and avoid local optima. In 43 , a scheme of the CS algorithm based on a chaotic-map variable value is introduced. In fact, the methodology of hybridizing MAs is widely used in domains of optimization other than feature selection 44 .
In this vein, combinations of different ML techniques and MAs (e.g., search strategies) have been applied in many fields, with modifications and hybridizations that let one technique uplift the search efficiency of another. For instance, the salp swarm algorithm combined with k-NN based on QSAR is an interesting alternative, which provides competitive solutions 45 . Also, Houssein et al. 37 introduced a novel hybridization approach for drug design and discovery based on a hybrid HHO and SVM. In this study, however, we applied hybridization to select the chemical descriptors and compound activities in cheminformatics. Particularly, this study proposes an alternative classification approach for cheminformatics, termed the CHHO–CS-based SVM classifier, for selecting the chemical descriptors and chemical compound activities; the hybrid HHO and CS were enhanced based on chaos (C) theory. Materials and methods In this section, we briefly discuss the QSAR model, the basics of SVM, the original HHO, the original CS, and the chaotic map theory. Quantitative structure-activity relationship. QSAR provides information on the relation between the mathematical models associated with biological activity and the chemical structures. QSAR is widely used because it can detect major characteristics of the chemical compounds; therefore, it is not necessary to test and synthesize all compounds. The inclusion of ML methods in the study of QSAR helps to predict whether a compound's activity is similar to a drug-like activity for a specific disease or a chemical test. The compounds possess complex molecular structures, containing many attributes for their description; some of the features include characterization and topological indices. Therefore, molecular descriptors are highly important in pharmaceutical sciences and chemistry 4 . Support vector machine. SVM is an important supervised learning algorithm commonly used for classification 46 .
SVM extracts the data points and maps them into a high-dimensional space using a nonlinear kernel function. SVM works by searching for the optimal solution for class splitting. The solution maximizes the distance to the nearest points, defined as support vectors, and the result of SVM is a hyperplane. For obtaining optimal results, SVM has some parameters that have to be tuned. The parameter C controls the trade-off between smooth decision boundaries and the accurate classification of the training points. If C has a large value, more training points will be classified accurately, indicating that more complex decision curves will be generated by attempting to fit all the points. Different values of C can be tried for a dataset to obtain a well-balanced curve and prevent over-fitting. The parameter γ characterizes the influence of a single training example. A low gamma implies that each point has a considerable reach, whereas a high gamma implies that each point has a close reach. The implementation of SVM has been extended to cheminformatics. In this work, the steps of SVM are presented in Algorithm 1, and its graphical description is presented in Fig. 1. Harris hawks optimization. HHO is inspired by the cooperative hunting behavior of Harris's hawks. This species possesses a mechanism that allows them to catch prey even while it is escaping. This process is modeled in the form of mathematical expressions, allowing its computational implementation. HHO is a stochastic algorithm that can explore complex search spaces to find optimal solutions. The basic steps of HHO can be described with respect to various states of energy. The exploration phase simulates the mechanism used when a Harris's hawk cannot accurately track the prey. In such a case, the hawks take a break to track and locate new prey. In the HHO method, the candidate solutions are the hawks, and the best solution at every step is the prey.
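The effect of the γ parameter can be illustrated with the RBF kernel K(a, b) = exp(−γ‖a − b‖²) commonly used with SVMs; this toy function is for illustration only and is not the paper's implementation:

```python
import math

def rbf_kernel(a, b, gamma):
    """RBF kernel K(a, b) = exp(-gamma * ||a - b||^2) for two points."""
    sq_dist = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-gamma * sq_dist)

# A point is always maximally similar to itself.
same = rbf_kernel((1.0, 2.0), (1.0, 2.0), gamma=0.5)      # == 1.0
# With high gamma the kernel decays quickly with distance (close reach);
# with low gamma it decays slowly (considerable reach).
near_reach = rbf_kernel((0.0,), (1.0,), gamma=10.0)
far_reach  = rbf_kernel((0.0,), (1.0,), gamma=0.1)
```

Here `near_reach` is far smaller than `far_reach` for the same pair of points, which is exactly the "close reach" versus "considerable reach" distinction described in the text.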
The hawks randomly perch at different positions and wait for their prey using two operators, which are selected on the basis of a probability q as given by Eq. (1): q < 0.5 indicates that the hawks perch based on the locations of other population members and the prey (e.g., rabbit), while for q ≥ 0.5 the hawks perch at random positions within the population range. For facilitating the understanding of HHO, the symbols used in this algorithm are defined as follows:
1. Vector of hawk positions (search agents): X_i
2. Position of the rabbit (best agent): X_rabbit
3. Position of a random hawk: X_rand
4. Average position of the hawks: X_m
5. Maximum number of iterations, swarm size, iteration counter: T, N, t
6. Random numbers in (0, 1): r_1, r_2, r_3, r_4, r_5, q
7. Dimension, lower and upper bounds of the variables: D, LB, UB
8. Initial state of energy, escaping energy: E_0, E
The exploration step is defined as:
X(t + 1) = X_rand(t) − r_1 |X_rand(t) − 2 r_2 X(t)| if q ≥ 0.5,
X(t + 1) = (X_rabbit(t) − X_m(t)) − r_3 (LB + r_4 (UB − LB)) if q < 0.5. (1)
The average position of the hawks X_m is represented by:
X_m(t) = (1/N) Σ_{i=1}^{N} X_i(t). (2)
The average position can be obtained using different methods, but this is the simplest rule. A good transition from exploration to exploitation is required; here, a shift between the different simulated exploitative behaviors is made based on the escaping energy E of the prey, which diminishes dramatically during the escaping behavior. The energy of the prey is computed by Eq. (3):
E = 2 E_0 (1 − t/T), (3)
where E, E_0, and T represent the escaping energy, the initial escape energy, and the maximum number of iterations, respectively. The soft besiege is an important step in HHO; it occurs if r ≥ 0.5 and |E| ≥ 0.5. In this scenario, the rabbit still has sufficient energy. When this occurs, the rabbit performs random misleading jumps to escape but ultimately cannot.
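The exploration step of Eq. (1), the mean position of Eq. (2) and the energy schedule of Eq. (3) can be sketched as follows; this is a minimal illustration, not the authors' implementation:

```python
import random

def escaping_energy(E0, t, T):
    """Eq. (3): E = 2 * E0 * (1 - t / T), decaying over the iterations."""
    return 2.0 * E0 * (1.0 - t / T)

def explore(X, i, X_rabbit, LB, UB, rng=random):
    """Eq. (1): exploration update for hawk i in population X (list of vectors)."""
    q, r1, r2, r3, r4 = (rng.random() for _ in range(5))
    X_rand = rng.choice(X)                              # a random hawk
    X_m = [sum(col) / len(X) for col in zip(*X)]        # Eq. (2): mean position
    if q >= 0.5:
        # perch based on a random population member
        return [xr - r1 * abs(xr - 2.0 * r2 * xi) for xr, xi in zip(X_rand, X[i])]
    # perch based on the best agent and the mean position
    return [(xb - xm) - r3 * (LB + r4 * (UB - LB)) for xb, xm in zip(X_rabbit, X_m)]
```

The energy starts at 2·E_0 and decays linearly to 0 at t = T, which drives the switch from exploration to the besiege phases described next.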
The soft besiege step is defined by the following rules:

X(t + 1) = ΔX(t) − E |J X_rabbit(t) − X(t)|    (4)
ΔX(t) = X_rabbit(t) − X(t)    (5)

where ΔX(t) is the difference between the position vector of the rabbit and the current position at iteration t, and J = 2(1 − r_5) is the rabbit's random jump strength throughout the escaping phase. The J value varies randomly in each iteration to represent the rabbit's behavior. In the hard besiege stage, when r ≥ 0.5 and |E| < 0.5, the prey is exhausted and has no escaping strength. The Harris hawks tightly encircle the exhausted prey and make a surprise pounce. In this case, the current position is updated using:

X(t + 1) = X_rabbit(t) − E |ΔX(t)|    (6)

Considering the behavior of hawks in real life, they gradually choose the best dive toward the prey when trying to capture it in competitive situations. This is simulated by:

Y = X_rabbit(t) − E |J X_rabbit(t) − X(t)|    (7)

The soft besiege of Eq. (7) is performed with progressive rapid dives only if |E| ≥ 0.5 but r < 0.5. In this case, the rabbit has sufficient energy to escape, so a soft besiege is applied before the surprise attack. HHO models the leapfrog escape patterns of the prey and the dive movements of the hawks using Lévy flights (LF). Eq. (8) computes such patterns:

Z = Y + S × LF(D)    (8)

where S is a random vector of size 1 × D and LF is the Lévy flight function, computed using Eq. (9):

LF(x) = 0.01 × (u × σ) / |v|^(1/β),   σ = ( Γ(1 + β) sin(πβ/2) / ( Γ((1 + β)/2) × β × 2^((β−1)/2) ) )^(1/β)    (9)

Here u, v are random values in (0, 1), and β is a default constant set to 1.5. The final step in this phase is to update the positions of the hawks using:

X(t + 1) = Y  if F(Y) < F(X(t));   Z  if F(Z) < F(X(t))    (10)

where Y and Z are obtained using Eqs. (7) and (8). The hard besiege with progressive rapid dives occurs if |E| < 0.5 and r < 0.5. Here the rabbit does not have sufficient strength to escape, and a hard besiege is performed before the numerous surprise attacks made to catch and kill the prey. In this step, the hawks seek to reduce the distance between their average position and the prey.
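The Lévy-flight step of Eq. (9) translates directly to code (a sketch; u and v are drawn uniformly from (0, 1) as stated in the text, and a tiny offset guards against division by zero):

```python
import math
import random

def levy_flight(beta=1.5):
    """One Levy-flight step, Eq. (9):
    LF = 0.01 * u * sigma / |v|^(1/beta), with sigma from the gamma-
    function expression. u, v drawn uniformly from (0, 1)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.random()
    v = 1.0 - random.random()  # in (0, 1]; avoids dividing by zero
    return 0.01 * u * sigma / abs(v) ** (1 / beta)
```

Because v can be close to zero, the step length is heavy-tailed: mostly small moves with occasional long jumps, which is exactly what makes Lévy flights useful for escaping local neighborhoods during the rapid dives.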
This operator is explained as follows:

X(t + 1) = Y  if F(Y) < F(X(t));   Z  if F(Z) < F(X(t))    (11)

where the values of Y and Z are obtained using the new rules in Eqs. (12) and (13), and X_m(t) is obtained using Eq. (2):

Y = X_rabbit(t) − E |J X_rabbit(t) − X_m(t)|    (12)
Z = Y + S × LF(D)    (13)

cuckoo search (CS) 19. The cuckoo search algorithm is inspired by a bird known as the cuckoo. Cuckoos are interesting creatures not only because they can make beautiful sounds but also for their aggressive reproduction strategy: adult cuckoos lay their eggs in the nests of other host birds. Cuckoo search is based on three main rules:

1. Each cuckoo lays one egg at a time and dumps it in a randomly selected nest.
2. The best nests with high-quality eggs are carried over to the next generation.
3. The number of available host nests is fixed, and the host bird discovers the egg laid by a cuckoo with a probability ρ_a ∈ [0, 1].

Under these rules, the host bird can either throw the egg away or abandon the nest and build a completely new one. This behavior may be approximated by a fraction ρ_a of the n nests being replaced by new nests (with new random solutions). The pseudo-code of CS is shown in Algorithm 2.

chaotic maps. The majority of MAs have been established based on stochastic rules. These rules primarily rely on randomness drawn from certain probability distributions, often uniform or Gaussian. In principle, replacing this randomness with chaotic maps can be beneficial because of the significant dynamic properties associated with chaotic behavior. This dynamic mixing is important to ensure that the solutions obtained by the algorithm are diverse enough to reach any mode of a multimodal objective landscape. Approaches that use chaotic maps instead of random distributions are called chaotic optimization. The mixing properties of chaos allow the search process to proceed at higher speeds than traditional searches based on standard probability distributions 47.
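Rule 3 of cuckoo search, replacing a fraction ρ_a of the nests with fresh random solutions, can be sketched as follows (illustrative; function and variable names are hypothetical, not from Algorithm 2):

```python
import random

def abandon_nests(nests, pa=0.25, lb=0.0, ub=1.0):
    """CS rule 3: each nest is discovered with probability pa and
    replaced by a new random solution; the remaining nests are kept."""
    return [[random.uniform(lb, ub) for _ in nest]
            if random.random() < pa else list(nest)
            for nest in nests]

random.seed(0)
nests = [[0.5, 0.5], [0.2, 0.8], [0.9, 0.1]]
new = abandon_nests(nests, pa=0.25)
# On average a fraction pa of the n nests is replaced each generation,
# injecting fresh random solutions into the population.
```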
One-dimensional non-invertible maps are used to produce a set of variants of the chaotic optimization algorithm. Table 1 presents some of the prominent chaotic maps used in this study. In addition, the chaotic maps are normalized so that their outputs lie in (0, 1). The main task of the chaotic maps is to avoid local optima and speed up convergence. It is worth mentioning that the nature of chaotic maps can also increase exploration due to their intrinsic randomness. It is necessary to properly select the map that best helps each algorithm on a specific problem. Another important point is that chaotic maps do not by themselves decide between exploration and exploitation of the algorithm; rather, along the iterations, the chaotic values generated by the maps change the degree of exploration or exploitation of the search space.

the proposed CHHO-CS. In this section, the proposed CHHO-CS is explained in detail; it is used to improve the search efficiency of the basic HHO. Typically, HHO has the characteristics of acceptable convergence speed and a simple structure. However, for some complex optimization problems, HHO may fail to maintain the balance between exploration and exploitation and fall into a local optimum. The shortcomings of HHO are especially obvious for high-dimensional functions and multimodal problems. The optimization power of the basic HHO depends on the optimal solution 57. In this paper, we introduce two strategies (chaotic maps and CS) to enhance the performance of the basic HHO. The following points are worth noting: • Chaotic maps influence: applying chaos theory to the random search process of MAs significantly enhances the effect of the random search. Based on the randomness of chaotic local search, MAs can avoid falling into local optima and premature convergence.
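As a concrete example of a one-dimensional non-invertible map producing values in (0, 1), the logistic map (one of the commonly used chaotic maps, though Table 1 itself is not reproduced in this excerpt) can be generated as:

```python
def logistic_map(x0=0.7, mu=4.0, n=5):
    """Logistic chaotic map x_{k+1} = mu * x_k * (1 - x_k). For mu = 4
    and x0 in (0, 1) (excluding a few special seeds), the sequence
    stays in (0, 1) and behaves chaotically."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return xs

print([round(x, 4) for x in logistic_map()])
# prints [0.84, 0.5376, 0.9943, 0.0225, 0.0879]
```

Substituting such a sequence for uniform random draws is the core of the chaotic-optimization idea described above.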
In the basic HHO algorithm, the transition from global exploration to local exploitation is realized according to Eq. (3). As a result, the algorithm can easily fall into a local optimum. Hence, in the CHHO-CS algorithm, a new formulation of the initial escaping energy E_0 and the escaping energy factor E based on chaotic maps is employed, as demonstrated in Algorithm 3 (Scientific Reports (2020) 10:14439, https://doi.org/10.1038/s41598-020-71502-z). Figure 2 shows the influence of a chaotic map on the energy parameter E obtained by the proposed method versus the basic HHO. Notably, the curve on the left side decreases linearly, whereas the proposed non-linear energy parameter defined by the new formulation of E directs the search toward the middle of the search process to infuse enough diversity into the population during the exploitation phase. • CS method influence: in the basic HHO, the position vectors X_rand and X_rabbit are responsible for the exploration step defined by Eq. (1), which plays a vital role in balancing exploitation and exploration. Larger values of the position vectors expedite global exploration, while smaller values expedite exploitation. Hence, an appropriate selection of X_rand and X_rabbit should be made so that a stable balance between global exploration and local exploitation can be established 58. Accordingly, in the CHHO-CS algorithm, we borrow the merits of the CS method to control the position vectors of HHO. At the end of each iteration, CS tries to find a better solution: CS computes the fitness value of its new solution, and if it is better than the fitness value of the solution obtained by HHO, then X_rabbit and X_rand are updated; otherwise the values obtained by HHO remain unchanged.
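The exact chaotic reformulation of E_0 and E is given in Algorithm 3 of the paper, which is not reproduced in this excerpt; the sketch below shows one plausible form, in which the uniform random draw of E_0 is replaced by a chaotic value c ∈ (0, 1) mapped onto (−1, 1) (hypothetical names and mapping, for illustration only):

```python
def chaotic_escaping_energy(t, T, c):
    """Escaping energy with a chaotically driven initial energy:
    E0 = 2c - 1 maps the chaotic value c in (0, 1) onto (-1, 1),
    then E = 2 * E0 * (1 - t/T) as in Eq. (3)."""
    E0 = 2.0 * c - 1.0
    return 2.0 * E0 * (1.0 - t / T)

# Drive c with the logistic map x <- 4x(1-x) instead of random.random();
# the non-uniform chaotic sequence perturbs the linear decay of |E|.
c, trace = 0.7, []
for t in range(5):
    c = 4.0 * c * (1.0 - c)
    trace.append(chaotic_escaping_energy(t, 100, c))
```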
To be specific, the steps of the CHHO-CS algorithm are executed as follows: chaotic maps are employed to avoid falling into local optima and premature convergence; a balance between exploration and exploitation is maintained by CS; then, SVM is used for classification purposes. The flowchart of the proposed CHHO-CS method is represented in Fig. 3, and its pseudo-code is illustrated in Algorithm 3. It is important to mention that, for SVM and feature selection, in CHHO-CS each solution of the population is encoded as a set of indexes that correspond to the rows of the dataset. For example, if a dataset has 100 rows, a possible candidate solution in the population for five dimensions could be [10, 20, 25, 50, 80]; such values are rows with the features to be evaluated in the SVM.

feature selection. FS is a data pre-processing step used in combination with ML techniques. FS permits the selection of a subset of the desired data without redundancies, and can effectively increase the learning accuracy and classification performance. Therefore, the prediction accuracy and data understanding of ML techniques can be improved by selecting informative, non-redundant features: if two features are perfectly correlated, only one of them is needed to sufficiently describe the data. Classification is a major task in ML techniques; in classification, data are classified into groups depending on the information obtained from the different features. Large search spaces are a major challenge associated with FS; therefore, different MAs are used to perform this task.

fitness function.
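The index-based encoding described above can be sketched directly (illustrative names; the toy dataset stands in for the real ones):

```python
import random

def random_solution(n_rows, dim):
    """A candidate solution: `dim` distinct row indexes of the dataset,
    e.g. [10, 20, 25, 50, 80] for a 100-row dataset, as in the text."""
    return sorted(random.sample(range(n_rows), dim))

def decode(dataset, solution):
    """The rows selected by a solution, to be evaluated with the SVM."""
    return [dataset[i] for i in solution]

random.seed(3)
data = [[i, i * 0.1] for i in range(100)]  # toy 100-row dataset
sol = random_solution(100, 5)
subset = decode(data, sol)
```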
Each candidate solution is evaluated along the iterations to verify the performance of the proposed algorithm. Meanwhile, for classification, the dataset is divided into training and test sets. The fitness function of the proposed CHHO-CS method is defined by the following equation:

Fitness = α R(D) + β |S| / C    (15)

where R(D) refers to the classification error on a given dataset D, |S| is the length of the selected feature subset, C is the total number of features of the dataset, and α ∈ [0, 1] and β = 1 − α weight the classification performance against the subset length. T is a threshold and G is the group (class) column for the specific classifier; at each step of the algorithm, the obtained fitness value is compared with T and must be greater than it in order to accept the solution. It is important to remark that the fitness (or objective) function in Eq. (15) is also used by CS to compute the positions of X_rand and X_rabbit.

Results. The parameter settings of the algorithms used in the comparison are listed in Table 2. A common machine learning classifier, SVM, was combined with the proposed CHHO-CS method for the classification purpose.

Performance analysis using UCI datasets. The description and pre-processing of the datasets, the results, and the comparison of the proposed CHHO-CS are described in the following subsections. UCI data description. The proposed algorithm is examined on ten benchmark datasets obtained from the UCI machine learning repository 59, illustrated in Fig. 3 and available at https://www.openml.org/search. Statistical results. SVM is used for the classification task. Following the previous methodology, in this experiment the iterations are set to 1,000 for each of the 30 runs. The experimental results are reported in Tables 4 and 5. In this experiment, the CHHO-CS-Piece based on SVM achieves the best mean and Std. Classification results. Since SVM is one of the most promising methods of classification, its performance needs to be analyzed.
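A common wrapper-style feature-selection fitness consistent with the symbols above can be sketched as follows (the exact Eq. (15) is not fully legible in this excerpt, so the weighting β = 1 − α is an assumption, borrowed from the usual form of such fitness functions):

```python
def fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Wrapper feature-selection fitness (a common form consistent with
    the text): weighted sum of classification error R and relative
    subset length |S|/C, with beta = 1 - alpha. Lower is better."""
    beta = 1.0 - alpha
    return alpha * error_rate + beta * (n_selected / n_total)

# Fewer errors and fewer selected features both improve the score:
f1 = fitness(error_rate=0.10, n_selected=10, n_total=41)
f2 = fitness(error_rate=0.10, n_selected=30, n_total=41)
assert f1 < f2  # smaller subset wins at equal error
```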
In this experiment, the number of iterations is set to 1,000, and the obtained results are reported in Tables 6 and 7. Notably, the CHHO-CS-Piece based on SVM obtains the best classification accuracy, sensitivity, specificity, recall, precision, and F-measure.

performance analysis using chemical datasets. Description of chemical datasets. In this study, two different datasets are used to experimentally evaluate the performance of the proposed method. (1) The MAO dataset comprises 68 molecules divided into two classes: 38 molecules that inhibit MAO (antidepressants) and 30 molecules that do not. MAO is available at http://iapr-tc15.greyc.fr/links.html. The molecules have a mean size of 18.4 atoms, and the mean degree of the atoms is 2.1 edges. In addition, the smallest molecule contains 11 atoms, whereas the largest one contains 27 atoms; each molecule has 1,665 descriptors. (2) The QSAR biodegradation dataset comprises 1,055 chemical compounds, 41 molecular descriptors, and one class attribute; it is available at http://archive.ics.uci.edu/ml/datasets/QSAR+biodegradation. These chemical compounds were obtained from the National Institute of Technology and Evaluation of Japan (NITE). The MAO dataset is transformed into a line-notation form describing the structure, the simplified molecular-input line-entry system (SMILES), using the Open Babel software 60; E-dragon 61 is subsequently applied to obtain the molecular descriptors. The second, QSAR biodegradation, dataset was preprocessed by the Milano Chemometrics and QSAR Research Group, University of Milano-Bicocca, and is available at http://www.michem.unimib.it/.

Data preprocessing. Here, the steps required to preprocess the dataset information are presented. The information obtained from the molecules is transferred to features representing the chemical compounds 36,39.
The data obtained from the proteins are stored in a special chemical format. The Open Babel software is then used to convert this information into isomeric SMILES. The dataset contains different instances with specific multidimensional attributes (commonly two-dimensional (2D) and three-dimensional (3D)) according to the QSAR model. The E-dragon software is used to compute the descriptors from this dataset. The descriptors contain physicochemical or structural information such as solvation properties, molecular weight, aromaticity, volume, rotatable bonds, molecular walk counts, atom distribution, interatomic distances, electronegativity, and atom types. They are used for determining the values of the generated instances that belong to a class, as shown in Fig. 4.

Statistical results. Here, SVM is used for the classification task. Following the previous methodology, in the first experiment the iterations are set to 100 for each of the 30 runs. The experimental results are reported in Table 8. In this experiment, the CHHO-CS-Piece based on SVM obtains the best mean and Std. The same rank is obtained for maximizing the classification accuracy, sensitivity, specificity, recall, precision, and F-measure. In this case, the HHO-CS with SVM is second-ranked in mean value, Std, and in maximizing the classification accuracy, sensitivity, specificity, recall, precision, and F-measure. The iterations are then configured to 1,000, with the idea of obtaining the best solutions. In this case, the results are presented in Table 9, where the CHHO-CS-Piece combined with SVM is the first-ranked approach in mean value and Std; the same holds for maximizing the classification accuracy, sensitivity, specificity, recall, precision, and F-measure.

Table 2. Parameter settings of the competitor algorithms used in the comparison and evaluation (e.g., CS step length = 0.01; CHHO-CS: both HHO and CS parameters, with x_0 = rand and the default settings for the maps).

Classification results.
Since SVM is one of the most promising methods of classification, its performance needs to be analyzed. In the first experiment, the iterations are set to 100; the experimental results are reported in Table 10. In this experiment, the CHHO-CS-Piece based on SVM obtains the best results, and the HHO-CS with SVM is second-ranked on most of the assessment criteria. A final experiment for SVM is performed using 1,000 iterations, and the values reported in Table 11 confirm that the CHHO-CS-Piece combined with SVM is the best approach.

convergence analysis. This section analyzes the convergence of the proposed chaotic-map-based CHHO-CS presented in this paper. Figures 5 and 6 show the convergence curves of the competitor algorithms over the ten UCI Machine Learning Repository datasets along the iterative process, for 100 and 1,000 iterations respectively. Over the ten UCI datasets, the convergence curves plotted in Figs. 5 and 6 provide evidence that the proposed CHHO-CS method using SVM obtained the best results compared with the original HHO and CS algorithms and the other competitors. On the other hand, the convergence curves plotted in Fig. 7a-d provide evidence that, over the two chemical datasets (MAO and QSAR biodegradation), the proposed CHHO-CS method with the SVM classifier obtained the best results compared with the original HHO and CS algorithms and the other competitor algorithms under the two stop criteria (100 and 1,000 iterations). It is worthwhile to present the convergence curve because it is a graphical way to study the relationship between the number of iterations and the fitness function: it reveals the best-performing algorithm when comparing various approaches as the number of iterations increases. The convergence curves plotted in Fig. 5a-j reveal that the proposed CHHO-CS-Piece method achieved better results than the competitor algorithms. In the same context, the convergence curves plotted in Fig.
6a-j reveal that the proposed CHHO-CS-Piece method achieved better results than the competitor algorithms. To sum up, the experiments were also conducted on the MAO and QSAR biodegradation datasets, and the obtained results are interesting; due to the lack of space, we have added the results of the best map only. For example, results on the first, MAO, dataset with the SVM classification technique under the different stop conditions of 100 and 1,000 iterations are shown in Fig. 7a-d, respectively. On the MAO dataset, with 100 and 1,000 iterations, CHHO-CS-Piece with SVM is better than the other competitor algorithms. Meanwhile, for the second, QSAR biodegradation, dataset, the optimal solutions with SVM are computed with 100 and 1,000 iterations as stop conditions; here too, the CHHO-CS-Piece variant with SVM provides the optimal solutions in comparison with the other metaheuristic algorithms.

conclusion. Metaheuristic algorithms and machine learning techniques are important tools that can solve complex tasks in the field of cheminformatics. The capabilities of MAs and ML to optimize and classify information are useful in drug design. However, these techniques should be highly accurate to obtain optimal compounds. In this paper, a hybrid metaheuristic method termed CHHO-CS was proposed, which combines the Harris hawks optimizer (HHO) with operators of the cuckoo search (CS) and chaotic maps (C) in order to enhance the performance of the original HHO. Moreover, the proposed CHHO-CS method was combined with the support vector machine (SVM) as a machine learning classifier for conducting the chemical descriptor selection and classifying chemical compound activities. The main tasks of the proposed method are to select the most important features and classify the information in cheminformatics datasets (e.g., MAO and QSAR biodegradation).
The experimental results confirm that the use of chaotic maps enhances the optimization process of the hybrid proposal. It is important to mention that not all the chaotic maps are completely useful, and it is necessary to decide when to use one or another. As expected, this is dependent on the dataset and the objective function. Comparisons of the proposed CHHO-CS method with the standard algorithms revealed that the CHHO-CS yields superior results with respect to cheminformatics using different stop criteria. In the future, the proposed CHHO-CS method can be used as a multi-objective global optimization or feature selection paradigm for high-dimensional problems containing many instances to increase the classification rate and decrease the selection ratio of attributes.
Solar Temperature Variations Computed from SORCE SIM Irradiances Observed During 2003 – 2020. NASA's Solar Radiation and Climate Experiment (SORCE) Spectral Irradiance Monitor (SIM) instrument produced about 17 years of daily average Spectral Solar Irradiance (SSI) data for wavelengths 240 – 2416 nm. We choose a day of minimal solar activity, August 24, 2008 (2008-08-24), during the 2008 – 2009 minimum between Cycles 23 and 24, and compute the brightness temperature (T_o) from that day's solar spectral irradiance (SSI_o). We consider small variations of T and SSI about these reference values, and derive linear and quadratic analytic approximations by Taylor expansion about the reference-day values. To determine the approximation accuracy, we compare to the exact brightness temperatures T computed from the Planck spectrum, by solving analytically for T, or by equivalent root finding in Wolfram Mathematica. We find that the linear analytic approximation overestimates, while the quadratic underestimates, the exact result. This motivates the search for statistical "fit" models "in between" the two analytic models, with minimum root-mean-square error, RMSE. We make this search using open-source statistical R software, determine coefficients for linear and quadratic fit models, and compare statistical with analytic RMSEs. When only the linear analytic and fit models are compared, the fit model is superior at ultraviolet, visible, and near-infrared wavelengths. This again holds true when comparing only the quadratic models. Quadratic is superior to linear for both analytic and statistical models, and the statistical fits give the smallest RMSEs. Lastly, we use the linear analytic and fit models to find an interpolating function in wavelength, useful when the SIM results need adjustment to another choice of wavelengths, to compare or extend to any other instrument. Advantages of the quadratic T over the exact T include ease of interpretation and computational speed.
Introduction The Sun's temperature and its variations over timescales from hours to decades have been determined since 1978 from satellite measurements of associated variations in Total Solar Irradiance (TSI). Since the deployment of the SORCE satellite in 2003, the Sun's temperature has also been determined for a continuous range of wavelengths that span ultraviolet, visible and near-infrared wavelengths, from solar spectral irradiance (SSI) measurements across the peak of the SSI distribution. These are of great interest due to the fundamental role that solar variations play in understanding the variations of the Earth's climate (Harder et al., 2005;Eddy, 2009). Beyond decadal timescales, the solar irradiance is key to estimating the Sun's luminosity, and on the longest timescales it determines Earth's lifetime, since it determines when the Sun will exhaust its energy from fusion of hydrogen in the core (Bahcall, 2000). The average radiative temperature of the Earth is determined by an approximate balance between the amount of energy it receives from the Sun, which can be calculated from the TSI and the Earth's albedo, and the amount of energy that Earth emits into space that depends on Earth's emissivity (Stephens et al., 2015). Earth's albedo is the fraction of solar energy reflected back into space, which averages about 30%, the remainder being absorbed by the atmosphere and surface (Wild et al., 2013). To determine how solar variations impact Earth's atmosphere-ocean system at various heights, SSI must be monitored in addition to TSI. The relationship between Earth's temperature and the variability of solar irradiance was first speculated on by Herschel, and as observations have improved, so has the understanding of solar variability and its contribution to climate change (Gray et al., 2010;Bahcall, 2000). 
The variability of both TSI and SSI occurs due to variations in magnetic fields on the solar surface, which in turn cause the appearance of sunspots and faculae (Shapiro et al., 2015). Various models attempt to predict these changes over a wide range of timescales. Prior to SORCE, limited observations of SSI made studying the variability of SSI and solar brightness temperature difficult, but databases over several years now enable these calculations that are important for both Heliospheric and Earth sciences (Rottman and Cahalan, 2002). In this article, we present a study of linear and quadratic analytic and statistical approximations of the solar brightness temperature, T, using either a single "reference day" during a solar minimum, or using statistical properties over many days in the available record. The estimation of values using linear and quadratic approximations, both analytic and statistical, is of great help in simplifying the calculations of T, and in interpreting its variability. In order to determine the accuracy of the approximated values, it is necessary to compare them with very nearly exact values of T, calculated from the monochromatic exact analytic T equation, Equation D.2, derived from the Planck distribution D.1, or equivalently by applying root-finding techniques to Equation D.3, which implicitly determines T from observed values of SSI. This paper shows that the daily values of T over the SIM wavelengths are well determined from a polynomial that is quadratic in the observed daily values of SSI. The coefficients may be expressed as analytic functions of wavelength, with small RMS errors. Even smaller RMS errors are achieved with coefficients determined from statistical fits of the observed data. Advantages of the quadratic T over the exact T include ease of interpretation and computational speed. The linear term is interpreted as the sensitivity of T to changes in SSI, a concept widely used in climate studies.
The speed gain will become important as time-dependent models of solar variations are further developed. The article is structured as follows: Section 2 describes the data analyzed here, provides the online link to download it, and describes the temporal and wavelength range. Section 3 summarizes the methodology used for the analysis of the SSI spectral data, the calculation of exact values of solar brightness temperatures T, as well as linear and quadratic analytic and statistical approximations of T. Section 4 discusses the results of exact computations of T, the time series of observed SSI, and the time-series comparisons of exact and approximate T values. Section 5 concludes by summarizing the key results and suggests future directions for research related to variations in solar irradiance and brightness temperature. Finally, the article contains nine appendices that derive results referenced in Sections 1 – 5. Several figures and tables are discussed throughout. Readers may interact with several of the plotted results by going to the following dashboard that was coded in Microsoft Power BI. Dashboard link: http://wayib.org/solar-temperature-variations-relative-toa-quiet-sun-day-in-august-2008/. Data The TSI and SSI data were downloaded from the University of Colorado's LASP Interactive Solar Irradiance Datacenter (LISIRD), based on measurements made by instruments onboard the Solar Radiation and Climate Experiment (SORCE) satellite. The data is free and publicly available here: https://lasp.colorado.edu/lisird/data/sorce_sim_ssi_l3/. The SORCE Total Irradiance Monitor (TIM) instrument provides records of Total Solar Irradiance (TSI), while the Spectral Irradiance Monitor (SIM) instrument provides records of the Solar Spectral Irradiance (SSI). Both instruments provide daily averages, with TIM beginning 2003-02-25 and SIM beginning 2003-04-14, and both ending on 2020-02-25 when the SORCE instruments were passivated (i.e., turned off).
We employ throughout the latest "final" data versions, v19 for TSI, and v27 for SSI, as discussed in Kopp (2020) and Harder (2020), respectively. All dates in this article are given in the format YYYY-MM-DD, in accord with https://www.iau.org/static/publications/stylemanual1989.pdf. The SIM measures SSI as a function of wavelength over the range from 240 nm to 2416 nm. Though measurements of SSI were made prior to SORCE, for example by the UARS SOLSTICE (operating during 1991 -2001), SIM was the first to provide SSI for a continuous range of wavelengths across the peak of the solar spectrum that occurs near 500 nm, and well into the near-infrared (IR) wavelengths, with sufficient precision to determine true solar variations (see, e.g., Harder et al., 2009;Lee, Cahalan, and Dong, 2016). Note that all irradiance data from SORCE, including all TSI and SSI values, are adjusted to the mean Earth-Sun distance of one astronomical unit, 1 AU. Doppler corrections are also made to remove any variations due to the satellite orbit. Absolute and relative calibrations are enabled by a variety of laboratory measurements carried out at both the University of Colorado's LASP (Laboratory for Atmospheric and Space Physics), and at NIST facilities. Onboard instrument degradation is monitored and corrected for. Our focus in this paper is on the day-to-day variability at near-ultraviolet, visible, and near-infrared wavelengths. For this, we rely primarily on the high precision and repeatability of TIM and SIM, more than on the absolute calibration. The high quality of TIM and SIM data has been amply documented in the literature. Due to operational difficulties encountered, particularly after 2011 as SORCE aged, there are a limited number of days where the records are given as NA (not available) or no values were recorded. These were omitted in all calculations reported here. 
As an example of the SSI records measured by the SIM instrument, the time series of the solar spectrum from 2003 to 2020 is shown in Figure 4 for a fixed wavelength, 656.20 nm, which corresponds to the hydrogen alpha (Hα) transition in the Balmer series. For much of the data analysis, open-source R and Python software was used, as well as commercial software including Wolfram Mathematica, and Microsoft Excel. Mathematica enabled precise computation of the brightness temperatures of the SSI data, using efficient interpolation and root-finding methods, and provided a check on exact values computed from the analytic equation for T derived from the Planck distribution for the spectral irradiance, shown in Appendix D. For more details on the TSI and SSI data used here, see the "release notes" for SORCE TIM v19, and for SORCE SIM v27, available from the NASA Goddard Space Flight Center Earth Sciences Data and Information Services Center, or from the University of Colorado's LASP (Harder, 2020;Kopp, 2020). Methodology For the radiation from a blackbody, the irradiance spectrum may be computed theoretically using the Planck distribution. However, the Sun is not a perfect blackbody, due to wavelength-dependent processes in the Sun's atmosphere. Large deviations from the Planck distribution are observed, as we show below. However, it is very useful for interpreting irradiance observations to define a solar "brightness temperature," either for the TSI, integrating all wavelengths, or for the solar spectral irradiance, SSI, at each available wavelength. This is the temperature for which the irradiance computed from a Planck distribution coincides with the irradiance observed by an instrument outside Earth's atmosphere, for example TIM for the wavelength-integrated irradiance, the TSI, or SIM for the wavelength spectrum of irradiance, SSI. 
Computation of the brightness temperature from TSI, T eff , is simply a matter of explicitly solving the Stefan-Boltzmann Law for T eff , with a result proportional to the one-quarter power of TSI. Appendices A, B and C discuss the importance of TSI and related quantities. Appendix D displays the Equations D.2 and D.3 that determine the value of the spectral brightness temperature T as an explicit function of the observed SSI for each fixed wavelength. Equations D.1 and D.3 also determine T as an implicit function of the observed SSI, by solving Equation D.3 for T as a function of SSI at each fixed wavelength using a root-finding procedure. We employ a root-finding algorithm developed in Wolfram Mathematica, using the initial condition T = 5770 K, where T is chosen near the effective radiative temperature computed using TSI = 1360.8 W/m^2 as provided by the SORCE TIM (Kopp and Lean, 2011). These two approaches produce the same values of T , referred to in this paper as the "exact" values, and each method provides a check on the other. SORCE SIM provides a daily SSI record for each associated wavelength from 240 nm to 2416 nm, so that over the 17-year period there is a large amount of data. To handle the large number of records, algorithms were developed in R, Python and Mathematica, to provide approximate values of T . These approximate alternatives allow more rapidly computed values of T for any date, given a fixed set of wavelengths. In this article, we investigate linear and quadratic analytic approximations as a function of the observed SSI values, derived in Appendix E. Below, it will be shown that these approximations bracket the exact values, which motivates the development of linear and quadratic fit approximations, that minimize the root-mean-square-error (RMSE) across a large range of days, which can include all available days. These fit approximations are developed in Appendix G.
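Since Appendix D is not reproduced in this excerpt, the sketch below assumes the standard form of the Planck spectral irradiance diluted to 1 AU, SSI(λ, T) = π (R_sun/d)² · (2hc²/λ⁵) / (exp(hc/λkT) − 1), and shows both the explicit analytic inversion for T and a bisection root-finder as a cross-check, together with T_eff from the Stefan-Boltzmann Law (constants in SI units; the 5770 K test value is illustrative):

```python
import math

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants
SIGMA = 5.670374419e-8                                # Stefan-Boltzmann
R_SUN, AU = 6.957e8, 1.495978707e11                   # meters
DILUTE = (R_SUN / AU) ** 2                            # geometric dilution
GEOM = math.pi * DILUTE                               # radiance -> 1 AU irradiance

def planck_ssi(lam, T):
    """Blackbody spectral irradiance at 1 AU, W m^-2 m^-1."""
    return GEOM * 2 * h * c**2 / lam**5 / (math.exp(h * c / (lam * k * T)) - 1)

def brightness_temp(lam, ssi):
    """Explicit analytic inversion of planck_ssi for T (assumed D.2 form)."""
    return h * c / (lam * k) / math.log(1 + GEOM * 2 * h * c**2 / (lam**5 * ssi))

def brightness_temp_bisect(lam, ssi, lo=1000.0, hi=10000.0, tol=1e-8):
    """Root finding on planck_ssi(lam, T) = ssi, as a cross-check
    (planck_ssi is monotonically increasing in T)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if planck_ssi(lam, mid) < ssi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def t_eff(tsi):
    """Effective temperature from TSI via the Stefan-Boltzmann Law:
    TSI = sigma * T^4 * (R/d)^2, so T = (TSI / (sigma * (R/d)^2))^(1/4)."""
    return (tsi / (SIGMA * DILUTE)) ** 0.25

lam = 500e-9
ssi = planck_ssi(lam, 5770.0)  # round-trip: invert back to 5770 K
```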
For the development of the linear and quadratic analytic approximations, a Taylor expansion is used (see Appendix E). Given the derivatives of T with respect to the SSI, this expansion represents T in terms of polynomial functions of SSI. To keep the models simple, only the first and second terms of this expansion are considered. To apply a Taylor expansion it is necessary to have a reference value around which to expand. For this, we choose the SSI on a single "reference" day during the 2008 - 2009 solar minimum of Cycle 23. Namely, we choose 2008-08-24, and label that day's exact values (T_o, SSI_o). With the observed value of SSI_o and the associated computed value of T_o during a solar minimum, the linear and quadratic coefficients were calculated for the analytic approximation models. The remainder of this section discusses the time series of SSI and estimated T values. Section 4 then compares the approximate values with the exact values, and also compares the analytic approximations with analogous fit approximations that use coefficients obtained by minimizing the RMSE (root-mean-square error) over all days, and over two selected ranges of days. To compare an estimate with the exact value of the brightness temperature, we compute difference values, and relative differences, or delta values, as:

difference = T_exact − T_approx, delta = difference / T_exact. (3.1)

In addition to the linear and quadratic analytic approximations obtained with the Taylor expansion, a linear and quadratic fit model is developed in Appendix G, with the help of R statistical software. The linear and quadratic fit models have coefficients that depend on a given temporal range of available data, and not only on the chosen reference day, as is the case with the analytic approximations. In Section 4 we report results for the full range of available days, as well as for two subranges, those of "early" and "late" days, R1 and R2, respectively.
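The difference, delta, and RMSE statistics used throughout Section 4 can be computed as in the following minimal Python sketch (the function name is ours; it follows Equation 3.1, with delta reported in ppm as in the figures):

```python
import math

def comparison_stats(exact, approx):
    """Per-day difference (exact - approx) and delta in ppm (Equation 3.1),
    plus RMSE and mean error (bias) over the supplied range of days."""
    diffs = [e - a for e, a in zip(exact, approx)]
    deltas_ppm = [1e6 * d / e for d, e in zip(diffs, exact)]
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    bias = sum(diffs) / len(diffs)
    return diffs, deltas_ppm, rmse, bias
```

Negative differences mean the approximation overestimates the exact brightness temperature; the bias sign distinguishes the linear (over-) from the quadratic (under-) estimates discussed in Section 4.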
The comparison between the brightness temperatures calculated with the linear analytic and linear fit approximations is shown in the tables, along with a comparison between the linear coefficients. In order to make the computations very explicit, in Appendix H an example of the calculation of the brightness temperature is given for the linear and quadratic analytic approximation methods, as well as for the linear and quadratic fit approximation methods, for a randomly selected day. In Appendix I, a method of rapid interpolation is given for the linear analytic and fit coefficients, valid over a broad range of wavelengths satisfying 400 nm ≤ λ ≤ 1800 nm.

Results

Before considering the temporal variations of SSI observed by the SORCE TIM and SIM instruments over the 17 years, 2003 - 2020, we first consider the wavelength variations of SSI on our chosen "reference day" 2008-08-24. Figure 1 shows this SSI_o wavelength dependence observed on the reference day, in green, and for comparison the Planck irradiance distributions computed for temperatures T = 4500 K, 5770 K, 6500 K using Equation D.1, in blue, tan, and red, respectively. The lower and upper Planck temperatures are seen to give computed SSI values that bracket the observations of SSI_o for this wavelength range, while the computed SSI for the intermediate 5770 K (tan) approximately follows the observed SSI_o (green). The observed value coincides with the computed 5770 K Planck value at only a few wavelengths; elsewhere the observed values of SSI_o lie above or below the Planck curve. The value of TSI (historically the "solar constant") measured by the Total Irradiance Monitor (TIM) instrument on the reference day is TSI_o = 1360.4704 W/m^2 and is associated with an effective radiative temperature of T_o = 5771.2685 K, close to the T = 5770 K used in computing the intermediate tan curve in Figure 1 (Kopp and Lean, 2011). Figure 1 Solar Spectral Irradiance (SSI) vs.
wavelength for reference day 2008-08-24, plotted in green, as measured by the SIM instrument onboard SORCE. For comparison, we also show Planck distributions for 6500 K in red, 5770 K in tan, and 4500 K in blue. The Planck distributions use Equation D.1 for a fixed temperature, with wavelength as the independent variable, transformed to spectral irradiance by multiplying by the factor α_s = π * (R_s/AU)^2 = 6.79426 × 10^-5, with R_s the Sun's mean radius, and AU the mean Earth-Sun distance, as in Equation D.3. Figure 2 is a zoom of Figure 1 for the wavelength range 240 nm to 680 nm. The apparently irregular bumps in this plot, and in Figure 1, are due to well-known Fraunhofer lines in the solar spectrum, smoothed to the SIM instrument's bandpass, which varies from about 1 nm width near wavelength 240 nm, up to almost 30 nm near 1000 nm, then decreases slightly (Harder et al., 2005). The width of a typical atomic Fraunhofer line is of order 1 Å, or 0.1 nm, so the observed bumps are smoothed clusters of several nearby lines. A few of the contributing atomic lines are indicated in the labels on the vertical dashed lines. For example, the green dashed line near 430 nm is labeled CaFeg to indicate that lines of calcium, iron, and oxygen (g-band) are all included within the plotted bump in the green line. For identification of g-band lines (both atomic and molecular) and their variability related to magnetic field strength, see Shelyag et al. (2004). Effects of ionization thresholds are also seen, such as just above the Ca II H and K lines near 400 nm, where photon energies are near 3.1 eV. TSI provides key observational data about the Sun and is needed to compute the Sun's luminosity and lifetime (see Appendix B). TSI is not a solar constant, as had been assumed prior to the satellite era. Its value varies due to turbulent magnetic processes on the Sun.
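The bracketing of the observed SSI by the 4500 K and 6500 K Planck curves in Figure 1 can be checked numerically. The following is our own Python sketch, using the constants k_1, k_2, and α_s from Appendix D; the Hα irradiance value of about 1.523 W/m^2/nm is read off Figure 4:

```python
import math

# Constants from Appendix D (wavelength in nm, SSI in W/m^2/nm)
K1 = 1.19268e20
K2 = 1.43877e7
ALPHA_S = 6.79426e-5  # pi*(R_s/AU)^2, Equation D.3

def planck_ssi(lam_nm, T):
    """alpha_s * B(lambda, T): Planck curve rescaled to spectral irradiance."""
    return ALPHA_S * K1 / (lam_nm**5 * (math.exp(K2 / (lam_nm * T)) - 1.0))

# The three Planck curves of Figure 1, evaluated at the H-alpha wavelength:
ssi_4500 = planck_ssi(656.2, 4500.0)
ssi_5770 = planck_ssi(656.2, 5770.0)
ssi_6500 = planck_ssi(656.2, 6500.0)
```

The observed Hα irradiance (about 1.523 W/m^2/nm) falls between ssi_4500 and ssi_6500, and close to ssi_5770, consistent with the bracketing seen in Figure 1.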
TSI variations amount to about 0.1% (1000 ppm) of the mean value over the four solar cycles so far observed by satellite (Cycles 21 through 24), since 1978. The average solar luminosity, and thus the TSI, is determined by nuclear processes in the Sun's core. These change over a much longer timescale than the solar cycle, up to billions of years, as nuclear processes transform hydrogen into helium. The present value of TSI, and thus the solar luminosity, provides a good estimate of the Sun's lifetime, and thus of when the Sun's nuclear fuel will eventually run out. Such calculations are shown in Appendix B, where it is shown that the current best TSI value at solar minimum, 1360.80 ± 0.50 W/m^2 (Kopp and Lean, 2011), gives the overall lifetime of the Sun as approximately 10.70 billion years. The current estimated age of the Sun, and of our solar system, is about equal to Earth's estimated age of 4.54 billion years (±50 million years). Hence, this leaves about 6.2 billion years, more or less, before the Sun will expand into a Red Giant, leaving a white dwarf star behind.
The importance of TSI in climatic variability has been mentioned, for example in computing Earth's global average effective radiative temperature. Appendix C estimates the effective temperature of the Earth as 255.48 K, using the TSI on the reference day, and Earth's average albedo of 0.29 (Stephens et al., 2015). TSI is the integral of SSI over all wavelengths, and SSI in turn determines the solar spectral brightness temperature T at each wavelength. Determining T as well as SSI is useful in understanding the physical and chemical processes that take place on the Sun. For example, Figure 3a, a plot of the brightness temperature, T_o, on the reference day, shows a broad peak above 1600 nm. This is associated with transitions in hydrogen ions H− (1 proton + 2 electrons). Photons with a wavelength λ < 1644 nm are dominated by the H− bound-free transitions, while photons with λ > 1644 nm are absorbed and re-emitted in H− free-free transitions (Wildt, 1939). The H− ion is the major source of optical opacity in the Sun's atmosphere, and thus the main source of visible light for the Sun and similar stars. Now, we consider the temporal variations of SSI. Figure 4 shows the time series of the irradiance corresponding to a fixed wavelength, in particular for Hα (656.2 nm), the longest wavelength in hydrogen's Balmer Series. The variability of the SSI can be seen, with the deepest minimum occurring early in the record, during Oct - Nov 2003. The spike that goes below 1.523 W/m^2/nm is associated with the Halloween solar storms, a series of solar flares and coronal mass ejections that occurred from mid-October to early November 2003, during the declining phase of Solar Cycle 23.

Figure 3 Plot 3a shows the brightness temperature T_o on the reference day (cf. Figure 1). Plot 3b is a zoom into the same short-wavelength range as in Figure 2. As in Figure 2, several bumps are labeled with contributing atomic lines, such as the green dashed line near 430 nm, labeled CaFeg (calcium, iron, oxygen g-band), and the rise due to the ionization threshold is evident near 400 nm, just above the Ca II H and K lines. In both plots, the temperature at each wavelength was computed using a Mathematica root-finding procedure to solve for T in Equation D.3, SSI = α_s B(λ, T), with SSI the observed value.

On the slower year-to-year timescale, the Sun's activity declines into the much quieter period of the solar minimum during 2008 - 2009 (Kopp, 2016). The solar minimum implies about a 0.1% decrease in solar energy that arrives on Earth, causing the Earth's temperature to decrease slightly (Gray et al., 2010).

Figure 4 Time series of irradiance for all records of daily average data from the full 17 years of SIM data, version 27, downloaded from LISIRD. In this case, we have chosen the Hα wavelength, 656.2 nm. In this plot, it is evident that there is a minimum of solar activity in mid-2008. We choose as a reference day 2008-08-24, and consider variations about this day to approximate the temperatures on all other days.

After this solar minimum, solar activity increases again, as Cycle 24 sunspots and other solar activity increase in intensity into a solar maximum in 2014 - 2015, before declining again into a quieter minimum period of 2019 - 2020. As can be seen in Figure 5, the brightness-temperature time series for Hα is similar to the temporal variability of the SSI for Hα; it is evident that they are in phase. As SSI data are extended beyond the end of SIM, by TSIS-1 and successor missions, the solar cycles will become more evident, as happened with TSI (Solanki, Krivova, and Haigh, 2013). It is important to emphasize that the spectral brightness temperatures are wavelength-dependent radiative temperatures of the Sun, the temperatures at which the SSI data measured by the satellite coincide with what is obtained using the Planck distribution (Trishchenko, 2005).
Figure 6 shows a plot of the linear analytic approximation of brightness temperature compared with the exact value, i.e., the value obtained by Equation D.2, or the root-finding solution of Equations D.1 and D.3. The linear analytic approximation is given by neglecting the quadratic term in Equation E.18, taking as reference the date during the solar minimum, 2008-08-24. Figure 6 shows that this approximation closely overlays the exact values. To more clearly see the difference between the exact value and the linear analytic approximation, Figure 7a shows the difference, exact − approximation, in units of mK = 10^-3 K, and Figure 7b the delta, difference/exact (Equation 3.1), in parts per million (ppm). The negative differences in Figures 7a and b show that the linear analytic approximation overestimates the exact value of the brightness temperature. The root-mean-square error (RMSE) is 412.4545 × 10^-6 K, i.e., very small, which explains why such differences are not evident in Figure 6. A significant increase in variability is seen in 2011 and afterwards, hence Figure 7a also displays the RMSE for both the earlier, quieter period and the later, noisier period. Some of this increased noise is due to Solar Cycle 24, but some is likely also due to the aging of the satellite and the SIM instrument.

Figure 5 Time series of the temperature T calculated in Wolfram Mathematica for all records of solar spectral data with fixed wavelength, Hα = 656.20 nm, using Equation D.3. We term the root-finding solution of Equation D.3 the "exact" value of the temperature, to distinguish it from the two analytic approximations (linear and quadratic) described in Equation E.18, with E.10 and E.17, and from the two statistical "fit" approximations (also linear and quadratic) described in Appendix G.

Figure 8 shows the analogous comparison for the quadratic analytic approximation, plotted against the exact value from root finding with D.1 and D.3. It looks nearly identical to the analogous Figure 6 for the linear analytic approximation.
However, in the plot analogous to Figure 7, we plot in Figure 9 the difference between the exact value and the quadratic analytic approximation, and here the results are quite different from the linear case. Figure 9a shows the difference, exact − approximation, in units of µK = 10^-6 K, and Figure 9b the delta, difference/exact, in parts per million (ppm) for the quadratic case. The positive differences in Figures 9a and b show that the quadratic approximation underestimates the exact value of the brightness temperature, though its values are much closer than the linear ones, with the RMSE reduced to 0.3428 × 10^-6, more than 1000× smaller than the linear case in Figure 7, and Table 2 shows that the Mean Error (Bias) is also more than 1000× smaller than for the linear model. Comparing Figures 7 and 9 (and Table 2), the opposite signs of the bias suggest there may be a better approximation that lies "in between" the linear and quadratic approximations. Below, we will show that the "fit" approximations do typically provide such improvements. In accord with intuition, the decrease in RMSE from Figure 7a to Figure 9a shows that the approximation improves as more terms of the Taylor expansion are kept. The improvement removes the most significant figures from the RMSE of Figure 7a, suggesting a rapidly converging series. This indicates that finding an improved "in between" fit approximation will be a challenge, as the quadratic analytic approximation is excellent. Table 2 supports this last point, comparing the RMSE for the linear and quadratic analytic models with the RMSE for the linear and quadratic fit models at the same Hα wavelength used in Figures 7 and 9. Indeed, though the linear fit model RMSE is about 2.85× smaller than the linear analytic RMSE, the quadratic fit model RMSE is in turn about 2.81× smaller than the quadratic analytic RMSE, which is itself more than 1000× smaller than the linear analytic RMSE.
Hence, at the Hα wavelength, the quadratic fit model is more precise even than the very precise quadratic analytic model. Figures 7 and 9 show that the exact value lies between the linear and quadratic analytic approximations. Tables 1, 3 and 4 extend Table 2 to wavelengths 285.5 nm, 855.93 nm, and 1547.09 nm, respectively. As noted for Hα, at these near-ultraviolet and near-infrared wavelengths the linear fit model also has a smaller RMSE than the linear analytic model. Also, if we compare the two quadratic models, then again for 285.5 nm, 855.93 nm, and 1547.09 nm the quadratic fit model wins; for 285.5 nm and 855.93 nm it does so by an even larger factor than for Hα, by factors of 10.34 and 7.78, respectively, while for 1547.09 nm the quadratic fit model wins over the quadratic analytic by a factor of 2.00. If we take these four wavelengths as representative, then the quadratic fit model is preferred, and nearly reproduces the exact values, despite the high precision of the quadratic analytic model. Some applications may not require such high precision. If we choose to restrict ourselves to linear models, the fit model is still preferred, though it is a close call at 1547.09 nm, where the linear analytic model RMSE is 1.05× larger than the linear fit RMSE, so the fit offers only a 5% improvement. At that wavelength, the linear analytic model may be sufficient, and indeed an analytic approach has some advantages. For example, it may be optimized for a particular range of dates of particular interest, and the single coefficient interpreted as a "linear sensitivity" of temperature to irradiance at this wavelength. Note that the SIM instrument registers a higher variability of spectral irradiance for shorter wavelengths, i.e., 285.5 nm and 355.93 nm. This occurs because the more energetic photons (according to the Planck-Einstein relationship E = hc/λ) allow for more transition and ionization processes than at near-infrared wavelengths, such as those shown in Figure 10, 855.93 nm and 1547.09 nm.
Continuing with the plan of simplifying the calculations of the brightness temperature, which is the central objective of this article, Figure 11 shows plots of the quotients of the linear analytic coefficients for certain wavelengths. From the behavior of the curve of these quotients of the coefficients a, a polynomial interpolation was obtained, as discussed in Appendix I. This provides a simple mathematical expression useful in calculating the linear coefficients for any wavelength in the range from 400 nm to 1800 nm. With this, calculating the brightness temperature becomes simpler and faster than with Equation E.18 and D.3, and remains valid across this wavelength range.

To compare the linear analytic and linear fit models, Figures 12 and 13 show the differences between the coefficients of the linear analytic approximation model, Equations E.10 and E.18 omitting the quadratic term, or G.5, and those of the linear fit model, Equation G.1. Note that the fit coefficient a in Equation G.1 is computed using R software, and depends on the range of days supplied. This can range over the full set of days available from SORCE SIM (17 years of daily data). For comparison we also compute a_R1 over the set of days in the first half of the data, which have the smaller RMSE values shown in Figure 7a, as well as a_R2 over the late-day range, with larger RMSE. In short, the early and late ranges are R1 = 2003 - 2010 and R2 = 2011 - 2020. All three ranges (overall, R1, and R2) are shown in Figures 12 and 13. In the figures we can see that the values obtained with Equations G.1 and G.5 (with E.10) do not vary much for wavelengths between 400 nm and 1400 nm; therefore the brightness temperature values calculated in that range of wavelengths also do not differ much between the linear analytic and linear fit models. Note that the a_R1 and a_R2 values lie on either side of the overall value of a, which in every case lies in between, for each wavelength.
Summary and Conclusions

Our results and conclusions may be summarized as follows:

(i) The linear and quadratic analytic approximation models, Equation E.18, with Equation E.10 for the linear term, E.17 for the quadratic term, and D.3 to compute B from SSI, simplify calculations of the solar brightness temperature T on any chosen day for a fixed wavelength, with B or SSI as a single variable.

(ii) The linear analytic approximation overestimates the exact values of T, while the quadratic analytic approximation underestimates the exact values, but has a much smaller RMSE (rms error) than the linear one.

(iii) By using the full dataset to find coefficients that minimize the RMSE, we find linear and quadratic "fit" approximations that lie closer to the exact values for representative wavelengths, as can be seen from the "fit" RMSE values in Tables 1 to 4 being smaller than the corresponding analytic RMSEs, i.e., (fit RMSE)/(analytic RMSE) < 1 for both linear and quadratic cases, for near-ultraviolet, visible, and near-infrared wavelengths.

(iv) For wavelengths in between the tabulated ones, Equations I.1 and I.2 provide a smooth interpolating polynomial function of wavelength, which is simpler and faster to apply than Equation E.10 in the analytic case, or the R software in the fit case, and accurate for any wavelength within a broad range across the peak of the SSI, extending into near-infrared wavelengths that are of particular importance in modeling Earth's climate.

The statistical measure used to quantify the differences between values calculated by the linear and quadratic analytic approximation models and the exact values of T obtained from Equation D.2 (or root finding in Mathematica software) is the RMSE (root-mean-square error). The values in Table 2 show that for the Hα wavelength the RMSEs for the linear (412.455 × 10^-6 K) and quadratic (0.3428 × 10^-6 K) analytic approximation models are small, and therefore the deviations between the estimated and exact values are small.
Table 1 shows that for a wavelength of 285.5 nm the RMSEs for both analytic models remain small, though larger than for Hα. For both of these wavelengths, the quadratic analytic model is superior to the linear analytic model. Tables 3 and 4 show that for the longer near-infrared wavelengths 855.93 nm and 1547.09 nm this pattern continues, with the quadratic analytic model being superior to the linear analytic one. The fact that at all four wavelengths the quadratic analytic RMSE is smaller than the linear analytic RMSE suggests that further terms in the Taylor expansion may converge toward the exact values over the full wavelength range. However, we do not have a proof of convergence. Even if the series does converge, there is only a suggestion, not a guarantee, that it will converge to the exact value given by Equation D.2. Comparisons of the linear analytic coefficient (Equation E.10 or G.5) with the coefficient of the linear least-squares fit of the data performed with the statistical packages of R software are shown in Figures 12 and 13. The linear fit model shows the line that best represents the entire data set, whereas the linear analytic approximation model has its maximum accuracy on the chosen reference day. Gaps in the data, the primary one extending from 2013-07-20 to 2014-03-12 (Harder, Beland, and Snow, 2019), have a direct influence on the coefficient of the linear fit, because the solar spectrum measurement instruments SIM A and SIM B showed significant differences from the spectrum measured at the beginning of 2011, as can be seen for example in Figure 1 of the article of Harder, Beland, and Snow (2019). Despite the good quality of the two analytic approximations, we find that the two fit models provide better "in between" approximations. The most accurate of the four approximations considered here is the quadratic fit model.
We have seen that the brightness temperatures it produces are in most cases indistinguishable from the exact temperatures that are found as roots of the equation that defines the brightness temperature, SSI = α_s B(T), where B is the Planck distribution, and α_s is the solid angle subtended by the Sun at the mean Earth distance. There will soon be new opportunities to apply and extend this study. Both TIM and SIM instruments are now acquiring daily data onboard the International Space Station. The new record, begun on 2018-03-14, had sufficient overlap with SORCE to enable the prior dataset to be adjusted to match TSIS-1 (https://lasp.colorado.edu/lisird/data/sorce_sim_tav_l3b/). Currently, TSIS-1 extends to 2021-07-20 and continues to be extended. TSIS-1 will be succeeded by TSIS-2, which is expected to continue the record beyond the peak of Solar Cycle 25. We look forward to testing and applying the approximations studied here to future solar-cycle data, to enable improved understanding of the Sun's irradiance and temperature variations.

Appendix A: Total Solar Irradiance and the Sun's Effective Temperature

The Sun is not a blackbody, since the brightness temperature varies significantly with wavelength, as shown in Figure 1. However, we can define an "effective" radiative temperature T_eff, using the blackbody formula, with the Stefan-Boltzmann constant σ = 5.670374 × 10^-8 W/m^2/K^4, as follows:

TSI = α * σ * T_eff^4. (A.1)

Here, TSI is the total solar irradiance (historically the "solar constant"), while α is the ratio between the total area of the Sun, with radius R_s = 6.957 × 10^8 m, divided by the area of a sphere centered on the Sun with radius equal to one astronomical unit, AU = 149 597 870 700.0 m, so that

α = (R_s/AU)^2 = 2.16268 × 10^-5. (A.2)

The energy flow emitted by the Sun decreases as it diverges from the Sun's photosphere, decreasing isotropically as 1/distance^2.
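The inversion of Equations A.1 and A.2 for T_eff can be sketched in Python as follows (the helper name is ours; the constant values are those quoted above):

```python
SIGMA = 5.670374e-8        # Stefan-Boltzmann constant, W/m^2/K^4
R_SUN = 6.957e8            # solar radius, m
AU = 149_597_870_700.0     # astronomical unit, m
ALPHA = (R_SUN / AU) ** 2  # Equation A.2, ~2.16268e-5

def t_eff_from_tsi(tsi):
    """Invert TSI = alpha * sigma * T_eff^4 (Equation A.1) for T_eff."""
    return (tsi / (ALPHA * SIGMA)) ** 0.25
```

With the reference-day TSI_o = 1360.4704 W/m^2 this returns T_eff ≈ 5771.27 K, matching the value quoted in the Results section.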
TSI and SSI values measured by satellites like SORCE are adjusted to the mean Earth-Sun distance of one AU, thus removing variations due to the satellite orbit. Earth receives a small fraction of the energy emitted by the Sun and recorded by satellites, and that fraction will be considered in the following.

Figure 13 The relative error between the analytic linear coefficient a and the fit linear coefficient a, where the fit coefficient a is calculated in the same three ways as in Figure 12, namely using the full available time period 2003 - 2020, or R1 = 2003 - 2010, or R2 = 2011 - 2020.

Appendix B: Solar Luminosity and the Sun's Lifetime

Questions of how the Sun shines, and how old it is, have been objects of interest since ancient times, but it was not until the scientific revolution that there was an opportunity to give definitive answers, first from classical physics, then using ideas from relativity, quantum mechanics and nuclear physics. With the development of modern theories, the answer became well understood (Bethe, 1939; Bahcall, 2000; Adelberger et al., 2011). The solar luminosity, L, is the total solar power, the total radiative energy emitted from the Sun per second, isotropically in all directions. The best current estimate of L relies on the measurements of TSI, which is the solar power per m^2 at the mean Earth-Sun distance of one AU. To obtain L from TSI, multiply by the total number of square meters on a sphere with radius equal to the Earth-Sun distance, so using the TIM value from Kopp and Lean (2011) gives

L = TSI * 4π * AU^2 = 1360.8 W/m^2 * 4π * AU^2 = 3.82696 × 10^26 W. (B.1)

The energy produced by nuclear reactions in the Sun's core is determined using Einstein's E = mc^2, where m is the mass loss in the primary reaction, which in the Sun is the conversion of four H atoms into one He atom, as explained by Hans Bethe in his classic 1939 paper "Energy production in stars," for which he won the Nobel Prize (Bethe, 1939).
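Equation B.1, together with a rough lifetime estimate in the spirit of Appendix B, can be sketched in Python. The burn fraction (about 10% of the Sun's mass) and the 0.7% mass-to-energy efficiency of the 4H → He reaction are our illustrative assumptions here, not necessarily the paper's exact inputs:

```python
import math

AU = 149_597_870_700.0   # m
TSI = 1360.8             # W/m^2, TIM value (Kopp and Lean, 2011)

# Equation B.1: luminosity = TSI times the area of a 1-AU sphere.
L_SUN = TSI * 4.0 * math.pi * AU**2

# Illustrative lifetime estimate (our assumptions): ~10% of the Sun's mass
# undergoes 4H -> He fusion, releasing ~0.7% of the rest-mass energy.
M_SUN = 1.989e30         # kg
C = 2.99792458e8         # m/s
SECONDS_PER_YEAR = 3.156e7
lifetime_yr = 0.007 * 0.1 * M_SUN * C**2 / L_SUN / SECONDS_PER_YEAR
```

This reproduces L ≈ 3.827 × 10^26 W and a lifetime of order 10 billion years, consistent with the ≈ 10.70 billion years quoted in the main text.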
If the constant value of L is replaced by a linearly increasing L, while the Sun is also assumed to be about halfway through its lifetime, then the above lifetime estimate is not significantly altered, since a dimmer younger Sun is compensated by a brighter older Sun.

Appendix C: Earth's Temperature from TSI

Earth intercepts a small fraction of the solar energy, casting a small shadow on the sphere of area 4π * AU^2. The absorbed energy is determined by the product of the TSI, the Earth's cross section (π * R_E^2), and the Earth's absorptivity, 1 − α, where α is the Earth's albedo. The absorbed fraction determines Earth's global mean temperature (North, Cahalan, and Coakley, 1981; Gray et al., 2010). Earth's temperature then determines the total thermal energy that Earth emits back into space. The balance between the absorbed solar energy and the emitted thermal energy determines Earth's effective radiative temperature, T_E. This condition of radiative equilibrium at the top of Earth's atmosphere is expressed as

TSI * (1 − α) * π R_E^2 = σ * T_E^4 * 4π R_E^2. (C.1)

Dividing through by Earth's surface area gives the global average energy emitted and absorbed in the form

σ * T_E^4 = (1 − α) * TSI/4, (C.2)

where α = Earth albedo = 0.29. (Note, the albedo symbol α used in Equation C.2 is not the α of Equation A.2.) Now, knowing that the energy absorbed and radiated by the Earth are equal, in thermal equilibrium, the effective temperature of the Earth can be calculated as

T_E = ((1 − α) * TSI / (4σ))^(1/4) = 255.48 K. (C.3)

TSI impacts the average and long-term variability of Earth's temperature and, of course, its variations have impacted climate for millions of years (Kopp and Lean, 2011; Solanki, Krivova, and Haigh, 2013). TSI variations can be understood as a combined impact of variations in sunspots and faculae, as well as variations occurring over the entire Sun. Models based on these have been key tools in studies of Earth's climate (e.g., Kopp and Lean, 2011; Foukal and Lean, 1985).
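The radiative-balance computation of Appendix C can be sketched in Python (the function name is ours; albedo 0.29 as in Stephens et al., 2015):

```python
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def earth_t_eff(tsi, albedo=0.29):
    """Equation C.2: (1 - albedo)*TSI/4 = sigma*T_E^4, solved for T_E."""
    absorbed = (1.0 - albedo) * tsi / 4.0  # global-average absorbed flux
    return (absorbed / SIGMA) ** 0.25
```

With the reference-day TSI this gives T_E ≈ 255.5 K, in line with the 255.48 K estimate quoted in the text; setting the albedo to zero shows how strongly the reflected fraction lowers the equilibrium temperature.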
Appendix D: "Exact" Solar Brightness Temperatures

As already mentioned in previous sections, the Sun is not a pure blackbody. The SSI (solar spectral irradiance) shows evident deviations from a pure Planck distribution, due to atomic absorption and ionization processes in the solar atmosphere (see Figures 1 and 2). An especially helpful way to study these deviations is by transforming the SSI at each wavelength λ into a solar brightness temperature T. To do this, at each fixed wavelength, we solve the Planck distribution for T. That is, we solve

B(λ, T) = k_1 / (λ^5 * (e^(k_2/(λ T)) − 1)), (D.1)

where k_1 = 10^20 c_1 = 1.19268 × 10^20 W m^2/sr and k_2 = 10^7 c_2 = 1.43877 × 10^7 K m are constants, and the units of B are W/m^2/nm. Solving for T gives the following, which we term the "exact" solar brightness temperature:

T = k_2 / (λ * ln(1 + k_1/(λ^5 * B))). (D.2)

To obtain the solar spectral irradiance SSI from the Planck distribution B requires an integral over the solid angle of the Sun at the Earth's mean orbital distance. This gives

SSI = α_s * B(λ, T), (D.3)

where α_s = π * α, so from Equation A.2 we have α_s = 6.79426 × 10^-5. Note the wavelength λ is kept fixed, and for each wavelength there is a corresponding brightness temperature T, determined by the value of temperature for which the satellite's SSI observation coincides with the Planck distribution for that λ and T. Equivalently to Equation D.2, to solve Equation D.3 in Mathematica software, we use the initial condition T = 5770 K and α_s = π * (R_s/AU)^2, and apply the function FindRoot to Equation D.3, which gives the same values of T as the explicit "exact" Equation D.2. The next appendix shows how to approximately calculate the brightness temperatures T for any fixed wavelength, having only the observed SSI values (or equivalently B) as a variable, because all other parameters are defined on a single "reference day" and so do not vary from day to day.
It is important to remember that the wavelength is fixed, and consequently the parameters T_o, SSI_o, (dT/dSSI)_o and higher derivatives (evaluated on the reference day) vary with wavelength. For the SIM data used in this article to produce the plots, the wavelengths range from 240 nm to 2416 nm. For each wavelength in this range, there is a set of parameters that can be used to determine a time series of brightness temperatures T for all other days in the date range [2003-04-14 to 2020-02-26].

Appendix E: Analytic Approximations for Brightness Temperature

This appendix derives two simple analytic representations of the daily brightness temperatures that take advantage of the fact that, at a given wavelength, the SSI values are very nearly equal from day to day, and typically vary by less than 1%. The analytic approximations express the daily temperature values on any given day, T, at each fixed wavelength, by a Taylor expansion of the exact value of T as an analytic function of SSI, as given in Equation D.2. The expansion is about the value of SSI and T on a given "reference" day (T_o, SSI_o), as follows:

T ≈ T_o + (dT/dSSI)_o (SSI − SSI_o) + (1/2) (d^2T/dSSI^2)_o (SSI − SSI_o)^2 + ... (E.1)

We focus on the "linear approximation" that keeps just the first derivative, and then the "quadratic approximation" that keeps the first two derivatives. Higher-order terms will be neglected, except in the discussion of convergence. Since SSI is directly proportional to B by a constant rescaling, as given in Equation D.3, we may write E.1 as

T ≈ T_o + (dT/dB)_o (B − B_o) + (1/2) (d^2T/dB^2)_o (B − B_o)^2 + ... (E.2)

In order to compute the first and second derivatives via the chain rule, we introduce two new variables, y and z, as follows. Let

y = ln(1 + k_1/(λ^5 B)), (E.3)

and

z = e^y = 1 + k_1/(λ^5 B), (E.4)

so that B = k_1/(λ^5 (z − 1)), and differentiating with respect to B gives

dy/dB = −(λ^5/k_1) * (e^y − 1)^2/e^y. (E.8)

Therefore, Equations D.2 and E.3 imply

T = k_2/(λ y). (E.9)

We may compute the derivative of Equation E.9 using the chain rule, employing Equation E.8, to obtain

dT/dB = (k_2 λ^4)/(k_1 y^2) * (e^y − 1)^2/e^y. (E.10)

To evaluate E.10 on the reference day, we set B = B_o equal to the value on that day, compute y = y_o from Equation E.3, and substitute that into Equation E.10.
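The first-derivative formula E.10, and the linear analytic approximation built from it, can be checked numerically. The following Python sketch is our own; the reference values λ = 656.2 nm and B_o = 22416 are illustrative, chosen to correspond roughly to the Hα reference-day values:

```python
import math

K1 = 1.19268e20  # Appendix D constants (wavelength in nm)
K2 = 1.43877e7

def t_exact(lam, B):
    """Equation D.2: exact brightness temperature as a function of B."""
    return K2 / (lam * math.log(1.0 + K1 / (lam**5 * B)))

def dT_dB(lam, B):
    """Equation E.10, with y = ln(1 + k1/(lam^5 B)) as in Equation E.3."""
    y = math.log(1.0 + K1 / (lam**5 * B))
    ey = math.exp(y)
    return (K2 * lam**4 / (K1 * y**2)) * (ey - 1.0)**2 / ey

# Linear analytic approximation: E.18 without the quadratic term,
# expanded about an illustrative reference value B_o.
LAM, B_O = 656.2, 22416.0
T_O, SLOPE = t_exact(LAM, B_O), dT_dB(LAM, B_O)

def t_linear(B):
    return T_O + SLOPE * (B - B_O)
```

A central finite difference of t_exact reproduces dT_dB, and t_linear stays within roughly 0.01 K of t_exact for a 0.5% change in B, illustrating why the linear analytic approximation tracks the exact values so closely.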
In order to compute the second derivative, we note that Equation E.10 is already in a form analogous to E.9, namely

dT/dB = T^(1)(y(z(B))). (E.11)

Therefore, as in computing Equation E.10, we take the derivative of E.11 using the chain rule, E.8 and E.10, to obtain

d²T/dB² = (k2 λ^4/k1) [ −(2/y³)(e^y − 1)²/e^y + (1/y²) d/dy((e^y − 1)²/e^y) ] × [ −(λ^5/k1)(e^y − 1)²/e^y ]. (E.12)

On the right side we applied the product rule to compute dT^(1)/dy from Equation E.10, giving the two terms in the left square brackets, and used Equation E.8 to substitute into the right square brackets. Evaluating the first term in the left bracket of Equation E.12 allows us to factor out 1/y² from both terms. We also combine the rightmost constant −λ^5/k1 with the leftmost constant k2 λ^4/k1 to yield the following:

d²T/dB² = −(k2 λ^9)/(k1² y²) · (e^y − 1)²/e^y · [ d/dy((e^y − 1)²/e^y) − (2/y)(e^y − 1)²/e^y ]. (E.13)

We apply the product rule to the remaining derivative, and use Equation E.4, which implies dz/dy = z, to give

d/dy((z − 1)²/z) = 2(z − 1) − (z − 1)²/z. (E.14)

Distributing the z and factoring out (z − 1) yields

d/dy((z − 1)²/z) = (z − 1)(z + 1)/z. (E.15)

Substituting E.15 into E.13 and writing (e^y − 1)²/e^y = (z − 1)²/z gives the second derivative in final form:

d²T/dB² = −(k2 λ^9 (z − 1)³)/(k1² y² z²) [ (z + 1) − (2/y)(z − 1) ]. (E.17)

To evaluate Equation E.17 on the reference day, just as for Equation E.10, we set B = B_o, equal to the value observed on that day, substitute that into Equation E.3 to compute y = y_o, and substitute the value of y = y_o into Equation E.17. Substituting these first and second derivatives of T evaluated on the reference day into Equation E.2, and neglecting all higher-order terms, we obtain the quadratic analytic approximation given by

T ≈ T_o + (dT/dB)_o (B − B_o) + (1/2)(d²T/dB²)_o (B − B_o)², (E.18)

where the linear term is computed using E.10, and the quadratic term is computed using E.17. Omitting the quadratic term in E.18 gives the linear analytic approximation.
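The chain-rule result of Equation E.10 can be verified against a finite-difference derivative of D.2. The sketch below is my own check (not the article's code); it uses the Appendix D constants and the Hα reference value SSI_o quoted in Appendix H, and also recovers the linear analytic coefficient a = (dT/dSSI)_o ≈ 973.2 reported in Table 6.

```python
import math

K1, K2, ALPHA_S = 1.19268e20, 1.43877e7, 6.79426e-5  # Appendix D constants

def T_of_B(lam, B):
    """Exact brightness temperature, Eq. D.2, via y = ln(1 + k1/(lam^5 B))."""
    return K2 / (lam * math.log1p(K1 / (lam**5 * B)))

def dT_dB(lam, B):
    """First derivative of Eq. D.2 with respect to B, Eq. E.10."""
    y = math.log1p(K1 / (lam**5 * B))
    return K2 * lam**4 / (K1 * y**2) * math.expm1(y)**2 / math.exp(y)

# Halpha reference point: SSI_o from Appendix H, converted to B via Eq. D.3.
lam = 656.20
B_o = 1.526558 / ALPHA_S

# Central-difference derivative vs. the analytic Eq. E.10.
h = B_o * 1e-6
numeric = (T_of_B(lam, B_o + h) - T_of_B(lam, B_o - h)) / (2 * h)
analytic = dT_dB(lam, B_o)

# Linear analytic coefficient a = dT/dSSI = (dT/dB) / alpha_s.
a_lin = analytic / ALPHA_S
```

The agreement of `a_lin` with the tabulated 973.20427 K/(W m⁻² nm⁻¹) is limited only by the rounding of the quoted constants.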
As mentioned before, the Sun is not a blackbody, but we can calculate its associated effective temperature by using the Stefan-Boltzmann equation and the TSI (total solar irradiance) measured directly by satellites above the atmosphere, solving Equations A.1 and A.2 to obtain

T_eff = (TSI/(α σ))^{1/4}, (F.1)

where σ = 5.670374 × 10^−8 W/m²/K⁴ is the Stefan-Boltzmann constant, and from Equation A.2 α = 2.16268 × 10^−5. Taking the derivative of F.1 gives an expression for the change in effective temperature with a change in TSI as follows:

dT_eff/dTSI = T_eff/(4 TSI). (F.3)

Evaluated on the reference day, this gives the linear approximation

ΔT_eff ≈ 1.06053 K/(W/m²) × ΔTSI. (F.4)

Since TSI typically varies by about 0.1% or less, Equation F.4 is quite accurate for most days. An extreme case is the "Halloween" event of 2003-10-29, when a large sunspot grouping dropped the temperature by about 3.6670 K below the reference-day T_eff of 2008-08-24. Equation F.4 estimates a 3.6605 K decrease from the reference day, i.e., a 0.0065 K underestimate, which is 0.1773% of the drop, or 0.0001% of (T_eff)_o = 5771.2685 K. Substituting the 2003 "Halloween" values of T_eff and TSI into Equation F.3 gives instead of F.4 the coefficient 1.06255. This day of minimum T_eff is also the day of maximum coefficient of sensitivity over the full 17-year SORCE SIM record, and it is 0.19% larger than the coefficient on the reference day, shown in Equation F.4. Conversely, the minimum coefficient over the 17 years occurs on the day of maximum T_eff, 5773.1820 K, which occurred on 2015-02-26. That minimum coefficient is 1.05947, 0.10% less than the reference-day ratio in F.4. The average coefficient over all days is 1.06029. The fact that this average value is 0.02% less than the reference day's implies that the linear approximation in Equation F.4 typically slightly overestimates the changes in T_eff. The same is true for SSI: the linear analytic approximation of brightness temperature T, obtained by dropping the quadratic term in Equation E.18, also has a positive mean error, or bias, as shown for four representative wavelengths in Tables 1-4.
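Equations F.1, F.3 and F.4 are easy to reproduce. A short Python sketch follows (my own check; the reference-day TSI is back-computed here from (T_eff)_o = 5771.2685 K rather than taken from the TIM record):

```python
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W/m^2/K^4
ALPHA = 2.16268e-5   # solid-angle factor from Eq. A.2

def t_eff(tsi):
    """Effective temperature from TSI, Eq. F.1 (Stefan-Boltzmann)."""
    return (tsi / (ALPHA * SIGMA)) ** 0.25

def sensitivity(tsi):
    """dT_eff/dTSI = T_eff / (4*TSI), Eq. F.3."""
    return t_eff(tsi) / (4.0 * tsi)

# Reference-day TSI implied by (T_eff)_o; about 1360.5 W/m^2.
TSI_O = ALPHA * SIGMA * 5771.2685 ** 4
```

Evaluating `sensitivity(TSI_O)` reproduces the reference-day coefficient 1.06053 K/(W/m²) of Equation F.4.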
Those tables also show that inclusion of the quadratic term in E.18 largely removes this positive bias, leaving a very small mean error, and small RMSE, as discussed in more detail in the text. In principle, there are two ways to determine the total solar irradiance (TSI). The first is by using the SORCE TIM instrument to obtain a direct measurement. The second is to use the SSI measured by the SIM instrument and integrate as wide a range of wavelengths as possible. As expected, there is a shortfall in the value computed by integrating the SIM data compared to what is measured by TIM, mainly due to missing energy above the longest wavelengths measured by SIM, approximately 2400 nm. This TIM-SIM difference is shown in Figure 2 of the article by Harder, Beland, and Snow (2019), and amounts to 146.128 W/m 2 . This must be subtracted from the value measured by TIM, or added to the integrated SIM value, for comparisons to be made between TIM and SIM. In this paper we focus on SSI, though both SSI and TSI must be considered in the study of Earth's climatic variations. Appendix G: Linear and Quadratic Fit Model Comparing Figures 7 and 9 shows that the linear analytic approximation, using only the first two terms in E.18, overestimates the exact T , given in D.2, while the quadratic approximation, using all three terms in E.18, though closer to the exact, slightly underestimates. To consider a possible "in between" approximation, this appendix introduces linear and quadratic "fit" models. These statistical "fit" models calculate the brightness temperature as a function of wavelength using R software. In the linear and quadratic analytic approximation models discussed in earlier appendices, estimates of solar brightness temperatures T are made based on the measured solar spectrum of the chosen reference day, and the exact brightness temperatures computed for that day, which occurs during a time of minimum solar activity. 
By contrast, the fit models we discuss below take into consideration the statistical properties of the full set of daily data over the 17 years of the SORCE mission. A statistical model that provides a least-squares fit to the solar spectral irradiance (SSI) data, obtained in the R software with linear regression, may be written as

T = a SSI + b, (G.1)

where a, b are constants for a specific wavelength and T is the brightness temperature. In the same way, R software may compute a least-squares quadratic fit of the form

T = A SSI² + B SSI + C. (G.2)

These two fit models express linear and quadratic dependences, respectively, between SSI and T. We obtain, using code developed in open-source R software, simple models that best fit the data, for which the mean square error is minimized. We rewrite the analytic Equation E.1 (or the equivalent E.2), up to the linear term, as follows:

T = a′ SSI + (T_o − a′ SSI_o), with a′ = (dT/dSSI)_o, (G.5)

which has a similar form to Equation G.1. This will allow us to make a comparison between values of the constants that appear in Tables 7 and 9. The linear analytic coefficient a′, which appears in Table 6, and SSI_o are constant, because they are evaluated for the data of the reference day that appears in Table 5. It is important to note that the constant a′ defined in this part is the same constant given in the linear analytic term in Equation E.1. We follow a similar procedure for the quadratic analytic model, rewriting Equation E.1 to obtain

T = A′ SSI² + B′ SSI + C′, (G.6)

which is a mathematical expression similar to Equation G.2, where the values of the constants are given as:

A′ = (1/2)(d²T/dSSI²)_o,
B′ = (dT/dSSI)_o − (d²T/dSSI²)_o SSI_o,
C′ = T_o − (dT/dSSI)_o SSI_o + (1/2)(d²T/dSSI²)_o SSI_o².

The values of constants A′, B′, C′ in the analytic model, and A, B, C in the fit model, are shown in Tables 8 and 10, along with values of T obtained with the quadratic analytic model and the quadratic fit model for certain wavelengths.
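The article's fits are done in R; as an illustration of the same idea, the hedged Python sketch below generates synthetic "daily" SSI values around the Hα reference value (these numbers are not SIM measurements), computes the exact T of Equation D.2 for each, and fits the linear model of Equation G.1 by ordinary least squares. Because SSI varies by well under 1%, the recovered slope is close to the linear analytic coefficient of Table 6.

```python
import math

# Constants of Appendix D (lambda in nm); data values below are synthetic.
K1, K2, ALPHA_S = 1.19268e20, 1.43877e7, 6.79426e-5

def exact_T(lam, ssi):
    """Exact brightness temperature, Eq. D.2, with SSI = alpha_s * B (Eq. D.3)."""
    return K2 / (lam * math.log1p(K1 * ALPHA_S / (lam**5 * ssi)))

# Synthetic "daily" SSI values within +/-0.4% of the Halpha reference value.
lam, ssi_o = 656.20, 1.526558
xs = [ssi_o * (1 + d) for d in (-0.004, -0.002, 0.0, 0.002, 0.004)]
ys = [exact_T(lam, s) for s in xs]

# Ordinary least-squares fit of the linear model T = a*SSI + b (form of Eq. G.1).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
```

On real daily data the fit coefficient differs slightly from the analytic one, which is exactly the comparison made in Tables 7 and 9.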
As mentioned above, the linear fit is obtained with regression techniques in the R software, using all the available spectral irradiance data for a fixed wavelength; consequently, if the range of data changes, the linear fit changes as well, because the analysis is performed over all the data. For this reason, the available data (2003 to 2020) were partitioned into an "early period" designated R1 (2003 to 2010) and a "late period" designated R2 (2011 to 2020), to allow a comparison between the linear coefficients, shown in Tables 11 and 12.

Appendix H: Example Calculations of Brightness Temperature Using the Analytic and Fit Models

This appendix illustrates the T approximations by considering an example of a randomly chosen day. For this example, results from the linear and quadratic analytic models are compared with the results of applying the linear and quadratic fit models for the randomly chosen day. In Tables 1-4, the RMSE (root-mean-square-error) and the ME (mean error, or bias), computed over all the available days in the SIM v27 record, are shown for all four models, linear and quadratic, analytic and fit. To better explain how the analytic and fit models work, consider the following example for the wavelength of λ = 656.20 nm (Hα). As mentioned above, the data of one particular day during solar minimum, in this case 2008-08-24, is taken as the reference; the values of B_o and y_o on that day determine the linear analytic coefficient for Hα.

Table 12. Values of the linear fit constant a calculated using two ranges of dates, R1 and R2: a_R1 is the linear coefficient calculated with R software using data from 2003 to 2010, while a_R2 is calculated using data from 2011 to 2020. The relative errors obtained by comparison with the analytic coefficient are also given.

Similarly, using the same B_o but varying wavelength, and so varying y_o, Table 6 shows the linear analytic model coefficients for the wavelengths of 285.50 nm, 656.20 nm, 855.93 nm and 1547.09 nm.
The linear analytic approximation is used to estimate the value of T for some other day, knowing the SSI of that day. If we choose a random day, for example 2011-10-10, the SSI of that day (see Table 5) at the wavelength of Hα is SSI = 1.527622 W m⁻² nm⁻¹. Using Equation E.18, without the quadratic term, then yields

T = 5772.410671 K + 973.20427 × (1.527622 − 1.526558) K = 5773.44616 K. (H.2)

This is the linear analytic approximation for the brightness temperature for the "example" date 2011-10-10. It is close to, but slightly larger than, the value computed by the "exact" Equation D.2 (or root finding in Mathematica software), T = 5773.44598 K (see Table 5). The error of root finding is very small compared to either the analytic or statistical estimates, for the relatively smooth functions involved here, so in this article both the result of using Equation D.2 and the root-finding result are referred to as the "exact" value. If we consider the root-mean-square-error (RMSE) in Table 2, the value obtained with the linear approximation, 5773.44616 K ± 0.00041 K, agrees very well with the exact value, since the exact value lies within this range. The exact value is obtained by applying Equation D.2. The same result as H.2 is obtained when using Equation G.5 with the values of the constants in Table 7. For the quadratic analytic approximation, we calculate the brightness temperature by including the quadratic term of Equation E.18. Similarly, the quadratic coefficients for the other wavelengths are given in Table 6, but rounded to three places. Therefore, the value of the Hα temperature obtained by including the quadratic term in H.2 is 5773.44597715 K, which, when rounded to five places right of the decimal, agrees well with the exact. If we consider the RMSE, rounded to eight places, the range of the brightness temperature is 5773.44597715 ± 0.00000034 K, which also includes the exact value, and is much closer to the exact than is the linear.
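The arithmetic of the worked example H.2 can be reproduced directly (reference values from Tables 5 and 6 of the text):

```python
# Reference-day values for Halpha (656.20 nm):
T_o = 5772.410671   # K, exact brightness temperature on 2008-08-24
SSI_o = 1.526558    # W m^-2 nm^-1
a_lin = 973.20427   # K per (W m^-2 nm^-1), linear analytic coefficient

# Linear analytic estimate for the example day 2011-10-10 (Eq. H.2):
SSI_day = 1.527622
T_lin = T_o + a_lin * (SSI_day - SSI_o)
```

As the text notes, the linear estimate (5773.44616 K) slightly exceeds the exact value 5773.44598 K, the positive bias that the quadratic term removes.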
The above was for a wavelength of 656.20 nm. Table 5 shows the SSI and T values for this same wavelength, as well as for wavelengths of 285.50 nm, 855.93 nm and 1547.09 nm, and also the SSI and exact T values on 2011-10-10. Tables 7 and 8 show estimated results for 2011-10-10, using the analytic Equations G.5 and G.6, respectively. Finally, in Tables 9 and 10 the parameters of the linear G.1 and quadratic G.2 fit models are shown, together with the value of the estimated brightness temperature on 2011-10-10. Comparing these results with the values of the RMSE and ME listed in Tables 1, 2, 3 and 4, the results are shown to be in excellent agreement with the exact values obtained from Equation D.2 or the Mathematica root-finding method.

Appendix I: Temperature Sensitivity Ratios and Rapid Interpolation

This final appendix provides a rapid method of interpolation between the measured wavelengths. The ratio of a small change of the Sun's effective temperature divided by the associated change of the TSI is 1.06053 K/(W/m²), as given by Equation F.4. The analogous spectral relationship is the linear coefficient a of the linear analytic approximation, the ratio of the change of spectral brightness temperature divided by the associated change in the SSI, the solar spectral irradiance, from the linear term in Equation E.18, using E.10 and D.3. To match the units of this TSI sensitivity ratio, it is appropriate to divide a by the wavelength λ. This allows determination of the wavelength for which the ratio a/λ is closest to the TSI value 1.06053 K/(W/m²). The spectral values are given in Table 13, which shows a minimum value of a/λ = 1.2280, occurring near 486.3 nm; elsewhere a/λ > 1.2280 > 1.06053 K/(W/m²), the TSI sensitivity. For interpolation between measured wavelengths, it is useful to obtain a simple analytic mathematical expression for the brightness temperature sensitivity ratio a/λ.
An interpolation function for the ratio a/λ may be expressed as

a/λ = 6.043791 × 10⁻⁶ λ² − 6.076933 × 10⁻³ λ + 2.851032, (I.1)

where λ is any wavelength that satisfies 400 nm ≤ λ ≤ 1800 nm. With the previous expression one can calculate a for any wavelength λ within this range, and then use the SSI value on any day to compute the associated brightness temperature T, using the linear analytic approximation, which can be written

T = T_o + λ (a/λ) (SSI − SSI_o). (I.2)

Equation I.2 represents a method of calculating the brightness temperature that is simpler and faster than the linear term in Equation E.18, and valid for any wavelength within a broad range.
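The interpolation recipe can be sketched in a few lines of Python (my own illustration). At Hα the interpolated coefficient λ·(a/λ) comes out near 962 K/(W m⁻² nm⁻¹), within about 1% of the tabulated analytic value 973.2; that residual is the price of the simple quadratic fit of Equation I.1.

```python
def ratio_a_over_lambda(lam):
    """Interpolated sensitivity ratio a/lambda, Eq. I.1 (valid 400-1800 nm)."""
    return 6.043791e-6 * lam**2 - 6.076933e-3 * lam + 2.851032

def T_interp(lam, ssi, T_ref, ssi_ref):
    """Rapid linear analytic estimate, Eq. I.2, with a = lam * (a/lam)."""
    return T_ref + lam * ratio_a_over_lambda(lam) * (ssi - ssi_ref)
```

For the 2011-10-10 Hα example of Appendix H, `T_interp(656.20, 1.527622, 5772.410671, 1.526558)` lands within a few hundredths of a kelvin of the exact 5773.44598 K.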
Turnover rate of coenzyme A in mouse brain and liver

Coenzyme A (CoA) is a fundamental cofactor involved in a number of important biochemical reactions in the cell. Altered CoA metabolism results in severe conditions such as pantothenate kinase-associated neurodegeneration (PKAN), in which a reduction of the activity of pantothenate kinase isoform 2 (PANK2), an enzyme of CoA biosynthesis, lowers the level of CoA in the brain. In order to develop a new drug aimed at restoring a sufficient amount of CoA in the brain of PKAN patients, we looked at its turnover. We report here the results of two experiments that enabled us to measure the half-life of pantothenic acid, free CoA (CoASH) and acetylCoA in the brains and livers of male and female C57BL/6N mice, and of total CoA in the brains of male mice. We administered (intrastriatally or orally) a single dose of a [13C3-15N-18O]-labelled coenzyme A precursor (fosmetpantotenate or [13C3-15N]-pantothenic acid) to the mice and measured, by liquid chromatography-mass spectrometry, unlabelled- and labelled-coenzyme A species appearance and disappearance over time. We found that the turnover of all metabolites was faster in the liver than in the brain in both genders, with no evident gender difference observed. In the oral study, the CoASH half-life was 69 ± 5 h (male) and 82 ± 6 h (female) in the liver, and 136 ± 14 h (male) and 144 ± 12 h (female) in the brain. AcetylCoA half-life was 74 ± 9 h (male) and 71 ± 7 h (female) in the liver, and 117 ± 13 h (male) and 158 ± 23 h (female) in the brain. These results were in accordance with the corresponding values obtained after intrastriatal infusion of labelled fosmetpantotenate (CoASH 124 ± 13 h, acetylCoA 117 ± 11 h and total CoA 144 ± 17 h in male brain).

Introduction

Coenzyme A (CoA) has a clearly defined role as a cofactor for a number of oxidative and biosynthetic reactions in intermediary metabolism [1,2].
It is involved in cellular metabolism (fatty acid synthesis, the tricarboxylic acid cycle, oxidation of fatty acids, bile acid conjugation). What is still lacking, however, is the CoA turnover rate in the target organ. This information is crucial to design novel disease treatments aimed to restore and maintain physiologically relevant levels of CoA: the CoA half-life would directly impact the frequency of therapeutic dosing. To address this knowledge gap, here we report the determination of the half-lives of CoASH, acetylCoA and total CoA in vivo in the brains and livers of mice. Formation and disappearance over time of endogenous CoA biosynthetically labelled with 13C and 15N was measured using liquid chromatography-mass spectrometry (LC-MS). Half-life was determined based on the rate of decay of the labelled CoA species.

[13C3-15N]-labelled PA ((R)-3-(2,4-dihydroxy-3,3-dimethylbutanamido)propanoic acid) was synthesized in house. (18O, 15N, 13C3)-fosmetpantotenate, +6 atomic mass units (AMU) [29], was synthesized in house; purity 99.9% by UHPLC, diastereomer ratio 54/48 from 31P NMR. Calcium pantothenate was supplied by EDQM (Strasbourg, France). CoASH was purchased from Larodan AB (Solna, Sweden). AcetylCoA Na+ salt, atenolol and KOH were obtained from Sigma-Aldrich (MO, USA). HPLC-MS grade ACN, water, TFA and formic acid (FA) were obtained from Fluka.

Preparation of PA, CoASH and acetylCoA stock and working solutions

PA, CoASH and acetylCoA stock solutions were freshly prepared at 10 mM in 50 mM ammonium acetate pH 6.0. Working solutions (WS) were prepared in the same buffer by serial dilutions of the stock. Atenolol (internal standard (IS) for tissue samples) stock solution was prepared in DMSO at 1 mg/mL and stored for 2 months at -20˚C. The working solution for IS (WS-IS) was freshly prepared before use at 2.5 μg/mL in methanol (MeOH)/50 mM ammonium acetate pH 6.0 70/30 (v/v) for PA, CoASH and acetylCoA determination, and in 50 mM ammonium acetate pH 6.0 for total CoA determination.
Animal studies

Oral (PO) administration study in C57BL/6N mice. This study was conducted at IRBM in full compliance with the EU Directive 63/2010 (On the Protection of Animals Used for Scientific Purposes) and its Italian transposition (Italian Decree no. 26/2014), as well as with all applicable Italian legislation and relevant guidelines. IRBM is authorized by the Italian Ministry of Health and the local veterinary authority to house, breed, and use laboratory rodents for scientific purposes. The protocol was approved by the internal IRBM Ethics Committee and by the Italian Ministry of Health. Male and female C57BL/6N mice (5 weeks old, Charles River (Como, Italy)) were housed in individually ventilated cages (IVCs, Tecniplast) with sawdust as bedding. Cages were identified by a color-coded label recording the sample ID, animal number and details of treatment (route, dose and time point). Animals were identified with a unique number on each tail via permanent markers. Room temperature was maintained at approximately 22˚C with relative humidity between 40-70% and an average daily airflow of at least 10 fresh air changes per hour. Rooms were lit by fluorescent tubes controlled to give an artificial cycle of 12 h light and 12 h dark each day. All animals were monitored twice a week and clinical signs were recorded. The monitored parameters included changes in the skin, eyes, nose, mouth, head, breathing, urine, feces and locomotor activity. Animals were also weighed twice a week to monitor body mass changes. In case of suffering or more than 20% loss of body weight, animals were euthanized. In a preliminary study conducted in male mice, a low-PA diet period of three and six days before treatment was evaluated to facilitate the incorporation of [13C3-15N]-PA into CoASH and acetylCoA. Then, levels of endogenous PA, CoASH and acetylCoA were measured. For three and six days, animals (n = 3) were fed a PA-deficient diet (Mucedola MD 95248).
Brain and liver samples were collected at days 0, 3 and 6. The mice were anesthetized using isoflurane. The sacrifice was performed in accordance with IRBM standard operating procedures and in compliance with EU/Italian laws (decapitation/dislocation in a state of anaesthesia). Immediately after the mouse sacrifice, the whole brains and livers were explanted, washed with refrigerated saline solution (4˚C), divided into two halves, then weighed and snap frozen in dry ice in Precellys® (Bertin Technologies, Montigny-le-Bretonneux, France) tubes. Brains and livers were then stored at -80˚C until LC-MSMS analysis. In the [13C3-15N]-PA PO administration study, all animals were fed the PA-deficient diet (Mucedola MD 95248) for 3 days. Then, just before oral administration of [13C3-15N]-PA, the standard rodent diet (Mucedola 4RF21) was administered until the end of the study. Diets were supplied by Mucedola (Milano, Italy), and animals were offered drinking water ad libitum. All animals were weighed immediately before testing. PA-free food intake was measured daily and the consumption was calculated. Animals (approximately 6 weeks old) were dosed orally in a fed state. The appropriate dose volume of [13C3-15N]-PA, calculated for each individual animal according to body weight (administration volume: 10 mL/kg), was administered orally using a gastric cannula connected to a 300 μL syringe (BD Plastipak 3/10 cc insulin syringe U-100 28g, cat. no. 309300). Liver and brain samples were collected at the following time points: pre-dose, 6, 18, 30, 54, 78, 102, 150, 198, 246, 294, and 342 h post-dosing, and were treated as described for the preliminary low-PA study. Animals remained well for the whole duration of the study. No sign of suffering was observed.

Intrastriatal administration study in C57/Bl6 mice. The in vivo study was conducted at Charles River Labs in accordance with protocols approved by the Institutional Animal Care and Use Committee of Charles River Laboratories, SSF.
Twenty-seven male C57/Bl6 mice (n = 27, CRL, 8 weeks old) were used for the experiment. Upon arrival, animals were group-housed (n = 2-5/cage) with access to food and water ad libitum. Animals were maintained on a 12/12 h light/dark cycle (lights ON at 7:00 AM) in a temperature- (22 ± 2˚C) and humidity- (approx. 50%) controlled room. Animals were acclimated for at least 7 days prior to the beginning of the experiment. Three mice were not dosed (time 0 group), while 24 mice received bilateral infusions of fosmetpantotenate (+6 AMU) into the dorsal striatum (5 μL/side, 25 μg/μL, total dose 250 μg). To do so, mice were anesthetized using isoflurane (2%, 800 mL/min O2). Bupivacaine was used for local analgesia and carprofen for peri-/post-operative analgesia. The animals were placed in a stereotaxic frame (Kopf Instruments, USA). The following coordinates were used for the dorsal striatum: antero-posterior +0.8 mm and lateral ±1.4 mm from bregma, dorsoventral -1.7 mm from dura. Utilizing a Hamilton syringe (model no. 80308; 10 μL syringe with corresponding 30 ga blunt-tip needle) and the stereotactic micromanipulator, the locations of the burr holes were designated and drilled. The infusion cannula was lowered into the brain to the depth of the desired location and the infusion was conducted at a rate of 0.5 μL/min. When delivery was completed, a 5-minute waiting period was applied before withdrawing the needle. Following the waiting period, the contralateral side was infused in the same manner. Once the two infusions were completed, the skin incision was closed with sutures. At the appropriate takedown time (0, 24, 48, 96, 144, 192, 240, 288, or 336 h after treatment infusion), mice (n = 3) were euthanized via CO2. Brain tissue was extracted, separated into left and right hemispheres, immediately frozen (-80˚C) in separate Precellys® tubes and sent to IRBM for analysis.

Mouse tissue sample preparation

Determination of unlabelled and labelled PA, CoASH and acetylCoA.
Tissue samples (brain and liver) were thawed in an ice bath for 15 min. Brain hemispheres (or half livers) were homogenized with 4 volumes of MeOH/50 mM ammonium acetate pH 6.0 70/30 (v/v) containing 31.2 ng/mL (brain) or 125 ng/mL (liver) atenolol as internal standard, using a Precellys® 24 homogenizer at 5000 rpm for 15 s. After vortexing and centrifugation (16,000 rcf, 15 min at 4˚C), supernatants were either dried under nitrogen and reconstituted with 50 mM ammonium acetate pH 6.0 buffer, or diluted with the same buffer and directly analysed by LC-MS.

Determination of unlabelled and labelled total CoA. Brain tissue samples were thawed in an ice bath for 15 min, 4 volumes of aqueous 0.25 M potassium hydroxide solution were added, and the samples were homogenized with a Precellys® 24 homogenizer (5,000 rpm, 15 s). Then, the extracts were heated at 55˚C and after 10 min an aliquot of a 10-50% acetic acid solution was added to each sample to adjust the pH to 5.0. Samples were then centrifuged at 14,000 rcf for 15 min at 4˚C and supernatants were diluted with 50 mM ammonium acetate pH 6.0 containing the internal standard (2.5 μg/mL atenolol) and analyzed by LC-MSMS.

LC-MS analysis

LC-HRMS method for simultaneous quantitation of unlabelled and labelled (+4 AMU) PA, CoASH and acetylCoA (PO study). An Ultimate 3000 UHPLC system coupled with an Orbitrap QExactive™ mass spectrometer (Thermo Scientific) was used for the analysis of brain and liver extracts of the PO study. Chromatographic separation was performed with a Merck Chromolith Performance C18 column (2.0 x 100 mm). The mobile phases were A) 2.5 mM ammonium acetate pH 6.7 and B) acetonitrile (ACN)/2-propanol 98:2 (v/v). The elution gradient was as follows: the column was equilibrated with 1% B for 0.25 min, B was increased to 70% in 2.15 min, then further increased to 90% in 0.60 min, held constant for 0.50 min, decreased to 1% in 0.10 min and held at 1% for 3.40 min. The flow rate was 0.3 mL/min and the column temperature 25˚C.
The injection volume was 2 μL. For MS analysis the following parameters were used: ion spray voltage 3.2 kV, sheath gas flow 50 a.u., aux gas flow 5 a.u., aux temperature 300˚C, capillary temperature 320˚C, S-lens RF level 50%. Full MS parameters were as follows: mass range 150-2000, AGC target 1x10^6.

Calibration standards and quality controls. PA, CoASH and acetylCoA (unlabelled and +4 AMU labelled) were simultaneously measured in brain and liver extracts. Calibration standards (CSs) and quality controls (QCs) were prepared in matrix (brain and liver homogenate supernatants of pre-dose mice) by additions of the unlabelled standard compounds to the tissue extracts. The area ratio (peak area analyte/peak area IS) of endogenous compounds determined in QC0 samples was then subtracted from CSs and QCs. For the analysis of the mouse brain, a nine-point standard curve (range 2.0-51 μg/g brain for CoASH, 0.53-14.0 μg/g brain for acetylCoA and 0.29-7.30 μg/g brain for PA) was run at the beginning and at the end of sample analysis, and four sets of quality controls in triplicate were included (for CoASH: QC0 = endogenous, QC low = 4.30 μg/g brain, QC med = 13.0 μg/g brain and QC high = 38.0 μg/g brain; for acetylCoA: QC0 = endogenous, QC low = 1.10 μg/g brain, QC med = 3.40 μg/g brain and QC high = 10.0 μg/g brain; for PA: QC0 = endogenous, QC low = 0.61 μg/g brain, QC med = 1.80 μg/g brain and QC high = 5.50 μg/g brain). For CSs and QCs the acceptance criterion was ±30% accuracy. For later time-point samples in which the level of labelled metabolite was below the limit of quantification (LOQ), linearity of response between Cmax and 10% of Cmax was demonstrated by dilution (2-, 5- and 10-fold) of the Cmax sample extract with blank matrix extract (mouse liver and brain). The slope of the so-obtained curve was compared to the slope of the unlabelled standard curve. A similar slope (±20%) indicated linearity below the LOQ.
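The calibration scheme just described, standard additions in matrix with the endogenous QC0 response subtracted, followed by the weighted (1/x) linear regression mentioned under Data analysis, can be sketched as follows. All numeric values here are hypothetical, purely for illustration, not data from the study.

```python
# Hypothetical standard-addition data for CoASH in brain homogenate:
# spiked amount (ug/g brain) vs. measured area ratio (analyte/IS),
# which still contains the endogenous background of the matrix.
endogenous_ar = 0.52                      # area ratio of the unspiked QC0
spikes = [2.0, 6.4, 13.0, 25.0, 51.0]     # ug/g brain added
area_ratios = [0.60, 0.78, 1.04, 1.51, 2.55]

# Subtract the endogenous response, as described for the QC0 samples.
corrected = [ar - endogenous_ar for ar in area_ratios]

# Weighted (1/x) linear least squares through the corrected points.
w = [1.0 / x for x in spikes]
sw = sum(w)
swx = sum(wi * x for wi, x in zip(w, spikes))
swy = sum(wi * y for wi, y in zip(w, corrected))
swxx = sum(wi * x * x for wi, x in zip(w, spikes))
swxy = sum(wi * x * y for wi, x, y in zip(w, spikes, corrected))
slope = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
intercept = (swy - slope * swx) / sw
```

The 1/x weighting keeps the low end of the curve from being dominated by the large high-concentration residuals, which is why it is the usual choice for bioanalytical calibration.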
LC-MSMS method for simultaneous quantitation of unlabelled and labelled (+4 and +6 AMU) CoASH and acetylCoA (intrastriatal study). An Acquity UPLC I-Class system (Waters Corp., Milford, MA) coupled to an API 6500 triple quadrupole mass spectrometer (AB Sciex, Toronto, Canada) was used for simultaneous quantitation of CoASH and acetylCoA species in mouse brain extracts. The chromatographic conditions used are reported in Table 1. MS analysis was performed using an API 6500 triple quadrupole mass spectrometer with a Turbo IonSpray® ionization source in positive ion mode. Electrospray ionization parameters were set as follows: source temperature, 450˚C; curtain gas, 20 psi; gas 1, 40 psi; gas 2, 60 psi; collision gas (CAD), 9; ion spray voltage, 5500 V. CoASH and acetylCoA quantitation was accomplished through the use of multiple reaction monitoring (MRM) acquisition methods using the transitions reported in Table 2. Data acquisition and analysis were performed with Analyst™ 1.6.2 (AB Sciex, Toronto, Canada).

Table 1. Chromatographic conditions for analysis of free CoA (CoASH) and acetylCoA (unlabelled, +4 and +6 AMU) in brain extracts.

Calibration standards and quality controls. CoASH and acetylCoA (unlabelled, +4 and +6 AMU) were simultaneously measured in brain extracts. CSs and QCs were prepared in C57/Bl6 mouse brain depleted of endogenous CoASH and acetylCoA. The brain was kept at room temperature for 2 days, then homogenised in 4 volumes (w/v) of IS-WS 70/30 MeOH/50 mM ammonium acetate pH 6.0 with a Precellys® 24 homogenizer. The homogenate was centrifuged at 4˚C at 16,000 rcf for 15 min and then the supernatant was divided into 40 μL aliquots and spiked with 10 μL of acetylCoA/CoASH spiking solution for CSs and QCs. Supernatant (10 μL) was diluted with 90 μL of 50 mM ammonium acetate pH 6.0 containing 200 ng/mL of atenolol (IS) and directly analysed by LC-MSMS.
The standard curve and QCs were, for CoASH, range 0.18-23.0 μg/g brain (QC0, QC low = 0.90, QC med = 3.6, QC high = 14.0 μg/g brain) and, for acetylCoA, range 0.046-5.90 μg/g brain (QC0, QC low = 0.24, QC med = 0.95, QC high = 3.80 μg/g brain). The acceptance criterion was ±30% accuracy.

Data analysis and statistics

Calibration curves for quantitation of CoASH and acetylCoA (PO and intrastriatal studies), linearity of response between Cmax and 10% of Cmax, and area ratio of labelled CoA species versus time plots (semi-log scale) were obtained by weighted (1/X) linear least-squares regression and were generated using GraphPad Prism version 7.00. Statistical analysis was performed using GraphPad Prism. Student's t test was used to calculate p-values. Differences between groups were considered statistically significant when p was <0.02.

Half-life determination

The CoA half-life (t1/2) was determined based on the analysis of the disappearance of the labelled CoA species as a function of time. CoA levels were quantified as the peak area relative to an internal standard (area ratio). Area ratios of labelled species from the Cmax to the elimination phase were considered for the half-life calculation. The elimination constant k was calculated by plotting mean area ratio values (n = 3) on a semi-logarithmic scale and fitting with a best-fit linear regression. The half-life (t1/2), expressed in hours, was derived using Eq 1:

t1/2 = ln 2 / k. (Eq 1)

The associated error (Δt1/2) was estimated from the error of the elimination constant (Δk) divided by the square of the elimination constant, as indicated in the formula:

Δt1/2 = ln 2 · Δk / k².

Linear regression, the elimination constant and the error related to the elimination constant were generated using GraphPad Prism version 7.00.
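The half-life procedure, a linear regression of log area ratio against time followed by Eq 1 and the error propagation above, can be sketched as follows. The study used GraphPad Prism; this Python version with synthetic, noise-free data (all values hypothetical) simply illustrates the computation.

```python
import math

# Hypothetical mean area ratios of a labelled species from C_max onward,
# sampled at the post-dose time points (hours) used in the oral study.
times = [30, 54, 78, 102, 150, 198, 246, 294, 342]
k_true = math.log(2) / 136.0              # decay constant for t1/2 = 136 h
areas = [0.80 * math.exp(-k_true * t) for t in times]

# Linear regression of ln(area ratio) vs. time; -slope is the constant k.
xs, ys = times, [math.log(a) for a in areas]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
k = -slope

t_half = math.log(2) / k                  # Eq 1: t1/2 = ln 2 / k

# Error propagation: delta_t1/2 = ln 2 * delta_k / k^2.
delta_k = 0.0005                          # illustrative standard error of k
delta_t_half = math.log(2) * delta_k / k ** 2
```

With real, noisy area ratios the standard error of the regression slope takes the place of the illustrative `delta_k`.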
CoASH and acetylCoA turnover rate in liver and brain of C57BL/6N male and female mice (PO study)

To determine the half-life of CoASH and acetylCoA in mouse liver and brain, [13C3-15N]-PA was orally administered to healthy mice. In a preliminary study conducted in male mice, a low-PA diet period of three and six days before treatment was evaluated to facilitate the incorporation of [13C3-15N]-PA into CoASH and acetylCoA. Then, levels of endogenous PA, CoASH and acetylCoA were measured. As shown in Fig 1, the PA-restriction diet reduced PA levels in liver and brain as early as day 3, while leaving CoASH and acetylCoA levels, and the health status of the animals, unaltered. Because of this, a 3-day restriction was selected to trigger the incorporation of the labelled PA into CoASH and acetylCoA. This result is in agreement with a previous study conducted by Shibata et al. [35], in which the authors showed that the administration of a PA-free diet to rats decreased PA levels in a variety of tissues while CoASH and acetylCoA were not affected. Thus, for half-life determination, mice of both genders were kept on a low-PA diet for 3 days, and a standard rodent diet was re-established just before administering a single oral dose of [13C3-15N]-PA at 25 mg/kg, to reflect physiological conditions during half-life determination. The labelled material was used in the biosynthetic pathway leading to formation of [13C3-15N]-CoASH and [13C3-15N]-acetylCoA (+4 AMU species), which were measured in brain and liver tissues, together with the unlabelled species, by LC-HRMS up to 342 h post-dose. Mice were sacrificed at selected timepoints (three animals/timepoint) and tissues were collected, snap-frozen in dry ice and stored at -80˚C until LC-HRMS analysis. The timeframe was selected on the basis of a preliminary experiment performed with just male mice (not shown).
The tissue content of the unlabelled and labelled PA, CoASH and acetylCoA species was measured using a method able to quantify the endogenous levels and the labelled species at the C max level. The area ratio of the labelled species (peak area of analyte/peak area of internal standard) was used to determine the CoASH and acetylCoA half-lives in tissues. At the latest timepoints (from 102 to 342 h), labelled metabolite levels had fallen below the LOQ. The linearity of response between C max and 10% of C max was thus assessed by diluting C max extracts (containing the highest level of labelled species) with blank matrix extracts (containing IS) and plotting the peak area of the labelled analyte (normalized for IS response) versus the inverse of the dilution factor. The slope of the linear regression model (y = mx + c) of the dilution-response curve was compared to the slope of the standard curve of the corresponding unlabelled metabolite. Response was considered linear below the LOQ if the slopes were similar (±20%). As shown in S1 and S2 Figs, a linear response below the LOQ (up to a 10-fold dilution of C max) was obtained, both in the brain and liver, for all the metabolites (CoASH and acetylCoA, +4 AMU). All the timepoints collected fell in the linear range and were thus included in the half-life calculation. As shown in Fig 2 and Table 3 (average endogenous levels of pantothenic acid (PA), free CoA (CoASH) and acetylCoA in the male and female C57BL/6N mouse liver and brain, in μg/g tissue), levels of the unlabelled metabolites in the male and female liver and brain were stable during the entire period of observation. PA, instead, was minimal at t = 0 as a consequence of the low-PA diet and, after restoration of a regular diet, reached a steady level within 18 and 30 h in the female and male liver, respectively, and within 30 and 54 h in the female and male brain, respectively.
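The slope-comparison criterion used above can be expressed in a few lines. The helper below is a hypothetical sketch (not the study's code) and assumes the dilution axis has already been converted to the same concentration units as the standard curve, so that the two slopes are directly comparable:

```python
import numpy as np

def slopes_agree(conc_dilution, resp_dilution, conc_standard, resp_standard, tol=0.20):
    """Return True when the dilution-response slope matches the
    standard-curve slope within +/- tol (20%, as in the text).

    Both responses are fitted with a degree-1 least-squares polynomial,
    y = m*x + c, and only the slopes m are compared.
    """
    m_dil = np.polyfit(conc_dilution, resp_dilution, 1)[0]
    m_std = np.polyfit(conc_standard, resp_standard, 1)[0]
    return bool(abs(m_dil - m_std) <= tol * abs(m_std))
```

A dilution series whose slope deviates by more than 20% from the standard curve would fail this check, flagging a non-linear response below the LOQ.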
The PA level then remained stable until the end of the study. As shown in Figs 3 and 4, in liver and brain the [13C3-15N]-PA concentration was maximal at 6 h after dosing, which was the first timepoint collected. In the liver, [13C3-15N]-PA C max was 1.4 ± 0.8 and 1.8 ± 0.8 μg/g liver in male and female, respectively, and the half-life was <18 h in both genders. In the brain, [13C3-15N]-PA C max was 2.3 ± 1.0 and 5.5 ± 2.0 μg/g brain in male and female, respectively, and the half-life was 47 ± 9 h in male and 29 ± 4 h in female. [13C3-15N]-PA T max was 6 h (first timepoint after dosing) for both organs and genders. The shorter half-life of labelled PA in the liver reflects the faster metabolic rate of this tissue compared to the brain. CoASH and acetylCoA had comparable turnover. Their turnover rate was significantly different between the brain and liver, being slower in the former (p < 0.01). No evident gender difference was observed. In particular, the CoASH half-life was 69 ± 5 h and 82 ± 6 h in the male and female liver, respectively. The corresponding values for acetylCoA were 74 ± 9 h and 71 ± 7 h in the male and female liver, respectively. In the male and female brain, the CoASH half-life was 136 ± 14 h and 144 ± 12 h, respectively. The corresponding values for acetylCoA were 117 ± 13 h and 158 ± 23 h in the male and female brain, respectively. A summary of the half-life results is reported in Table 4.

CoASH, acetylCoA and total CoA turnover rate in male mice (intrastriatal study)

An additional study was performed to confirm the half-life results in the mouse brain. This confirmation was performed using only male mice, to limit the number of animals subjected to a surgical procedure of high severity. In this new study, male C57BL/6 mice (n = 3 per timepoint) received bilateral intrastriatal injections of 250 μg (125 μg per striatum) of isotopically labelled fosmetpantotenate (+6 AMU) [31,36].
Fosmetpantotenate, a 4'-phosphopantothenic acid precursor, enters the CoA and acyl-CoA biosynthetic pathways, resulting in the formation of +6 and +4 AMU labelled CoA species (Fig 5). Besides CoASH and acetylCoA, total CoA was also determined, to account for all the acyl-CoA species endogenously present in the brain. At selected timepoints, the brain hemisphere samples were dissected, and CoASH, acetylCoA and total CoA (unlabelled, +4 and +6 AMU) levels were measured by LC-MS/MS over 336 h. The concentration of the labelled metabolites was used to calculate the turnover rate of CoASH, acetylCoA and total CoA. The use of an endogenous-metabolite-depleted matrix and ion-pair chromatography helped improve the LOQ by 5-10 fold compared to the standard-addition method combined with monolithic-column/HR-MS used for the PO study. Thus, the sensitivity was sufficient to measure the formation and disappearance of the labelled metabolites over the total time period of the experiment. The labelled species, absent in the pre-dose samples, were already observed at 24 h after dosing and decreased from 24 h (+6 AMU species) or 96 h (+4 AMU species) to 336 h. The level of unlabelled CoASH, acetylCoA and total CoA remained stable throughout the entire period of observation (Table 5). The slower turnover rate of the +4 AMU species was probably due to their regeneration from the catabolism of the +6 AMU ones (see Fig 5, pathways B and C), as shown by their different T max (24 h and 96 h for the +6 AMU and +4 AMU species, respectively). 18O/16O exchange could also contribute to the longer half-life observed for the +4 AMU CoA species. A summary of the half-life results in mouse brain is reported in Table 6. The results obtained after the intrastriatal administration of labelled fosmetpantotenate are in accordance with the results obtained after the PO dosing of labelled PA.
Discussion

Knowledge of the in vivo turnover rate of the main representative CoA species is crucial to define the dosing regimen of drugs targeting CoA deficiency. However, despite the relevance of this information, an accurate evaluation of the CoA half-life in important tissues such as the liver and brain has been lacking. We conducted two studies (PO and intrastriatal) to determine the PA, CoASH and acetylCoA half-lives in the male and female mouse brain and liver, and the total CoA half-life in the male mouse brain. In the PO study, all animals were fed a PA-deficient diet for 3 days to favour labelled-PA incorporation into the CoA species while maintaining endogenous levels and the health status of the animals. Then, just before the oral administration of [13C3-15N]-PA, a standard rodent diet was reinstated and maintained until the end of the study. The decay over time of the biosynthetically formed labelled CoA species was measured by LC-MS. The obtained results were confirmed by the intrastriatal study, in which labelled fosmetpantotenate was injected directly into the brain of male mice. In both studies, the well-being of the animals was monitored and no adverse effects were observed. In both studies, the endogenous CoA level was unaltered for the duration of the entire experiment, assuring that the animals did not suffer from CoA deprivation, nor did they experience metabolic impairment. In the PO study, we observed a very long half-life in the brain for CoASH (136 ± 14 h (male) and 144 ± 12 h (female)) and acetylCoA (117 ± 13 h (male) and 158 ± 23 h (female)), and in the liver (CoASH: 69 ± 5 h (male) and 82 ± 6 h (female); acetylCoA: 74 ± 9 h (male) and 71 ± 7 h (female)). The turnover was faster in the liver than in the brain, reflecting organ function and physiology. No significant gender differences were observed.
Similar results were confirmed in the intrastriatal study, conducted in male mice only: 124 ± 13 h for CoASH, 117 ± 11 h for acetylCoA and 144 ± 17 h for total CoA. To the best of our knowledge, this is the first report of the in vivo CoA turnover rate in the mouse brain. An estimation of the CoASH half-life in the mouse liver was published by Zhang et al. [37]. These authors investigated the biochemical and genetic alterations occurring in mice when inhibiting CoA biosynthesis with a pantothenate structural analogue, hopantenate (HoPan), which inhibits all active PANK isoforms [38]. HoPan, administered to male and female mice (100 μg/g/day by gavage), depleted the free and total CoA in mouse tissues. The drop in CoA levels was particularly evident in the liver and the kidney, suggesting a faster turnover rate for CoASH in these tissues compared to the brain and heart, which is in agreement with our results. In the HoPan-treated mice, the CoA half-life in the liver was estimated as 20-24 h in males and about 90 h in females. While the turnover rate in female mice was in agreement with our result (82 ± 6 h), the value in male mice was noticeably shorter (20-24 h compared to 69 ± 5 h). This discrepancy might be due to the different health status of the male and female animals after the HoPan treatment (the former expiring after 5 days, the latter surviving for 16 days from the beginning of the treatment), in contrast to our study, in which the animals were healthy for the entire period of observation and no adverse effects were recorded. In the HoPan study, a severe liver impairment was also observed. Our data illustrate for the first time the turnover rate of CoA in the mouse brain and liver under physiological, healthy conditions, showing different tissue turnover rates.

Conclusions

In this study, we report the first in vivo determination of the turnover rate of CoASH, acetylCoA and total CoA in the brain and liver of healthy mice.
These results demonstrate that all the CoA species investigated turn over slowly, with a shorter half-life in the liver than in the brain (about 3 days in the liver and 5-6 days in the brain). The difference between the two organs is not surprising and probably reflects the faster metabolic activity of the liver. Gender differences, not observed in the PO study, were not investigated in the intrastriatal study, which was conducted only in male mice. The turnover rate of the CoA species could be useful for developing therapeutic agents aimed at increasing CoA levels in deficiency conditions leading to serious neurodegenerative diseases such as PKAN.
The Hubbard model on triangular $N$-leg cylinders: chiral and non-chiral spin liquids

The existence of a gapped chiral spin liquid has been recently suggested in the vicinity of the metal-insulator transition of the Hubbard model on the triangular lattice, by intensive density-matrix renormalization group (DMRG) simulations [A. Szasz, J. Motruk, M.P. Zaletel, and J.E. Moore, Phys. Rev. X {\bf 10}, 021042 (2020)]. Here, we report the results obtained within the variational Monte Carlo technique based upon Jastrow-Slater wave functions, implemented with backflow correlations. As in the DMRG calculations, we consider $N$-leg cylinders. For $N=4$ and in the presence of a next-nearest-neighbor hopping, a chiral spin liquid emerges between the metal and the insulator with magnetic quasi-long-range order. Within our approach, the chiral state is gapped and breaks the reflection symmetry. By contrast, for both $N=5$ and $N=6$, the chiral spin liquid is not the state with the lowest variational energy: in the former case, a nematic spin liquid is found in the entire insulating regime, while for the less frustrated case with $N=6$ the results are very similar to those obtained on two-dimensional clusters [L.F. Tocchio, A. Montorsi, and F. Becca, Phys. Rev. B {\bf 102}, 115150 (2020)], with an antiferromagnetic phase close to the metal-insulator transition and a nematic spin liquid in the strong-coupling regime.

I. INTRODUCTION

The quest for spin-liquid states has fascinated the condensed-matter physics community since the first proposal of the resonating-valence-bond (RVB) theory by Fazekas and Anderson [1,2]. This approach has been one of the first attempts to describe a Mott insulator without any sort of symmetry breaking, even at zero temperature. In recent years, spin liquids have been reported in an increasing number of materials.
Examples are given by Herbertsmithite, which is well described by the Heisenberg model on the kagome lattice [3], organic compounds, like κ-(ET)2Cu2(CN)3 and Me3EtSb[Pd(dmit)2]2 [4,5], or transition-metal dichalcogenides, like 1T-TaS2, whose low-temperature behavior could be captured by the Hubbard model on the triangular lattice [6]. An important open question concerns the nature of the insulating phase of the two-dimensional Hubbard model on the triangular lattice at half filling. Most of the investigations have concentrated on its strong-coupling regime, where only spin S = 1/2 degrees of freedom are left. Here, spin liquids can be systematically classified according to the projective-symmetry-group (PSG) theory [7][8][9]. In particular, one can distinguish between Z2 and U(1) spin liquids, according to the low-energy symmetry of the emerging gauge fields [10]. Starting from the Heisenberg model with nearest-neighbor (NN) super-exchange J, spin-liquid phases are expected to be stabilized when including either a next-nearest-neighbor (NNN) coupling J′ or a four-spin ring-exchange term K. The latter can be justified within the fourth-order strong-coupling expansion in t/U and is usually considered for an effective description of density fluctuations close to the Mott transition [11]. As far as the J − J′ model is concerned, a gapless U(1) spin liquid has been proposed by both variational Monte Carlo (VMC) [12,13] and recent density-matrix renormalization group (DMRG) calculations [14], while older DMRG results suggested the presence of a gapped spin liquid [15,16]. In addition, ring-exchange terms may also stabilize a gapless spin liquid (with a spinon Fermi surface), as proposed by earlier VMC studies [17] and confirmed by later DMRG simulations [18,19]. Further VMC investigations suggested two other gapless spin-liquid states, neither of them possessing a spinon Fermi surface [20].
However, more recent tensor-network approaches, implemented from Gutzwiller-projected wave functions, do not support the existence of a gapless spin liquid [21]. Recently, chiral spin liquids have attracted much attention because of their similarities with quantum Hall states [22,23]. Interestingly, chiral states may exist not only when the Hamiltonian explicitly breaks time-reversal symmetry (as in the quantum Hall effect) [24], but also as a result of a spontaneous symmetry-breaking phenomenon [25]. On the triangular lattice, some evidence of this exotic phase has been obtained by adding a scalar chiral interaction to the Heisenberg Hamiltonian [26,27], or even in a fully symmetric Heisenberg model with super-exchange couplings up to third neighbors [28]. A PSG classification of chiral states is possible, as worked out for the fermionic case on different lattices [29]. In particular, two simple Ansätze can be constructed [30]: the first one (dubbed CSL1) is a U(1) chiral spin liquid, with complex hoppings defined on a 2 × 1 unit cell and no pairing; the second one (dubbed CSL2) is a Gutzwiller-projected d + id superconductor. The situation in the Hubbard model, characterized by a kinetic term t and an on-site Coulomb repulsion U, is much less clear. The main difficulty comes from the presence of density fluctuations, whose energy scale is set by U, which is much larger than the typical energy scale of spin fluctuations, i.e., J = 4t²/U. Therefore, it is not simple to detect tiny effects related to spin degrees of freedom when density fluctuations are present. In addition, numerical methods like exact diagonalization or DMRG suffer from the fact that the local Hilbert space is doubled with respect to the S = 1/2 case. Nevertheless, this effort is necessary in order to capture density fluctuations that are inevitably present in real materials.
The possibility that a spin-liquid phase may exist not only in the strong-coupling regime U ≫ t, but also close to the metal-insulator transition, has been discussed within different theoretical and numerical approaches in the past [31][32][33][34][35][36][37]. The term weak Mott insulator has been used in this case, namely when a spin liquid intrudes between the weak-coupling metal and the strong-coupling antiferromagnetic insulator [11]. In particular, recent extensive DMRG calculations [38,39] highlighted the possibility of a gapped chiral spin liquid close to the Mott transition. A possible description of such a state has been proposed within a bosonic RVB description [40], as well as within a spin model with the four-spin ring-exchange term [41]. The calculations of Ref. [39] are limited to 4-leg cylinders, which highly frustrate the 120° magnetic pattern, since the corresponding k vectors are not allowed by the quantization of momenta. Instead, in Ref. [38] 6-leg cylinders have also been considered, even if the presence of two almost degenerate momentum sectors at intermediate U/t can make the interpretation of the results not completely trivial. In addition, a recent study at finite temperature, still focusing on 4-leg cylinders, highlighted the concomitant presence of chiral correlations and nematic order at finite, but low, temperature and intermediate coupling [42]. Instead, a DMRG investigation on 3-leg cylinders suggested that a gapless and nonchiral spin liquid appears close to the Mott transition [43]. A more conventional picture, with a direct transition between a metal and an insulator with magnetic order, has been found in Refs. [44][45][46]. We would like to remark that the analysis of the insulating phase in the vicinity of the Mott transition is complicated by the significant differences in locating the Mott transition with the different methods. Finally, the effect of the NNN hopping has been addressed in Ref.
[47], using the VCA method with few (12) sites, leading to a large spin-liquid region for t′/t > 0, and in a VMC study that always suggests a direct transition between a metal and a magnetic insulator, even if with an asymmetry between positive and negative values of t′/t [48]. In this work, we present variational Monte Carlo results, based upon Jastrow-Slater wave functions and backflow correlations, for the Hubbard model on the triangular lattice on N-leg cylinders, with N = 4, 5, and 6. The role of the NNN hopping term t′ is also discussed. On 4-leg cylinders, we find a chiral spin liquid in a relatively extended region in the vicinity of the metal-insulator transition, when t′/t < 0. This intermediate phase is gapped, in analogy with DMRG results. In addition, a gapless nonchiral state appears at larger values of the Coulomb repulsion. For t′/t ≥ 0, the Mott insulator is always nonchiral, with quasi-long-range 120° magnetic order, even if the chiral spin-liquid state is quite close in energy, at least in the vicinity of the Mott transition. On 5- and 6-leg cylinders, the chiral spin liquid does not give the best variational energy. For N = 5, the whole insulating regime is described by a gapless nonchiral spin liquid, while for N = 6 the phase diagram is similar to the one found in the two-dimensional case [48], with antiferromagnetic order close to the metal-insulator transition and a gapless nonchiral spin-liquid phase at strong coupling. The paper is organized as follows: in section II, we describe the model and the various variational wave functions, as well as the quantities that have been used to obtain the relevant information; in section III, we present the numerical results; finally, in section IV, we draw our conclusions. II.
MODEL AND METHOD

We consider the single-band Hubbard model on the triangular lattice:

H = -t Σ_{⟨i,j⟩,σ} (c†_{i,σ} c_{j,σ} + h.c.) - t′ Σ_{⟨⟨i,j⟩⟩,σ} (c†_{i,σ} c_{j,σ} + h.c.) + U Σ_i n_{i,↑} n_{i,↓},   (1)

where c†_{i,σ} (c_{i,σ}) creates (destroys) an electron with spin σ on site i and n_{i,σ} = c†_{i,σ} c_{i,σ} is the electronic density per spin σ on site i. The NN and NNN hoppings are denoted as t and t′, respectively; U is the on-site Coulomb interaction. We define three vectors connecting NN sites, a1 = (1, 0), a2 = (1/2, √3/2), and a3 = (−1/2, √3/2); in addition, we also define three vectors for NNN sites, b1 = a1 + a2, b2 = a2 + a3, and b3 = a3 − a1. We consider clusters with periodic boundary conditions defined by T1 = L1 a1 and T2 = L2 a2, in order to have L = L1 × L2 sites. We focus on cylinders with four (L2 = 4), five (L2 = 5), and six (L2 = 6) legs; see the case with L2 = 4 in Fig. 1. Most of the calculations have been done with L1 = 30, which is large enough not to suffer from significant finite-size effects. The half-filled case, where the Mott transition takes place, is considered. In this case, only the sign of the ratio t′/t is relevant, and not the individual signs of t and t′. Our numerical results are obtained by means of the VMC method, which is based on the definition of suitable wave functions to approximate the ground-state properties beyond perturbative approaches [49]. In particular, we consider the so-called Jastrow-Slater wave functions, which include long-range electron-electron correlations via the Jastrow factor [50,51], on top of an uncorrelated Slater determinant (possibly including electron pairing). In addition, the so-called backflow correlations are applied to the Slater determinant, in order to sizably improve the quality of the variational state [52,53]. Thanks to the Jastrow and backflow terms, these wave functions can reach a very high degree of accuracy in Hubbard-like models, in different parameter regimes, including frustrated cases [54].
Therefore, they represent a valid tool to investigate strongly correlated systems, competing with state-of-the-art numerical methods, such as DMRG or tensor networks. Our variational wave function for describing the spin-liquid phase is defined as:

|Ψ_SL⟩ = J_d |Φ_MF⟩,   (2)

where J_d is the density-density Jastrow factor and |Φ_MF⟩ is a state in which the orbitals of an auxiliary Hamiltonian are redefined on the basis of the many-body electronic configuration, incorporating virtual hopping processes via the backflow correlations [52,53]. The density-density Jastrow factor is given by

J_d = exp( 1/2 Σ_{i,j} v_{i,j} n_i n_j ),   (3)

where n_i = Σ_σ n_{i,σ} is the electron density on site i and the v_{i,j} are pseudopotentials that are optimized for every independent distance |R_i − R_j|. The density-density Jastrow factor allows us to describe a nonmagnetic Mott insulator for a sufficiently singular Jastrow factor v_q ∼ 1/q² (v_q being the Fourier transform of v_{i,j}) [50,51]. The auxiliary Hamiltonian is then defined as follows:

H_aux = Σ_{k,σ} ξ_k c†_{k,σ} c_{k,σ} + Σ_k ( Δ_k c†_{k,↑} c†_{−k,↓} + h.c. ),   (4)

where ξ_k = ε̃_k − µ defines the free-band dispersion (including the chemical potential µ) and Δ_k is the singlet pairing amplitude. In our previous work on the two-dimensional lattice [48], we found that the best spin liquid has a nematic character, the hopping terms being given by Eq. (5) and the pairing amplitudes by Eq. (6), which possess a d-wave symmetry on the two bonds with hopping t. In two dimensions, we found t̃_d ≈ 0 and Δ_d ≈ 0, while on cylinders they may assume finite values. Remarkably, this choice (with different couplings along a2 and a3) gives the best variational energy also on cylinders, implying an explicit breaking of point-group symmetries. In addition, we focus on chiral spin-liquid states, which have been claimed to be relevant both in the Heisenberg limit [30] and in the Hubbard model close to the Mott transition [38,39]. The CSL2 state is a projected d + id superconductor characterized by uniform (real) hoppings along NN and NNN bonds and a pairing of the form of Eq. (7), where ω = e^{2iπ/3}.
Another chiral state (dubbed here CSL3) can be defined by the hopping amplitudes of Eq. (5) and a different d + id pairing structure, given in Eq. (8). Finally, a chiral spin liquid with U(1) symmetry has been proposed (dubbed CSL1 in Ref. [30]), with magnetic fluxes piercing the elementary plaquettes. In the presence of density fluctuations, this state breaks the translational symmetry and does not give a competitive variational energy. Therefore, in the following, this Ansatz is not reported. In the two-dimensional case, antiferromagnetically ordered wave functions represent an important class of states, since a large portion of the phase diagram corresponds to phases that spontaneously break the SU(2) spin symmetry. Cylinders are quasi-one-dimensional systems, in which a continuous symmetry cannot be broken. Nevertheless, variational wave functions can still be constructed from a magnetically ordered Slater determinant. Then, density and spin correlations may be inserted by Jastrow factors:

|Ψ_AF⟩ = J_d J_s |Φ_AF⟩,   (9)

where J_d is the density-density term of Eq. (3) and J_s is the spin-spin Jastrow factor, written in terms of a pseudopotential u_{i,j} that couples the z-components of the spin operators on different sites:

J_s = exp( 1/2 Σ_{i,j} u_{i,j} S^z_i S^z_j ).   (10)

Finally, |Φ_AF⟩ is obtained, after taking into account the backflow corrections, from the following auxiliary Hamiltonian:

H_AF = Σ_{k,σ} ε_k c†_{k,σ} c_{k,σ} + h Σ_i M_i · S_i,   (11)

where ε_k is the free dispersion of Eq. (1), S_i = (S^x_i, S^y_i, S^z_i) is the spin operator at site i, h is a variational parameter, and M_i = [cos(Q · R_i), sin(Q · R_i), 0], where Q is the pitch vector. The three-sublattice 120° order has Q = (4π/3, 0) or (2π/3, 2π/√3), while the stripe collinear order with a two-sublattice periodicity has Q = (0, 2π/√3) or Q = (π, π/√3). On 6-leg cylinders, the pitch vector corresponding to the 120° order is allowed by the quantization of momenta; instead, in the 4- and 5-leg cases it is not allowed and we take the closest possible momentum. On 5 legs, the pitch vector of the stripe collinear order is also not allowed.
In general, the effect of the spin-spin Jastrow factor J_s is to reduce the value of the magnetic order of the uncorrelated Slater determinant [55,56]. In purely one-dimensional systems, the presence of a long-range Jastrow factor is able to completely destroy magnetic order, leading to the correct behavior of the spin-spin correlations [57]. On cylinders with a finite number of legs N, a residual magnetic order persists, thus giving rise to a spurious wave function that breaks the SU(2) symmetry. Here, we interpret the possibility of stabilizing this kind of variational state as the tendency to develop magnetic order in the two-dimensional system. For simplicity, in the following, the Ansatz of Eq. (9) will be denoted as "antiferromagnetic". We remark that, in principle, it would be possible to restore the SU(2) symmetry by projecting onto the S = 0 subspace [58]. However, this procedure is computationally rather expensive whenever the computational basis has a definite value of S^z = Σ_i S^z_i but not of S² = (Σ_i S_i)². All the pseudopotentials in the Jastrow factors, the parameters in the auxiliary Hamiltonians, as well as the backflow corrections, are optimized with the stochastic reconfiguration method [49]. In order to assess the metallic or insulating nature of the ground state, we compute the static density-density structure factor:

N(q) = ⟨n_{−q} n_q⟩,  with  n_q = (1/√L) Σ_i n_i e^{iq·R_i},   (12)

where ⟨. . .⟩ indicates the expectation value over the variational wave function. Indeed, density excitations are gapless when N(q) ∝ |q| for |q| → 0, while a gap is present whenever N(q) ∝ |q|² for |q| → 0 [53,59]. Analogously, the presence of a spin gap can be checked by looking at the small-q behavior of the static spin-spin correlations [60]:

S(q) = ⟨S^z_{−q} S^z_q⟩.   (13)

III. RESULTS

Here, we discuss the results for the variational energy of different states on the 4-leg cylinder geometry. Let us start from the case with t′/t = 0, see Fig. 2 (upper panel).
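As an illustration of how a structure factor like Eq. (12) is evaluated in practice, the snippet below (a generic sketch, not the authors' code) estimates N(q) from a set of sampled density configurations on a chain of L sites; the small-q behavior of the result is what distinguishes a gapless (∝ |q|) from a gapped (∝ |q|²) density spectrum:

```python
import numpy as np

def structure_factor(configs, L):
    """Static density structure factor N(q) from Monte Carlo samples.

    configs: array of shape (n_samples, L) with occupations n_i per sample.
    Returns (qs, N(q)) with N(q) = <|n_q|^2>, averaged over samples, where
    n_q = (1/sqrt(L)) * sum_i n_i * exp(-i q R_i) on the q > 0 grid.
    """
    qs = 2 * np.pi * np.arange(1, L // 2 + 1) / L
    n = np.asarray(configs, dtype=float)
    # Fourier phases e^{-i q R_i}, shape (n_q, L)
    phases = np.exp(-1j * np.outer(qs, np.arange(L)))
    n_q = n @ phases.T / np.sqrt(L)          # shape (n_samples, n_q)
    return qs, np.mean(np.abs(n_q) ** 2, axis=0)
```

For a uniform (fully occupied) configuration the density has no finite-q Fourier weight, so N(q) vanishes at every q > 0, while density fluctuations in the samples produce a nonzero signal.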
In this case, the Mott transition occurs between U/t = 9 and 9.5, as extracted from the low-q behavior of the density-density correlations, see Fig. 3 (upper panel); the conducting phase is a standard metal, with neither magnetic nor superconducting order. Instead, in the insulating phase, the optimal wave function is the antiferromagnetic one with the pitch vector corresponding to approximately 120° order, i.e., Q = (2π/3, 7π/(3√3)). The overall situation is not much different from what has been obtained, within the same approach, in the two-dimensional limit [48] (except that, in the latter case, true antiferromagnetic order settles down). We also remark that the energy gain of the antiferromagnetic state with respect to the spin-liquid one is smaller on four legs than in two dimensions. Then, a large spin-liquid region appears immediately above the Mott transition when including a finite NNN hopping t′/t = −0.3, see Fig. 2 (middle panel). Here, the metal-insulator transition takes place at U/t = 11.5 ± 0.5, see Fig. 3 (middle panel). The best variational state, between U/t = 12 and 16, is given by the CSL3, even though the other spin-liquid states are very close in energy. By increasing the ratio U/t, the ground state passes through an intermediate phase where the best variational state is the antiferromagnetic one (with collinear order), before entering a further (strong-coupling) spin-liquid region that has no chiral features, in analogy with the results previously obtained in two dimensions [48]. In Fig. 2, we do not report the flux phase CSL1, since its variational energy is always significantly higher than that of the other states. For t′/t = 0.3 (lower panel), the best insulating state is the antiferromagnetic one with pitch vector close to the 120° order, up to U/t ≈ 20. For larger values of the electron-electron repulsion, a nonchiral spin-liquid state emerges. Note that the energies reported for the antiferromagnetic state with collinear order below the Mott transition correspond to a local minimum with insulating features.
In order to determine the nature of the chiral spin-liquid state, we analyze the spin-spin correlations by computing the spin-spin structure factor of Eq. (13). In Fig. 4, we report calculations for t′/t = −0.3 and values of U/t across the Mott transition. The main result is that the chiral spin liquid, realized close to the Mott transition, has a spin gap, since S(q) ∝ |q|² for small values of the momentum q. This is in agreement with recent DMRG studies [38,39]. We remark that this feature is solid, since it is also shared by the other two spin-liquid states with nearby energies, i.e., the CSL2 and the nonchiral one parametrized by Eqs. (5) and (6). On the contrary, the large-U state is gapless. In this regime, the optimal parameters t̃_d ≈ 0 and Δ_d ≈ 0 lead to a gapless spectrum in the auxiliary Hamiltonian (4), thus indicating that the nature of the unprojected state is not changed when including the Jastrow factor. The optimal chiral spin liquid (close to the Mott transition), as well as the nonchiral one (in the strong-coupling regime), are very anisotropic, as shown by computing the nearest-neighbor spin-spin correlations D_j = ⟨S^z_{R_i} S^z_{R_i+a_j}⟩, with j = 1, 2, 3. For example, for t′/t = −0.3 and U/t = 12, the CSL3 state has D_1 = D_3 = −0.029(1) and D_2 = −0.069(1). For U/t = 20, the nonchiral spin liquid has D_1 = D_3 = −0.101(1) and D_2 = +0.041(1). Within the error bars, these results are the same from L = 18 × 4 to L = 30 × 4. As discussed in section II, this anisotropy follows directly from the parametrization of the spin-liquid state, see Eqs. (5) and (6) for the nonchiral Ansatz and Eqs. (5) and (8) for the CSL3. We then show the stability of the chiral spin liquid when going from N = 4 to N = 6. Results are shown in Fig. 5, together with those for a truly two-dimensional cluster (with L = 18 × 18 sites), which has already been discussed in our previous work [48].
On a two-dimensional cluster, the CSL3 state is a local minimum, with energy higher than the other states. Instead, on 6-leg cylinders, the CSL3 state is not reported, since, upon optimization, it converges to the nonchiral state. The most important fact is that no chiral phases are present in the insulating region close to the metal-insulator transition (which, for N = 6, appears between U/t = 11 and U/t = 12). Here, the insulating phase is either an antiferromagnet with collinear order, in the vicinity of the Mott transition, or a gapless nonchiral spin liquid, in the strong-coupling regime. Note that, also in this case, the antiferromagnetic state with collinear order becomes a local minimum below the Mott transition. The reason for the stabilization of the chiral state on 4-leg cylinders comes from its remarkable energy gain when going from N = 6 (or, equivalently, two dimensions) to N = 4; by contrast, the variational energies of the antiferromagnetic state do not change much when varying N. Overall, the resulting phase diagram for N = 6 is qualitatively similar to the one obtained in two dimensions. Therefore, within our approach, the chiral spin liquid exists only for particular values of N, as on 4-leg cylinders. Finally, we have also considered cylinders with an odd number of legs, i.e., with L2 = 5. This is a particularly frustrated case, since both the 120° and the stripe collinear magnetic correlations are not allowed by the quantization of transverse momenta. Results for the energies of the different variational states are reported in Fig. 6. The Mott transition is determined also in this case by looking at the static structure factor of Eq. (12). In this case, the insulator is always a gapless spin liquid, with no chiral features. Indeed, the best variational state is the one defined by Eqs. (5) and (6), with optimal variational parameters Δ_d ≈ 0 and t̃_d ≈ 0, which is the same as the large-U spin liquid reported in the 4-leg case.
The two magnetic states are now both disfavored by the 5-leg geometry, and they are approximated by the pitch vectors Q = (π, 3π/(5√3)) (for the stripe collinear order) and Q = (2π/3, 26π/(15√3)) (for the 120° order). The two chiral states (CSL2 and CSL3) also have higher energies with respect to the nonchiral one. Our finding is in agreement with what is reported by DMRG in Ref. [38], where no chiral features are observed on the 5-leg cylinder when using periodic boundary conditions.

IV. CONCLUSIONS

In summary, we have studied the Hubbard model on cylinders with a triangular lattice geometry by means of the VMC approach. Both a NN hopping t and a NNN hopping t′ are considered in the model. First, we focused on the 4-leg case, with different values of the ratio t′/t. For t′/t < 0, a spin liquid is stabilized in the vicinity of the Mott transition. This state is a gapped chiral spin liquid that also breaks the point-group symmetry. At larger values of U/t, a further gapless spin liquid appears. For t′ = 0, the insulating region is always antiferromagnetic (with approximately 120° order), while, for t′/t > 0, we observe a gapless spin liquid in the strong-coupling regime. However, the chiral spin liquid disappears on cylinders with 5 and 6 legs, as well as in the truly two-dimensional case. In these cases, a gapless spin liquid survives in the large-U region. These results are summarized in Fig. 7. Our calculations convey two main messages. On one side, the spin liquid that we obtain on the 4-leg cylinder, close to the Mott transition, is chiral and spin gapped, in agreement with recent DMRG calculations [38,39]. In addition, the best chiral state breaks the reflection symmetry, as also suggested by the finite-temperature tensor-network method of Ref. [42]. Nevertheless, within variational Monte Carlo, an additional NNN hopping is necessary to stabilize the chiral state.
On the other side, our results suggest that a chiral spin liquid exists only in particular geometries (e.g., the 4-leg cylinder). Instead, on cylinders with 5 and 6 legs (as well as in two dimensions), the chiral spin liquid either is not stable upon optimization or has a variational energy that is considerably higher than that of the optimal state. Finally, we observe that chiral flux phases (defined on the 2 × 1 unit cell) have a variational energy that is not competitive with the other wave functions. As with every variational calculation, our results suffer from an intrinsic bias, given by the choice of the variational Ansatz; still, the Jastrow-Slater state possesses a large flexibility, being able to describe a wide variety of different phases, including quantum spin liquids, with or without chiral order. The fact that we do not observe a chiral spin liquid on 5- and 6-leg cylinders and on two-dimensional clusters suggests that either this state is not present or it cannot be represented by the Ansätze that have been considered here.
Quasi-isotropic UV Emission in the ULX NGC~1313~X--1 A major prediction of most super-Eddington accretion theories is the presence of anisotropic emission from supercritical disks, but the degree of anisotropy and its dependence on energy remain poorly constrained observationally. A key breakthrough allowing us to test such predictions was the discovery of high-excitation photoionized nebulae around ultraluminous X-ray sources (ULXs). We present efforts to tackle the degree of anisotropy of the UV/EUV emission in super-Eddington accretion flows by studying the emission-line nebula around the archetypical ULX NGC~1313~X--1. We first take advantage of the extensive wealth of optical/near-UV and X-ray data from the \textit{Hubble Space Telescope}, \textit{XMM-Newton}, \textit{Swift}-XRT and \textit{NuSTAR} observatories to perform multi-band, state-resolved spectroscopy of the source to constrain the spectral energy distribution (SED) along the line of sight. We then compare spatially-resolved \texttt{Cloudy} predictions using the observed line-of-sight SED with the nebular line ratios to assess whether the nebula `sees' the same SED as observed along the line of sight. We show that, to reproduce the line ratios in the surrounding nebula, the photo-ionizing SED must be a factor $\approx 4$ dimmer in ultraviolet emission than along the line of sight. Such nearly-isotropic UV emission may be attributed to the quasi-spherical emission from the wind photosphere. We also discuss the apparent dichotomy in the observational properties of emission-line nebulae around soft and hard ULXs, and suggest that only differences in mass-transfer rates can account for the EUV/X-ray spectral differences, as opposed to inclination effects. Finally, our multi-band spectroscopy suggests the optical/near-UV emission is not dominated by the companion star.
INTRODUCTION

Ultraluminous X-ray sources are defined as extragalactic off-nuclear point-like sources with an X-ray luminosity exceeding the Eddington limit of a 10 M⊙ black hole (BH) (e.g. Kaaret et al. 2017; King et al. 2023). It is now established that the vast majority of these systems are powered by super-Eddington accretion onto a stellar-mass compact object in a binary configuration with a donor star. While it is speculated that the population of ULXs might be dominated by transient systems briefly reaching the ULX threshold (Brightman et al. 2023), most of the well-known systems shine persistently at such extreme luminosities, acting as laboratories for the study of sustained super-Eddington accretion. However, despite decades of studies, how such extreme luminosities are produced remains a matter of debate. One major prediction of super-Eddington accretion theory is the presence of highly anisotropic emission (Shakura & Sunyaev 1973; Poutanen et al. 2007). As the mass-transfer rate reaches or exceeds the Eddington limit, powerful radiation-driven optically-thick outflows are launched from the accretion disc, creating an evacuated cone or funnel around the rotational axis of the compact object. This causes observers at high inclination to see the reprocessed emission of the outflow photosphere, whereas observers peering down the funnel will see the hot emission from the inner parts of the accretion flow (Poutanen et al. 2007; Abolmasov et al. 2009). Even if the quantitative details may differ, there is now a body of numerical simulations which indeed reproduce the accretion flow geometry and anisotropic emission pattern envisioned by Shakura & Sunyaev (1973) (e.g. Kawashima et al. 2012; Narayan et al. 2017; Mills et al. 2023). The discovery of neutron stars (NSs) in ULXs through X-ray pulsations (Bachetti et al. 2014; Israel et al. 2017; Castillo et al.
2020) opened up new avenues in which super-Eddington accretion may proceed. For instance, it is known that magnetic fields reduce the cross-section for electron scattering, thereby increasing the allowed Eddington luminosities (Basko & Sunyaev 1976; Mushtukov et al. 2015). NSs are also expected to be more radiatively efficient compared to BHs, as the latter swallow the excess radiation which is otherwise emitted at the NS surface (Takahashi et al. 2018). Additionally, at high mass-transfer rates, the NS may be engulfed in an optically-thick magnetosphere, whose spectrum is predicted to emit instead as a multi-color blackbody with a dipolar temperature dependency (Mushtukov et al. 2017). Constraining the degree of anisotropy in ULXs is thus not only key to understanding the accretion flow geometry powering them and testing existing theories, but also imperative to understand the effect of the ULX on its environment. For instance, the exact radiative output will inform about the role of ULXs/X-ray binaries in the epoch of re-ionization (Madau & Fragos 2017) as well as help explain how or whether ULXs can shape some galaxy properties. In this regard, explaining the presence of a bright nebular HeII 4686 emission line in the integrated spectra of metal-poor galaxies remains a long-standing issue, as regular stars do not produce enough photons above its ionisation potential (IP = 54 eV) to explain it (Schaerer et al. 2019, and references therein). The discovery of HeIII regions around a few ULXs (Pakull & Mirioni 2002; Kaaret et al. 2004; Abolmasov et al. 2008), together with the fact that this line seems more prevalent in low-metallicity galaxies, where hard ionising sources such as X-ray binaries and ULXs are more common (e.g. Shirazi & Brinchmann 2012; Kovlakas et al. 2020; Lehmer et al. 2021), has made ULXs receive attention as a potential explanation for this so-called 'HeII problem' (e.g. Simmonds et al. 2021; Kovlakas et al.
2022). Whether that is the case remains uncertain, mainly due to the poor understanding of the ULX UV emission and anisotropy effects (Simmonds et al. 2021; Kovlakas et al. 2022). A key discovery allowing observers to constrain the degree of anisotropy was the observation of extended (25-80 pc) EUV/X-ray photoionized gas around a handful of ULXs (Pakull & Mirioni 2002). Such nebulae, when spatially resolved, effectively provide a 2D map of the ionising SED not directed onto the line of sight, allowing observers to compare the nebular emission lines with those expected from the line-of-sight SED. If the expected emission lines and the observed ones from the nebula are comparable, then the degree of anisotropy must be relatively small. Most works to date have focused on the HeIII region around the ULX Holmberg II X-1 (e.g. Pakull & Mirioni 2002; Kaaret et al. 2004; Berghea et al. 2010; Berghea & Dudik 2012). These works have found that the nebula sees a similar SED to that observed along the line of sight, arguing for isotropic emission. It must be noted, however, that most of these works were based on optical observations, with particular focus on the HeII 4686 line, which is most effective in probing the extreme-UV (EUV) emission. Instead, theoretical/numerical works suggest anisotropy must be strongest in the X-ray band (e.g. Poutanen et al. 2007; Narayan et al. 2017), although observations of the Galactic edge-on ULX-like system SS433 suggest collimation along the polar funnel may well take place in the EUV too (Waisberg et al. 2019). A crucial aspect is therefore the exact extrapolation of the X-ray spectrum to the inaccessible EUV. Evidence suggests that, in general, direct extrapolation of X-ray-derived models to the UV band is not accurate for ULXs (Dudik et al. 2016; Abolmasov et al. 2008). A way forward is therefore to combine broadband spectroscopy with nebular observations, of which only a few studies exist (Berghea et al. 2010; Berghea & Dudik 2012).
In this work, we attempt to constrain the degree of anisotropy in the ULX NGC 1313 X-1 using the ∼200 pc photoionized nebula we discovered in our previous work (Gúrpide et al. 2022). Here we combine state-resolved multi-band spectroscopy, which allows us to constrain the SED along the line of sight and reduce uncertainties related to the extrapolation to the UV, with spatially-resolved line maps from IFU spectroscopy (Gúrpide et al. 2022), which allow us to constrain the SED seen by the nebula along two different sight lines. We will show that the degree of collimation of the UV/EUV emission is small (a factor ∼4), in agreement with previous works (e.g. Kaaret et al. 2004). However, we will show that, unlike the ULXs Holmberg II X-1 or NGC 6946 X-1, NGC 1313 X-1 does not produce a strong HeIII region. We will argue that the reason most likely lies in the differences in mass-accretion rate between these three sources. This paper is structured as follows: in Section 2 we present our multi-band data reduction and its classification into the spectral states of NGC 1313 X-1 based on the long-term behaviour of the source. In Section 3 we present spectral modelling of the multi-band spectral states of NGC 1313 X-1. Section 4 presents the Cloudy photoionization modelling of the nebular emission along two different sight lines and, finally, Sections 5 and 6 present our Discussion and Conclusions.

DATA REDUCTION

In order to characterize the broadband SED of NGC 1313 X-1, we have investigated the available archival data to characterize the source's temporal and spectral properties. In particular, we have considered the long-term monitoring provided by the Swift-XRT (Burrows et al. 2005), optical photometry from the Hubble Space Telescope and X-ray spectroscopy from the XMM-Newton (Jansen et al. 2001) and NuSTAR (Harrison et al. 2013) observatories. The long-term lightcurve of NGC 1313 X-1 along with the hardness-intensity diagram (HID) is shown in Fig.
1 and was extracted using the standard online tools (Evans et al. 2007, 2009). From the Figure, it is clear how the source transits through two main states (a behaviour also supported by XMM-Newton observations; Gúrpide et al. 2021a; Pintore & Zampieri 2012). We refer to them as the 'high' (≳ 0.25 Swift-XRT ct/s) and 'low' (< 0.15 ct/s) states, respectively, hereafter. Such bi-modality is commonly observed in other ULXs (Luangtip et al. 2016; Amato et al. 2023) but it appears less strong in the case of NGC 1313 X-1 (Weng & Feng 2018). As we discuss below in Section 4, our nebular modelling is insensitive to short-term changes in flux. Therefore we focused on building the broadband SED of the source for these two broadly defined spectral states.

X-rays

X-ray spectral products from XMM-Newton and NuSTAR were taken from Gúrpide et al. (2021a). In particular, based on the recurrent behaviour of NGC 1313 X-1 in the Swift-XRT lightcurve (Fig. 1) and the analysis of Gúrpide et al. (2021a), we selected XMM-Newton and NuSTAR observations taken in 2012 (corresponding to 10XN and 11XN in the notation used by Gúrpide et al. (2021a), or XN1 in Walton et al. (2020a)), March 2017 (14XN or XN3, respectively) and August/September 2017 (17XXN or XN5, respectively) to characterise the low state. For the characterisation of the high state, we extracted the first 10 ks of XMM-Newton obsid 0803990101 together with the first 20 ks (owing to the lower number of counts) of NuSTAR obsid 30302016002 taken in June 2017 (15XN or XN4). More details on the characterization of the high state are provided in Appendix A. We noticed recent important calibration updates for NuSTAR, so we reduced the data using nuproducts version 2.1.2 with the most recent calibration files as of February 2023. Source and background regions were extracted following Gúrpide et al.
(2021a). All XMM-Newton (EPIC-PN and the MOS cameras) and NuSTAR spectra were rebinned using the scheme proposed by Kaastra & Bleeker (2016) and fitted over the band where the source dominated above the background (this was the ∼0.3-10 keV band for the XMM-Newton data and typically up to 20-25 keV for the NuSTAR data). All spectra had a sufficient number of counts per bin to use the χ² statistic.

In order to extract fluxes in the different filters from the optical counterpart identified by Yang et al. (2011), we performed aperture photometry using a 0.2"-radius circular aperture in the ACS/WFC and WFC3/UVIS detectors (corresponding to 4 and 5 pixels, respectively) and 0.15" (6 pixels) for the ACS/HRC. The aperture centroid was determined using 2D Gaussian fitting. We then corrected the counts for the finite aperture, following a two-step process, as recommended: first, because the aperture correction at small scales is known to vary with time and location on the detector, we estimated the aperture correction to 10 pixels using isolated bright stars in the field, typically selecting 1 to 4 stars (depending on the availability of isolated stars in the field) and averaging the results when possible. Next, we corrected the 10-pixel fluxes to infinity using the tabulated values for each combination of detector and filter. In instances where it was not possible to find any suitable star to estimate the aperture correction, we relied on the tabulated values for the full correction.
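A minimal sketch of this two-step correction (the numbers below are hypothetical; in practice the small-aperture-to-10-pixel fraction comes from field stars and the 10-pixel encircled-energy fraction from the tabulated instrument values):

```python
def aperture_correct(counts, frac_to_10px, ee_10px_to_inf):
    """Correct small-aperture counts to an 'infinite' aperture in two steps:
    (1) divide by the measured fraction of the 10-pixel flux enclosed in the
        small aperture (estimated from isolated field stars);
    (2) divide by the tabulated encircled-energy fraction at 10 pixels."""
    return counts / frac_to_10px / ee_10px_to_inf

# Hypothetical example: a 4-pixel aperture catching 80% of the 10-pixel flux,
# with 95% of the total flux enclosed at 10 pixels.
total = aperture_correct(1000.0, 0.80, 0.95)
print(round(total, 1))  # 1315.8
```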
The background level was determined by taking the 3σ-clipped median count rate in a concentric annulus around the source region. For the UVIS filters these regions contained several bright stars, so we instead used a nearby ∼0.7" circular region relatively free of stars but still containing some of the galaxy's diffuse emission. The uncertainties on the final background-subtracted count rates were derived assuming Poisson statistics for the source and background regions and accounting for the uncertainty on the aperture correction. In order to derive extinction-corrected fluxes, we first defined a likelihood function as the sum of the uncertainty-weighted residuals between the background-subtracted count rates and the count rates predicted by a model when convolved with the corresponding HST filter. Along with the source model parameters (described below), we modelled extinction using two components: a Galactic component using the Cardelli et al. (1989) reddening law with R_V = 3.1, and an additional, extragalactic component with R_V = 4.05 using the extinction curve from Calzetti et al. (2000), which may be appropriate for a star-forming galaxy such as NGC 1313. The Galactic component was fixed to the value along the line of sight (E(B−V)_G = 0.11; Schlafly & Finkbeiner 2011) while the extragalactic component was included as a Gaussian prior (mean E(B−V) = 0.15, σ = 0.03) on E(B−V). The constraints on the extragalactic extinction come from our MUSE spectrum and are justified later in Section 3.1, where we show there is additional extinction (A_V = 0.59 ± 0.11 mag) towards NGC 1313 X-1. By including E(B−V) in this manner we were able to potentially constrain it further based on the HST filters or, in the worst-case scenario, propagate its uncertainties to the final deabsorbed fluxes. However, in all instances we could not constrain it further and obtained our prior back in the posteriors of E(B−V).
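As an illustration of the extragalactic extinction component, the Calzetti et al. (2000) attenuation curve with R_V = 4.05 can be evaluated directly; a sketch (the two-branch polynomial is the published parametrization, with wavelengths in microns; function names are illustrative):

```python
def calzetti_k(wav_um, Rv=4.05):
    """Calzetti et al. (2000) attenuation curve k(lambda), lambda in microns."""
    if 0.63 <= wav_um <= 2.20:
        return 2.659 * (-1.857 + 1.040 / wav_um) + Rv
    if 0.12 <= wav_um < 0.63:
        x = 1.0 / wav_um
        return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + Rv
    raise ValueError("wavelength outside the 0.12-2.20 micron range")

def deredden(flux, wav_um, ebv, Rv=4.05):
    """Correct an observed flux for attenuation A_lambda = k(lambda) * E(B-V)."""
    return flux * 10 ** (0.4 * calzetti_k(wav_um, Rv) * ebv)

# Sanity check: k(0.55 um) is close to Rv by construction of the curve.
print(round(calzetti_k(0.55), 2))  # 4.05
```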
For the source model, we assumed the same absorbed power law (F_λ = β λ_p^α, where F_λ are the fluxes in erg/cm²/s/Å and λ_p are the pivot wavelengths of the filters) for all filters in a given epoch. Exceptions to this were epoch 2004-02-22, where the fluxes clearly deviated from a single power law (χ² > 20 for 1 degree of freedom; see also figure 4 in Yang et al. 2011), and epochs for which only one measurement was available. In these cases we assumed a flat spectrum in erg/cm²/s/Å, parametrised by its amplitude A. To estimate the best-fit model parameters and their uncertainties, we drew parameter samples (either α, β, E(B−V) or A, E(B−V)) to sample the posteriors using 32 Markov-Chain Monte Carlo (MCMC) chains with the emcee package (Foreman-Mackey et al. 2013). The chains were run until a) the number of steps reached 100 times the integrated autocorrelation time (τ), which was estimated on the fly every 800 samples, and b) τ changed by less than 1% compared to the previous estimate. We then discarded the first 30 samples as burn-in and thinned the chains by τ/2. The final intrinsic fluxes and their uncertainties were estimated by drawing 2,000 realizations of α, β, E(B−V) (or A, E(B−V) in cases of a flat spectrum) from the posteriors to estimate the distribution of best-fit intrinsic fluxes in each filter. The mean and the 1σ confidence interval of the distribution were taken as our final estimates (these posteriors were symmetric and Gaussian-like in all instances). The final extinction-corrected ST magnitudes and fluxes are reported in Table 1. While the exact values obviously differ from those reported by Yang et al. (2011) due to the different treatment of the extinction, our analysis is in agreement with the variability reported in the F555W filter by Yang et al. (2011). The rest of the filters for which multi-epoch data exist are approximately consistent within uncertainties. We verified that correcting only for foreground extinction as in Yang et al. (2011) yielded fluxes in good agreement with their results.
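The stopping criterion described above can be encapsulated in a small helper; a sketch (names are illustrative, not taken from the paper's code), where a chain is declared converged once it is at least 100 integrated autocorrelation times long and the τ estimate has stabilised to within 1%:

```python
def chains_converged(n_steps, tau, tau_prev, length_factor=100, rel_tol=0.01):
    """Convergence test for an MCMC run, following the two criteria in the text:
    (a) the chain length exceeds `length_factor` autocorrelation times tau, and
    (b) tau changed by less than `rel_tol` (1%) since the previous estimate.
    `tau`/`tau_prev` may be scalars or per-parameter sequences."""
    taus = list(tau) if hasattr(tau, "__iter__") else [tau]
    prevs = list(tau_prev) if hasattr(tau_prev, "__iter__") else [tau_prev]
    long_enough = all(n_steps > length_factor * t for t in taus)
    stable = all(abs(t - p) / t < rel_tol for t, p in zip(taus, prevs))
    return long_enough and stable

print(chains_converged(4000, 50.0, 50.2))  # False: chain is only 80 tau long
print(chains_converged(8000, 50.0, 50.2))  # True: 160 tau long, 0.4% change
print(chains_converged(8000, 50.0, 55.0))  # False: tau still drifting (10%)
```

In a real run, τ would be re-estimated every 800 steps (e.g. with `emcee`'s autocorrelation tools) and this check applied each time.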
We also attempted to supplement the multi-band data with UV fluxes from the Optical Monitor (OM) onboard XMM-Newton. We ran omichain on obsid 0803990101 and found that NGC 1313 X-1 is detected with a significance above the 8σ level in the UV filters, while it is undetected in the optical filters. We used the default aperture of 6" in radius and converted the instrumental-corrected count rates to fluxes using the average tabulated values. However, upon comparison of the OM fluxes with those from HST in similar bands, we found the OM fluxes (≳10⁻¹⁶ erg/cm²/Å/s) to be two orders of magnitude overestimated, owing to the large aperture which likely contains significant stellar contribution. We concluded that the contribution of NGC 1313 X-1 to the detection must be minimal and discarded the OM data from further analysis.

Source spectral states

In order to place the HST observations in context with respect to the X-ray spectral states, we looked in the archives for X-ray data taken simultaneously with the HST observations. We found three Chandra observations simultaneous with the first two sets of HST observations (taken in November 2003 and February 2004), which were also presented in Yang et al. (2011). The details of the Chandra observations are given in Table 2. The 2014 WFC3/UVIS observations were instead covered by the Swift-XRT long-term monitoring (Fig. 1).
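Given the count-rate thresholds defined in Section 2 (high ≳ 0.25 Swift-XRT ct/s, low < 0.15 ct/s), assigning a simultaneous X-ray measurement to a spectral state reduces to a simple cut; a sketch (the function name is illustrative):

```python
def swift_state(ct_rate):
    """Classify a Swift-XRT count rate (ct/s) into the two spectral states
    defined in the text; rates between the cuts are left unclassified."""
    if ct_rate >= 0.25:
        return "high"
    if ct_rate < 0.15:
        return "low"
    return "unclassified"

print(swift_state(0.08))  # low
print(swift_state(0.30))  # high
```

The same cut applies to the equivalent Swift-XRT count rates inferred from the Chandra observations, as done in the text.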
The Swift-XRT places the HST 2014 observations when NGC 1313 X-1 was in the low state. The Chandra data simultaneous with the other HST observations require more detailed modelling, particularly given the presence of pile-up in some observations. We therefore relegate the full analysis to the Appendix (Section A) but, briefly, we found that despite the presence of pile-up, we can confidently place NGC 1313 X-1 in the high state during the November 2003 observations, and in the low state during the February 2004 observations. The unabsorbed fluxes and luminosities for the best fits for both Chandra observations are reported in Table 2 along with the determined spectral state. Fig. 1 shows the two equivalent Swift-XRT count rates determined from the Chandra observations on the Swift-XRT lightcurve. As a summary, Table 3 presents all the data gathered to characterize the broadband SED for the low and high states, respectively. We proceed to characterise the state-resolved broadband SED of the source in the following Section.

MULTI-BAND SED MODELLING: WHAT WE SEE

Our aim here is mostly focused on constraining the EUV emission along the line of sight by testing different spectral models. We will then test these models using Cloudy photo-ionization modelling of the emission-line nebula. Additionally, we wish to determine whether the optical emission is dominated by the companion star or by reprocessing in the outer disc, which remains unclear (e.g. Grisé et al. 2012) and can only be tackled with strictly simultaneous broadband data, of which only a few studies exist (e.g. Soria et al. 2012; Sathyaprakash et al. 2022).
Extinction

Modelling of the optical data requires knowledge of the level of extinction towards NGC 1313 X-1. [Notes to Tables 1 and 2: magnitudes have been corrected for Galactic extinction along the line of sight and for additional extinction towards NGC 1313 X-1; uncertainties represent the 1σ confidence level and account for the uncertainties on the aperture correction, the best-fit model parameters and the extinction used to derive the intrinsic fluxes; '−' indicates there is no simultaneous X-ray information to categorise the source in one of its two states. The total extinction used for the correction was estimated from the Balmer decrement. In all fits N_H was frozen to the value derived by Gúrpide et al. (2021a); uncertainties are given at the 90% confidence level for one parameter of interest; columns list the photon index of the power law or the temperature of the diskbb component and the unabsorbed luminosity over the same band; one observation was not used as it was simultaneous with obsid 4750, which instead was free of pile-up.] We used the MUSE data presented in Gúrpide et al. (2022) to estimate the level of extinction from the ratio of the Balmer lines (Hα and Hβ). To this end, we extracted an average spectrum from cube 1 in that work, from a circular region of 1" (corresponding roughly to the PSF FWHM) around the optical counterpart. We then corrected the spectrum for Galactic extinction using the Cardelli et al.
(1989) extinction curve with R_V = 3.1. From this foreground-extinction-corrected average spectrum we measured F(Hβ) = 4.2 ± 0.1 and F(Hα) = 14.2 ± 0.1, both in units of 10⁻¹⁸ erg/s/cm², using a simple constant model for the local continuum around each line and a Gaussian for the line itself. The Balmer decrement F(Hα)/F(Hβ) = 3.39 ± 0.11 suggests there is additional extinction towards NGC 1313 X-1. The extinction correction requires knowledge of the intrinsic Balmer decrement, which depends on the electron density (n_e) and temperature (T) of the gas (Osterbrock & Ferland 2006). While we do not have access to temperature-sensitive line diagnostics, the typical values we found in these regions for the electron-density-sensitive line ratio correspond to the low-density regime, which is nearly temperature-insensitive, indicating n_e < 100 cm⁻³. Assuming case B recombination and T = 10,000 K (broadly consistent with our nebular modelling; Section 4), the intrinsic ratio is Hα/Hβ = 2.863 (noting that the likely absence of shocks in this region also constrains T < 20,000 K; Osterbrock & Ferland 2006). Therefore we found E(B−V) = 0.15 ± 0.03, adopting the Calzetti et al. (2000) extinction curve with R_V = 4.05 as stated in Section 2. The total absorption is thus A_V = 0.93 ± 0.11 mag.
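The chain of numbers above can be reproduced directly from the Calzetti curve; a sketch, assuming the case B intrinsic ratio Hα/Hβ = 2.863 and the Güver & Özel (2009) relation N_H = 2.21 × 10²¹ A_V cm⁻² discussed in the text:

```python
import math

def calzetti_k(wav_um, Rv=4.05):
    # Calzetti et al. (2000) attenuation curve, wavelengths in microns
    if wav_um >= 0.63:
        return 2.659 * (-1.857 + 1.040 / wav_um) + Rv
    x = 1.0 / wav_um
    return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + Rv

HA, HB = 0.6563, 0.4861          # Halpha, Hbeta wavelengths (microns)
obs_ratio, intrinsic = 3.39, 2.863

# Reddening from the Balmer decrement
ebv = 2.5 / (calzetti_k(HB) - calzetti_k(HA)) * math.log10(obs_ratio / intrinsic)
av_extra = 4.05 * ebv                 # extra extinction towards the source
av_total = av_extra + 3.1 * 0.11      # add Galactic A_V = R_V * E(B-V)_G
nh = 2.21e21 * av_total               # Guver & Ozel (2009)

print(round(ebv, 2), round(av_extra, 2), round(av_total, 2))  # 0.14 0.58 0.93
print(f"{nh:.1e}")  # 2.0e+21
```

The results match the values quoted in the text: E(B−V) ≈ 0.15 ± 0.03, A_V ≈ 0.59 extra and 0.93 total, and N_H ≈ (2.1 ± 0.3) × 10²¹ cm⁻².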
Another argument supporting the additional level of extinction can be made considering the relationship between the neutral hydrogen absorption column (N_H) and reddening found by Güver & Özel (2009) using simultaneous X-ray and optical observations of Galactic supernova remnants: N_H = (2.21 ± 0.09) × 10²¹ A_V cm⁻². Using the Galactic value along the line of sight, E(B−V) = 0.11 (Schlafly & Finkbeiner 2011) with R_V = 3.1, would suggest N_H ∼ 7.5 × 10²⁰ cm⁻², which is an order of magnitude below the values derived from X-ray spectral fitting. We will additionally show that all models used below also require N_H > 2 × 10²¹ cm⁻². Instead, using the total A_V derived above suggests N_H = (2.1 ± 0.3) × 10²¹ cm⁻², which is consistent within the order of magnitude with the values derived from X-ray spectral fitting and the values derived below. This suggests our estimate for the amount of reddening is reasonable.

Spectral fits

To model the spectra we decided to use two physically motivated models along with a third phenomenological model typically used to describe the 0.3-25 keV emission in ULXs, which we summarize below. The low state presented the well-known strong residuals at around 1 keV (Middleton et al. 2015b), which have been associated with radiatively-driven relativistic outflows (Pinto et al. 2016, 2020). These residuals have a minor impact on the continuum estimation but they can affect the estimation of the level of neutral absorption. Moreover, because we wanted our χ² to reflect closely the goodness of fit, we modelled the residuals with a Gaussian at ∼1 keV with σ ∼ 0.09 keV, which resulted in Δχ² improvements upwards of 100 (depending on the exact model) for 3 degrees of freedom. Neutral X-ray absorption was modelled with a tbabs model component as explained in Appendix A. Whether N_H is variable in ULXs remains a contested issue, virtually unstudied owing to the lack of physically motivated models. In NGC 1313 X-1, Gúrpide et al.
(2021a), using phenomenological models, showed that N_H varies only in an unusual state of the source, termed 'obscured state' in that work (see also Middleton et al. 2015b). Since this state is not considered here, we decided to tie N_H between the high and low states, while leaving the rest of the parameters free to vary unless stated otherwise. Below, we summarise the models employed and the resulting fits to the data. Results from the spectral fitting are reported in Table 4 and Fig. 2 shows the best-fit spectral models and residuals.

• diskir: The diskir model is an extension of the multicolour disc blackbody (Mitsuda et al. 1984) which takes into account self-irradiation effects by a Compton tail and by emission from the disc inner regions (Gierliński et al. 2009). The most relevant parameters for the UV/optical data are the fraction of the reprocessed flux which is thermalized in the disc (f_out) and the outer disc radius (R_out), whereas the rest of the parameters are constrained by the X-ray data. This model has been used to fit the broadband spectrum of ULXs by analogy with XRBs (Grisé et al. 2012; Tao et al. 2012; Vinokurov et al. 2013; Sutton et al. 2014; Dudik et al. 2016), finding satisfactory agreement with the data, although always with some degeneracy with the companion star (Grisé et al. 2012; Tao et al. 2012). It must be noted that most of these works did not use simultaneous data, which has been shown to be crucial to discriminate between models (e.g. Dudik et al. 2016). Compared to existing works (Grisé et al. 2012; Vinokurov et al. 2013; Dudik et al.
2016), where some parameters were frozen (namely those concerning the Compton tail: the electron temperature kT_e and the ratio of the luminosity in the Compton tail to that of the unilluminated disc, L_C/L_D), we were able to constrain all parameters simultaneously thanks to the high-energy coverage provided by the NuSTAR data. We found good consistency of the L_C/L_D parameter between the high and low states, which is unsurprising given the lack of variability of NGC 1313 X-1 above ∼10 keV (Gúrpide et al. 2021a; Walton et al. 2020a). We therefore tied this parameter between the high and the low states. The fit provides a good description of the continuum with χ²/dof = 1616/1264, as can be seen from Fig. 2. We note, however, that the reprocessing fraction for the low state (f_out ∼ 4 × 10⁻²) is about an order of magnitude higher than found in XRBs (see Urquhart et al. 2018, and references therein). In using this model, we do not necessarily have a physical interpretation in mind, particularly because we expect the accretion flow geometry to deviate substantially from the sub-Eddington accretion flow geometry assumed by the model. We rather consider the diskir as a proxy to obtain a realistic extrapolation to the inaccessible EUV.

• sirf: The self-irradiated multi-color funnel can be considered an extension of the diskir to the supercritical regime, in which the disc/wind acquires a cone-shaped geometry altering the self-irradiation pattern (Abolmasov et al. 2009). As a caveat, we note that this model does not include Comptonization or the irradiation of the outer disc by the disc wind itself (e.g. Middleton et al. 2022).
For simplicity, we fixed some parameters, which we considered as nuisance parameters for our analysis, to their fiducial values. These were the velocity-law exponent of the wind, which we fixed to −0.5 (i.e. a parabolic velocity law), and the adiabatic index (γ), which we fixed to 4/3 as the gas is radiation-pressure supported. Leaving the outflow photosphere radius R_out free for both states yielded a loosely constrained and highly degenerate fit. We further noted that this parameter did not strongly affect the fits. We managed to obtain successful fits by leaving R_out free for the high state, while for the low state we fixed R_out to 100 R_sph. We also included self-irradiation effects and found 4-5 iterations were sufficient to reach convergence. The fit was insensitive to the inclination i as long as it was smaller than the half-opening angle of the funnel θ_f, so we fixed it to 0.5° (i.e. nearly face-on), as expected for a ULX such as NGC 1313 X-1 (Middleton et al. 2015a; Gúrpide et al. 2021a). Solutions with i > θ_f were statistically excluded by the data. The model fitted the continuum reasonably well (Fig. 2), although substantially worse than the diskir (χ²/dof = 1714.2/1268). This model cannot reproduce the fluxes redward of ∼5555 Å, particularly in the low state (where the residuals are ≳7σ). Perhaps unsurprisingly, the parameters are similar to those found for Holmberg II X-1 and NGC 5204 X-1 (Gúrpide et al. 2021b), but with a narrower opening angle of the funnel (θ_f ∼ 37°) and a lower Eddington mass-ejection rate ṁ_eje compared to Holmberg II X-1. As can be seen from Fig. 2, the main difference with respect to the diskir model is the prediction in the EUV: the sirf predicts a UV flux more than an order of magnitude higher than the diskir, with the UV flux peaking around the He⁺ ionization potential of 54 eV (Fig. 2). Contrary to the diskir model, this model predicts little difference in the UV/optical emission between the high and low states.
• phenomenological: Finally, we tested a phenomenological model based on an absorbed dual thermal diskbb, including upscattering of the hard diskbb through the empirical simpl model (Steiner et al. 2009) (see e.g. Gúrpide et al. 2021a; Walton et al. 2020a). The complete model in XSPEC was tbabs⊗(diskbb + simpl⊗diskbb). Here too we tied the photon index Γ between the high and low states owing to the aforementioned lack of variability above ∼10 keV. Because the model makes no account for the optical/near-UV emission, we fitted this model to the X-ray data only. Fig. 2 shows this model severely underestimates the UV/optical fluxes compared to the other models. Similar underpredictions of the optical/UV fluxes from direct extrapolation of X-ray spectral models were reported for other ULXs such as Holmberg IX X-1 (Dudik et al. 2016) and NGC 6946 X-1 (Abolmasov et al. 2008).

Donor star or irradiated disc?

As we have seen, the only model capable of explaining the broadband data is the diskir, in agreement with earlier works (Berghea & Dudik 2012; Grisé et al. 2012). Instead, the phenomenological and sirf struggle to describe the optical data. We thus considered an alternative description of the data, in which the optical/near-UV emission may have an additional contribution from the donor star (in Section 5.4 we discuss whether such an interpretation holds physically). We approximated its spectrum using a blackbody (bbody in XSPEC) alongside the emission from the accretion flow. We further assumed the putative star parameters remain constant between the high and low states.
Table 4 shows the resulting best-fit parameters, including the constraints on the star temperature T∗ and radius R∗, while Fig. 3 shows the best-fit models and residuals. Both the phenomenological and the diskir would favour R∗ ∼ 7 R⊙ and T∗ ∼ 30,000 K. This would imply an O-type star with L ∼ 78,000 L⊙ and a mass of >20 M⊙ (Ekström et al.
2012). On the other hand, the sirf requires an R∗ ∼ 26 R⊙ and T∗ ∼ 8,000 K star, which would correspond to an F0-A star with a mass of <9 M⊙ (Ekström et al. 2012). We discuss the implications of these results in more detail in Section 5.4.
The blackbody provides a significant fit improvement for both the phenomenological and the sirf (for instance, Δχ² = 84 for 2 degrees of freedom for the sirf model). However, the diskir again provides the best overall fit (χ²/dof = 1613.6/1262), without much improvement with respect to the starless model, as the optical fluxes were already well described by the disc emission alone. A similar result was found by Berghea & Dudik (2012) in NGC 6946 X-1. The sirf provides an only slightly worse fit in terms of χ² (1630.2/1266), but as can be seen from Figure 3 this model offers a poor description of the high-energy (>10 keV) tail. This may be expected as the model does not include Comptonization. However, if we consider the trade-off between the likelihood and the number of parameters, in terms of the Bayesian Information Criterion (BIC; Schwarz 1978) it could be argued that the sirf provides a better description of the data, as it offers the lowest BIC (ΔBIC ≃ −12 compared with the diskir).
The phenomenological provides a marginal description of the data (reduced χ² ≈ 1.4), offering a poor description of the high-energy (>10 keV) tail and the optical data. We have verified that the high-energy residuals persist even if we allow Γ to vary between the high and low states (reduced χ² = 1.42), indicating that the phenomenological model cannot account simultaneously for both the high-energy and optical emission. In terms of overall fit, therefore, the best representation of the line-of-sight SED is given by the diskir, because it can account simultaneously for both the optical data and the high-energy tail. In Section 4.2 we put further constraints on the line-of-sight SED and provide supporting evidence for the UV extrapolation provided by the diskir.
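As a quick numerical check of the likelihood-versus-complexity trade-off described above, the BIC comparison can be reproduced from the quoted χ² values and degrees of freedom. The absolute number of sirf free parameters below is an assumption; only the difference of 4 parameters implied by the dof values matters for ΔBIC:

```python
import math

def bic(chi2, k, n):
    # Gaussian-likelihood BIC: chi^2 plus the complexity penalty k*ln(n)
    return chi2 + k * math.log(n)

# chi^2/dof values quoted in the text for the star-inclusive fits
chi2_diskir, dof_diskir = 1613.6, 1262
chi2_sirf, dof_sirf = 1630.2, 1266

# The dof difference implies diskir has 4 more free parameters than sirf.
k_sirf = 10               # assumed absolute number of sirf free parameters
k_diskir = k_sirf + 4
n = dof_sirf + k_sirf     # number of fitted data bins

delta_bic = bic(chi2_sirf, k_sirf, n) - bic(chi2_diskir, k_diskir, n)
print(round(delta_bic, 1))
```

With ~1300 data bins the penalty of 4 extra parameters (≈ 4 ln n ≈ 29) outweighs the Δχ² of 16.6, giving ΔBIC ≈ −12 in favour of the sirf, as quoted in the text; the result is insensitive to the assumed k_sirf.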
Crucially, these models make different predictions about the UV flux. This is clearly reflected in Figures 2 and 3. The UV luminosity (defined as the luminosity in the 3 eV-0.1 keV band) is also reported in Table 4. The phenomenological predicts the lowest amount, differing by more than an order of magnitude from the prediction made by the sirf. We are now ready to test these predictions against the emission-line nebula.

Cloudy MODELLING: WHAT THE NEBULA 'SEES'

Having constrained the broadband SED of NGC 1313 X-1, we now turn to examine the effects on its environment through two different sight-lines: sideways, by studying the surrounding, extended nebula reported in Gúrpide et al. (2022) (Section 4.1), and along the line of sight, by studying the line-of-sight integrated nebular spectrum (Section 4.2). In order to select an appropriate ionising SED for our photo-ionization modelling, we noted the numerical calculations presented by Chiang & Rappaport (1996), who studied nebulae photoionized by variable supersoft X-ray sources. Their calculations considered SEDs and densities physically relevant, to a good degree, for ULX nebulae. These authors showed that so long as the duty cycle of the source is much shorter than the recombination timescales, the nebula will effectively 'see', or react to, the equivalent of a time-averaged SED of the source. In diffuse nebulae the longest recombination timescale is that of hydrogen (Osterbrock & Ferland 2006), which, based on typical values from our nebular modelling, we find to be of the order of τ(H⁺) ∼ 2×10⁵ yr. Shorter timescales are typically τ(He⁺⁺) ∼ 2,000 yr and τ(O⁺⁺⁺) ∼ 900 yr, therefore much longer than the variability timescales of the source. Assuming the Swift-XRT variability is representative of the overall variability of NGC 1313 X-1 (also supported by XMM-Newton observations; Gúrpide et al. 2021a), the temporal average (black dashed line in Fig. 1) suggests the SEDs from the low state are a good approximation for the ionising SED, as the excursions to the high state are too rapid to have any meaningful impact on the overall ionising radiation. We return to this point in Section 5.5.

Table 4. Best-fit parameters to the multi-band SED of NGC 1313 X-1. All models were fitted to the multi-band data except for the phenomenological without the blackbody, which was fitted to the X-ray data only. Notes. Uncertainties are given at the 1σ level. All luminosities are corrected for absorption. Note the change in the number of degrees of freedom here is due to the addition of the HST data in the fits which include the bbody.

We therefore created a set of Cloudy (version C22.02; Ferland et al. 2017) models using our best-fit (extinction-corrected) low-state SEDs from Table 4, totalling 6 different SEDs (3 of them including the contribution from the putative companion star). We tested two sets of metallicities for the gas, Z = 0.15 Z⊙ and 0.3 Z⊙, corresponding to 12 + log(O/H) = 7.86 and 12 + log(O/H) = 8.17, representative of the nebula around NGC 1313 X-1 (Gúrpide et al. 2022), and assumed a filling factor of 1 and an open geometry. The centre of the nebula was taken as the ULX position in the data cube presented in Gúrpide et al. (2022).
We carried out two sets of calculations: one where we assumed the ULX was the only source of ionisation and another where we included an ionising stellar background from the stars in the field. In order to include a realistic stellar ionising background, we obtained a composite stellar spectrum from the Binary Population and Spectral Synthesis (BPASS) v2.1 code (Eldridge et al. 2017), including masses up to 100 M⊙. We adopted a metallicity matching that of the gas and an age of the stellar population matching that of the nearby stars (τ = 10^7.5 yr; Yang et al. 2011). We followed Simmonds et al. (2021) and rescaled the spectrum based on the star-formation rate (SFR) of NGC 1313. To do so, we noted Suzuki et al.
(2013) measured the SFR surface density in NGC 1313 to be ∼0.01 M⊙ yr⁻¹ kpc⁻². Considering the region of interest here (∼0.04 kpc²), we found a local SFR = 0.0004 M⊙ yr⁻¹. Using the scaling between UV luminosity and SFR from Kennicutt (1998), a bolometric luminosity for the stellar background of ∼9.5×10³⁹ erg s⁻¹ is suggested. This value is only accurate to the order of magnitude owing to uncertainties related to the spatial distribution of the stars in the field, the potential contribution from stars outside the photo-ionized region and their exact distance(s) to the nebula. Therefore we ran our calculations for three different luminosity values for the stellar background, L = 2.75 × 10⁴⁰, 9.5×10³⁹ and 2.75×10³⁹ erg/s, in order to consider the effects of varying this parameter. As we show below, independently of the exact luminosity, all calculations strongly support the fact that stars contribute to the line ratios of the nebula. The best results were found for L = 2.75 × 10⁴⁰ erg/s and we focus on the results obtained for this luminosity throughout, although we also discuss the less luminous cases below.
As stated in Gúrpide et al. (2022), the Heii 4686 line was not detected in cube 2, taken in extended mode. The Cloudy predictions we make below prompted us to examine carefully the presence of this line, or at least derive an upper limit on its flux, in order to further constrain our models. Because we found the line too faint for a pixel-by-pixel fit, we constructed a flux map by integrating the spaxels around the expected position of the Heii 4686 line based on the systemic redshift of NGC 1313 (z = 0.001568), from 4688.5 Å to 4698.6 Å, and subtracting the mean value of the nearby continuum. The left panel of Fig.
5 shows the resulting map, resampled by a factor of 3 to highlight a tenuous feature close to the ULX position. We extracted an average spectrum from this region (shown in blue in the right panel of the same figure). Following Gúrpide et al. (2022), we measured an average E(B−V) value in the same region of E(B−V) = 0.179±0.005 mag. With this value, and using the Calzetti et al. (2000) extinction curve with R_V = 4.05, we arrived at a total Heii 4686 luminosity of (7.5±1.4)×10³⁵ erg/s. While we do not use this value directly in our model-data comparison, we discuss it further below.

The Side View

In order to study the extended surrounding nebula discovered in Gúrpide et al. (2022), we converted the 1D Cloudy predictions to 2D images and resampled them to the MUSE pixel scale of 0.2". We then blurred the images by applying a 2D Gaussian kernel with a FWHM matching the datacube's PSF (∼1"; Gúrpide et al. 2022). Next we compared the line ratios from these sets of Cloudy-generated images to the real MUSE-derived line-ratio images. We restricted the data-model comparison to the regions where the gas was identified as being EUV/X-ray photoionized, where the ULX contribution dominates (see Figure 7 below) and where the influence of shocks is minimized (Gúrpide et al. 2022). We chose to work with line ratios rather than fluxes in order to reduce uncertainties related to distance, extinction and geometry.
The geometry above implicitly assumes that the cloud is approximately co-planar with the ULX in the plane of the sky and that the dimension along the line of sight is much smaller than the dimensions in the plane, such that the line ratios along a given line of sight are approximately constant. We are working on extending our modelling to more complex 3D structures and will present a more refined treatment of the cloud geometry in a future publication, but note that our preliminary results assuming a projected, spherical sector give results qualitatively consistent with those presented here.
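The dereddening step above can be illustrated with a short sketch evaluating the optical branch (0.12-0.63 μm) of the Calzetti et al. (2000) attenuation curve at the Heii 4686 wavelength, with the R_V = 4.05 and E(B−V) = 0.179 values used in the text. The resulting flux correction factor (~2.2) is our own arithmetic, not a number quoted in the paper:

```python
def k_calzetti(lam_um, Rv=4.05):
    """Calzetti et al. (2000) attenuation curve k(lambda), optical branch
    valid for 0.12 <= lambda <= 0.63 micron."""
    x = 1.0 / lam_um
    return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + Rv

ebv = 0.179      # E(B-V) measured around the ULX position (mag)
lam = 0.4686     # Heii 4686 rest wavelength in microns

A = k_calzetti(lam) * ebv     # extinction in magnitudes at 4686 Angstrom
corr = 10 ** (0.4 * A)        # multiplicative flux correction factor
print(round(A, 2), round(corr, 2))
```

The observed line flux is multiplied by this factor (≈2.2) before converting to a luminosity at the adopted 4.25 Mpc distance.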
For each line ratio ([O iii]5007/Hβ, [O i]6300/Hα, [N ii]6583/Hα, [S ii]6716/Hα, [S ii]6716/[S ii]6730, [Ar iii]7135/Hα and [S iii]9069/Hα) we computed a χ² using the model- and data-pixel values and uncertainties on the flux ratio propagated from the measurement error of each line in each spaxel (estimated from the pixel-by-pixel Gaussian-line fitting presented in Gúrpide et al. 2022). By adding each individual line-ratio χ², we were able to select the model with the overall lowest χ². We did not use [O iii]4959 nor [N ii]6548, as their fluxes were tied by theoretical constraints (Storey & Zeippen 2000). Given other sources of uncertainty, such as the exact density profile, geometry, clumpiness of the cloud, projection effects, other sources in the field, etc., and the fact that some of the pixels in our models will have no values in them (see below), our χ² should be regarded as a heuristic to select the best model, rather than the usual goodness of fit.
We produced models for a range of constant hydrogen number densities log(n_H), varying from 0.0 to 1.0 in steps of Δlog(n_H) = 0.05 (noting that the [S ii] lines indicate n_e < 100 cm⁻³; we also ran models with higher densities, but they clearly failed to reproduce the extent of the nebula), and inner radius of the cloud r_in = 1, 12.5, 22.3, 40 pc, setting values inside the cavity or outside of the Cloudy calculation to 0. The outer radius was set to 200 pc, roughly matching the extent of the high [O i]6300/Hα ratio and avoiding the emission from a stellar cluster further south (see the bright blob to the far south of the ULX in Figure B1). Fig. 6 shows example χ² contours derived for the diskir_star model (circular markers) for the two metallicity values (green and blue for Z = 0.15 Z⊙ and Z = 0.3 Z⊙, respectively), which also illustrates that the effects of varying r_in are minimal. The Figure also
shows the resulting χ² contours for the same ULX model when the stellar background is included (star-shaped markers). It is clear that the inclusion of the stellar background improves the fit significantly. Fig. 7 shows a 2D data-model comparison for the best-fit diskir_star model with Z = 0.15 Z⊙, for the ULX and ULX + stellar background runs, to illustrate the region we probed and the type of comparisons we carried out.
Tables C2 and 5 show the resulting overall χ², alongside the maximum observed line ratios, for the best-fit models and the data for Z = 0.3 Z⊙ and Z = 0.15 Z⊙, respectively. The models with Z = 0.3 Z⊙ overpredict all line ratios by a factor of ∼2, regardless of whether the stellar background is included or not (Table C2). We were therefore able to reject this metallicity based on the data. This may be surprising, as this value matches more closely the metallicity inferred around NGC 1313 X-1. However, as we show below, we clearly find that the models with Z = 0.15 Z⊙ match the observed line ratios more reasonably (see also Fig. 6). We note that in Gúrpide et al. (2022) we were unable to measure the metallicity in the photoionized region itself, as metallicity estimators are mostly calibrated for standard Hii regions. Hence it may be possible that the nebula has a lower metallicity content than the neighbouring gas. In any case, although such a metallicity is low, it is not unrealistic, as it is at the lower end of the values measured in Gúrpide et al. (2022). We therefore focused on the results for Z = 0.15 Z⊙.
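The model ranking described above can be sketched as a per-ratio χ² summed over spaxels, skipping empty model pixels. The maps, line names and values below are toy stand-ins for the MUSE and Cloudy images, not the actual data:

```python
import numpy as np

def ratio_chi2(model_maps, data_maps, err_maps):
    """Sum chi^2 over all line-ratio images, masking invalid pixels."""
    total = 0.0
    for name in data_maps:
        m, d, e = model_maps[name], data_maps[name], err_maps[name]
        mask = np.isfinite(m) & np.isfinite(d) & (e > 0) & (m > 0)
        total += np.sum(((d[mask] - m[mask]) / e[mask]) ** 2)
    return total

# toy 5x5 maps standing in for the MUSE line-ratio images (hypothetical values)
data = {"OIII/Hb": np.full((5, 5), 4.0), "OI/Ha": np.full((5, 5), 0.2)}
errs = {k: np.full((5, 5), 0.5) for k in data}
model_a = {"OIII/Hb": np.full((5, 5), 4.5), "OI/Ha": np.full((5, 5), 0.25)}
model_b = {"OIII/Hb": np.full((5, 5), 6.0), "OI/Ha": np.full((5, 5), 0.6)}

chi_a = ratio_chi2(model_a, data, errs)
chi_b = ratio_chi2(model_b, data, errs)
print(chi_a, chi_b)   # the grid point closer to the data gets the lower chi^2
```

In the actual analysis the grid spans log(n_H) and r_in, and the minimum over the grid is taken as the best model for each SED.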
In the case of no stellar background, both the diskir and the phenomenological offer comparable levels of agreement with the data, superior to that offered by the sirf. The main difference between the phenomenological and the diskir is in the peak [O iii]5007/Hβ ratios, due to the lower UV level of the former model. We note that it is actually possible to reproduce peak [O iii]5007/Hβ ratios in the 5-6 range with the phenomenological model (cf. 5-7.5 for the diskir and 6-7.5 for the sirf), therefore compatible with the data. However, due to the lower overall values, coupled with the smaller region of enhanced [O iii]5007/Hβ produced by the phenomenological owing to its lower UV flux, the spatial smoothing smears these values below 3, which highlights the importance of considering the spatial scale.
As shown in Fig. 7 (upper panels), the extent of the [O iii]5007/Hβ area is roughly matched, although it is too compact with respect to the data, while the peak values are in fair agreement with those observed. For the [O i]6300/Hα ratio, although the extent and morphology are well matched, the peak values are overpredicted by a factor of 2. This overprediction occurs for most low-ionisation lines (e.g. [S ii]6716/Hα), as can be seen in Table 5, and for all models. This can be understood as due to a lack of strong soft optical spectra from the background stars, which will more readily excite the Balmer lines and increase them compared to e.g. [O i]6300, which is produced in the outer neutral parts of the nebula via highly-penetrating soft X-rays. The inclusion of the putative companion star does not change this basic conclusion and cannot account for these differences. This prompted us to take into account the contribution from the stellar background.
The inclusion of the stellar background not only significantly improved all χ² (Fig.
6; Table 5 for L = 2.75 × 10⁴⁰ erg/s and Tables C1 for the less luminous stellar-background cases), but now the peak line ratios in low-ionisation lines are much closer to the observed values, particularly for the less-UV-bright diskir and phenomenological models. Fig. 8 shows the effects of adding the stellar background more clearly. [Fig. 6 caption: Green and blue colours show the results for metallicities Z = 0.15 Z⊙ and Z = 0.30 Z⊙, respectively. For fixed density, the effects of varying r_in are negligible, and the inclusion of the stellar background significantly improves the results.] The stellar background also widens the area over which [O iii]5007 is excited. This effect is similar to that observed by Berghea et al. (2010) and Berghea & Dudik (2012) in Holmberg II X-1 and NGC 6946 X-1, where it was found that the low-energy photons from the companion star create low-ionisation states which are further ionised by the high-energy ULX photons (Figure 8 in Berghea et al. 2010). The effect on the low-ionisation lines is instead the opposite. Both these effects support the presence of an additional source of ionisation along with the ULX. Despite the widening of the enhanced [O iii]5007/Hβ region introduced by the stellar background, all models fail to match the width of the observed radial profile. In Fig.
9 we show a 1D radial profile along the peak of the excitation region comparing the models and the data to illustrate this. The width of the profile was set to 10 pixels to smooth out variations and avoid gaps due to low signal-to-noise-ratio pixels. Although strictly speaking we compared the 2D generated images with the data, we find these 1D visualizations provide an accurate summary of our results. We can see that [O iii]5007/Hβ is overpredicted in all models, while the extent over which [O iii]5007 is produced is underpredicted. We can confirm that part of the reason is projection effects. As alluded to earlier, here we have assumed that the nebula is thin enough in the line-of-sight direction that the line ratios along a given line of sight are approximately constant. Instead, if the nebula has considerable structure in the dimension along the line of sight, for instance in the case of a spherical sector, the line ratios along a given sightline will be averaged over regions with different degrees of ionization. This will smooth out the line ratios over a wider region compared to the planar geometry assumed here, hence lowering their peak values and broadening their profiles compared to the line-ratio profiles shown in Figure 9. We can preliminarily confirm these effects from ongoing work, but leave the treatment of more complicated cloud geometries for future work.
Another effect that could be affecting our results is the treatment of the stellar-background emission as a point-like source, instead of an extended component. In reality the optical stellar emission will be approximately uniformly distributed over the nebula, both diluting and extending the [O iii]5007/Hβ region compared to the point-like treatment afforded by Cloudy. This may also explain the wider region of high [N ii]6583/Hα in the data compared to the models.
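The smearing effect of the PSF on compact line-ratio enhancements, invoked above, can be demonstrated with a minimal sketch: blurring toy [O iii] and Hβ flux maps with a Gaussian of FWHM 1" (5 MUSE pixels of 0.2") before taking their ratio lowers the peak ratio well below its intrinsic value. All map values here are hypothetical:

```python
import numpy as np

def gaussian_blur(img, fwhm_pix):
    """Separable Gaussian blur; sigma derived from the PSF FWHM."""
    sigma = fwhm_pix / 2.355
    r = int(4 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)

# toy emission maps: a compact [OIII] patch on top of smooth Hbeta
hb = np.ones((41, 41))
oiii = 0.5 * np.ones((41, 41))
oiii[18:23, 18:23] = 6.0     # small region with an intrinsic ratio of 6

fwhm = 1.0 / 0.2             # 1" seeing over 0.2" MUSE pixels = 5 pixels
ratio = gaussian_blur(oiii, fwhm) / gaussian_blur(hb, fwhm)
print(oiii.max() / hb.max(), ratio.max())   # peak ratio drops after blurring
```

Note that the fluxes are blurred before taking the ratio, mirroring how the instrument smears the individual line images rather than the ratio itself.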
To inspect whether this is the case, we ran another Cloudy simulation with the stellar background alone, with a typical log(n_H [cm⁻³]) = 0.60, to inspect the line ratios it would produce. The data-model comparison may be partially affected by this, and this has to be borne in mind when interpreting the results.
While the fact that the stellar background can produce rather high [O iii]5007/Hβ ratios may call into question whether the nebula is actually produced by the ULX, we see that the stellar background by itself cannot account for the morphology of the nebula (Fig. 8). In particular, the [O iii]5007/Hβ emission is concentrated around the point-like source, while in the models including the ULX the gas close to the source is too ionized to produce [O iii]5007. Similarly, the stellar background alone does not produce the extended region of enhanced [O i]6300/Hα, which can only be produced by the large mean free path of the X-rays. Therefore, our modelling shows beyond doubt that a high-energy source is needed to explain the nebular morphology.
In terms of χ², the preferred model is the phenomenological, as it provides the best overall χ². From Fig.
9 we can see that both the diskir and the phenomenological offer comparable levels of agreement with the data, while the sirf more severely overpredicts the [O i]6300/Hα and [S ii]6716/Hα ratios. Despite the differences, and the clearly more complex profiles in the data due to the effects mentioned above, the overall structure in most lines is well reproduced. The exception to this is the [S iii]9069/Hα (grey line in the plots). While all models peak at around 120 pc, the observed peak is seen at just 50 pc from the ULX. We suspect this line might be more strongly affected by the sky subtraction and/or shocks, which may explain this discrepancy. As discussed above, the remaining differences may be attributed to projection effects, the treatment of the stellar background as a point-like source, and slight differences in the abundances and density profile of the gas. Nevertheless, the peak values in the first two models match those observed in the data (see also Table 5).
As stated earlier, we ran an additional set of simulations lowering the stellar contribution to study the sensitivity of our results to this component. The results are shown in Table C1 and show that the fits worsen in all instances. In particular, we can see that the models now again overpredict most low-ionisation lines by a factor of 1.5-2. We conclude that the bright stellar background (L = 2.75×10⁴⁰ erg/s) is a better match to the data, with the caveats outlined above. Nevertheless, the phenomenological and the diskir continue to be the preferred models regardless of the exact treatment of the background stars. On this basis, we consider the UV flux predicted by the sirf to be too high to match the nebular emission.
To test our final conclusion, based on our best-fit models we calculated the expected Heii 4686 flux by multiplying each model's Heii 4686/Hβ ratio by the observed Hβ flux extracted from the datacube (Gúrpide et al. 2022). We then extracted the fluxes per spaxel expected from the same blue region shown in Fig.
5. The lower panel of Fig. 9 shows the range of values expected for each model (averaged over the region), the average F(Heii 4686) value measured in the data (Figure 5) and the estimated 3σ detection limit. To estimate the latter, we inspected the individual spaxels from the blue region in Fig. 5 and found the lowest flux at which the 3σ error on the flux was consistent with 0. This value was ∼10⁻¹⁸ erg s⁻¹ cm⁻² and is shown in the Figure as a black dashed line. We further managed to detect the line in 13 individual spaxels, although the relative 3σ errors are quite high (70-95%). These detections are also shown in the lower panel of the same Fig. 9 (orange-coloured histogram).
From the histograms in Fig. 9, we can see that the sirf model produces F(Heii 4686) values that are too high with respect to the observed values. Instead, the best agreement is again provided by the phenomenological model, because most of the spaxels have values below the detection threshold, as expected based on the lack of strong detections, and the overall histogram shows the best agreement with the data. Therefore, the marginal detection of the Heii 4686 line again reinforces the idea that the UV flux predicted by both the sirf and the diskir is overestimated.

The Front View

In Gúrpide et al. (2022) we showed there is a lack of EUV/X-ray photoionization signatures around NGC 1313 X-1 in other directions, most likely due to a lower ISM density in those areas. However, as can be appreciated from the BPT diagrams presented in Gúrpide et al.
(2022) (their Figure 11), the [O iii]5007/Hβ ratio is slightly enhanced (∼3.2) at the position of NGC 1313 X-1. The enhanced [O iii]5007/Hβ ratio can also be observed in Figure 10 (left panel), where we show the extinction-corrected spectrum extracted in Section 3.1 in order to measure the extinction towards NGC 1313 X-1. More specifically, from this spectrum we measured [O iii]5007/Hβ = 2.93±1.2, ([S ii]6716 + [S ii]6731)/Hα = 0.65±0.01 and [O i]6300/Hα = 0.176±0.006, which again are rather unusual for Hii regions. We recall this is the average spectrum per pixel from a circular region of 1" in radius around the source, and therefore it can be considered to be from the spatial scale of a single MUSE pixel (0.2" or ∼4 pc at 4.25 Mpc).
While these ratios are not as extreme as in its vicinity (Section 4.1), this is to be expected if we are observing the integrated emission of the photo-ionized nebula along the line of sight. In particular, should the supercritical funnel in NGC 1313 X-1 be orientated towards us, then it may be reasonable to consider it as a potential source of ionisation. The lack of clear diagnostics, such as the extent of the [O iii]5007/Hβ and [O i]6300/Hα regions, makes attributing these enhanced ratios to photo-ionization by the ULX more uncertain (for instance, the high ([S ii]6716 + [S ii]6731)/Hα ratio could be due to shocks, although we have seen that these ratios are also produced by EUV/X-ray photo-ionization; Figure 9). Therefore, as a first step, we considered whether we could reproduce these ratios as photoionization by stellar continua.
To this end, we attempted to reproduce the fluxes of the lines in the spectrum above (indicated in the Figure with black ticks underneath) using photo-ionization models. As in Section 3.1, in order to extract line fluxes we fitted the lines using a Gaussian plus a constant for the local continuum. Extinction-corrected fluxes (averaged over the extraction region) for all lines of interest are reported in Table 6. The flux of Heii 4686 was derived using the same region but from cube 2. The 3σ negative error was consistent with zero, so we considered this measurement an upper limit. Its extinction-corrected flux and the 1σ uncertainty are also reported in Table 6. These results were consistent with those obtained using instead a smaller aperture of 0.5′′ in radius.
We then compared the observed line fluxes from this spectrum to the fluxes obtained by integrating the Cloudy nebulae along the line of sight, using again χ² statistics. The upper limit on Heii 4686 was taken into account in the modelling following van Hoof (1997):

χ² = [(F_model − F_obs)/σ_obs]² if F_model > F_obs, and χ² = 0 otherwise,

where F_obs and F_model refer to the observed and predicted F(Heii 4686) fluxes and σ_obs is the (1σ) uncertainty on F_obs.
Notes. To calculate the reduced χ², the number of degrees of freedom is defined as the number of pixels covering the photo-ionized region (11474) minus 2 variables (n_H and r_in). These χ² need to be understood as a heuristic to rank the models, rather than a goodness of fit.
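The one-sided χ² term used for the Heii 4686 upper limit, following the van Hoof (1997) prescription referenced above, can be sketched as follows (the flux values in the usage lines are purely illustrative):

```python
def chi2_with_upper_limit(f_model, f_obs, sigma_obs):
    """One-sided chi^2 term for an upper limit (van Hoof 1997):
    only models exceeding the limit are penalised."""
    if f_model <= f_obs:
        return 0.0
    return ((f_model - f_obs) / sigma_obs) ** 2

# hypothetical fluxes in erg/s/cm^2, for illustration only
limit, sigma = 1.0e-18, 0.3e-18
print(chi2_with_upper_limit(0.5e-18, limit, sigma))  # model below the limit: no penalty
print(chi2_with_upper_limit(1.9e-18, limit, sigma))  # penalty grows past the limit
```

This term is simply added to the ordinary χ² contributions of the detected lines when ranking the models.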
We assumed we only see nebular emission due to gas located between the ULX and ourselves (i.e. no contribution from an equally extended nebula behind the ULX). Assuming we only observe the gas in front of the ULX may be reasonable considering that much of the emitting nebular gas behind the ULX will be absorbed by the system itself. However, such a fraction may be small considering we are observing gas averaged (or integrated) over an angular (physical) diameter of 2′′ (∼40 pc). Assuming a symmetric case, where the gas is equally distributed in front of and behind the ULX, is not expected to affect the results significantly, as the effect would be to simply rescale the line fluxes for a given density/inner radius by a factor of 2, without altering the predicted line ratios. We have rerun the calculation under this assumption and verified that, while there are small numerical differences between the two approaches, our overall conclusions below do not change. We present the results for the former assumption (termed the asymmetric case) and discuss the calculations for the latter assumption (the symmetric case hereafter) when relevant.
We performed the data-model comparison following two approaches and found very good agreement between them (Δχ² < 6 for all models), so we only present the results from the latter approach. In the first method, we considered the averaged fluxes over the spatial region and assumed we are observing a column of d × d × R_out, where d is the physical size of a MUSE pixel at D = 4.25 Mpc and R_out is the (unknown) outer edge of the nebula along the line of sight. We found the best results when considering all models to be ionization-bounded. Therefore the nebula outer radius R_out was set to the maximum of the Cloudy calculation (R_out = 200 pc), or lower in cases of higher density where the hydrogen ionization front was reached before. In the second approach, instead of considering the average spectrum over the 1" aperture, we used the integrated one. We then compared the integrated line fluxes to those obtained in Cloudy by using the aperture command to integrate the simulated nebula over the same spatial region.
We noticed that some of the models' χ² minima were at the limit of our density calculation (log(n_H [cm⁻³]) = 1), so we extended the calculations to densities up to log(n_H [cm⁻³]) = 1.6 and r_in = 60 pc to ensure we found the absolute minimum for each model.
As stated above, we first verified whether we could explain the line fluxes as photo-ionization by a population of stars by running models with the stellar background alone. In particular, we ran models for stellar backgrounds with L = 2.75 × 10³⁸, 2.75 × 10³⁹ and 2.75 × 10⁴⁰ erg/s for the range of densities and radii quoted above. We found χ²/dof upwards of 470 in all instances (also for the symmetric case), with unusually compact (≲30 pc) nebulae, due to the hydrogen ionization front being reached very close to the source as a result of the gas being optically thick to the soft radiation. Moreover, the sulfur and [O i]6300 lines were predicted to be ≳5 and ≳15 times lower than observed, respectively. This agrees with the fact that these pixels were classified as 'AGN' in the BPT diagram presented in Gúrpide et al. (2022). Therefore, we considered whether we could put further constraints on the SED of the ULX along the line of sight by modelling the nebular emission in this direction as photo-ionization by the ULX instead.
Table 6 shows the obtained data/model line ratios and resulting χ² for all models, including those for which the stellar contribution discussed in Section 4.1 is added to the ULX SEDs.
Figure 10 (right panel) shows the χ² contours obtained for the three models. As opposed to the extended nebula (Section 4.1), where the inclusion of the stellar background clearly improves the results, here most models worsen or show little improvement when the background stars are added. It is also clear that the best results are obtained for the no-background or moderate (L = 2.75 × 10³⁹ erg/s) background cases (Table 6 and Figure 10, right panel). A similar trend was found for the symmetric case. This is reasonable considering that here we are modelling a single sight line, where the major (or only) contributor is likely to be the ULX. The best match is obtained for the diskir model alone (or with moderate background stars) and suggests the nebula is denser (n_H ∼ 9 cm⁻³) and ∼112 pc long along the line of sight. Expectedly, the additional gas implicitly present in the symmetric case instead lowers the required cloud density, to n_H ∼ 5.6 cm⁻³. Note that, although the reduced χ² is high, almost all lines are predicted within a factor of 2 or less with the diskir model. Thus the ULX clearly provides a much better match to the line fluxes than the stellar continua above. The worst agreement is found for the sulfur lines. As alluded to in the previous Section, we suspect [S iii]9069 is likely affected by the sky subtraction. The discrepancy between the model predictions and the other sulfur lines is less clear-cut, but may be attributed either to some contribution from shocks and/or to a higher abundance of sulfur.
Along with the predicted line fluxes and χ², in Table 6 we also report the integrated hydrogen absorption column N_H along the line of sight, together with that derived from spectral fitting (Section 3). Most N_H values derived from the photo-ionization modelling, although not unrealistic, are higher than those derived from X-ray spectral fitting after subtraction of the Galactic contribution to the total N_H. In principle, we would expect the N_H derived from spectral fitting to be higher than that of the nebula, due to the additional contribution from the system itself. One possibility is that there is indeed some contribution to the nebular lines from behind the ULX. For the symmetric cases, N_H is reduced by approximately ∼(0.3–0.4) × 10^21 cm^−2, which would bring the values below those derived from spectral fitting, arguably in more reasonable agreement with the X-ray data. It is also likely that there are additional sources of uncertainty due to the exact underlying model and the fact that we have not taken the exact (unknown) abundances into account when deriving N_H from spectral fitting. Therefore these values should be taken with caution. Nevertheless, our analysis suggests that most of the contribution to the absorption column may be due to the photo-ionized nebula itself.
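As a sanity check on the columns discussed above, the integrated hydrogen column of the modelled cloud follows directly from the best-fit density and depth. A sketch, under the simplifying assumption of a uniform-density slab (the fitted values, n_H ∼ 9 cm^−3 over ∼112 pc, are those quoted in the text; the example fitted column of 3.5 × 10^21 cm^−2 is hypothetical):

```python
# Hydrogen column of a uniform slab: N_H = n_H * depth.
# The uniform-density assumption is our idealisation of the Cloudy models.

PC_TO_CM = 3.086e18  # one parsec in cm

def column_density(n_H, depth_pc):
    """Integrated column N_H in cm^-2 for density n_H (cm^-3) and depth in pc."""
    return n_H * depth_pc * PC_TO_CM

N_H = column_density(9.0, 112.0)  # best-fit line-of-sight values from the text
print(f"N_H ~ {N_H:.2e} cm^-2")  # a few times 10^21 cm^-2

# Subtracting the Galactic contribution quoted in the Table 6 notes from a
# hypothetical fitted total column of 3.5e21 cm^-2:
N_H_gal = 7.07e20
print(f"extragalactic part: {3.5e21 - N_H_gal:.2e} cm^-2")
```

A column of a few times 10^21 cm^−2 from the nebula alone is indeed comparable to typical extragalactic columns fitted to ULX X-ray spectra, consistent with the statement above.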
While our photo-ionization modelling here is more uncertain due to the additional contribution from possible shocks and the unknown extent of the nebula, we can confidently rule out the sirf model as a good line-of-sight SED, as it cannot account for the low-ionisation lines (they are all underestimated by a factor 3 or more) while producing too strong a He ii λ4686 line. At the same time, this suggests our best-fit line-of-sight SED (the diskir; Section 3) is not unrealistic based on the nebular line fluxes observed along the line of sight. Therefore, while we cannot rule out some contribution from shocks, we can at least ascertain that a high-energy source (namely the ULX) is a good match to the line fluxes if these are to be explained by photo-ionization, and that these are best explained by the diskir model.

DISCUSSION

Through multi-band spectroscopy and modelling of the nebular emission we have been able to confirm our previous assertion (Gúrpide et al. 2022) that NGC 1313 X-1 powers an extended ∼200 pc EUV/X-ray photo-ionized region, with additional contribution from the stars in the field. Using state-resolved multi-band spectroscopic data, we have constrained the SED of the ULX along the line of sight. The best-fit model, capable of explaining both the high-energy tail (>10 keV) and the UV/optical data simultaneously, is the diskir model (regardless of whether we add any contribution from the putative stellar counterpart; Section 3). This result agrees with earlier works that have found that such a model, despite not describing the accretion flow geometry envisioned for a supercritically accreting compact object (e.g. Lipunova 1999; Poutanen et al. 2007; Abolmasov et al. 2009), fits ULX broadband data satisfactorily (e.g. Kaaret & Corbel 2009; Grisé et al. 2012; Berghea & Dudik 2012; Tao et al. 2012). We do not, however, take this as evidence for NGC 1313 X-1 being powered by a standard self-irradiated disc, an unlikely scenario owing to the unusual X-ray spectral shape (Bachetti et al.
2013), the presence of relativistic outflows (Pinto et al. 2016) and the ∼400 pc shock-ionized bubble surrounding the source (Gúrpide et al. 2022). Instead, we consider the diskir a physically reasonable extrapolation to the inaccessible EUV. Such extrapolation to the UV, although uncertain, is supported by our modelling of the nebula along the line of sight (Section 4.2). We have further shown that direct extrapolations of X-ray-only models (the phenomenological model; Section 3) fail to account for the full SED when the optical data are considered, an issue highlighted also in previous works (Abolmasov et al. 2008; Dudik et al. 2016).

Through photo-ionization modelling of the extended emission-line nebula (Section 4.1) we have instead attempted to constrain the SED seen by the nebula through a different sight line. The fact that the extended [O iii] λ5007 and [O i] λ6300 emission do not overlap (Figures 7 and 8) is a strong indication that the nebula sees the accretion flow in NGC 1313 X-1 sideways. Therefore NGC 1313 X-1 and its EUV/X-ray excited nebula offer us an opportunity to study super-Eddington accretion flows effectively from two different sightlines.

Notes to Table 6. Uncertainties at the 1σ level. 3σ upper limit along with the 1σ uncertainty. Degrees of freedom are defined as 10 (lines) − 2 variables (log(n_H) and r_in). Extra-galactic neutral absorption column derived from spectral fitting (quoted from the diskir model; Table 4), i.e. the Galactic contribution along the line of sight (N_H^Gal = 7.07 × 10^20 cm^−2; HI4PI Collaboration et al. 2016) has been subtracted.
Here we have shown that the nebular lines are best described with a model with a lower UV flux than that constrained along the line of sight. In particular, the best match to the nebular lines is provided by the phenomenological model, whose UV flux is about a factor 4 lower compared to the diskir (Table 4). This model not only provides the best match to the nebular lines (Table 5) but also accounts for the lack of a strong nebular He ii λ4686 detection (Fig. 9). One may argue that similar results were found by Berghea et al. (2010) and Berghea & Dudik (2012) in Holmberg II X-1 and NGC 6946 X-1, wherein the models with the lowest UV levels were found to be a better match to the nebular lines (their Figures 4 and 3, or their PLMCD and MBC models, respectively), although we caution that these works did not use spatially-resolved data, which is needed if the degree of beaming is to be determined. Below we elaborate on how these findings, that is, the discrepancy between the best-fit line-of-sight SED (with L_UV ∼ 2 × 10^39 erg/s) and the best-fit nebular model (L_UV ∼ 0.5 × 10^39 erg/s), allow us to put constraints on the degree of anisotropy of the UV emission in the ULX NGC 1313 X-1.

Quasi-isotropic UV emission

Our results may suggest the extended nebula is not seeing the same SED as that derived along the line of sight. That is, our observations may be interpreted as evidence for a mild degree of anisotropy in the emission of NGC 1313 X-1. Because of the nature of the emission lines used in this work, with the highest ionisation potentials falling in the UV band (Figures 2 and 3), our observations constrain the degree of anisotropy mostly in the EUV, and any extrapolation to the soft/hard X-rays remains more speculative. This can be observed, for instance, in the fact that the He ii λ4686 line is insensitive to the X-rays (≳0.1 keV), as the three models predict vastly different He ii λ4686 fluxes (Fig.
9) despite all having similar X-ray luminosities constrained by the data (Table 4). Observations probing the higher-excitation lines found in the IR (Berghea et al. 2010; Berghea & Dudik 2012) will allow us to put tighter constraints and extend our measurements to higher energies.

Although below we provide quantitative calculations for the degree of beaming based on our results, we would like to highlight that there are inevitably additional sources of uncertainty we cannot account for: the potential contribution of shocks, other SED extrapolations we have not tested, and the treatment of the stellar background (we discuss these in more detail in Section 5.5). Nevertheless, while the quantitative details may be uncertain, we believe our results can be confidently interpreted as a lack of strong anisotropy in the UV emission.

The differences in UV luminosity between the best-fit line-of-sight SED (diskir) and the nebular one (phenomenological) may be used to constrain the beaming factor b proposed by King et al. (2001). King et al. (2001) define the beaming factor as:

b = L / L_sph,

where L is the true emitted radiative luminosity and L_sph is the observed luminosity under the assumption of isotropic emission.
Crucially, this factor must also depend on the inclination of the system and on energy. The dependence of b on the inclination complicates estimating this value quantitatively from an observational point of view, particularly due to the uncertain inclination of the ULX with respect to our line of sight and to the nebula. Nevertheless, we may approximate its value as:

b ≈ L_nebula / L_sph,

where for L_sph we assume that NGC 1313 X-1 is observed close to face-on (i = 0°) and where L_nebula is the luminosity observed by the nebula at an unknown inclination angle i. We may get a crude estimate of i from the diskir normalization, which is ∝ cos(i). We have rerun our analysis in Section 4.1 by 'inclining' the diskir_star to inclination angles i = 45°, 60°, 80°, finding the best-fit n_H and r_in in each case. All calculations were carried out with the stellar background stars and assuming the flux of the putative companion is isotropic. We have found the best match to the nebular lines is given for i = 45° with log(n_H [cm^−3]) = 0.45, r_in = 1 pc and χ²/dof = 89 (cf. χ²/dof = 92 for the diskir_star). However, this inclined version not only provides a worse fit than the phenomenological model, but also still produces He ii λ4686 in comparable numbers to the nominal diskir_star. This suggests the differences between the line-of-sight and the ionising SED are more complex than a simple scaling factor. Alternatively, we may find i by considering what value of i is needed to reduce the UV luminosity of the diskir to a level comparable to that of the phenomenological (i.e.
a factor 4 dimmer). This would imply i = 80°, although this model gives worse fits to the nebular emission than the nominal diskir. These values are obviously uncertain owing to the uncertainties related to using the diskir to describe a super-Eddington accretion flow, but may suggest the nebula sees an inclined (i = 45°–80°) version of the line-of-sight SED. Nevertheless, we have already argued that the non-overlapping [O iii] λ5007/Hβ and [O i] λ6300/Hα regions strongly suggest the nebula sees the emission sideways. We additionally note that the sirf instead predicts a nearly isotropic UV flux, and no amount of inclination can yield the necessary reduction in UV flux.

Therefore, considering the differences in UV luminosity between the line-of-sight SED (diskir) and that inferred from the extended nebula (phenomenological), we constrain the beaming factor in the EUV of NGC 1313 X-1 to about b ≈ 0.3–0.15, and suggest the nebula sees an inclined (i = 45°–80°) version of the line-of-sight SED. Such estimates are in agreement with measurements of the photo-ionized nebula around Holmberg II X-1 (Kaaret et al. 2004), who found beaming factors b ≫ 0.1 (their Table 1), with the caveat of the extrapolation of the X-ray spectrum (see the Introduction). Such estimates are also consistent with detailed analytical calculations by Abolmasov et al. (2009). If such results can be extrapolated to the X-ray band, then we may be able to rule out the strong beaming invoked to explain the emission from some PULXs (King & Lasota 2020), and this would support the constraints on b derived from modelling of the observed PULX pulse-fractions (Mushtukov et al. 2021; Mushtukov & Portegies Zwart 2023).
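The arithmetic behind the quoted beaming range can be sketched as follows. This is a simplification of the estimate above: it equates b to the ratio of the two quoted UV luminosities, and treats the factor-4 difference as a pure cos(i) projection of the diskir normalization, which the text shows is not the whole story:

```python
import math

# Crude beaming estimate b ~ L_UV(nebula) / L_UV(line of sight), using the
# luminosities quoted in the text; treating this ratio as b is our assumption.
L_uv_los = 2.0e39     # erg/s, best-fit line-of-sight SED (diskir)
L_uv_nebula = 0.5e39  # erg/s, best-fit nebular SED (phenomenological)

b = L_uv_nebula / L_uv_los
print(f"b ~ {b:.2f}")  # -> 0.25, within the quoted 0.15-0.3 range

# If the difference were purely a cos(i) projection of the diskir
# normalization, the implied inclination of the nebula would be:
i = math.degrees(math.acos(b))
print(f"i ~ {i:.0f} deg")  # -> 76 deg, within the 45-80 deg range above
```

Note that the naive cos(i) inversion lands near the upper end of the quoted 45°–80° range, consistent with the i = 80° estimate derived above from the factor-4 UV reduction.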
Alternatively, the beaming factor may indeed be more pronounced in the X-ray band, as alluded to in the Introduction, since these photons emanate from within the wind funnel of the supercritical disc. Instead, the wind photosphere, which is expected to dominate the UV emission, is expected to emit quasi-isotropically (see e.g. Shakura & Sunyaev 1973; Weng & Feng 2018). Hence our results would be consistent with this picture. In this regard, our results also seem broadly consistent with the general-relativistic radiation-magnetohydrodynamic simulations presented by Narayan et al. (2017), although it is hard to make a quantitative comparison. Broadly speaking, their post-processed spectra (e.g. their Figure 13) show the UV is reduced by about a factor ∼6 when going from i = 10° to 60°. This would be broadly consistent with our reasoning based on the diskir model and the increase in inclination needed to match a reduction in flux of about a factor 4.

ULXs in the UV

Regardless of whether we consider the intrinsic UV flux in NGC 1313 X-1 to be the line-of-sight value or that inferred from the extended nebula, both estimates are significantly lower than those measured in NGC 6946 X-1 (Abolmasov et al. 2008; Kaaret et al. 2010). The measurement in the F140LP filter reported by Kaaret et al. (2010) is shown in Fig. 4 for comparison and is about a factor 5 higher than predicted by any of our models. Similarly, we have integrated the He ii λ4686 line predicted by the best nebular model (phenomenological with the background stars) over the whole nebula (assuming isotropy), and found L(He ii λ4686) ∼ 9 × 10^35 erg/s, slightly above our observed value of (3.4 ± 0.6) × 10^35 erg/s (Section 4). The value derived from the modelling may be considered an upper limit, as we have assumed the same He ii λ4686/Hβ ratio everywhere around the source. Still, such a value is significantly lower than the measured L(He ii λ4686) = (2 ± 0.2) × 10^37 erg/s in the MF16 nebula surrounding NGC 6946 X-1 reported by Abolmasov et al.
(2008). If these UV differences were due to an inclination effect, then we should have seen evidence for strong UV emission (comparable to that of NGC 6946 X-1) either along the line of sight or in the nebular lines. Such differences suggest the UV emission in NGC 6946 X-1 and NGC 1313 X-1 is intrinsically different, suggesting we are probing differences in the mass-transfer rate.

Because the UV emission is thought to be linked to the wind photosphere (whose temperature scales as ∝ ṁ0^−3/4; Poutanen et al. 2007, where ṁ0 is the mass-transfer rate at the companion in Eddington units), our analysis suggests a lower mass-transfer rate in NGC 1313 X-1 compared to NGC 6946 X-1, despite the stronger X-ray luminosity of the former (Gúrpide et al. 2021a). We suggest the inclinations of these two systems might be comparable, but that NGC 6946 X-1 might possess a narrower funnel due to its higher accretion rate, creating a strong soft X-ray/EUV source and mimicking a highly inclined source. In NGC 1313 X-1 instead we peer down the funnel most of the time owing to its wider opening angle, except in the extremely soft and unusual 'obscured state' (Gúrpide et al. 2021a), where we showed the source becomes even softer than NGC 6946 X-1. A brighter UV and softer X-ray spectrum in NGC 6946 X-1 due to a higher mass-transfer rate would be fully consistent with predictions from R(M)HD simulations (Kawashima et al. 2012; Narayan et al. 2017) and with arguments made by Abolmasov et al. (2007) based on the observed nebular lines in a sample of ULXs.

In Table 7 we have collated literature results regarding the properties of high-excitation nebulae surrounding ULXs for which the X-ray spectral regime is known, taken from Sutton et al. (2013); Urquhart & Soria (2016); Gúrpide et al. (2021a). Notice the apparent differences in nebular emission between hard ULXs such as NGC 1313 X-1 and M81 X-6 and soft ULXs such as NGC 5408 X-1 (Kaaret & Corbel 2009), Holmberg II X-1 (Kaaret et al. 2004) or NGC 6946 X-1 (Abolmasov et al.
2008). These differences are not trivial: we have already stressed that He ii λ4686 is insensitive to the X-rays (Fig. 9, lower panel), as its ionization cross-section falls roughly as ν^−3 (Osterbrock & Ferland 2006). The He ii λ4686 flux is obviously sensitive to the density and distribution of material around the ULX. For this reason, in the Table we also report the He ii λ4686/Hβ ratio, which should be less sensitive to density differences. As can be seen, the ratios are generally lower in hard ULXs, indicating that the differences in He ii λ4686 luminosity are not due to a density difference. Thus the differences in nebular He ii λ4686 around hard and soft ULXs are a telltale sign that hard and soft ULXs are not only distinct in their X-ray spectral properties, but also in their UV/EUV emission. Because strong EUV emission is linked to the mass-transfer rate according to RMHD simulations (Narayan et al. 2017) and analytical estimates (Poutanen et al. 2007), bright He ii λ4686 around soft ULXs signals that these systems must possess higher mass-transfer rates compared to their hard counterparts. Hard ULXs, due to their faint EUV, can instead produce strong [O iii] λ5007/Hβ ratios but dim or no detectable He ii λ4686 (as is the case in NGC 1313 X-1, Holmberg IX X-1 and NGC 1313 X-2). Our findings are consistent with earlier assertions made by Abolmasov et al. (2007) and suggest the mass-transfer rate may be the more relevant parameter distinguishing hard and soft ULXs, as opposed to their inclination (Sutton et al. 2013).

In Table 7 we have also collated the mechanical powers Ė_mec inferred from observations of optical/radio bubbles surrounding ULXs. Because we expect the outflow power to increase with ṁ0 (e.g. Kitaki et al.
2021), we should expect soft ULXs to possess higher Ė_mec. From the current limited sample, it does not appear that soft ULXs show higher Ė_mec, but a systematic study is certainly needed here, which is beyond the scope of this paper. It is also unclear whether some of these nebulae are comparable: for instance, the bubble around Holmberg IX X-1 shows a nearly spherical morphology (Abolmasov & Moiseev 2008), while Holmberg II X-1 and M51 ULX-1 instead show bipolar bubbles associated with collimated jets (Cseh et al. 2014; Urquhart et al. 2018). Soft ULXs also seem less likely to be associated with shock signatures (e.g. the cases of Holmberg II X-1 and NGC 5408 X-1; Kaaret et al. 2004; Cseh et al. 2012), which may suggest hard ULXs are more likely to clear out the material around them, hindering the detectability of the He ii λ4686 line. Other factors such as the mass, or even the nature, of the accretor are also likely to play a role in explaining such differences. In particular, hard ULXs have been systematically shown to be more likely to host NSs (Pintore et al. 2017; Walton et al. 2018; Gúrpide et al. 2021a; Amato et al. 2023). Whether the nature of the accretor could also explain such differences remains to be seen, but from the limited sample it would seem that differences in the X-ray spectra are translating into differences in the interaction with the environment.

The nebular He ii 𝜆4686 problem

Whether ULXs can produce He ii-ionising photons in high enough numbers to account for the He ii λ4686 line observed in the integrated spectra of metal-poor galaxies (the He ii λ4686 problem) has recently been examined in Simmonds et al. (2021) and Kovlakas et al. (2022), reaching contradictory results. Simmonds et al. (2021) found that the multi-band diskir model presented by Berghea & Dudik (2012) could produce He ii λ4686 in high enough numbers to explain it. Kovlakas et al.
(2022), on the other hand, built empirical models based on analytical descriptions of super-Eddington accretion discs (Shakura & Sunyaev 1973; Lipunova 1999; Poutanen et al. 2007) and found that ULXs do not produce enough ionising UV photons to explain the He ii λ4686 line. The main uncertainty in these works was the lack of knowledge about the UV emission (see Figure 6 in Kovlakas et al. 2022), to which the He ii λ4686 line is most sensitive. Based on our discussion above and our analysis, we suggest that there should be a dichotomy between hard and soft ULXs in terms of their capacity to excite He ii λ4686, with hard ULXs ruled out as potential candidates to explain the nebular He ii λ4686 in metal-poor galaxies. Finally, we also note that the relatively isotropic EUV emission found here would render uncertainties related to beaming and its dependence on ṁ0 nearly unimportant in the study of the nebular He ii λ4686 problem (e.g. Kovlakas et al. 2022). We aim to pursue similar studies in other ULXs with different X-ray spectral hardness to reliably confirm these results.

The origin of the optical/near-UV light

In Section 3 we included a blackbody component to model the optical/near-UV fluxes from the HST. While this component is not required by the diskir, it is strongly required by the sirf and phenomenological models to fit the HST data. We initially presented this component as a proxy for the contribution from the companion star. However, based on its luminosity and temperature, we now investigate whether a stellar origin is physically plausible and consider alternative explanations.

We have seen that both the phenomenological and the diskir models give similar parameters for this blackbody. The temperature and luminosity of this component imply an O-type star, requiring a star of >20 M⊙. This would be consistent with the donor OB types inferred from direct optical modelling of ULX spectra (Tao et al.
2011). However, here we can rule out this type of star, as it would be at odds with constraints from the population and ages of the nearby stars presented in Yang et al. (2011), which suggest masses under 10 M⊙ instead. Therefore, for NGC 1313 X-1, we can rule out the optical/near-UV fluxes being dominated by the donor star, which casts doubt on the OB-type stars inferred in other ULXs (e.g. Grisé et al. 2012). This is consistent with the optical short-term variability observed by Yang et al. (2011) (and confirmed here; Section 2) and with the strong optical variability observed during the nascent ULX phase in M83 (Soria et al. 2012). Yang et al. (2011) and Tao et al. (2011) presented additional diagnostics that can help shed light on the nature of the emission. Based on our reanalysis of the HST data and the state-resolved optical and X-ray data, we updated the Johnson-Cousins Vega extinction-corrected magnitude V0 = 23.28 ± 0.07 and the optical-to-X-ray ratio diagnostic from van Paradijs (1981), V0 + 2.5 log(F_X) = 21.8 ± 0.7, reported in Yang et al. (2011). As stated in Yang et al. (2011), such values are typical of low-mass X-ray binaries and suggest the emission is dominated by the disc itself (or reprocessed emission from it) rather than the companion star (Kaaret et al. 2017). Indeed, Tao et al. (2011) found, through careful analysis of the HST data of a handful of ULX counterparts, that the optical emission is not consistent with any stellar type.

An alternative explanation is that this component represents the emission from the wind photosphere. The temperature of this component (T ∼ 3 × 10^4 K) is comparable to that measured in SS433 (T = (7 ± 2) × 10^4 K; Dolan et al. 1997) and in NGC 6946 X-1 (T = 3.1 × 10^4 K; Kaaret et al. 2010), both using similar photometric filters and a single-blackbody model. We note, however, that the value for SS433 is highly uncertain, as it relies on a likely overestimated level of extinction, as noted by the detailed X-SHOOTER analysis of SS433 (Waisberg et al.
2019). Nevertheless, the radii inferred for both SS433 and NGC 6946 X-1 are of the order of 10^12 cm, while for NGC 1313 X-1 we measured about half that value. Whether we can attach any predictive power to this component is questionable owing to the uncertainty in the exact modelling (and in fact additional components are most likely present/needed; Kaaret et al. 2010; Soria et al. 2012). However, given that the photosphere radius is expected to scale as ṁ0^(3/2), this is indeed in line with our previous suggestion of a lower mass-transfer rate in NGC 1313 X-1. Such an interpretation is consistent with the lower overall UV luminosity of NGC 1313 X-1 (≲10^39 erg/s) compared to NGC 6946 X-1 (Abolmasov et al. 2008) or SS433 (Dolan et al. 1997; Waisberg et al. 2019), which both have UV luminosities in excess of 10^40 erg/s.

In this regard, while we have shown that the level of UV predicted by the sirf is too high to explain the nebular emission (Section 4), it may offer a more accurate description of the optical/near-UV data. This is because when employing the sirf model, it is the (supercritical) disc that dominates the optical/near-UV fluxes, with the putative companion adding a negligible contribution (Fig. 3). Such a description of the SED would fit more accurately our expectation for the optical/near-UV fluxes based on the reasoning above. Given the putative star parameters, we infer an F0/A-type star with a mass of ≲9 M⊙ (Ekström et al. 2012), which would be consistent with the population of stars around NGC 1313 X-1 (Yang et al. 2011). We further note that this is the stellar type inferred for the companion star of SS433 (see Goranskij 2011, and references therein), which may add supporting evidence for the link between SS433 and ULXs. However, we must stress that this ignores binary evolutionary effects and irradiation by the X-rays from the disc/wind, which will distort the spectrum of the companion star (see discussion in Ambrosi et al. 2022; Sathyaprakash et al.
2022, and references therein). In any case, our analysis suggests that the optical/UV emission in NGC 1313 X-1 is not dominated by the companion star. A similar situation was found in Holmberg II X-1 (Tao et al. 2012) and may suggest the same is true in NGC 5408 X-1 (Grisé et al. 2012).

Finally, we consider the results presented by Vinokurov et al. (2018), who pointed out that the ULXs brightest in the optical (approximately M_V < −5.5, where M_V is the absolute magnitude in the Johnson V band) show powerlaw-like spectra, whereas dimmer ULXs instead have blackbody-like optical spectra. Vinokurov et al. (2018) and Fabrika et al. (2021) argue that the dimmer appearance of some ULXs is due to a lower contribution of the wind photosphere, linked to the mass-accretion rate. Such an interpretation would reinforce our conclusion that ṁ0 in NGC 1313 X-1 is indeed lower compared to softer ULXs such as NGC 6946 X-1 or NGC 5408 X-1, which instead have M_V < −6 (Tao et al. 2011). Vinokurov et al. (2018) argue that the blackbody-like spectra of the dimmer systems may represent emission from the donor. While the low optical luminosity of NGC 1313 X-1 (M_V ∼ −4.9) may suggest the latter scenario applies here as well, we have already seen that the companion star inferred from blackbody fits (O-type) is at odds with the stellar population around NGC 1313 X-1. Ignoring geometrical and irradiation effects (Abolmasov et al. 2009), assuming the wind has the virial velocity at the spherization radius (Poutanen et al. 2007), and that the wind photosphere radiates as a spherical blackbody, one can show that its luminosity is roughly a third of the Eddington luminosity and independent of ṁ0 (Poutanen et al. 2007):

L_ph ≈ L_Edd / 3.

Thus the luminosity of the optical blackbody (∼2 × 10^38 erg/s) can easily be explained by a ∼5 M⊙ BH. However, we note that we could not match its temperature (kT ∼ 3 × 10^−3 keV) for a reasonable value of ṁ0.
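The ∼5 M⊙ figure follows directly from setting L_ph = L_Edd/3 equal to the observed blackbody luminosity; a quick check, assuming the standard Eddington luminosity for hydrogen-dominated gas, L_Edd ≈ 1.26 × 10^38 (M/M⊙) erg/s:

```python
# Back-of-the-envelope accretor mass from L_ph ~ L_Edd/3 (the Poutanen et al.
# 2007 scaling quoted above); the Eddington constant assumes H-dominated gas.
L_EDD_PER_MSUN = 1.26e38  # erg/s per solar mass

def mass_from_photosphere_luminosity(L_bb):
    """Accretor mass (in M_sun) implied by L_bb = L_Edd(M)/3."""
    return 3.0 * L_bb / L_EDD_PER_MSUN

M = mass_from_photosphere_luminosity(2.0e38)  # optical blackbody luminosity
print(f"M ~ {M:.1f} Msun")  # -> ~4.8 Msun, i.e. the ~5 Msun BH quoted above
```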
To illustrate this, consider the expression for the temperature of the wind photosphere given by Poutanen et al. (2007), which depends on ṁ0 and on ξ = L_wind/(L_wind + L_rad), where L_wind is the kinetic luminosity of the wind and L_rad the observed radiative luminosity (Poutanen et al. 2007). Assuming ξ = 0.5 (Pinto et al. 2016; Gúrpide et al. 2022), in order to match the observed blackbody temperature we would need an abnormally high ṁ0 ∼ 1000 for the aforementioned BH mass. Thus, as alluded to above, it is unlikely that we can attribute this component solely to the wind photosphere, and the picture may be more complex due to geometrical and irradiation effects (Abolmasov et al. 2008) and deviations from a blackbody due to scattering (Lipunova 1999).

Caveats

There are several limitations of our study that are worth highlighting and need to be borne in mind when interpreting the results. The first is the timescales involved in the nebular emission with respect to the variability of the source, which will unavoidably affect any study of this kind. As alluded to in Section 4, the recombination timescales of the nebula are of the order of thousands of years, while the observational baseline of NGC 1313 X-1 is only of dozens of years, if we consider XMM-Newton observations prior to the Swift-XRT. Hence the time-averaged spectrum could be higher or lower depending on the (inaccessible) activity history of NGC 1313 X-1. However, the fact that the diskir provides a reasonable description of the line-of-sight nebular lines (all lines accurately predicted within a factor 2), which are obviously subject to similar recombination timescales, suggests that the time-averaged SED cannot deviate substantially from the present-day estimate. In fact, we can rule out a brighter time-averaged spectrum, as the sirf substantially overpredicts the line fluxes both along the line of sight and in the extended nebula (Tables 5 and 6). We have explicitly confirmed that the high-state SEDs did not offer a better match to the nebular lines than the low-state
SEDs. There remains the possibility that the time-averaged spectrum is dimmer than the diskir but still brighter than the phenomenological, because the level of UV provided by the phenomenological already gives a worse match to the line-of-sight line fluxes than the diskir (Table 6). It may thus be possible to explain both the line-of-sight and extended nebular line fluxes better with a time-averaged spectrum whose UV brightness sits between these two spectra, which may suggest the UV is closer to being isotropic than we have estimated.

Another possibility is that some of the line ratios (or fluxes) are slightly affected by shocks. In our modelling of the extended emission (Section 4.1) we do not expect this to be an issue significantly affecting our results. The first reason is that we already showed in Gúrpide et al. (2022) that shocks are mainly concentrated at the edges or rim of the bubble, whereas the inner parts were instead dominated by EUV/X-ray excitation. Secondly, the morphology of the nebula, that is, the observed ionization gradient, most remarkable in the [O iii] λ5007 and [O i] λ6300 lines, can only be produced by EUV/X-ray photo-ionization. Shocks would instead produce rather co-spatial [O i] λ6300 and [O iii] λ5007 regions. Moreover, we already showed in Gúrpide et al. (2022) that producing the observed levels of [O iii] λ5007 would require shock velocities >300 km/s (see also Berghea et al. 2010), clearly ruled out by the kinematic data. Therefore, if anything, we may expect a slight contribution from shocks to the low-ionization lines in the outermost parts of the photo-ionized region. However, winds can still alter the distribution of the gas, by depleting it and compressing it onto a thin shell (e.g. Siwek et al. 2017; Garofali et al. 2023; Gúrpide et al.
2024). To some extent this has been taken into account, since we have marginalized our results over a range of radii and densities. In practice, however, it is often hard to derive an accurate description of the geometry of the nebulae due to projection effects. We are also working on extending our modelling to 3D structures, to potentially provide a more accurate description of the cloud geometry. However, preliminary results suggest that extending to 3D, at least for the present work, is not likely to have a strong impact on the results.

Shocks may be important, however, in our photo-ionization modelling of the nebula along the line of sight. As stressed in Section 4.2, the lack of clear diagnostics, such as the extent of the regions with enhanced oxygen-to-Balmer line ratios, makes the attribution of the observed line ratios to photo-ionization by the ULX less certain. A more refined modelling, self-consistently accounting for shocks and photo-ionization, may provide a more accurate picture. Nevertheless, once again we do not expect this to strongly affect lines such as [O iii] λ5007 and He ii λ4686, and therefore we can confidently ascertain that the line-of-sight SED is best described by the diskir or the phenomenological models. Thus we consider our assertion that beaming in the UV must be small to be robust against these caveats. We leave for future work a more detailed treatment self-consistently accounting for photo-ionization and shocks.

Lastly, moving forward it may be advantageous to include the stars in the field in a spatially resolved manner, here and for other ULXs in crowded fields such as Holmberg II X-1 (Pakull & Mirioni 2002). At present, this cannot be done in Cloudy, but may be explored in the future with three-dimensional photo-ionization codes such as MOCASSIN (Ercolano et al. 2003).
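The recombination-timescale argument in the first caveat above can be checked with a one-line estimate, t_rec ≈ 1/(n_e α_B). A sketch, assuming the case-B hydrogen recombination coefficient at ∼10^4 K and taking n_e ≈ n_H (both simplifications of ours):

```python
# Recombination timescale t_rec ~ 1/(n_e * alpha_B). The value of alpha_B at
# ~10^4 K and n_e ~ n_H are assumptions; densities are the best-fit values
# quoted in Section 4.2.
ALPHA_B = 2.6e-13    # cm^3/s, case-B hydrogen recombination at ~10^4 K
SEC_PER_YR = 3.156e7

def t_rec_yr(n_e):
    """Recombination timescale in years for electron density n_e (cm^-3)."""
    return 1.0 / (n_e * ALPHA_B) / SEC_PER_YR

for n in (5.6, 9.0):  # cm^-3, densities inferred along the line of sight
    print(f"n_e = {n} cm^-3 -> t_rec ~ {t_rec_yr(n):.0f} yr")
```

For these densities t_rec comes out at order 10^4 yr, vastly longer than the few-decade observational baseline, which is the mismatch the caveat describes.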
CONCLUSIONS
Coupling multi-band spectroscopy with detailed modelling of the EUV/X-ray excited emission-line nebula surrounding NGC 1313 X-1, we have attempted to constrain the degree of anisotropy of the UV emission in this archetypal ULX. Our results suggest that the UV emission is mildly beamed, by about a factor of ∼4 at most. We have also shown that the optical emission in NGC 1313 X-1 is unlikely to be dominated by the companion star.

We have also discussed the weak detection of Heii 4686 in the nebula surrounding NGC 1313 X-1, which seems to be a common finding around other hard ULXs. In contrast, bright nebular Heii 4686 seems to be a common finding around soft ULXs. We suggest that differences in mass-transfer rate may explain this dichotomy, since according to analytical calculations and numerical simulations only at high mass-transfer rates will a ULX become an extreme EUV source. This implies that only a subset of the whole ULX population may excite Heii 4686 in high enough numbers to account for the observed Heii 4686 line in metal-poor galaxies.

Moving forward, a better understanding of the ULX SED, or observations targeted at reducing the uncertainty on the line-of-sight SED by probing the FUV emission, would be of great interest to reduce the uncertainties in our work. While the lines probed here do not allow us to constrain the degree of anisotropy of the soft X-ray emission, probing the high-excitation lines found in the IR with the James Webb Space Telescope would enable us to constrain the degree of anisotropy of the ULX emission at higher energies, allowing us to test existing super-Eddington accretion theories and improving our understanding of ULXs and the feedback on their environments.
Figure 1. Swift-XRT observations showing the variability and spectral states of NGC 1313 X-1. (Left) Swift-XRT lightcurve. The dashed blue line shows the time when the HST/WFC3/UVIS observations were performed (too close in time to be distinguished here). The red and green shaded areas show the equivalent Swift-XRT count rates of the 2003 and 2004 Chandra observations, respectively. The uncertainties on the former include any uncertainties associated with the pile-up modelling (see text for details). The black solid and dashed lines show the mean Swift-XRT count rate and its standard error. (Right) Hardness ratio given as the count rate in the 1.5-10 keV band over the 0.3-1.5 keV band. The red and green stars mark the high and low states of NGC 1313 X-1, respectively, which were derived taking the mean and standard deviation of the snapshots above and below 0.225 ct/s, respectively.

Figure 4. BPASS stellar template alongside the ULX models used for the low state of NGC 1313 X-1. Both stellar templates have an age of 10^7.5 yr and were approximately rescaled to the local SFR around NGC 1313 X-1. The shaded bands are as per Fig. 2 and correspond to the bands used to calculate the fluxes in Table 4.

Figure) along with a spectrum from the nearby stellar cluster (shown in orange) to compare the presence of the Heii 4686 line. The resulting spectra are shown in Fig. 5 (right panel). The line is clearly detected in the patch next to the ULX at the expected position based on the redshift of NGC 1313 (shown in the Figure by a black dashed vertical tick). Fitting a Gaussian and a constant for the local continuum (shown as a magenta dashed line) we measured an average flux (Heii 4686) = 3.6 ± 0.7 × 10^-19 erg/s/cm². From the ( − ) map derived in Gúrpide et al. (

Figure 5.
(Left) Heii 4686 flux map, constructed by integrating the spaxels around the Heii 4686 line (4688.5-4698.6 Å), accounting for the systemic redshift of NGC 1313, and subtracting the nearby local continuum (using Cube 2 from Gúrpide et al. 2022). The map has been resampled by a factor of 3 compared to the original resolution to enhance a feature (blue rectangle) close to the ULX position (white circle). (Right) Average spectra from the blue and orange regions. The orange spectrum has been divided by 1.4 for visual clarity. The spectrum extracted from the patch close to the ULX shows a clear emission line at the expected position of Heii 4686 based on the redshift of NGC 1313 (indicated by a vertical black dashed line). The magenta dashed line and shaded area show the best fit with a Gaussian for the Heii 4686 line and a constant for the local continuum, and its 1σ uncertainties.

Figure 7. Cloudy-generated 2D best-fit models (contours) and data (background image) comparison for the diskir model with the putative companion star (top panels) and the same model but now including the stellar background (bottom panels), for Z = 0.15 Z⊙. [O i] 6300/Hα and [O iii] 5007/Hβ maps are shown in the left and right panels, respectively. The contours show the Cloudy model predictions, with the numbers for each colored annulus given in the legend. The contours also show the region used in the comparison, where the EUV/X-ray nebula is located. All images show a 22"×22" region centered around the ULX.

Figure 8. Effects of including the stellar background on the [O iii] 5007/Hβ (left axis) and [O i] 6300/Hα (right axis) ratios (shown for the phenomenological_star model). The stellar background increases the peak [O iii] 5007/Hβ ratio and 'widens' the area over which [O iii] 5007 is produced, whereas it has the opposite effect on [O i] 6300/Hα. These 1D plots are for illustration purposes only, as they are not resampled and smoothed to match the MUSE spatial resolution.
Figure 9. (Top) Comparison of 1D radial profiles extracted from the data (solid lines and shaded areas) and the best-fit 2D Cloudy models (dashed lines), after resampling and smoothing to match the MUSE spatial resolution. The profiles were extracted with a width of 10 pixels. For the data, the weighted average and standard deviation are shown as solid lines and shaded regions, respectively. Gaps in the data are due to low signal-to-noise ratio or bad pixels in those areas. The left axes show the line ratios in low-ionisation lines, whereas the right axes (black lines) show the [O iii] 5007/Hβ ratio. The sirf model overpredicts the low-ionization lines in the outer part of the nebula. Note that the increase in [O iii] 5007/Hβ towards the end in the data is due to another nearby group of stars. (Bottom) Range of Heii 4686 flux predicted for each best-fit model (green histograms) averaged over the blue region shown in Figure 5, compared to the average value observed in the data in the same region (blue dashed line showing the ±1σ uncertainty) and the 13 individual detections (orange histogram). Both diskir and sirf predict too strong Heii 4686 compared to the data, particularly considering the 3σ upper limit on Heii 4686.

Figure 10. Cloudy analysis of the nebular spectrum along the line of sight. (Left) Spectrum extracted around NGC 1313 X-1, averaged over a circular region of 1" in radius, roughly matching the cube PSF FWHM. Lines used in the analysis are labelled and their expected positions based on the redshift of NGC 1313 are indicated by a black vertical tick. The spectrum has been resampled by a factor of 2 compared to the MUSE 1.25 Å spectral sampling. (Right) Reduced χ² obtained for the three photo-ionization models with and without the stellar background (L = 2.75 × 10^39 erg/s; Z = 0.15 Z⊙). Lines and symbols as per Figure 6, now also including the results for r_in = 60 pc with loosely dotted lines.
Figure B1. Maps for [Ar iii] 7135 (left) and [S iii] 9069 (right) derived as per Gúrpide et al. (2022) (see their Figure 5). Pixels below a S/N of 5 have been masked. The brighter blob to the far south of the ULX is due to a nearby stellar cluster.

Table 1. Derived HST fluxes for the counterpart of NGC 1313 X-1.

Table 2. Fits to the Chandra data used to determine the spectral state of NGC 1313 X-1 during the simultaneous HST observations.

Table 3. Datasets used to characterise the state-resolved broadband SED of NGC 1313 X-1.

Table 5. Cloudy modelling results for the extended nebula for Z = 0.15 Z⊙.

Table 6. Cloudy modelling results for the emission-line nebula along the line of sight for Z = 0.15 Z⊙. Line fluxes are extinction-corrected and averaged over an extraction region of 1″.

Table 7. Comparison of ULX spectral properties and their high-excitation emission-line nebulae. Notes. Nebular Heii 4686 luminosity; different values correspond to different measurements reported in the references. Whether unusually high [O iii] 5007/Hβ ratios indicative of high excitation have been reported. UV luminosity inferred from the nebula when available.

Table C1. As per Table 5 but with different contributions from the background stars. Z = 0.15 Z⊙.
A novel machine learning-based approach for the computational functional assessment of pharmacogenomic variants

Background
The field of pharmacogenomics focuses on the way a person's genome affects his or her response to a certain dose of a specified medication. The main aim is to utilize this information to guide and personalize treatment in a way that maximizes the clinical benefits and minimizes the risks for the patients, thus fulfilling the promises of personalized medicine. Technological advances in genome sequencing, combined with the development of improved computational methods for the efficient analysis of the huge amount of generated data, have allowed the fast and inexpensive sequencing of a patient's genome, hence rendering its incorporation into routine clinical practice a realistic possibility.

Methods
This study exploited SNVs that have been thoroughly characterized at the functional level, within genes involved in drug metabolism and transport, to train a classifier that categorizes novel variants according to their expected effect on protein functionality. This categorization is based on the available in silico prediction and/or conservation scores, which are selected with the use of a recursive feature elimination process. Toward this end, information regarding 190 pharmacovariants was leveraged, alongside 4 machine learning algorithms, namely AdaBoost, XGBoost, multinomial logistic regression, and random forest, whose performance was assessed through 5-fold cross-validation.

Results
All models achieved similar performance toward making informed conclusions, with the RF model achieving the highest accuracy (85%, 95% CI: 0.79, 0.90), as well as improved overall performance (precision 85%, sensitivity 84%, specificity 94%), and being used for subsequent analyses. When applied on real-world WGS data, the selected RF model identified 2 missense variants expected to lead to proteins with decreased function and 1 with increased function.
As expected, a greater number of variants were highlighted when the approach was used on NGS data derived from targeted resequencing of coding regions. Specifically, 71 variants (out of 156 with sufficient annotation information) were classified as "Decreased function," 41 variants as "No function" proteins, and 1 variant as "Increased function."

Conclusion
Overall, the proposed RF-based classification model holds promise to lead to an extremely useful variant prioritization and act as a scoring tool with interesting clinical applications in the fields of pharmacogenomics and personalized medicine.

Supplementary Information
The online version contains supplementary material available at 10.1186/s40246-021-00352-1.

Keywords: Machine learning, Computational approaches, Functional prediction, Pharmacogenomic variants

Background
Various patient-specific factors (i.e., ethnicity, age, coexisting conditions, co-administered medications) have been associated with deviations between the expected and the observed effects of a specific medication. In addition, a significant percentage of these differential drug responses has been attributed to genetic variants located in genes involved in the processes of pharmacokinetics or pharmacodynamics, or even in genes coding for enzymes of the immune system (i.e., HLA genes), commonly described as pharmacogenes [1][2][3]. This genetically determined diversity of drug effects, as well as its exploitation toward tailoring the medication scheme, is the primary focus of pharmacogenomics (PGx), and an integral component of personalized medicine.
To this end, genotyping platforms, such as DMET™ Plus by Affymetrix, can be used to detect well-characterized, common genetic variants [4]. Alternatively, next-generation sequencing (NGS), either whole exome sequencing (WES), whole genome sequencing (WGS), or even targeted resequencing, can also be used for this purpose, thus providing a more comprehensive idea of an individual's genomic composition [5][6][7]. To date, 15% of the drugs approved by the EMA (European Medicines Agency) in the period 1995-2014 [8], and 7% of the drugs approved by the American Food and Drug Administration (FDA), are accompanied by pharmacogenomic recommendations [9]. Interestingly, relevant PGx biomarkers can be either germline variants in pharmacogenes, mostly single-nucleotide variants (SNVs) or copy number variants (CNVs), or somatic variants in cancer cells that affect a tumor's response to antineoplastic drugs, as well as epigenetic modifications of histones and DNA, which could potentially affect the drug response [3]. The effects of these PGx variants might range from altered drug exposure, and hence modified efficacy or side effects, to idiosyncratic reactions [1][2][3]. The results of large-scale NGS analyses reveal several challenges that complicate the interpretation of the effects of PGx variants on protein function. For example, a large volume of novel, rare (minor allele frequency, MAF < 0.5%), population-specific SNVs, which could affect protein function, has been detected within protein-coding genes. These genes appear to be enriched in potentially damaging variants, owing to the combination of rapid population growth and the weak action of purifying selection [10]. Similar observations were made when focusing on 202 genes whose products are molecular targets for drug action [11].
Regarding the genes coding for phase I metabolic enzymes (CYPs) and drug transporters (UGT, ABC genes), the majority of the identified SNVs within these genes are ultra-rare (MAF < 0.1%) and non-synonymous, while variants that affect splicing sites or lead to loss of the termination codons, as well as nonsense changes, are less common [12,13]. Furthermore, the evaluation of organic anion transporting polypeptide (OATP) transporter sequences provided by the Genome Aggregation Database (gnomAD) has underlined once again the importance of including novel, rare mutations (MAF < 1%) in pharmacogenomic assays [14]. Taken together, NGS analyses have the potential to identify a very large number of PGx variants, most of which are novel, rare, and without biochemical or clinical evidence for their impact on protein function. Performing functional expression assays for such large numbers of variants is not always feasible; hence, the evaluation of predictions derived from in silico tools is an alternative approach to this end. The majority of computational methods used to assess the functional effect of variants at the protein level are intended to distinguish neutral from deleterious variants, based on either a hypothesis (SIFT [15], PROVEAN [16]) or the evaluation of a set of properties, including secondary structure, functional sites, protein stability, and sequence conservation (PolyPhen-2 [17], MutPred [18], GERP++ [19]). More recently, a number of algorithms using unsupervised learning (Eigen, Eigen-PC [20]), as well as gene-level scores (LoFtool [21]) and ensemble approaches that integrate the predictions and training features of other tools, have also been made available (DANN [22], REVEL [23], MetaLR/MetaSVM [24]). However, pharmacogenes and the respective PGx variants tend to differ from genes and variants implicated in disease.
The suitability of the features considered by the available algorithms is questionable, since genes coding for phase I and II metabolizing enzymes appear to be evolutionarily less conserved [25], possibly due to their limited role in endogenous processes and the fact that even a mild modification of the pharmacokinetics and pharmacodynamics can lead to significant effects [3]. Nevertheless, the development of an improved framework for the evaluation of pharmacogenomic variants, combining different classifiers and appropriately adjusting their prediction thresholds, has led to promising results [26]. Herein, we propose a comprehensive model for the assessment of PGx variants by evaluating in silico protein prediction scores with the use of machine learning (ML), thus highlighting the PGx variants that are most likely to alter protein function and consequently have a PGx impact.

Results
The current study focuses on exploiting publicly available human variation data with well-defined protein-level functional consequences to train a predictive model for the targeted classification of coding SNVs with regard to their effects on protein function. The assigned protein function effect scores were based on the integration and assessment of in vitro biochemical assays, in vivo evidence, and clinical data. Four different algorithms (AdaBoost, XGBoost, RF, multinomial logistic regression) were trained on a training set consisting of 190 variants located across 11 pharmacogenes and assessed with 5-fold cross-validation. Finally, as an attempt to utilize the method for real-world data, we assessed the applicability of the optimal model on NGS data, either whole genome or targeted sequencing data.
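As a rough illustration of this training set-up, the sketch below evaluates three of the four classifier families with 5-fold cross-validation on a synthetic stand-in for the 190-variant training set (XGBoost is omitted to avoid the external dependency). The data, feature counts, and hyperparameters are all illustrative assumptions, not the paper's.

```python
# Sketch: 5-fold cross-validation of several classifiers on a synthetic
# 190-sample, 4-class dataset standing in for the annotated pharmacovariants.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 190 "variants", 9 score features, 4 functional classes.
X, y = make_classification(n_samples=190, n_features=9, n_informative=6,
                           n_classes=4, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "MultinomialLR": LogisticRegression(max_iter=1000),  # multinomial by default
}

# One accuracy score per fold, per model.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="accuracy")
          for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: mean accuracy {s.mean():.2f} (+/- {s.std():.2f})")
```

In practice the fold-wise mean and spread are what allow the kind of model comparison reported here, rather than a single train/test split on so small a set.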
Performance metrics for the machine learning models toward the functional assessment of PGx variants
The performance of the classifiers, which were constructed with the variables recommended by the recursive feature elimination (RFE) method, was favorable despite the limited sample size of the training set (N = 190 variants in 11 genes). More precisely, the random forest (RF) model achieved the highest accuracy (85%, 95% CI: 0.79, 0.90). Interestingly, multinomial logistic regression led to higher AUC and prAUC values compared to the tree-based approaches, while the achieved accuracy was the lowest among the assessed models. RF was selected as the final approach to be used for the described classification task, since the respective model presented overall improved performance (i.e., accuracy, sensitivity, specificity, and precision) across all four functional classes. Regarding the "Decreased function" variants, RF was more sensitive and precise than the other assessed models, although AdaBoost achieved equal specificity values (Fig. 1). All models performed impressively well for the "Increased function" category and led to very similar outcomes, while RF appeared superior for the detection of "No function" variants, and the AdaBoost and multinomial logistic regression models were more sensitive for the "Normal function" class. The selected machine learning model proved to be highly specific (≥ 92%) for all 4 functional variant classes, with lower, but still favorable, values of sensitivity (80-98%), precision (80-98%), and balanced accuracy (86-99%). The lowest values of these metrics were observed for variants leading to proteins with unchanged (normal), reduced, or no function.
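To make the by-class metrics concrete, the sketch below derives sensitivity, specificity, precision, and balanced accuracy per class from a one-vs-rest reading of a 4-class confusion matrix. The matrix itself is synthetic (chosen so that the "Increased" class separates perfectly), not the paper's.

```python
# Sketch: per-class metrics from a multiclass confusion matrix,
# computed one-vs-rest. The matrix below is invented for illustration.
import numpy as np

labels = ["Normal", "Decreased", "No", "Increased"]
# rows = true class, columns = predicted class; totals sum to 190
cm = np.array([[40,  4,  4,  0],
               [ 5, 42,  5,  0],
               [ 3,  5, 35,  0],
               [ 0,  0,  0, 47]])

for i, name in enumerate(labels):
    tp = cm[i, i]
    fn = cm[i].sum() - tp          # true class i, predicted elsewhere
    fp = cm[:, i].sum() - tp       # other classes predicted as i
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn)          # sensitivity (recall)
    spec = tn / (tn + fp)          # specificity
    prec = tp / (tp + fp)          # precision (PPV)
    bacc = (sens + spec) / 2       # balanced accuracy
    print(f"{name}: sens={sens:.2f} spec={spec:.2f} "
          f"prec={prec:.2f} bacc={bacc:.2f}")
```

Reading a multiclass matrix this way is what yields one sensitivity/specificity pair per functional class, as reported in Fig. 1.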
The model was characterized by a better performance for "Normal function" variants (sensitivity = 0.80, specificity = 0.92, precision = 0.84, balanced accuracy = 0.86), followed closely by "No function" variants (sensitivity = 0.81, specificity = 0.93, precision = 0.81, balanced accuracy = 0.87), and finally "Decreased function" variants (sensitivity = 0.81, specificity = 0.95, precision = 0.80, balanced accuracy = 0.88). Interestingly, the classifier performs extremely well for the category of "Increased function" variants, in which case all computed metrics were above 98% (Fig. 1). To better explain the performance of the RF classifier with respect to the four variant classes, the distribution of the training variants for the scores suggested by RFE and included in the classifier is provided in Fig. 2. The improved performance for "Increased function" can be explained by the better separation of these variants compared to the remaining classes ("No," "Decreased," and "Normal function"), which are characterized by a substantial extent of overlapping values that could complicate their accurate classification. We also attempted to assess the variables that could significantly affect the presented machine learning model. More specifically, when it comes to variable importance, the highest-ranking positions were occupied by the features that RFE suggested as the most informative ones for the classification task. In the present instance, LoFtool emerged as the most prominent feature for the categorization of a variant according to its effect on protein function (Figure S1, Supplementary Data).

Comparing the RF model against other broadly used in silico tools
As a further step, we assessed how different, commonly used functionality prediction algorithms would classify the 190 variants that were included in the final training set.
Toward this end, ClinPred [27], Condel [28], FATHMM [29], FATHMM-XF [30], LRT [31], MetaLR [24], PolyPhen-2 [32], PROVEAN [16], and SIFT [33] were selected, and the corresponding predictions, as provided by VEP, are presented in Fig. 3. Of these scores, only FATHMM-XF can also be applied to non-coding variants, while the rest are intended for use on coding, non-synonymous SNVs. In addition, ClinPred, FATHMM, and MetaLR classify variants as either "Tolerated" or "Damaging"; Condel as "Neutral" or "Deleterious"; FATHMM-XF and PROVEAN as "Neutral" or "Damaging"; LRT as "Deleterious," "Neutral," or "Unknown"; PolyPhen-2 as "Benign," "Possibly Damaging," or "Probably Damaging"; and, finally, SIFT as "Tolerated," "Tolerated with low confidence," "Deleterious with low confidence," or "Deleterious." As a first observation, none of these tools covers variants that could lead to gain of function. Although this functionality is provided by B-SIFT [34], it is not available through VEP, and thus it could not be included in the analysis. Regarding increased function variants, all algorithms except LRT categorize these variants as either "Damaging" or "Deleterious." In addition, there is apparent discordance among the tools' classifications of "decreased" and "normal" function variants, while most algorithms can identify variants leading to non-functional proteins.

First case study (WGS data)
To further demonstrate the prediction performance of the final RF model, we tested its applicability on "unseen" NGS data, namely data that had not been previously used to train the machine learning algorithm. We first tested its applicability on WGS data from a patient diagnosed with coeliac disease. From this process, 1808 variants, including 3 novel ones, were identified within the 10 pharmacogenes of interest (DPYD, CYP2C19, CYP2C9, SLCO1B1, NUDT15, RYR1, CYP2B6, UGT1A1, CYP2D6, TPMT).
Of these, only six missense variants had adequate information, i.e., no missing values in the incorporated functional prediction scores, to be further processed by the RF model. With regard to the observed allele frequencies, four were found to be common (rs1801159, rs2306283, rs4149056, rs35364374), one had intermediate frequency (rs3745274), and one was ultra-rare (rs762454967), with MAFs based on gnomAD genomes. Among these 1808 analyzed variants, we did not identify any variants categorized as loss-of-function (LoF) variants. Table 1 presents these variants, alongside their predicted functional impact, as defined by the majority vote of the individual decision trees. For example, a random forest containing 1000 distinct decision trees was built; if most of those votes recommend that the variant belongs to the "No function" class, then this is the class that is attributed to the variant. In addition, the probability of being classified in each class, based on the votes of all trees of the random forest, is also provided (Table 1). This computational process led to the confirmation of 2 missense variants (located within the SLCO1B1 and CYP2B6 genes, respectively) that could potentially lead to proteins with decreased functionality and 1 missense variant classified as "increased function" (located in RYR1). The remaining two variants were predicted to lead to no changes in protein function (i.e., normal function).

Fig. 1 Metrics showing the performance of the different classifiers, namely AdaBoost, multinomial logistic regression, random forest, and XGBoost. More specifically, the sensitivity, specificity, positive predictive value (pos.pred.value), precision, F1 metric (harmonic mean of precision and recall), and balanced accuracy of the classifiers are provided for each protein function effect class.
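The majority-vote scheme described above can be sketched as follows; the vote weights are invented purely to simulate a variant on which most of the 1000 trees agree, and are not taken from the paper.

```python
# Sketch: class probabilities as vote fractions over 1000 decision trees.
# The per-tree votes are simulated with a fixed seed for reproducibility.
import random
from collections import Counter

random.seed(0)
CLASSES = ["Normal", "Decreased", "No", "Increased"]

# Simulate 1000 tree votes for one variant, biased toward "No function".
votes = random.choices(CLASSES, weights=[10, 15, 70, 5], k=1000)
counts = Counter(votes)

predicted = counts.most_common(1)[0][0]          # majority-vote class
probs = {c: counts[c] / 1000 for c in CLASSES}   # vote fraction per class
print(predicted, probs)
```

This mirrors how a forest's reported "probability" for each functional class is simply the fraction of trees voting for that class.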
The rest of the PGx variants had a high rate (over 85%) of missing values in the features of interest and were mostly (N = 1765 out of 1803; 97.89%) located within intronic regions (Figure S2, Supplementary Data). These were followed by variants in 3′ UTRs (N = 20; 1.11%), synonymous (N = 11; 0.61%), and missense (N = 6; 0.33%) variants. Interestingly, DPYD, which encodes a drug-metabolizing enzyme, accumulated more than 1000 intronic variants. Regarding the potential clinical actionability of these 6 variants (rs1801159, rs2306283, rs4149056, rs35364374, rs3745274, and rs762454967), we retrieved additional information from the PharmGKB database. rs1801159 and rs2306283 were not associated with any predicted changes in the protein function or changes in the dosing guidelines (i.e., normal or low-level changes, respectively). However, changes in treatment were recommended for individuals with the rs4149056 variant genotype, while also stating that any additional risk factors should be considered for statin-induced myopathy. Moreover, rs3745274 carried multiple levels of CPIC evidence for a variety of drugs, such as efavirenz, nevirapine, propofol, imatinib, cyclophosphamide, doxorubicin, mitotane, methadone, and 3,4-methylenedioxymethamphetamine. No PGx clinical information could be retrieved for rs35364374 and rs762454967 within RYR1, which were both predicted as "increased function" variants.

Second case study (targeted resequencing data)
The dataset of 343 variants included 195 known and 148 novel variants, of which 86 novel and 70 known PGx variants (156 in total) were evaluated by the final RF model (data available upon request). The evaluated variants were mostly missense (i.e., 149 "missense," 7 "missense/splice region"). Of these, 71 variants were predicted to lead to "Decreased function" proteins, 41 variants to "No function" proteins, and 1 variant to an "Increased function" protein, while 43 variants were predicted to have no effect on protein functionality (i.e., "normal" function) (Fig. 4).
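As a quick sanity check on the class counts just reported (71 + 41 + 1 + 43 = 156), one can tally a per-variant prediction list; the list below is fabricated only to match those counts.

```python
# Sketch: tallying predicted functional classes for the 156 evaluated
# variants. The per-variant list is fabricated to reproduce the counts.
from collections import Counter

predictions = (["Decreased"] * 71 + ["No"] * 41 +
               ["Increased"] * 1 + ["Normal"] * 43)
tally = Counter(predictions)
print(tally, "total:", sum(tally.values()))
```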
To further estimate the potential clinical actionability of the 156 PGx variants evaluated by the RF model, additional clinical and variant information was retrieved from PharmGKB. rs1801159, rs1801158, rs2297595, and rs1801160 were not associated with any predicted changes in protein function, according to the variant annotation by PharmGKB, an observation in concordance with the prediction classes assigned by the RF model (i.e., "normal" function class). Moreover, rs67376798 was associated with decreased catalytic activity based on evidence from PharmGKB, thus further confirming the prediction class of the RF model (i.e., "decreased" function class). Similar observations applied to the variants rs4149056, rs116855232, and rs3745274, for which the following prediction classes were assigned by the RF model: "decreased," "no," and "decreased," respectively. PharmGKB provides multiple levels of clinical evidence for these variants, the majority of which were associated with decreased protein activity, therefore confirming the presented model results.

Discussion
Conventional genetic testing and clinical guidelines focus solely on a small number of well-studied variants or star alleles in pharmacogenes, while the application of NGS techniques provides the possibility to detect a much wider range of (PGx) variants. Recent studies have demonstrated that coding variants are rare, population-specific, and that a significant proportion of them could potentially affect the protein product (based on in silico assays and metrics) [10][11][12][13][14]. At the same time, the role of copy number variants (CNVs) within pharmacogenes [35], as well as variants in non-coding regions, is gaining more attention, with more than 90% of the polymorphisms detected in GWAS pharmacogenomic studies being non-coding [36].
Owing to the limited number of thoroughly documented PGx variants and the incredibly large number of identified genetic variants that should be experimentally validated, the initial evaluation of the variants found must be performed via in silico tools. The study's main aim was to assess the utility of in silico-derived scores, commonly used for variant annotation, toward the characterization of the potential protein function effects of SNVs identified within pharmacogenes. Among the assessed algorithms (AdaBoost, XGBoost, RF, multinomial logistic regression), RF presented superior performance and was selected as the final classifier. RFs have also been proven to be robust in the presence of outliers or noise, effective even without configuration, and useful in cases where the number of available "-omics" samples is limited compared to the number of available variables [37,38]. The final classifier required minimal hyperparameter tuning and integrated 7 scores, stand-alone or ensemble ones, and 2 custom-created variables. The overall accuracy was equal to 0.85 (95% CI: 0.79, 0.90), with an area under the curve (AUC) of 0.92 and an area under the precision-recall curve (PR AUC) of 0.73. The by-class performance for variants of the Normal, Decreased, and No function classes is adequate, although there is still room for improvement, especially in terms of sensitivity (0.80, 0.81, and 0.81, respectively). Interestingly, the model appears to be efficient, specifically when it comes to identifying increased function SNVs, given the fact that most of the incorporated features are designed to distinguish between damaging and benign variants. Furthermore, LoFtool, an approach that evaluates the tolerance of a gene to loss-of-function mutations, emerged as the most significant determinant of the classification task.
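A minimal sketch of this RFE-plus-importance workflow is shown below. Apart from LoFtool, which is taken from the text, the feature names, data, and all settings are illustrative assumptions, not the paper's.

```python
# Sketch: recursive feature elimination with a random-forest ranker,
# then impurity-based importances on the selected features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Hypothetical score names; only "LoFtool" comes from the paper.
names = ["LoFtool", "score_A", "score_B", "score_C", "score_D",
         "score_E", "score_F", "score_G", "score_H"]
X, y = make_classification(n_samples=190, n_features=9, n_informative=5,
                           n_classes=4, random_state=0)

# RFE drops the weakest feature one at a time until 7 remain.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=7).fit(X, y)
kept = [n for n, keep in zip(names, selector.support_) if keep]
print("selected:", kept)

# Refit on the selected columns and rank them by impurity importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    X[:, selector.support_], y)
ranked = sorted(zip(kept, rf.feature_importances_), key=lambda t: -t[1])
print("most important feature:", ranked[0][0])
```

On the paper's data, this kind of ranking is what places LoFtool at the top; impurity-based importances sum to 1 across the retained features.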
The superior performance of the model in identifying "Increased function" PGx variants, combined with the observation that this specific class in the training dataset represents only two pharmacogenes, might partially explain the importance of this variable. Although there is limited published work in this specific area, the possibility of using PGx variants to develop classification tools has been previously explored, without, however, progressing any further due to the limitations and difficulties that accompany this field [39]. Firstly, the most frequently examined properties in such classification tools are the degree of evolutionary conservation, which is observed at lower levels in pharmacogenes [3] and whose usefulness is therefore debated by a series of studies [26,39,40], as well as parameters regarding the structure of the respective proteins, which have been observed to lead to only small increases in the efficiency of the resulting classifiers [39]. Overall, such factors could influence the quality of the output results in classification models, such as the one presented herein. In addition, the training sets used to train computational models are usually composed of common polymorphisms contrasted against variants (mostly SNVs) related to disease causality, while in terms of drug response, the modifying effect of common genomic variants cannot be ruled out. Moreover, the resulting scores evaluate the pathogenic potential of the examined variants and classify them into, usually, two categories according to certain applied thresholds.

Table 1 Classification outcomes (prediction and probabilities) for WGS data using the final RF model. The predicted class is determined based on a majority vote from the individual decision trees of the random forest classifier, while the presented probabilities depict the corresponding percentage of decision trees voting toward a functional class.
In contrast, PGx researchers usually focus on the induced change in protein function, which can be distinguished at several levels (e.g., increase, decrease, no change, complete loss of activity), while differential drug response is not a disease but a phenotype that occurs under specific conditions (i.e., administration of a specific drug). For example, in a recent study, the adaptation of the proposed classification thresholds and the subsequent integration of selected algorithms, which could provide optimal results for the creation of a comprehensive score, led to a tool with exceptional sensitivity and specificity [26]. However, that work focused exclusively on the distinction between loss-of-function and neutral variants, hence ignoring PGx variants that would result in a protein product of increased activity, which are also of interest in the PGx field. The novelty of our recommended approach lies in the computational "design" of the classifier. Specifically, starting from a VEP-annotated .vcf file as the input, the classifier quickly leads to a list of PGx variants that could harbor a protein function effect and hence a potential clinical PGx impact. Unlike disease-related variants, there is, to the best of our knowledge, no state-of-the-art procedure so far that can be used to interpret variants implicated in drug response [41]. Taken together, the originality of the presented model lies both in the automation of the variant analysis process and in the incorporation of available in silico scores for the evidence-based assessment of pharmacovariants. Given the challenges and implications for the prediction of the functional impact of PGx variants, as well as the complexity of the involved biological processes [42], the findings of this study should be interpreted with caution.
For example, discrepancies have been observed not only among different algorithms, or between in silico predictions and in vitro activity [43], but also when comparing in vitro and in vivo observations. A characteristic example is that of CYP2D6*35, which has not been associated with reduced activity, despite experimental evidence of a reduced tamoxifen hydroxylation capacity [44,45]. Moreover, researchers should keep in mind that the same variant may affect the response to different drugs in different ways. For example, although the CYP2C8*10 and CYP2C8*13 alleles have been found to affect the N-deethylation of amodiaquine, the hydroxylation of paclitaxel, which is also metabolized by CYP2C8, remains unaffected [46]. As mentioned earlier, the presented model has demonstrated promising results despite the limitations of this computational research field. However, there is still room for further improvement toward a more efficient and robust version of the presented model. More specifically, it would be useful to examine and compare the performance of other machine learning (ML) approaches, supervised or not. Significant advantages are also expected to emerge from the collection and curation of larger training sets, comprising more variants and covering additional pharmacogenes. Furthermore, the computed metrics demonstrate a difficulty in distinguishing between normal and decreased/no function variants, which calls into question the suitability of the used features for characterizing these PGx variants. Moreover, the integration of CNVs and non-coding variants, although promising, is often difficult to achieve owing to the limited number of available tools and approaches for CNV calling and for the functional assessment of non-coding variants.
Emphasis should also be placed on the creation of well-characterized sets of PGx variants at the level of protein effects, both laboratory and clinical, as well as on the improvement of the existing databases to facilitate the export of the requested information. In addition, researchers should consider that an individual does not carry just one variant in one pharmacogene; therefore, it is often the combination of PGx variants that results in the overall difference in drug response [25]. To this end, since the contribution of various factors to the response to a given drug is beyond debate, a more comprehensive approach through systems genomics, incorporating a variety of different -omics data, would be particularly useful [47].

Conclusions

The novelty of the computational model presented herein lies in the fact that an ML approach was used to classify PGx variants, particularly novel and rare variants, consequently assigning a protein activity prediction. Overall, the presented model prioritizes annotated PGx variants into different variant effect classes and then assigns a protein function classification after stringent computational assessment and ML processing. Its utility was showcased using two real-life datasets, further supporting the applicability of this model as a clinical decision support tool. Indeed, a validated, methodical prioritization of the multitude of genomic variants stemming from NGS analyses, such as the one presented herein, has the potential to contribute positively toward the large-scale clinical application of pharmacogenomics and facilitate the translation of a patient's genomic profile into actionable clinical information.

Collecting the training data

An appropriate training set of variants was manually curated using the PGx gene-specific information tables created under the collaboration between PharmGKB and CPIC, and was subsequently supplemented with additional variants from PharmVar [48].
This training set consists of 262 variants located across 12 pharmacogenes, with well-defined protein-level functional consequences based on the integration and assessment of in vitro biochemical assays, in vivo evidence and clinical observations. After careful data examination, and owing to high percentages of missing values, 190 variants within 11 pharmacogenes (Table S1, Supplementary Data) remained and were used as our training set. The observed functionality is classified into 5 levels (excluding Unknown/Uncertain function): "Increased", "Normal", "Possibly Decreased", "Decreased" and "No function". However, owing to the limited number of observations in the "Possibly Decreased" and "Decreased" levels, and after careful examination of the available information for these categories, the two levels were combined into one class (Decreased function) (Table 2).

Variant annotation

The curated set of pharmacogenomic variants was annotated using the web interface of Ensembl's Variant Effect Predictor (VEP) tool for the GRCh38 human assembly, as well as version 4.1.a of the dbNSFP database [49], which is also provided through VEP. The majority of the retrieved information is available for variants located within protein-coding regions and includes: a detailed characterization at the protein level (i.e., database identifiers, codons, amino acids, coordinates, protein domains, computational scores, etc.), overlapping known variants, observed frequencies in different populations (i.e., via the 1000 Genomes Project, the Genome Aggregation Database, the Exome Aggregation Consortium data and the Exome Sequencing Project), any related phenotypes (e.g., OMIM, Orphanet, GWAS Catalog) or clinical significance (ClinVar), as well as literature references [50]. Furthermore, the attributed consequence, described using terms developed in collaboration with the Sequence Ontology (SO) [51], and the corresponding impact of a variation are also provided.
Features and variants with a high percentage of missing values (≥ 40%) were excluded, while the remaining missing values were imputed using the k-nearest neighbors algorithm (kNN) [52] with the default number of neighbors (k = 5) and inverse-weighted mean Gower distances [53]. In addition, a step of backwards variable selection through RFE using bagged trees was performed, which recommended the use of 7 of the 45 variables (LoFtool [21], DEOGEN2_score [54], MPC_score [55], BayesDel_addAF_score [56], integrated_fitCons_score [57], FATHMM_score [29], LIST.S2_score [58]). Furthermore, two binary variables were constructed and included in the analysis: one indicating whether the variant is located within a protein functional domain (according to the InterPro [59] annotation) and one representing high-impact SO consequences (splice acceptor or donor variants, stop gained, frameshift variants, stop or start lost), enriched for loss-of-function (LoF) changes, as defined by MacArthur and coworkers (2012) [60].

Training of the machine learning model

All preprocessing and ML-related analyses described in this work were performed using the R language for statistical programming (version 4.0.2) [61]. To exploit the ability of the abovementioned features to explain potential protein function effects of variants derived from NGS analyses, a variety of tree-based methodologies was assessed, alongside a special case of a neural network acting in a multinomial logistic regression manner. More specifically, random forests [62,63], multi-class AdaBoost [64,65], XGBoost [66], and a neural network stripped of its hidden layers and activation functions (multinomial logistic regression) [67,68] were used via the caret package [69]. For the selected tree-based models, hyperparameters were tuned based on the optimization of the accuracy metric, while for multinomial logistic regression the default parameters were used (Table S2, Supplementary Data).
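The kNN imputation step above is described only at a high level; the sketch below illustrates inverse-distance-weighted kNN imputation over a numeric-only Gower distance. The original work used an R implementation on mixed-type features, so this Python version and its numeric-only simplification are assumptions for illustration.

```python
import math

def gower(a, b, ranges):
    """Mean range-normalised absolute difference over features observed in both rows."""
    diffs = [abs(x - y) / r for x, y, r in zip(a, b, ranges)
             if x is not None and y is not None and r > 0]
    return sum(diffs) / len(diffs) if diffs else math.inf

def knn_impute(rows, k=5):
    """Fill each missing cell with an inverse-distance-weighted mean of that
    feature's values in the k nearest rows (by Gower distance)."""
    cols = list(zip(*rows))
    ranges = [max(v for v in c if v is not None) - min(v for v in c if v is not None)
              for c in cols]
    out = [list(r) for r in rows]
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            if v is not None:
                continue
            # candidate donors: other rows where this feature is observed
            donors = [(gower(row, other, ranges), other[j])
                      for m, other in enumerate(rows)
                      if m != i and other[j] is not None]
            donors.sort(key=lambda d: d[0])
            nearest = donors[:k]
            weights = [1.0 / (dist + 1e-9) for dist, _ in nearest]
            out[i][j] = sum(w * val for w, (_, val) in zip(weights, nearest)) / sum(weights)
    return out
```

Closer neighbors thus dominate the imputed value, which mirrors the inverse-weighted-mean behavior described in the text.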
Evaluation of the machine learning models

The predictive performance of the created models was assessed via 5-fold cross-validation (CV). During n-fold CV, the data are divided into n equal-sized subsets; n − 1 of these are used to train a model and the remaining one is used to test its performance. This process is repeated n times, until all subsets have been used to test the model, and the metrics computed in each iteration are averaged. More specifically, the metrics of interest include the accuracy, precision, sensitivity (true positive rate), specificity (true negative rate), balanced accuracy (average of sensitivity and specificity), and the F-measure (harmonic mean of precision and recall). Since this was a multi-class task, all metrics were computed for each class separately (according to the one-vs-all method), and the performance of the model was summarized using the corresponding weighted average values for each metric. Furthermore, a random forest classifier was trained with the total of 47 features and used to evaluate their predictive importance.

Testing the applicability of the final machine learning model

To further demonstrate the applicability of the machine learning model, we applied the classifier to data derived from NGS analyses. To this end, we used variant call format (.vcf) files comprising the results of (i) a WGS analysis of a single individual of Greek origin diagnosed with coeliac disease, and (ii) a targeted pharmacogene sequencing analysis of 304 individuals of Greek origin diagnosed with psychiatric diseases [70]. Firstly, the provided variants were annotated using the web interface of the Ensembl VEP tool, and the resulting data were preprocessed to select only those identified in the transcripts of interest. Then, these annotation data were used as the input to our final RF model.

Table 2. Description of the protein function effect classes of PGx variants, used as the training data for the final RF model.
The functionality is split into the classes "Decreased", "Increased", "No" and "Normal" function; the number of the respective PGx variants per class is provided, as well as the pharmacogenes incorporated in each class.

The model then provided the corresponding predicted functionality classes and prediction probabilities. Last, clinical and variant annotations found in PharmGKB (https://www.pharmgkb.org) were also curated to extract clinically relevant information for the PGx variants either assessed or missed by the presented RF model.

Additional file 1: Figure S1. Annotation features examined as training variables in the machine learning model for the functional assessment of pharmacogenomic variants. These features are ranked according to their suggested interpretational significance, from least (bottom) to most important (top). Figure S2. Distribution of PGx variants identified in the WGS data (first case study) that were not processed owing to many missing values. The graph presents the number of PGx variants, by gene, that were not processed any further by the machine learning model, according to the VEP consequence (i.e., 3' UTR variant, intronic variant, missense variant, splice region variant and synonymous variant). The pharmacogenes are color-coded according to the corresponding PGx group: genes encoding drug-metabolizing enzymes, or genes encoding drug transporters or other non-metabolizing enzymes. Figure S3. Sequence Ontology consequences for the identified PGx variants, as derived from a Greek cohort of 304 individuals with psychiatric disorders (second case study). 343 PGx variants within the pharmacogenes of interest were identified in this cohort. Amongst the consequences are 'frameshift', 'missense', 'missense or splice region', 'splice region', 'start lost', 'stop gained' and 'synonymous' variants. Supplementary Table S1.
List of the represented pharmacogenes, which were included in the training dataset of the assessed machine learning models (AdaBoost, Multinomial logistic regression, Random Forest, XGBoost). Table S2. Summary of the parameters and metric values for the tree-based models (AdaBoost, Random Forest, XGBoost), as tested in the present study. Parameters denoted with an asterisk (*) were tuned according to the achieved accuracy.
Maize Precision Farming Parallel Management Technology and Its Application in Northeast China

Problems of extensive management and low resource utilization exist in traditional maize cultivation. With the rapid development of computer technology and the Internet of Things, this paper proposes a maize precision farming parallel management technology. This technology consists of three parts: the management process, the model system and the construction of a digital support environment. The objective is to realize quantitative decision-making along the chain of production targets, control indicators and cultivation management schemes, and to enable the analysis and evaluation of the production environment, production technology and farmers' management behavior. This technology was preliminarily tested and applied in Yian County, Heilongjiang Province. The application results showed that the maize precision farming parallel management technology can provide digital and scientific decision support and can achieve high production efficiency with lower fertilizer inputs and no decrease in yield.

Introduction

Northeast China, comprising Heilongjiang, Jilin, and Liaoning provinces, accounts for 30% of the total maize production in China and is important for ensuring China's food security. However, most farmers practice cultivation based on traditional knowledge, which cannot give full play to the potential of maize production and may cause serious waste of resources. The extensive application of information technology, especially models and Internet of Things technology in the agricultural field, brings new opportunities for the informatization and digitization of agricultural production management.
Over the years, agricultural researchers have done much work in resolving the relationships between the environment, technical measures and crop physiological and ecological processes, and have established a series of agronomic mechanism models, knowledge models and knowledge rules [1,2,3,4,5]. Their work provides the scientific theory, technical systems and methods for describing the genetic characteristics of crop varieties, reflecting the interrelationship between crops and the environment, and enabling purposeful, localized cultivation management and regulation, and has consequently laid a good foundation for agricultural information research. Combining crop cultivation management knowledge models with crop growth models can achieve dynamic prediction and management decisions for crop growth [6]. The Internet of Things is a technology that tends to connect various objects in the world to the internet [7,8]. Applications are developed based on Internet of Things-enabled devices to monitor and control various domains [9]. Internet of Things technology can be used to increase crop production effectively to meet the growing needs of an increasing population, and its application can help increase the quality, quantity, sustainability and cost-effectiveness of agricultural production [10]. Proper cultivation management is very important for improving maize production. This paper introduces the control theory and methods of artificial societies, computational experiments and parallel execution for complex systems, puts forward the structure of a precision farming parallel management system, and thereby provides a new approach for controllable, precise agricultural management decision-making.
Technical specifications for parallel management of maize precision farming

2.1 Management process

The maize precision farming parallel management process includes four parts: planting plan and production target determination, cultivation scheme development before sowing, production warning during growth, and analysis and evaluation after harvest.

(1) Planting plan and production target determination. According to the geographical information of the production field, the cultivation plan and the management level of the production, the maize production potential of the field is calculated using the maize precision farming parallel management scheme design model. The maize production target, mainly the yield target, is then determined.

(2) Formulation of the cultivation scheme before sowing. According to the yield target, the parallel management scheme design model and the calculation methods for seed, fertilizer, water and pesticide are used to recommend suitable varieties, sowing date, sowing density and sowing depth, and then to calculate the fertilization and irrigation schemes.

(3) Warnings and regulation during maize growth stages. After sowing, real-time monitoring of maize growth and the field environment is achieved using Internet of Things technology, such as air temperature and humidity sensors, soil temperature and humidity sensors, multispectral sensors and video surveillance systems. The maize precision farming parallel management scheme design model is used to analyze whether the crop growth deviates from the optimal growth indicators.
If there is any deviation or a disaster occurs, the computational experiment algorithm is used to optimize the management of irrigation and fertilization and to form a regulation scheme.

(4) Analysis and evaluation after maize harvest. After the crop harvest, the database of crop production field archives is updated. Based on the data of the current year, the re-evaluation of varieties, meteorology, soil, production level and cultivation scheme, together with model parameter adjustment, is conducted using the maize precision farming parallel management scheme design model. On this basis, the production target and cultivation scheme for the next year can be formulated.

2.2 Model system

The maize precision farming parallel management model system consists of a maize knowledge model, a growth model and a structure-function model. The maize structure-function model is mainly used for optimization analysis and visualization output. The maize knowledge model is developed based on the understanding, analysis, extraction and integration of experts' knowledge and experience, literature and experimental data on maize cultivation management, and on the analysis of the dynamic relationships of maize growth and management indices to variety types, ecological environments and production levels. The maize knowledge model is mainly designed to optimize the cultivation scheme, which includes yield target calculation, variety selection, sowing date, population density, fertilization strategy and irrigation management.
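The warning step (3) above amounts to comparing monitored indicators against optimal ranges from the knowledge model. A minimal sketch follows; the indicator names and threshold values are illustrative assumptions, not figures from the paper.

```python
# Illustrative optimal ranges per indicator; in practice these would come from
# the maize knowledge model for the given variety, growth stage and production level.
OPTIMAL_RANGES = {
    "soil_moisture_pct": (60.0, 80.0),
    "leaf_area_index": (2.5, 4.5),
}

def check_deviation(readings, optimal=OPTIMAL_RANGES):
    """Return (indicator, value, direction) warnings for out-of-range readings."""
    warnings = []
    for name, (low, high) in optimal.items():
        value = readings.get(name)
        if value is None:
            continue  # sensor reading missing; nothing to check
        if value < low:
            warnings.append((name, value, "below"))
        elif value > high:
            warnings.append((name, value, "above"))
    return warnings
```

A non-empty result would then trigger the computational experiment step that optimizes the irrigation and fertilization regulation scheme.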
The maize growth model is based on the theory of the crop-environment-atmosphere continuum. It can reflect the dynamic relationships between maize growth and climatic conditions, management technology, soil physicochemical properties and genetic traits. The optimal cultivation management scheme derived from the maize knowledge model is used as the input of the maize growth model to carry out process simulation analysis, achieving the re-evaluation of varieties, meteorology, soil, productivity levels and the cultivation scheme, as well as the adjustment of model parameters. Based on this, the production target and cultivation scheme for the next year can be formulated. The maize structure-function model is based on the maize growth, morphology knowledge and geometric models. It integrates a canopy radiation transfer model and a three-dimensional visualization model, and realizes canopy structure calculation, light interception, assimilate distribution and organ growth simulation. The maize structure-function model is used for optimization analysis and visualized output.

2.3 Digital support environment construction

The maize precision farming parallel management digital support environment includes the construction of the model database and of the Internet of Things system for real-time data acquisition.
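The model database introduced here, whose contents are detailed in the following paragraph, could be represented by record types like these; the field names are paraphrased from the text and the units are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DailyWeather:
    date: str
    t_max_c: float
    t_min_c: float
    humidity_pct: float
    sunshine_h: float
    radiation_mj_m2: float
    precipitation_mm: float
    wind_m_s: float

@dataclass
class SoilProperties:
    soil_type: str
    bulk_density_g_cm3: float
    wilting_point: float
    field_capacity: float
    organic_matter_pct: float
    total_n_pct: float
    available_p_mg_kg: float
    available_k_mg_kg: float
    ph: float

@dataclass
class FieldPlot:
    location: str                 # geographic information of the managed plot
    management_level: str         # "high" | "medium" | "low" (by mechanization degree)
    soil: SoilProperties
    weather: List[DailyWeather] = field(default_factory=list)
    # field management archive: variety, sowing date, fertilization/irrigation records
    archive: List[dict] = field(default_factory=list)
```

Each plot record bundles the static soil and location attributes with the time series that the Internet of Things system appends during the season.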
The database construction includes geographic information, meteorological information, soil properties, variety information, agricultural materials information, farmers' management level and field management archive information. The geographic information refers to the location of the managed plot. The meteorological information includes the daily maximum/minimum temperature, air humidity, sunshine duration, radiation, precipitation and wind speed. The soil properties include soil type, bulk density, wilting point, field capacity, organic matter content, total nitrogen content, available phosphorus content, available potassium content and pH value. The farmers' management level is divided into three levels based on the degree of mechanization: high, medium and low. The field management archive information comprises the details of the production process, including variety, sowing date, fertilization/irrigation dates and amounts, etc.

The Internet of Things system for real-time data acquisition includes air temperature and humidity sensors, soil temperature and humidity sensors, multispectral sensors, a video surveillance system, and visible-light, multispectral and infrared sensors based on an unmanned aerial vehicle platform. It can provide real-time information on crop growth, crop diseases, meteorological disasters, etc.

Application of maize precision farming parallel management in Heilongjiang Province

The maize precision farming parallel management technology was applied in Yian County, Heilongjiang Province. The demonstration base is located in the Yian Provincial Agricultural Science and Technology Park, where the maize cultivated area is about 500 mu. First, the digital support environment was constructed according to the maize precision farming parallel management procedures (Fig.
1). The experiment field was 10 mu, and the maize was planted under two management schemes: one according to the maize precision farming parallel management system and the other according to local farmers' traditional practices. The primary properties of the soil are presented in Tables 1 and 2. According to the meteorological data and soil properties of the field plot, the maize precision farming parallel management system was used to make the cultivation decisions. In 2015, the yield target was set to 750 kg/mu, and the cultivation decisions were made for rain-fed conditions. The two management schemes are shown in Table 3. Compared with the farmers' traditional scheme, the system scheme decreased the N input by 20 kg/ha and the P2O5 input by 11 kg/ha, and increased the K2O input by 11 kg/ha. The yield results of the two schemes are shown in Table 4. Maize yield and yield components under the system scheme were superior to those under the traditional scheme; the yield under the system scheme was 8% higher than that under the traditional scheme.

Conclusions

The maize precision farming parallel management technology is established based on a model system and the Internet of Things. Its application in Northeast China showed that the technology is useful for improving maize production. The system scheme derived from the maize precision farming parallel management technology decreased input costs and increased farmers' profit compared with the traditional scheme, and also alleviated the negative effect of chemical fertilizer on the soil ecological environment. Therefore, maize precision farming parallel management is a sustainable technology that benefits both farmers and the environment.

Fig. 1. Environmental data acquisition equipment installed at the experiment site.
Table 1. Primary soil physical properties at the experimental site.
Table 2. Primary soil chemical properties of the plough layer at the experimental site.
Table 3.
The two comparable management schemes.
Table 4. Maize yield and yield components in the two different schemes.
An Assisted Workflow for the Early Design of Nearly Zero Emission Healthcare Buildings Energy efficiency in buildings is one of the main goals of many governmental policies due to their high impact on the carbon dioxide emissions in Europe. One of these targets is to reduce the energy consumption in healthcare buildings, which are known to be among the most energy-demanding building types. Although design decisions made at early design phases have a significant impact on the energy performance of the realized buildings, only a small portion of possible early designs is analyzed, which does not ensure an optimal building design. We propose an automated early design support workflow, accompanied by a set of tools, for achieving nearly zero emission healthcare buildings. It is intended to be used by decision makers during the early design phase. It starts with the user-defined brief and the design rules, which are the input for the Early Design Configurator (EDC). The EDC generates multiple design alternatives following an evolutionary algorithm while trying to satisfy user requirements and geometric constraints. The generated alternatives are then validated by means of an Early Design Validator (EDV), and then, early energy and cost assessments are made using two early assessment tools. A user-friendly dashboard is used to guide the user and to illustrate the workflow results, whereas the chosen alternative at the end of the workflow is considered as the starting point for the next design phases. Our proposal has been implemented using Building Information Models (BIM) and validated by means of a case study on a healthcare building and several real demonstrations from different countries in the context of the European project STREAMER. 
Introduction

Energy efficiency in buildings is gaining more attention due to the high impact of buildings on energy use and carbon dioxide (CO2) emissions [1,2]. As a result, nearly Zero Emission Buildings (nZEBs) are becoming part of several countries' legislation and policies due to their promising potential to reduce energy use and increase the share of renewable energy [3-6]. Healthcare facilities, being intensive energy consumers due to their special characteristics, are the focus of many studies due to their substantial fraction of the total energy consumed [7,8]. Generally speaking, an nZEB is an energy-efficient building that has a high energy performance [1]. Nearly zero energy buildings have gained much interest during recent years and are now regarded as real and integral solutions for energy savings [1,5]. Due to the ambiguity of the concept, many proposals have focused on defining a framework for nZEBs and on providing the metrics necessary to assess a building's impact on the environment [3,9-11]. The different approaches were surveyed by Marszal et al. [5]. Decisions made at early design stages have a significant influence on the energy performance during the operating stage of a building [12-14]. Furthermore, the increasing number of requirements, stakeholders and regulations during the building design process complicates the tasks of decision makers [15,16]; i.e., it is not easy to design a zero emission building that satisfies all of the constraints, and many variables and parameters have to be analyzed to find an optimal trade-off between the user requirements and the nZEB constraints [5,17].
Designing buildings from scratch requires much work; i.e., the combination of parameters and variables to be considered may generate a vast number of different designs, which can be assessed and compared side by side to identify the optimal design [17]. Furthermore, the lack of decision support systems at early stages causes designers to hesitate in their search for optimal energy-efficient buildings [18]. Relying on automated processes and on recent technological advances should make decision making towards nZEBs easier [17,19,20]. We propose a workflow, supported by a collection of BIM-based tools [21,22], to assist decision making during the early design phases of healthcare buildings. Our proposal is intended to assist architects and designers, allowing them to make informed decisions towards nZEBs. This framework considers the aspects that are available during the early design phase and that have an impact on the overall energy efficiency of the building [23], namely: its services, geometry, orientation and building elements.
The proposed workflow, which makes use of the labeling system for healthcare buildings defined in [24,25], is accompanied by a set of tools. It is composed of the following steps: (i) briefing, where the user requirements are defined in a table-based sheet; (ii) design rules' definition, where further user requirements are defined using a domain-specific language; (iii) design configuration, in which different design alternatives are automatically generated based on the user requirements and rules; (iv) model checking, in which user requirements and some rules of the building code are checked; (v) energy performance assessment, where an early energy performance estimation is calculated; and (vi) life cycle cost assessment, where early financial aspects of the building are calculated. Our proposal includes the use of a dashboard, where user-friendly results are shown, allowing the designers to easily make informed decisions during the early phases. The proposal is validated by means of a running example, which is described throughout the article, and it is also being validated in the context of the European project called STREAMER (http://www.streamer-project.eu/). The need for such a workflow is illustrated by the findings of Kohler and Moffatt [26], who indicated that the early design phase is where decisions have the greatest impact, while the costs of such decisions are much smaller than when they are made during later phases of the design or construction. Furthermore, Bragança et al. [27] indicated that the difficulty with making decisions in the early design phase is that the data on which to base such decisions often have a very low level of detail, if they are present at all. The workflow proposed in this article aims to increase the amount of data available for such decisions in the early design phase, as illustrated by Figure 1, which has been created based on Kohler and Moffatt's [26] and Bragança et al.'s [27] findings.
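The internals of the Early Design Configurator are not detailed in this excerpt. Purely as an illustration of step (iii), the sketch below runs a generic elitist evolutionary loop over a toy "design alternative" (floor area per room type); the brief requirements, fitness function and mutation operator are invented for the example and are not the EDC's actual algorithm.

```python
import random

def evolve(init, fitness, mutate, generations=60, pop_size=20, elite=5, seed=0):
    """Generic elitist evolutionary loop: keep the best candidates, refill by mutation."""
    rng = random.Random(seed)
    population = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elites = population[:elite]
        population = elites + [mutate(rng.choice(elites), rng)
                               for _ in range(pop_size - elite)]
    return max(population, key=fitness)

# Toy stand-in for a design alternative: floor area (m^2) per room type.
BRIEF = {"ward": 120.0, "lab": 60.0, "office": 40.0}  # invented brief requirements

def init(rng):
    return {room: rng.uniform(10.0, 200.0) for room in BRIEF}

def fitness(design):
    # Higher is better: negative total deviation from the briefed areas.
    return -sum(abs(design[room] - BRIEF[room]) for room in BRIEF)

def mutate(design, rng):
    child = dict(design)
    room = rng.choice(list(child))
    child[room] = max(10.0, child[room] + rng.gauss(0.0, 10.0))
    return child

best = evolve(init, fitness, mutate)
```

The elitist loop guarantees the best candidate never worsens between generations, which is the property that lets a configurator enumerate progressively better alternatives against the user requirements.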
The rest of the article is organized as follows: Section 2 describes the methodology followed to prepare this article; Section 3 briefly reports on closely-related proposals in the literature; Section 4 presents our proposal and describes the steps on which it relies; Section 5 reports on the validation of our proposal and on our experimental results; and finally, Section 6 concludes our work and outlines future work.

Methodology

The goal of our work is to create an aligned workflow for assisting the stakeholders of a building project during the early design phases. We have followed a methodology similar to the one used by Attia et al. [28]. The methodology started by collecting information on the current workflows and methodologies for designing healthcare buildings and on the aspects that are specific to this kind of building. Then, we prepared the first version of the workflow, which was updated and polished based on interviews with the architects and engineers involved in the design and construction of healthcare buildings. Then, we compared our workflow with the most closely related approaches in the literature and checked that the important aspects and particularities of healthcare buildings were considered in our approach.

The validation of the workflow and the accompanying Information and Communication Technology (ICT) tools was first performed by means of synthetic tests to check its performance and to tune the tools for an optimized configuration. Then, the workflow and its tools were tested on real cases in the STREAMER project to evaluate whether the workflow and its tools are aligned and support the early design process. We use a running example throughout the article, which shows the input and output of each step of the workflow.
Related Work

Modeling nZEBs is a challenging research issue, which is reflected in the increasing number of proposals and approaches in the literature focusing on this field. It requires the combination and optimization of many design aspects (energy, cost and comfort), which results in a time-consuming and computationally-expensive task. In the following, we summarize the most important approaches in the literature that provide: (i) methodologies and recommendations for achieving nZEBs; (ii) life cycle cost models for achieving economically efficient designs of nZEBs; (iii) tools that are intended to assist building designers in achieving high energy performance nZEBs; and (iv) models for achieving user comfort in nZEBs. For the first group of approaches, we briefly describe the difference between our proposal and the existing methodologies, whereas for the others, the main difference resides in the holistic nature of our proposed workflow for nZEBs; i.e., very few publications focus on defining an approach that considers the different aspects (brief, rules, energy and cost) together during the early phases of the project.

Methodologies and Recommendations

The process for designing nZEBs differs from conventional building projects [29]. Few proposals in the literature have provided methodologies and recommendations to achieve nZEBs [28,30], whereas many proposals have focused on designing nZEBs for a specific climate and on reporting on the methodology used and on the different alternatives recommended for that climate. Attia et al.
[28] identified, modeled and proposed an integrated design process for achieving high energy performance buildings. The authors focused on four main aspects, namely: the process phases, the user roles and responsibilities in the building, the tools that are used for this purpose and the metrics used to identify high energy performance buildings. This proposal reported on ongoing work that had to be refined and validated and that needed to be supported by software. Our approach is more complete; i.e., the focus on healthcare buildings, the accompanying software tools and the validation on real demonstration sites provide a more complete and validated proposal.

A three-step methodology was proposed by Visa et al. [30] for transforming buildings with implemented renewable energy systems into nZEBs. These steps are as follows: (i) evaluate the current building status by studying the building parameters, site characteristics, implemented renewable systems and standardized indicators for renewable energy systems; (ii) identify tailored measures for reducing energy demand, such as changing the building envelope and equipment; and (iii) develop new on-site optimized renewable energy mixes and extend the existing ones, based on on-site technical and economic data. This proposal was later extended by the same authors to a four-step method [31], whose steps are similar but additionally include studying the current building demand and evaluating the energy produced by already implemented renewable energy systems. Both proposals are quite different from ours, since they are oriented towards the refurbishment of general buildings until nZEB levels are achieved, whereas our proposal focuses on defining a workflow for creating new nZEB healthcare buildings.
Since different nZEB solutions are needed for different climate zones, many researchers have also studied how nZEBs can be achieved in different climate zones and regions [32][33][34][35][36][37][38]. Such studies are intended to provide guidelines and optimal solutions that shall help designers in the studied climate zones to make decisions during their design process to achieve nZEBs. Compared to our proposal, each of these approaches is based on a single case study and a specific climate zone, whereas our workflow can be used for healthcare buildings without a predetermined climate zone.

Cost

Many researchers have focused on economical assessment tools and frameworks for nZEBs, providing models and methodologies for cost-optimized and economically-efficient nZEB designs [18,37,39-41]. Hamdy et al. [40] provided a so-called multi-aid optimization scheme, which aims at supporting robust cost-optimal decisions on the energy-performance levels of buildings. Kapsalaki et al. [39] proposed a methodology, accompanied by a calculation platform, that is intended to identify economically-efficient design solutions for residential nZEB design, considering the local climate, the energy resources and the economic conditions. Ferrara et al. [41] provided a cost-optimal model for a single-family building typology in France. This model was created and calibrated using simulation tools and an optimization algorithm for minimizing the objective function and finding the cost-optimal building configuration. Kang [18] proposed a Life Cycle Cost (LCC) evaluation tool, including an optimization algorithm, for the early design phase. The author intended to provide an idea of the economic benefits of a given nZEB during the early design phase and to guide the designer towards more effective design decisions, without spending much time and effort.
A more complete survey was performed by Sesana and Salvalai [42], in which the authors studied the different research approaches for performing the economic assessment of nZEBs.

Energy

Researchers have also worked on providing decision-support software tools for nZEBs that try to choose the nZEB alternative with the most optimized energy performance [17,43]. Attia et al. [17] proposed the so-called ZEBO toolkit, which is a simulation-based design support tool that is intended to allow informed decision making for nZEBs during the early design phases in Egypt. ZEBO allows users to test the energy performance of different building configurations by using EnergyPlus [44] as a simulation engine. The results of evaluating the different building design configurations are reported by means of energy performance graphs, which shall help the designer make informed decisions. Lin and Gerber [43] proposed the so-called EEPFD (Evolutionary Energy Performance Feedback for Design), which is a framework that is intended to generate design alternatives, perform an energy evaluation of these alternatives, optimize them, choose the ones that have better energy performance and, finally, provide a trade-off study of all of the generated results for design decision makers.

Comfort

User satisfaction is also one of the main indicators for accepting a given nZEB design. Some researchers have studied the comfort parameters and provided approaches and lists of factors to be addressed to improve user satisfaction in nZEBs [45][46][47]. Sartori et al. [45] briefly presented some comfort and energy performance recommendations and guidelines for nZEBs. Mlecnik et al. [46] studied end-user satisfaction in nZEBs and provided some recommendations for improving the quality and comfort of such buildings. Finally, Carlucci and Pagliano [47] proposed a modeling and optimization approach for designing comfort-optimized nZEBs.
Our Proposed Methodology

Our proposal goes beyond the approaches reported in the previous section by proposing an early design workflow, accompanied by a set of tools for early design decision support. The workflow exploits the advances in the ICT field by creating and proposing different alternatives that match the initial requirements, validating these alternatives, performing energy calculations and estimating life cycle costs. The workflow is accompanied by a set of tools that are prepared for working during the early design phases, which allows designers to make informed decisions instead of starting from scratch.

Figure 2 shows an overview of the proposed workflow and tools: (i) it starts from the user requirements (brief) and constraints; then, (ii) preliminary design alternatives are automatically generated; these alternatives are then (iii) checked to detect inconsistent designs and issues; (iv) an energy assessment is then performed to check whether the buildings comply with nZEB requirements; and finally, (v) a cost analysis is performed for the validated alternatives. When the current design does not fulfill the given rules or the designer's expectations, the previous steps can be run again to create different designs. When it is not possible to find a solution that fulfills all of the given information, the initial steps should be repeated (dotted arrows). The resulting indicators are shown in a dashboard, and the designer can choose the design that best fits as an initial design for the next design steps. The whole proposal is developed using Building Information Models (BIM), which allow interoperability among the tools. In the following subsections, we first describe the running example, which will be used throughout the article to show the input and output of each step of our proposal. Then, we briefly describe the labeling system and the labels used, which is an essential step for the proposed workflow and tools. Later,
we describe each of the steps of our workflow reported in Figure 2, namely: briefing, creating early design rules, creating early design alternatives, checking the early design, performing the early energy calculation and performing the early cost estimation. In each subsection, we provide a description of the step, the proposed tool and how the running example is processed.

Running Example

In order to show how our proposal works, we consider a hypothetical running example of a small, but representative healthcare building. This example will be considered throughout our proposal and detailed at each step of the proposed workflow, whereas the obtained results will be discussed in the validation section. Our proposal shall help us define the optimal building location, orientation, geometry, envelope, space distribution and layout, so as to fulfill the user requirements and some of the existing healthcare regulations. These building configurations shall be optimized to achieve the most energy- and cost-efficient healthcare building during the early design phase.

We assume that this healthcare building is intended to be built in Paris, France, and that it is expected to be a two-story building, with a net and useful story area of approximately 1000 m² (excluding the area of the constructed walls and the corridors). In the next section, for each step of our proposed workflow, we provide the input and the output for this running example.
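The chaining of the workflow steps can be sketched in a few lines of Python. All of the function names and data structures below are our own illustrative assumptions; the actual STREAMER tools exchange IFC and XML files rather than Python objects, and the placeholder numbers stand in for real assessments.

```python
# Illustrative sketch of the early design workflow; all names are assumptions.

def generate_alternatives(brief, rules):
    # (iii) Early Design Configurator: produce candidate layouts.
    return [{"id": i, "area": brief["area"]} for i in range(3)]

def check_design(design, rules):
    # (iv) Early Design Validator: keep only layouts that meet the rules.
    return design["area"] >= rules["min_area"]

def assess_energy(design):
    # (v) TECT: early energy performance estimate (placeholder numbers).
    return 50.0 + design["id"]

def assess_life_cycle_cost(design):
    # (vi) Early life cycle cost estimate (placeholder numbers).
    return 1000.0 + 10.0 * design["id"]

def run_early_design_workflow(brief, rules):
    # Steps (i) and (ii) produce `brief` and `rules`; the rest is chained here.
    designs = [d for d in generate_alternatives(brief, rules)
               if check_design(d, rules)]
    for d in designs:
        d["energy"] = assess_energy(d)
        d["lcc"] = assess_life_cycle_cost(d)
    # Dashboard view: rank the surviving candidates by the energy indicator.
    return sorted(designs, key=lambda d: d["energy"])

ranked = run_early_design_workflow({"area": 1018}, {"min_area": 1000})
```

When no candidate survives the checks, the dotted arrows of Figure 2 correspond to re-running the earlier steps with a relaxed brief or relaxed rules.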
Labeling

Choices made during the early design phase heavily influence the final energy performance of the building [12,14]. Unfortunately, the early phases of a construction project generally have to deal with information at a high level of abstraction. It is not easy to perform energy calculations and cost estimations during the early design phases, because most of the commercially available energy calculation and simulation tools require a higher level of design detail to perform these assessments, such as the building envelope material, the Heating, Ventilation and Air Conditioning (HVAC) systems and the lighting devices [17,48].

Defining semantic labels provides a strong theoretical basis for enriching space-related elements in the design and helps to cope with the previous issues; i.e., semantic labels allow designers to assign additional properties and values to building elements. Using semantic labels during the whole project allows sharing the vocabulary not only between stakeholders, but also among the tools and the existing projects [25]. Furthermore, semantic labels allow performing energy and cost assessments of the early designs based on the labels' values and their associated properties. Note that creating semantic labels is largely based on the knowledge acquired from previous projects [49].

The labels used depend on the building type, whereas the label values and the information and knowledge linked to them are obtained from previous projects; i.e., previous projects provide the value ranges and needs for the different properties against which early designs can be checked and compared. The same label may have different values depending on the space's function.
Labels can be assigned at different levels, i.e., the district level, building level, functional area level and space level [24,25]. Assigning label values to spaces is not a simple task: it needs experienced and specialized designers and a knowledge base containing the results from many previous projects [50]. The values of the different labels assigned to a given space allow calculating the energy demand at an early design phase and performing an early cost estimation, without an in-depth analysis of the whole building and without complex simulations.

In our proposal, we use the healthcare labels defined in the STREAMER project [24,25,51]. STREAMER defines six types of labels, namely: access and security, comfort class, construction, equipment, hygienic class and user profile. These labels and their value ranges are briefly described in Table 1. The set of labels can be extended by defining new labels and values; however, it is not advised to use a large set of labels, so as to ensure their manageability [24]. An example of a label assignment is shown in Table 2.

Table 1. Healthcare labels defined in the STREAMER project.

Access and security: the access control level for a given space or area; example: who can access the given area. Values: A1-A5.
Comfort class: the level of comfort in a space; example: width of a corridor, story of a given space. Values: CT1-CT8.
Construction: the typology of the construction; example: story height and space width. Values: C1-C6.
Equipment: the electric power needed for a given area: office equipment or medical equipment. Values: EQ1-EQ6.
Hygienic class: the cleanliness level of a given area; example: sterilized operating theater or office. Values: H1-H5.
User profile: the period of the day during which a given area is used; example: all days from 8:00 a.m. to 2:00 p.m. Values: U1-U4.

Table 2. An example of a hospital's space labeling.
Operating theater: the space where operations take place. Labels: A1, CT3, C1, EQ1, H1, U2.
Waiting space: the space where patients can wait to be attended to. Labels: A2, CT2, C1, EQ1, H1, U2.
Medical archive: the space where medical archive files are stored. Labels: A5, CT5, C1, EQ4, H5, U3.

At the implementation level, including labels when using BIM and sharing them among the different BIM tools is not an issue, thanks to the openness of the Industry Foundation Classes (IFC) standard; i.e., it allows adding new properties to the building model by using property sets [52].

Briefing

Construction projects start with the planning stage, in which the clients have to examine their quantitative and qualitative wishes and needs and to translate them into a written document [53][54][55][56][57]. This process, which is iterative and takes place at the very early stages of the project, is called briefing. It involves different stakeholders of the project, including the client and the designers. Briefing is generally performed by means of face-to-face meetings, supported by collaborative ICT toolkits [54,58]. ICT has also found its way into briefing, which can be noticed in the increasing number of software packages dedicated exclusively to construction project briefing. An extended comparison of briefing tools can be found in [59].

The output of briefing is stored in a so-called brief, or Program of Requirements (PoR). A brief has the form of a report and is generally expressed as a fact sheet accompanied by some natural text and notes. Briefs use a dictionary that includes the terms to be used for defining the spaces and further concepts in the construction project. A high-quality brief is essential for the effective delivery of the project [54].
We propose to use the labels and their values in the briefing step; i.e., given the previous labels and their ranges of values, the clients, with the help of the building designers, shall use these terms and values to define their needs. In this step of the workflow, we focus on the quantitative requirements expressed in the brief by only analyzing the fact sheet, defined as a Comma-Separated Values (CSV) file. The client is asked to express the natural-language requirements by using a domain-specific language, which is described in the following step of our workflow.

Table 3 shows the brief for the running example, including the number of spaces, their surfaces and their assigned labels. The planned healthcare building is expected to have a total of 79 spaces and a net area of 1018 m². The space type column indicates the type of space, and its values are defined in the briefing dictionary. The amount and area columns define the quantity and surface of each space, whereas the columns HC (Hygienic Class), AS (Access and Security), UP (User Profile), EQ (Equipment), CO (Construction) and CC (Comfort Class) define the values of the labels used. The functional area column indicates to which functional area a space belongs, and its values are also defined in the briefing dictionary.

Early Design Rules

The next step in our proposed workflow is to complete the design brief by adding the requirements usually expressed in natural language. Unfortunately, processing natural language is still an open research problem, and none of the existing proposals is universal [60,61].
To cope with the issue of automatically processing texts expressed in natural language, a Domain-Specific Language (DSL) is used for defining the user requirements that cannot be expressed in the brief. DSLs are small high-level languages that focus on a particular aspect of a software system; they are tailored to specific tasks and intended to define a level of abstraction that helps end users be more effective in a specific domain [62,63]. They represent a more natural, high-fidelity, robust and maintainable means of encoding a problem [64].

This DSL is intended to allow the users to express their requirements by means of rules that use enriched objects and relationships. The expressiveness of such rules in the Architecture, Engineering and Construction (AEC) sector is still an open research issue [65,66].

BIM provides a huge number of objects and properties, which can be extended by user-defined properties [66]. Our DSL is based on the BIM objects and their properties and on the labels introduced in the previous subsection, which enables users to handle highly detailed data and to express the quantitative and qualitative construction requirements in a user-friendly language. Furthermore, it supports the definition of different relationships between the spatial objects, which allows users to define their requirements more easily.

Since a detailed description of our DSL falls beyond the scope of this paper, we only provide a brief description of the rules that focus on circulation paths, their properties and the spatial relationships between spaces. The DSL can be extended with new definitions and can be adapted to the new properties that are added in new versions of BIM. A similar DSL, called BERA (Building Environment Rule and Analysis language), was provided by Lee et al. [66].
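To make the idea of such rules concrete, the sketch below parses one simplified rule shape into a structured record and serializes it to a machine-readable XML fragment. This is a toy, regex-based illustration under our own assumptions about the surface syntax and the XML schema; the actual rule editor relies on a full grammar.

```python
import re
import xml.etree.ElementTree as ET

# Toy parser for a single rule shape of the DSL; the pattern below is an
# assumption made for illustration, not the real grammar.
RULE_RE = re.compile(
    r'Rule\s+"(?P<name>[^"]+)":\s*'
    r'Functional area with \(name equals "(?P<area>[^"]+)"\)\s*'
    r'must be contained in the (?P<story>lowest|highest) story;'
)

def parse_rule(text, priority):
    match = RULE_RE.search(text)
    if match is None:
        raise ValueError("unsupported rule shape")
    return {"priority": priority, **match.groupdict()}

def to_xml(rule):
    # Serialize the parsed rule into a hypothetical XML fragment.
    el = ET.Element("rule", name=rule["name"], priority=str(rule["priority"]))
    ET.SubElement(el, "target", kind="functionalArea", name=rule["area"])
    ET.SubElement(el, "constraint", type="containedInStory", value=rule["story"])
    return ET.tostring(el, encoding="unicode")

rule = parse_rule(
    'Rule "Admission story rule": Functional area with '
    '(name equals "Admission") must be contained in the lowest story;',
    priority=9,
)
xml_fragment = to_xml(rule)
```

A full implementation would cover every rule shape (distances, clustering, label conditions) from a grammar rather than from per-shape regular expressions.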
The rule editor contains a semantic analyzer that is responsible for converting the input text rules, which are processed by the lexical and syntactic analyzers, into XML (Extensible Markup Language [67]). ANTLR (Another Tool For Language Recognition) [68,69] was used to implement the DSL and to create the parser that converts the rules into a machine-readable file; i.e., rules written using our DSL are parsed, and an XML file is created. This XML file is used as input for the Early Design Configurator (EDC), which is described in the following subsection. The whole solution has been packaged as a so-called rule editor, which provides an editor to create the user rules and to create the XML file for the next step in the workflow.

Table 4 shows an example of a set of rules defined for our running example. The first three rules in the table are related to functional areas, whereas the last two are related to healthcare spaces. As mentioned before, such rules allow defining spatial constraints and restrictions related to the layout of the spaces and the functional areas. For instance, the first rule in the table places the "Admission" functional area in the lowest story. A priority has also been added to the DSL in order to be able to calculate a fitness function when a rule is not fulfilled in a given design. The priority value ranges from zero, the lowest, which denotes a recommended or optional rule, to nine, the highest, which denotes that the given rule should be strictly fulfilled by the current design.
Rules

priority = 9
Rule "Admission story rule": Functional area with (name equals "Admission") must be contained in the lowest story;

priority = 3
Rule "testing rule 14": Functional area with (name equals "MedicalArchive") must be contained in the highest story;

priority = 8
Rule "LowCareWard grouping rule": Functional area with (name equals "LowCareWard") must be clustered horizontally and vertically;

priority = 5
Rule "Traveling distance between PatientRoom and NursingStation": Traveling distance between space with (name equals "PatientRoom") and space with (name equals "NursingStation") is less than 20.0 m;

priority = 6
Rule "testing rule 17": Space with (HygienicClass equals "H5") must be clustered horizontally and vertically;

4.5. Early Design Configurator: EDC

Attia et al. [15] showed in a recent study how the use of evolutionary algorithms has allowed resolving highly constrained optimization problems in building design and how the use of such algorithms shall be required in the nZEB design process. However, the authors mentioned that integrating these algorithms into the design process is still a research issue. We propose to integrate the Early Design Configurator (EDC) into our process, which shall fill the gap between the building designers and the building performance optimization algorithms [15,70]. The EDC is a software tool that is based on an evolutionary algorithm and that is intended to automatically generate design alternatives starting from the building requirements. The different alternatives can be exported into the IFC format, which allows importing them into further tools as a starting point for the next design phases.
The EDC takes the previously-defined requirements as input data, namely: the brief, with a tabular list of project-specific space specifications and their labels, and the design rules. Then, the designer is asked to create the building geometry by using an editor that allows creating the outer shell of the building. The user can choose where the building shall be placed on a graphical map, which is rendered with tile data from OpenStreetMap (https://www.openstreetmap.org). This allows defining the building size, geometry and orientation, as well as checking the different services (power plants, public transport) and the availability of spaces for installing renewable energy plants near the construction site.

At the core of the EDC lies an evolutionary algorithm, which is used to generate the layout alternatives. The current best layout is visually represented to the designer. An example of the different alternatives created by the EDC for our running example, with different geometries and space distributions, is shown in Figure 3. The rectangles are the rooms inside the current story and building. The algorithm runs iteratively, where in each iteration, a room is randomly moved to a new position in the layout. If the resulting layout has a lower rating than the layout of the previous iteration, then the change is undone. The EDC works on several layouts in parallel, and the worst layout is regularly reset completely and restarted from scratch. Each layout is rated by calculating a rating value with a fitness function, which gives a score, based on the constraints' weights, of how close a given layout is to the optimal solution. The value itself is calculated as the sum of the output values of the different constraints, which are either built-in functions or created from the imported design rules. In the case of constraints created from the design rules, the output value of a constraint is weighted by the priority of the design rule. Each constraint contributes to the fitness of the whole
layout with a value between zero (best) and ∞ (worst), but in the default case, normalized values between zero and one are used. There are three types of constraints, namely:

A hard constraint is a Boolean condition that can be true or false; i.e., the violation of such a constraint results in an unacceptable value, which may be either one or an even higher value for cases where a layout should be discarded because the constraint is violated. An example of this kind of constraint is:
• Space A must be within 20 m of Space B.

A soft constraint results in a value that gets increasingly worse the more the constraint is violated. This kind of constraint requires a border value to normalize the output value, which in most cases is the maximum of the input value. In special cases, where the border value is not the maximum value, the result may be above one. Examples of soft constraints are:
• Space A needs to be as close as possible to Space B.
• The walking distance between Space A and Space B must be as short as possible.

A combined constraint combines both previous calculation methods, such that either a soft constraint is used inside the range of the border value or the result of the constraint is a bad value. Here are some example cases:
• Space A needs to be at least within 10 m of Space B.
• There need to be at least five spaces of Type C within N meters of Space A.

Using values weighted according to the importance of the constraints allows defining design goals by preferring the constraints that accommodate a goal.
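The rating and improvement loop described above can be sketched as follows. The layout encoding, the distance metric and the concrete constraint functions are simplifying assumptions made for illustration; only the overall scheme (normalized penalties weighted by rule priority, and undoing worsening random moves) mirrors the description.

```python
import random

# Penalties are normalized to [0, 1], zero being best, and weighted by the
# priority of the originating design rule.

def manhattan(layout, a, b):
    (xa, ya), (xb, yb) = layout[a], layout[b]
    return abs(xa - xb) + abs(ya - yb)

def hard_within(layout, a, b, limit):
    # Hard constraint: Boolean condition, full penalty when violated.
    return 0.0 if manhattan(layout, a, b) <= limit else 1.0

def soft_close(layout, a, b, border):
    # Soft constraint: penalty grows with distance, normalized by a border value.
    return min(manhattan(layout, a, b) / border, 1.0)

def fitness(layout, constraints):
    # Weighted sum of the constraint outputs (lower is better).
    return sum(weight * fn(layout) for weight, fn in constraints)

def improve(layout, constraints, steps=300, seed=1):
    rng = random.Random(seed)
    best = fitness(layout, constraints)
    for _ in range(steps):
        room = rng.choice(sorted(layout))
        old = layout[room]
        layout[room] = (rng.randint(0, 9), rng.randint(0, 9))  # random move
        score = fitness(layout, constraints)
        if score > best:
            layout[room] = old  # lower rating than before: undo the change
        else:
            best = score
    return best

layout = {"PatientRoom": (0, 0), "NursingStation": (9, 9)}
constraints = [
    (6, lambda l: soft_close(l, "PatientRoom", "NursingStation", 20.0)),
    (9, lambda l: hard_within(l, "PatientRoom", "NursingStation", 15)),
]
initial = fitness(layout, constraints)
final = improve(layout, constraints)
```

The real EDC additionally evolves several layouts in parallel and periodically restarts the worst one, which this single-layout sketch omits.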
The evolutionary algorithm is run until it is interrupted, either by the user or by meeting the termination criterion (maximum number of iterations). The next step is to either manually edit the layout or to change the priorities of the constraints and then continue to run the optimization algorithm. This can be repeated until the designer considers that one of the created designs is a good solution. The designer is also able to continue the development of a given layout in two different directions by cloning the layout and working with the copy. The copy can again be manually edited, or its design rule priorities can be changed again.

To forward the chosen alternative to the following steps of the early design workflow, the created model needs to be exported into an IFC file. An example of such a file can be seen in Figure 4.

Early Design Validation: EDV

The increasing number of stakeholders that participate in the design and construction of a building project and the high number of changing requirements may increase the number of conflicts and may violate the requirements, constraints and building code of the project [71,72]. It is essential to detect inconsistencies and conflicts in the current design as early as possible [73].

Eastman et al. [74] have recently provided a survey and a framework for comparing rule-checking systems. The authors proposed a structure for implementing a rule-checking and reporting system that includes the following four stages: (i) rule interpretation, where rules are structured for their application; (ii) building model preparation, where the necessary information for checking is prepared; (iii) rule execution, where the checking is carried out; and (iv) reporting, where the checking results are created and reported.
The next step of our workflow performs a design validation by checking that the current building designs fulfill the user requirements and the building code. Since the EDC may violate some of the rules, this step shows the designer which of the rules and requirements are not fulfilled and performs an early check to see whether any of the building codes are violated. For this step, we propose an Early Design Validator (EDV), based on a reasoning rule engine, for validating the early design. Since code checking requires much more detail, we only include the building codes that can be verified at the early phases of the building design.

We have followed the stages proposed by Eastman et al. [74] for developing our system. In the following, we describe the modules of the EDV that implement these stages (cf. Figure 5). The first stage is the rule interpretation, in which the EDV reads the user-defined rules, written in different formats (tables and text), and converts them into a machine-processable format. The EDV takes the brief (table-based format) and the design rules (XML format) and translates them into an object-oriented language that can be interpreted by the rule engine. Furthermore, we included a part of the building code for healthcare buildings (which may change between countries) to check that the building fulfills the legislation. This stage is carried out by the rule importer module, which creates the validation rules that can be interpreted by the rule engine in the third stage.
The second stage prepares the building model for checking [75]. During this stage, unknown design parameters are deduced from valid and known properties of the design objects in the model. This stage is essential, since it allows inferring missing data and, at the same time, avoids inconsistencies [74]; i.e., it would be possible to ask users to include all of the information needed for checking a given model, such as the windows' surfaces, but this would introduce erroneous data into the model. Furthermore, at the early design stages, the information in the model is limited, which makes this stage necessary. The model importer reads the building model in IFC, the enriching rules and the labels and tries to derive new information and to extend the existing building model with further information. To summarize, the enriching rules calculate implicit properties, such as volume and size, derive new models and relationships, such as spatial and graph models, and add information from the label definitions to the spaces with assigned labels. At the end of this stage, an enriched building model, ready for validation, is created.

The third stage brings together the enriched building model and the validation rules that are applied to it. This stage is performed by an object-oriented rule engine [76], which is the core of the EDV. Since the rules have been created in a computable format and the model is already prepared, the rule execution is straightforward. However, only the rules with sufficient information available for their preconditions are run; i.e., the rule engine runs a rule only when the existing information fulfills the rule's preconditions.
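The precondition-gated rule execution can be sketched as follows. The space dictionaries and the single area rule are illustrative assumptions (the real EDV operates on an enriched IFC model through an object-oriented rule engine); the sketch only shows how rules are skipped when their preconditions lack information and how only failures are collected for reporting.

```python
# Sketch of the rule-execution stage; the model is a list of space dicts
# and the area rule is an illustrative healthcare-code check.

def area_rule(space):
    # Hypothetical check: single patient rooms must be between 14 and 18 m2.
    if space["type"] != "PatientRoom":
        return None  # rule does not apply to this space
    ok = 14.0 <= space["area"] <= 18.0
    return None if ok else f'{space["name"]}: area {space["area"]} m2 out of range'

def run_rules(model, rules):
    issues = []
    for space in model:
        for precondition, rule in rules:
            # Only run a rule when its preconditions hold, i.e., when the
            # needed properties exist in the (enriched) model.
            if not precondition(space):
                continue
            issue = rule(space)
            if issue:
                issues.append(issue)  # collect failures for the report
    return issues

model = [
    {"name": "Room 101", "type": "PatientRoom", "area": 16.0},
    {"name": "Room 102", "type": "PatientRoom", "area": 12.5},
    {"name": "Archive", "type": "MedicalArchive"},  # area not yet known
]
rules = [(lambda s: "area" in s, area_rule)]
issues = run_rules(model, rules)
# issues == ['Room 102: area 12.5 m2 out of range']
```

Passing objects are simply dropped, which matches the reporting behavior described in the next stage.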
The last stage reports on the validation results. The EDV reports on the building objects or conditions that did not pass the given rules, whereas the building objects that satisfy the validation rules are discarded from the final report. For reporting, we have chosen the Open BIM Collaboration Format (BCF) [77,78], proposed by BuildingSmart (http://buildingsmart.org/), which is an open XML file format that supports workflow communication in BIM processes. BCF allows reporting on the conflicting objects, addressing the conditions (rules) that are not fulfilled and adding a viewing camera at a given location describing the issue, which allows one to effectively communicate the detected issues to end users.

Table 5 lists some of the rules extracted from the healthcare building code in France. For demonstration purposes, we have considered a list of similar rules, which have then been codified using the DSL proposed in the previous section. Note that we have only considered the rules that can be validated at the early design phase, given the limited level of detail.

Table 5. An example of the healthcare code to be checked in the validation step.

Rules:
• The single patient room area is between 14 m2 and 18 m2.
• The minimum width of doors is 1.10 m.
• The minimum width of pathways is 1.40 m.
• The distance between any point in the building and an emergency exit or a staircase is less than 40 m.

Early Simulation: TECT

Once the design has been validated, the next step in our workflow performs an energy performance assessment of the validated designs. Despite the importance of counting on an informative decision support tool during the early design, current design and decision support tools are inadequate to support and inform the design of nZEBs, especially during early design phases, where the level of detail is still very low [79].
To overcome the shortcomings of the existing tools, we have created an early design energy calculation tool, called TECT (TNO Energy Calculation Tool), to be used in this step. TECT allows calculating the energy demand and consumption of a given building in the early design phase. It takes as input a building design enriched with the defined labels, a configuration file containing default but configurable values, and climate conditions, following the EN ISO 16798-1 [80] standard. This standard defines how to establish and define the main parameters to be used as input for building energy calculation and short- and long-term evaluation of the indoor environment.

The energy calculations in TECT are based on the standards related to the Energy Performance of Buildings Directive 2010/31/EU (EPBD) [6], using the harmonized ISO standards EN ISO 52016-1 [81] and EN ISO 52010-1 [82], recommended by the European legislation. These calculations are performed on an hourly basis and at space level. Its added value resides in its ability to provide an estimation of the energy consumption for a given building from a very limited amount of information in its BIM model, applying European standardized methods, and in integrating the results in the building IFC model, which allows their import and viewing in further BIM tools. Figure 6 briefly describes the input and output of our simulation tool TECT. The input for the energy calculation tool is a BIM model (in IFC), created by the EDC or another early design tool, with the necessary information at the space level. Besides using the geometric information, TECT is able to use the labels, such as ComfortClass, UserProfile and Equipment, for energy calculation. These labels are used to infer the missing semantic information that is necessary for energy calculations; i.e., based on the labels given to the building spaces, missing information in the early design model is added to the model, which allows performing the energy
calculations. If the labels are not available in the input model, configurable default values are used. The default values used in the energy calculation tool can be changed in a configuration file and are then used for all of the spaces with missing labels. Figure 7 illustrates how labels can be assigned to the different spaces and integrated with the IFC file as a property set. The building installations are characterized by an overall efficiency of the emission system, the distribution system and the generation system for heating and cooling. The characterization of the installations is defined in the IFC by a property set at the space level. For the characterization of the facade for spaces connected to the outside environment, the same approach is used. The property set contains an identifier for the selected system used for (i) generating heat; (ii) emitting heat; (iii) generating cold; (iv) emitting cold; (v) ventilating; and (vi) the facade. For the efficiency of the distribution systems for heating and cooling, a default value (100%) is used. Each identified system represents one or more properties used as input to the calculation. The energy demand is the net amount of energy needed for heating and cooling of the space, whereas the energy consumption is the energy input to the heating and cooling generation system. In the early design phase, this is the minimum level of information needed to perform the energy calculations.

The results of the energy calculations are integrated into the BIM file (cf.
Figure 8). To do so, a new property set is created in which the following properties are available: heat demand, cold demand, energy consumption by the cooling system and energy consumption by the heating system (all values in MJ/year). Furthermore, the properties Max Power Heat Demand and Max Power Cold Demand, in Megawatts (MW), are available in this property set. The power needed for heating and cooling of the spaces is not limited in the energy calculation tool; i.e., the temperature requirements (set points) are fulfilled by the system immediately. This way, no delays, undershoots or overshoots of the desired temperature will occur. In practice, however, there will be an offset caused by the dynamics and power limitations of the system. This can result in an overestimation of the needed power relative to the power required in the detailed design. Table 6 shows the simulation results for the two alternatives in our running example calculated by TECT. These values have been obtained based on the created designs from the EDC, the space-assigned labels and weather data from Paris.

Early LCC

Performing an economic analysis of a construction project is essential to evaluate the financial feasibility and performance of the project [42]. This is performed by assessing the Life Cycle Cost (LCC) of the current design. LCC is a commonly-deployed method for finding a cost-optimal design, and its results are critical for choosing one design over another; i.e., its analysis provides the investor with more realistic information about the investment and the return on investment of the current project [18].
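TECT's label fallback described earlier (missing space labels replaced by configurable defaults) can be sketched as a simple merge. The label names ComfortClass and UserProfile follow the paper; the default values, and the 100% distribution efficiency mentioned in the text, are used here for illustration only.

```python
# Sketch of TECT's label fallback: labels attached to a space drive the energy
# inputs; spaces with missing labels fall back to configurable defaults.
# Default values below are illustrative assumptions.

DEFAULTS = {
    "ComfortClass": "II",
    "UserProfile": "office_hours",
    "distribution_efficiency": 1.00,  # 100% default, as stated in the text
}

def energy_inputs(space_labels, defaults=DEFAULTS):
    """Merge the space's labels over the configured defaults; labels win."""
    merged = dict(defaults)
    merged.update(space_labels)
    return merged

room = {"ComfortClass": "I"}  # only one label assigned at early design
print(energy_inputs(room))
```

In TECT the defaults live in the configuration file, so the same merge is applied uniformly to every space with missing labels.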
LCC is a method of assessing all costs that are expected to occur during the entire life of the building, aiming at providing an overview of the economic performance of a building over its entire life, as well as a year-by-year financial performance and balance. This includes costs of design, construction, operation, maintenance and the disposal of the building, to mention a few.

Our proposed workflow includes an LCC step in which the alternatives provided by the previous steps can be assessed economically. Since the design alternatives are compared among each other, the LCC calculations are tailored towards this specific purpose: some factors of an LCC calculation can be left out when the comparison is performed on a level playing field; i.e., the same method of LCC calculation is used for all alternatives that are to be compared. An overview of the factors that are included and excluded is provided below. We propose to use the following costs:

• Investment costs (also known as CAPEX): initial and capital costs. This is the amount of money that is initially invested and is capitalized on the financial balance.
• Operational costs (also known as OPEX): the costs in the operation and maintenance phase of the building, including energy, water, cleaning, maintenance, security, general management and technical support. These costs are based on ISO 15686-5:2008 [83].

The CAPEX (Capital Expenditure) and OPEX (Operational Expenditure) are standard components in accounting, and both are needed in order to be able to execute an LCC calculation. However, the following costs have been excluded in this LCC calculation step:

• Demolition and major renovation costs, since it is not easy to predict these costs in the early design phase, especially since they depend on other factors not related to the building itself.
• Financing costs (the financing of investments, for example interest on loans), since they vary per organization, country and economic climate.
• Revenue (the income the building generates), since it strongly depends on the way a healthcare building is exploited.
• Residual value (it is assumed that the building has no residual value at the end of its life).

To calculate the LCC for early designs, we propose the use of the Decision Support Tool (DST), which is based on BIM and allows performing LCC for different designs. The DST supports three different methods for calculating the LCC, including the factors described before. These calculation methods depend on the Level Of Detail (LOD) and are explained below.

The LCC with the lowest LOD is calculated by combining data that can be derived from the BIMs generated by the EDC with ballpark cost figures. From the BIM, the gross story area of all spaces is multiplied by a cost figure per square meter to obtain a first indication of the investment costs, while the operational costs are based on square-meter data from practice, based on empirical studies [84]. The empirical studies depend on the country, and default values are used in this method.

The second method allows using country-specific indices for both investment and operational costs to adjust the ballpark figures and make them suitable for use in different countries' contexts. Such a method allows having more realistic data for the country where the project is being planned.

The third LCC calculation method is more precise since it allows one to perform a calculation based on types or components instead of ballpark figures. The labels, described earlier, provide the foundation for such a system, where each label corresponds to both an investment and an operational cost figure. However, such an approach requires counting on a large and complete dataset of cost figures, which cannot be easily acquired.
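The lowest-LOD method above amounts to area-based ballpark arithmetic, sketched below. All cost figures are illustrative assumptions, and discounting of future costs is omitted for simplicity.

```python
# Sketch of the lowest-LOD LCC method: investment = gross story area x ballpark
# cost per m2; operational costs accrue per m2 per year over the study period.
# All figures below are illustrative assumptions, not STREAMER data, and no
# discounting is applied.

def lcc_ballpark(gross_area_m2, invest_per_m2, opex_per_m2_year, years):
    capex = gross_area_m2 * invest_per_m2          # investment costs
    opex = gross_area_m2 * opex_per_m2_year * years  # operational costs
    return capex + opex

total = lcc_ballpark(gross_area_m2=5000, invest_per_m2=2000,
                     opex_per_m2_year=120, years=30)
print(total)  # 10,000,000 capex + 18,000,000 opex = 28,000,000
```

The second method would simply scale `invest_per_m2` and `opex_per_m2_year` by country-specific indices, and the third would replace the single ballpark figures with per-label cost figures.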
The DST allows the users to supply their own cost library to be used within the LCC calculations, which makes it possible to use it in different construction projects and under different conditions. Table 7 shows the LCC calculation for the two design alternatives created by the EDC for our running example.

Dashboard

It is essential to count on a user-friendly tool for displaying assessment results [18,79]. A dashboard is a possible solution for showing these results. Such a dashboard allows one, in a user-friendly interface, to view and manage the information and design alternatives created during the proposed workflow, as well as to choose the design that best fits [85]. We propose to use a dashboard, which is part of the DST, to complete our workflow.

The dashboard is the platform where the previous steps and the data that have been generated within those steps come together and are reflected upon. Aligned with the purpose of the software tools discussed in this paper, the following definition is applied: a dashboard is a visual representation of the most important information captured on one single screen, which enables a person to understand the required information at a glance in order to control the information or to achieve other goals.

The key principle behind the dashboard is to present this increased amount of available data, transformed into user-friendly information, to facilitate a comparison between design alternatives and decision making. This is done through a two-part process, which is described below:

• First, the BIM that is associated with each design alternative can be displayed in an integrated BIM viewer (cf.
Figure 9). This BIM viewer is configured for the evaluation of the buildings generated by the EDC; i.e., it contains functionalities to visualize and filter spaces in the building according to the labels exported to the BIM by the EDC. These functionalities will help stakeholders to (functionally) evaluate the spaces and the relations between them for each design alternative.
• Secondly, the design alternatives, with the data generated during the workflow, are evaluated by the dashboard; i.e., the data that are generated during the proposed workflow are retrieved from the associated BIMs and are visualized in the dashboard through Key Performance Indicators (KPIs). A KPI is a method of transforming (multiple sources of) data and normalizing them to a uniform scale. This facilitates the weighting and aggregation of multiple sources of data and the easy visualization of such data to the user by rating a KPI from one to 10. These KPIs are displayed in the dashboard as gauges, as can be seen in Figure 10.

After the design proposals are evaluated in the dashboard, it is possible that a clear best solution (design alternative) has arisen from the assessment. In this case, the transition towards a detailed design can be made, where the chosen design alternative will be able to function as a blueprint, a foundation, to build upon. However, it is also possible that none of the analyzed design alternatives comply with the wishes and requirements of the user. In this case, more design alternatives can be generated, but the fact that the requirements were not met could also lead to a more extensive re-evaluation of the choices made in previous steps; it could, for example, lead to changes in the brief or the application of different design rules. This illustrates that by using a dashboard, an active feedback mechanism is created in the proposed workflow.
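The KPI normalization described above (rating heterogeneous indicators from one to 10) can be sketched as a linear rescaling. The linear mapping and the bounds are assumptions for illustration; the DST's actual formula is not specified here.

```python
# Sketch of the dashboard's KPI idea: map a raw indicator onto a uniform
# 1-10 scale so heterogeneous results (energy, LCC) can be compared and
# aggregated. The linear mapping and the bounds are illustrative assumptions.

def to_kpi(value, worst, best):
    """Linearly rescale `value` to [1, 10]; `best` may lie below `worst`
    for cost-like indicators where lower is better."""
    t = (value - worst) / (best - worst)
    t = min(max(t, 0.0), 1.0)  # clamp values outside the expected range
    return 1.0 + 9.0 * t

# Energy use of 80 kWh/m2a, where 150 is taken as "worst" and 50 as "best":
print(round(to_kpi(80, worst=150, best=50), 1))  # -> 7.3
```

Once every indicator sits on the same 1-10 scale, weighting and aggregating them into a single gauge value is a plain weighted average.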
Results and Discussion

In order to test the validity and usability of the workflow and the tools, we took two measures: (i) we used a case study based on the running example as a hypothetical design project to discuss our proposal and the obtained results in this article, and (ii) we tested our proposal on four real demonstration cases within the STREAMER project.

For the running example, the EDC generated many alternatives based on the brief and user-defined rules, of which we have only kept the ones with the highest fitness value. Then, the EDV allowed us to discard further designs and to keep only two alternatives (cf. Figure 3).

The energy calculations of the two design alternatives at the building level are given in Figure 11, whereas the LCC results have been shown in the dashboard section (cf. Table 8). Based on the results of the energy calculations (energy consumption per m2), it can be noticed that the early design Alternative 1 is more energy efficient (by 15.2%) than the early design Alternative 2. The total energy consumption of Alternative 1 is also 8.3% lower than that of Alternative 2.
This is mainly caused by the much higher energy demand and consumption (43%) for cooling. The results shown in Figure 11 are at the building level; however, this information is also available at the space level in the enriched IFC file, as previously shown in Figure 8. When analyzing the results of the LCC calculation (cf. Table 7), we can conclude that the design Alternative 1 is also less expensive (by 4.2%) over its entire life cycle. This can be attributed to the more compact building layout with a smaller total story area, which is reflected in the lower investment and maintenance costs. Additionally, the superior energy performance of the design Alternative 1 also contributes to the lower operational costs. However, the LCC difference between both design alternatives is relatively small when compared to the differences in energy performance.

The proposal has also been validated in the context of the European project STREAMER by means of four real demonstration sites in four different countries. The final results, which are currently being prepared, are promising, but have not been published yet. However, an analysis of one of the demonstration sites and the validation scenarios has already been published [51,86].

Conclusions and Limitations

Much effort is being spent on achieving nZEBs, including healthcare buildings, which are being targeted due to their high energy consumption. Designing such buildings is still challenging due to their special conditions and to the increasing number of parameters to be considered to optimize energy, cost and comfort. Furthermore, assessing early design decisions is essential for achieving optimal building designs due to the impact of the decisions made at early design stages on the final building performance.
We propose an early design workflow for healthcare buildings, accompanied by early design software tools, to guide designers towards optimizing different objectives and towards achieving energy- and cost-optimal early designs. Our proposal allows making informed early decisions and avoids starting new healthcare building designs from scratch.

The workflow starts with the building brief and rules, where user requirements and constraints are defined. Then, these requirements are used to create different design alternatives using an evolutionary algorithm, which tries to satisfy all of them. Code checking and design validation are then performed on the best alternatives, and only the ones that satisfy all of the rules are forwarded to the next steps. Then, energy and cost assessments are performed, and the obtained data are shown in the dashboard, which allows the designer to make informed decisions at the early design phases and to start the next design phases from an optimal design.

There are some limitations in our proposed workflow that need to be investigated in future work, namely: (i) adapting the workflow for different kinds of buildings, and not only healthcare buildings; (ii) the user comfort indicators are not evaluated in this workflow, and this could be a step added just after the cost assessment; and finally (iii) because of the freedom offered by IFC, the open standard digital representation of a building used in this workflow, all stakeholders need to be aware of which information resides where in the building model. For the last limitation, this places an emphasis on coordination, which can be achieved by using Model View Definitions (MVDs). MVDs can define which information, for each step of the workflow, is required to be present within the IFC file.

Figure 1.
Available data for the proposed assisted workflow (figure based on data provided by Kohler and Moffatt [26] and Bragança et al. [27]).
Figure 6. Input and output of TECT.
Figure 7. Labels assigned to a given space in TECT and integrated in the IFC properties.
Figure 8. Energy simulation results at the space level integrated into BIM.
Figure 9. Spaces with the same space labels visualized in the DST BIM viewer.
Figure 10. Visualizing the key performance indicators (above) and underlying performance indicators (below) for a design proposal in the dashboard.
Table 3. Brief and assigned labels in our validation example.
Table 4. Example of rules implemented in the DSL.
Table 6. Energy calculations for both alternatives created by TECT.
Table 7. LCC calculation results for the running example.
Table 8 lists both the energy KPI (obtained by TECT) and the LCC KPI, calculated by the LCC module in the DST.
Table 8. KPI values for the running example.
Evaluating low-cost substrates for Crypthecodinium cohnii lipids and DHA production, by flow cytometry

Crypthecodinium cohnii growth was studied on pure carbon sources (glucose, acetate, glycerol) and low-cost complex carbon sources (sugarcane molasses, crude glycerol and vinegar effluent) for lipid and DHA production. Among the pure substrates, glucose induced the highest lipid content (14.75% w/w DCW) and DHA content (7.14 mg g−1 DCW). Among the low-cost substrates, the highest lipid and DHA contents were observed for the crude glycerol assay (14.7% w/w DCW and 6.56 mg g−1, respectively). Molasses induced the highest proportion of DHA in the total fatty acids (49.58% w/w TFA) among all the substrates studied. Flow cytometric analysis revealed that the vinegar effluent induced the highest proportion of C. cohnii cells with an injured membrane (92.8%). These results point to the possibility of using these low-cost substrates at a larger scale for C. cohnii DHA and biodiesel production, aiming at zero waste and process cost reduction.

Introduction

The marine microalga Crypthecodinium cohnii, a heterotrophic non-photosynthetic dinoflagellate, accumulates significant amounts of lipids (20-50% of its cell dry weight) with a high fraction of docosahexaenoic acid (DHA), an ω-3 polyunsaturated fatty acid that is a component of neural and retinal tissues, a key fatty acid component in breast milk, and necessary for brain development in infants. This compound has well-known benefits for human health, currently having several nutritional and pharmaceutical applications and a growing market size (Diao et al. 2018). DHA can be obtained from marine fish sources; however, its production from a microalgal source shows benefits over DHA obtained from fish, since the pure microalgal oil is odourless, non-dependent on fish stocks and does not contain ocean-borne contaminants, and its vegetarian nature attracts young people (Lopes da Silva et al. 2019).
Under certain cultivation conditions, C. cohnii cells accumulate less than 1% of other types of PUFAs, which is a clear advantage for the downstream DHA purification process. On the other hand, the presence of more than 50-60% of fatty acids with 16 and 18 carbon atoms (C16-C18) in C. cohnii total fatty acids makes this microalga a potential source for biodiesel production.

The carbon source is the most expensive component of fermentation media. In the late 1990s and early 2000s, several pure carbon sources were studied for growing C. cohnii, such as glucose (De Swaaf et al. 1999), acetic acid (De Swaaf et al. 2003), ethanol (De Swaaf et al. 2003) and pure glycerol (Hosoglu and Elibol, 2017a). However, although these carbon sources induce high lipid and DHA productivities, they are expensive to use at large scale (glucose 16 € kg−1; ethanol 1.82 € kg−1; acetic acid 0.45 € kg−1, www.alibaba.com). In addition, ethanol and acetic acid are hazardous compounds that are difficult to handle and transport. On the other hand, the increasing environmental awareness of circular economy principles has boosted the necessity to use low-cost or zero-cost wastes/byproducts/effluents as nutrients in media formulations for microbial growth. In fact, in recent years, food-waste substrates such as carob pulp syrup (Mendes et al. 2007), rapeseed meal hydrolysate mixed with crude molasses (Gong et al. 2015) and cheese whey with corn steep liquor (Hosoglu and Elibol 2017b) have been used in media formulations for C. cohnii ω-3 compounds production. Nevertheless, despite the low cost of these substrates, they often contain inhibitory compounds that affect the cell metabolism and reduce the process yield. Therefore, monitoring the cell physiological status during bioprocess development is crucial, particularly when wastes/byproducts/effluents are used as substrates, to evaluate the cell stress response.
Indeed, a high proportion of damaged/dead cells in the broth will have a detrimental impact on the process performance, since those cells do not participate in the biotransformation, thus reducing the process yield; moreover, if this information is obtained in near real time during the process development, it can be used to change the process control strategy, in order to enhance the process efficiency. However, most of the published works reporting the use of low-cost substrates to grow C. cohnii use conventional techniques to monitor the microalgae cultivations (i.e., optical density, dry cell weight, cell count), which do not give any information on cell status.

The present work evaluates C. cohnii ATCC 30772 growth and lipid production on the most used pure carbon sources (glucose, acetate and pure glycerol) and on low-cost complex carbon sources containing these compounds (sugarcane molasses, which contains sucrose, glucose and fructose; vinegar effluent, which contains acetate; and crude glycerol, which contains glycerol). Flow cytometry (FC) was used to evaluate the microalgae cell stress response to the different substrates, monitoring the enzymatic activity, membrane integrity and ROS production.

The carbon sources glucose (Scharlau), sodium acetate (Merck), pure glycerol (Labsolve), sugarcane molasses, vinegar effluent and crude glycerol were added to the culture medium at the same concentration (0.67 mol carbon atoms per litre). Crude glycerol was previously distilled to reduce the methanol concentration. The characterisation of the complex substrates is displayed in Table 1. Volumes of 250 mL of sterile culture medium containing yeast extract (2 g L−1), sea salt (25 g L−1) and 0.666 mol of carbon per litre of each carbon source were transferred to 500-mL Erlenmeyer flasks, inoculated with 10% v/v of inoculum and incubated in darkness at 27°C and 150 rpm for 8 days.
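The equal-carbon dosing of the media (0.666 mol of carbon atoms per litre) implies a different mass concentration for each pure substrate, as sketched below using standard molar masses.

```python
# Sketch of the equal-carbon dosing used for the media: each pure substrate
# is added at 0.666 mol of carbon atoms per litre. Molar masses are standard
# values; the resulting g/L figures follow directly from the stoichiometry.

SUBSTRATES = {              # (molar mass g/mol, carbon atoms per molecule)
    "glucose":        (180.16, 6),   # C6H12O6
    "sodium acetate": (82.03, 2),    # C2H3NaO2
    "glycerol":       (92.09, 3),    # C3H8O3
}

def grams_per_litre(name, mol_carbon=0.666):
    molar_mass, n_carbon = SUBSTRATES[name]
    return mol_carbon / n_carbon * molar_mass

for name in SUBSTRATES:
    print(f"{name}: {grams_per_litre(name):.1f} g/L")
```

At equal carbon, sodium acetate is dosed at a higher mass concentration than glucose or glycerol because each molecule carries only two carbon atoms.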
For all media, the initial pH was adjusted to 6.5 before sterilisation using NaOH and HCl solutions, and the media were autoclaved at 121°C for 20 min. For the medium containing glucose, the solution containing the salt and yeast extract was autoclaved separately from the glucose solution and eventually mixed under sterile conditions to avoid caramelisation. For the vinegar effluent assay, the medium was sterilised by filtration, in order to prevent acetic acid evaporation. The sugarcane molasses was previously hydrolysed to break down sucrose into glucose and fructose by decreasing the pH to 3 with HCl and storing it at 50°C for 24 h.

Growth parameters

The dry weight was determined by filtering 3 mL of culture through pre-weighed 0.45-μm nylon filters (Millipore, Germany) under vacuum, which were subsequently placed in an oven at 100°C until constant weight, over 18 h. The filtrate was collected and frozen at −18°C for the later analysis of the carbon source concentration in the medium by high-performance liquid chromatography (HPLC). The carbon sources were analysed by HPLC (LaChrom Merck/Hitachi, Germany). Two different chromatographic columns were used: an Aminex HPX-87P (Bio-Rad, USA) was used to detect all carbon sources, while sugarcane molasses was analysed using a SugarPack (Waters, USA) column. The chromatograms were analysed using the ChemStation for LC 3D Systems Rev. B.01.03 software (Agilent Technologies, USA). The pH was determined using a Consort C3021 potentiometer (Consort, Belgium), which was calibrated regularly.

Lipid quantification

Lipids were extracted according to the protocol of Lopes da Silva et al. (2006) with modifications: the microalgal biomass collected after the broth centrifugation was freeze-dried.
Approximately 100 mg of freeze-dried biomass were transferred to a vial under a nitrogen atmosphere and transmethylated at 80°C for 1 h with 2 mL of a methanol/acetyl chloride mixture (95:5 v/v) and 0.2 mL of heptadecanoic acid (17:0) (5 mg mL−1 in petroleum ether, boiling point 80-100°C) as an internal standard. Afterwards, the vial contents were cooled and diluted with 1 mL of water, and the lipids were extracted with 2 mL of n-heptane. The organic phase was separated from the aqueous phase, dried using sodium sulphate (Na2SO4) and placed in a vial adequate for gas chromatography analysis. The methyl esters were then analysed by gas-liquid chromatography on a Bruker Scion 436-GC (Germany) equipped with a flame ionisation detector. Separation was carried out on a 0.32 mm × 30 m fused silica capillary column (film 0.32 mm) Supelcowax 10 (Supelco, USA) with helium as carrier gas at a flow rate of 3.5 mL min−1. The column temperature was programmed at an initial temperature of 200°C for 8 min, then increased at 4°C min−1 to 240°C and held there for 16 min. Injector and detector temperatures were 250°C and 280°C, respectively, and the split ratio was 1:50 for 5 min and then 1:10 for the remaining time. The column pressure was 13.5 psi. Peak identification and response factor calculation were carried out using known standards (GLC 459 and GLC 463, Nu-chek-Prep, USA). The quantities of individual fatty acids were calculated from the peak areas on the chromatogram, using heptadecanoic acid (17:0) as the internal standard. Each sample was prepared in duplicate and injected twice.

Flow cytometry (FC)

Flow cytometry (FC) analysis was performed in a BD FACSCalibur cytometer (Becton Dickinson, USA) equipped with a blue laser, FSC/SSC light scattering detectors and four fluorescence detectors (FL1-FL4). For the analysis in the flow cytometer, 3 mL of culture were removed under sterile conditions and stored in Falcon tubes.
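The internal-standard quantification described for the GC analysis can be sketched as follows; the peak areas and the response factor in the example are illustrative, not measured values. The 1.0 mg of internal standard corresponds to the 0.2 mL of 5 mg mL−1 heptadecanoic acid solution added to the vial.

```python
# Sketch of internal-standard quantification: the mass of each fatty acid is
# estimated from its peak area relative to the 17:0 internal standard, scaled
# by a response factor from the calibration mixes. Numbers are illustrative.

def fatty_acid_mg(peak_area, is_area, is_mass_mg, response_factor=1.0):
    """mass_i = RF_i * (A_i / A_IS) * m_IS"""
    return response_factor * (peak_area / is_area) * is_mass_mg

# 17:0 internal standard: 0.2 mL of 5 mg/mL -> 1.0 mg added to the vial
dha_mg = fatty_acid_mg(peak_area=35200, is_area=44000, is_mass_mg=1.0,
                       response_factor=1.05)
print(round(dha_mg, 3))  # -> 0.84
```

Dividing the result by the dry biomass in the vial gives the per-DCW figures (mg g−1) reported in the abstract.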
These were subsequently subjected to ultrasound treatment for 15 s to disintegrate cellular aggregates. Samples were diluted in McIlvaine buffer (pH 4.0) for the simultaneous detection of cellular enzymatic activity and membrane integrity, and were diluted in PBS buffer (pH 7.4) for ROS detection. The fluorescent dyes carboxyfluorescein diacetate (CFDA, Invitrogen) and propidium iodide (PI, Invitrogen) were used for the simultaneous detection of enzymatic activity and membrane integrity. CFDA is a non-fluorescent compound which penetrates the cells by passive diffusion. Once inside the cell, if esterases are active, they will hydrolyse CFDA to a fluorescent compound. Therefore, if a cell is stained with CFDA, it means that its enzymatic system (esterases) is active and its membrane is intact; if the cell is not stained with CFDA, it means that its enzymatic system is not active, or its cell membrane is not intact, as a permeabilised membrane allows the fluorescent compound to exit the cell. Thus, when using CFDA to evaluate the cell status, it is convenient to simultaneously use a dye for membrane integrity detection. Propidium iodide (PI) was used for membrane integrity detection. PI binds to DNA but cannot cross an intact cytoplasmic membrane. A double-staining protocol was used to simultaneously monitor C. cohnii enzymatic activity and membrane integrity during all the microalgal assays. CFDA was detected in the FL1 detector, and PI in the FL3 detector. Dihydrorhodamine 123 (DHR, Invitrogen) was used to evaluate the C. cohnii oxidative stress response, by monitoring the intracellular generation of reactive oxygen species (ROS), such as hydrogen peroxide (H2O2). DHR is specifically responsive to H2O2 and passively diffuses across cell membranes. Once inside the cell, it can be oxidised, mainly by H2O2, to form cationic rhodamine 123 (Prado et al.
2012), which is a fluorescent compound, usually localised in the mitochondria, emitting a bright fluorescent signal with a maximum emission at 529 nm, detected in the FL1 channel. For control purposes, unstained samples were recorded to take into account the autofluorescence of the cells. For CFDA/PI double staining, the following amounts of dyes were sequentially added, per 495 μL of cell suspension, followed by 15 min incubation in the dark: 3 μL of CFDA 10 mg mL−1 stock solution, followed by 2 μL of PI 1 mg mL−1 stock solution. CFDA and PI were detected in the FL1 and FL2 channels, respectively. For DHR staining, 3 μL of DHR (Invitrogen) stock solution (5 mM in DMSO) were added to 497 μL of cell suspension and incubated at room temperature in darkness for 15 min before analysis. DHR fluorescence was detected in the FL1 channel. At least 2 replicates were made for each dye used. The obtained cytograms were processed and imaged using the Flowing Software (version 2.5.0, Perttu Terho).

Figure 1 shows the biomass profiles for C. cohnii cultivations on different substrates, as a semi-log plot. The acetate and pure glycerol assays attained the highest maximum biomass concentrations (7.03 and 6.33 g L−1, respectively). The glucose assay attained the lowest maximum biomass concentration (2.66 g L−1) among the assays that showed cell growth. The assay containing the vinegar effluent did not allow cell growth. The assays containing the complex substrates, sugarcane molasses and crude glycerol, attained lower maximum biomass concentrations (3.91 and 5.05 g L−1, respectively) than the acetate and pure glycerol assays, but higher than the glucose assay. It was also observed that C. cohnii cells entered the exponential phase immediately after inoculation for all the assays, except for the glucose and vinegar effluent assays (Fig. 1).
Results

The slow biomass increase observed after the exponential phase for the glucose and molasses assays suggests oxygen-limiting conditions at that stage, as a result of a low oxygen transfer rate (Fig. 1). Figure 2 shows the substrate consumption profiles during all the microalgal cultivations. There was no acetate consumption during the vinegar effluent assay; thus, it was not included in the graph. Glucose, acetate, glycerol and crude glycerol were completely exhausted after t = 180 h, with acetate depletion being faster than that of the other carbon sources. During the sugarcane molasses assay, glucose was completely exhausted after 96 h, but fructose was not consumed. Figure 3 shows the kinetic data for the C. cohnii assays. The highest specific growth rate (μ) was observed for the acetate assay (0.025 h−1), followed by pure glycerol (0.019 h−1), crude glycerol (0.018 h−1), glucose (0.017 h−1) and sugarcane molasses (0.013 h−1). The highest substrate uptake volumetric rate (r_s) was observed for the acetate assay (0.125 g L−1 h−1) and the lowest for the glucose assay (0.099 g L−1 h−1).

Flow cytometry (FC)

Flow cytometric controls/CFDA-PI

Preliminary flow cytometric controls were carried out using C. cohnii cells at different physiological conditions, in order to evaluate the efficiency of the flow cytometric protocol in association with the CFDA/PI mixture and DHR staining procedures. These controls were then compared with data obtained during the microalgal cultivations. Figure 4 shows the dot plots for FL1 (CFDA fluorescence intensity)/FL3 (PI fluorescence intensity) concerning the controls for C. cohnii cells. Unstained cells (autofluorescence, Fig. 4a), exponentially growing cells (Fig. 4b), aged cells (Fig. 4c) and heat-treated cells (incubated in a water bath at 100°C for 20 min, Fig. 4d) stained with the CFDA/PI mixture were analysed by FC. The quadrants were defined based on the unstained cell population (Fig. 4a).
Concerning exponentially growing cells stained with the CFDA/PI mixture (Fig. 4b), a major population B comprised 88.89% of C. cohnii cells stained with CFDA but not with PI (CFDA+PI−). These cells have active esterases and an intact membrane, and are considered metabolically active ("healthy") cells. The dot plot of aged cells (collected during advanced stationary phase) revealed that 33.9% of C. cohnii cells (subpopulation A) were stained neither with CFDA nor with PI (CFDA−PI−), indicating that these cells have an intact membrane but inactive esterases. The same plot also shows that 51.0% of the cells were stained with CFDA but not with PI (metabolically active cells, subpopulation B). Moreover, 5.9% of the cells (subpopulation C) were stained with both CFDA and PI (CFDA+PI+), indicating that these cells, despite having active esterases, also have an injured membrane; 3.9% of the cells were not stained with CFDA but were stained with PI (CFDA−PI+, subpopulation D), thus having an injured membrane and inactive esterases. This dot plot shows various C. cohnii subpopulations, demonstrating the heterogeneity of cell physiological states that exists in microbial cultures. The dot plot of heat-treated C. cohnii cells is shown in Fig. 4d. Most of the cells were stained with PI (97.6%, sum of subpopulations C and D), meaning that these cells have a damaged membrane, although 35.9% of the cells showed active esterases (subpopulation C) and 61.7% showed no enzymatic activity.

Flow cytometric controls-DHR

Figure 4e-f show the FL1/FSC dot plots concerning the controls for C. cohnii cells stained with DHR for ROS detection. These plots show the DHR fluorescence intensity detected in FL1 versus FSC (FSC signals give information on cell size; Lopes da Silva and Reis 2008). The regions R1 and R2 were defined based on the cells' autofluorescence (Fig. 4e). The dot plot of exponentially growing cells stained with DHR shows a single population E (99.7%) composed of cells with low ROS production, as expected.
The dot plot of aged cells stained with DHR shows a major subpopulation E (99.3%, Fig. 4g), indicating that age did not induce ROS production. When the heat-treated cells were stained with DHR, a major population (DHR+, 88.2%, placed in the R2 region, Fig. 4f) was detected, composed of cells with high ROS production, demonstrating that the heat treatment induced intracellular ROS production. These results demonstrated that FC, in association with the CFDA/PI double staining and DHR staining, is an efficient method to differentiate C. cohnii cell physiological status concerning cellular enzymatic activity, membrane integrity and intracellular ROS production, which characterise the microalga's stress response to adverse environments. This information is crucial when the microalga is grown on industrial effluents.

C. cohnii cell stress response

Enzymatic activity and membrane integrity (CFDA/PI)

Esterase activity and membrane integrity are a measure of microbiological activity and viability and have also been associated with the cell stress response (Amariei et al. 2020). Figure 5 shows the percentages of C. cohnii cell subpopulations during the cultivations on different substrates, pure and complex. For the glucose assay (Fig. 5a), the proportion of subpopulation B, composed of intact and metabolically active cells, attained 74.3% at t = 48 h, afterwards decreasing to 44.1% at t = 92 h. These variations were accompanied by a concomitant increase in stressed cells (subpopulation A, composed of intact cells without enzymatic activity, which reached 44.0% at t = 92 h, and subpopulation C, composed of permeabilised cells with enzymatic activity, which reached 19% at t = 144 h). The decrease in subpopulation B (thus, C. cohnii cells with active esterases) may reflect the oxygen-limiting conditions that the cells might have experienced during the exponential phase and early stationary phase (Fig. 1), when the cellular oxygen requirements are higher due to cell growth.
As referred to above, oxygen availability in shake flask cultures is limited. Indeed, Takaç et al. (2010) demonstrated that an excess of dissolved oxygen in the stationary phase of growth enhanced Candida rugosa esterase activity. The increase in subpopulation B percentage, followed by a plateau, observed between t = 92 h and t = 164 h, could be due to the higher oxygen availability resulting from the lower cellular oxygen requirements during the stationary phase, since growth slowed down at that stage. The lowest percentage of subpopulation B at the end of the cultivation (34.6% at t = 190 h) was probably due to carbon exhaustion (Figs. 2 and 5a). A decrease in subpopulation B (metabolically active cells) until t = 168 h was also observed for the molasses assay (Fig. 5b), accompanied by a steady increase in subpopulation C (stressed cells), which attained 83.5% at t = 48 h. Again, these variations were attributed to the low oxygen availability in the medium, as observed for the glucose assay, although the cell stress response was stronger for the molasses assay than for the glucose assay.

(Figure 3 caption: Specific growth rate (μ), biomass volumetric rate (r_x), substrate uptake volumetric rate (r_s) and biomass yield (Y_x/s) for C. cohnii cultivations on pure and complex substrates. The specific growth rate (μ) was calculated as the slope of the natural logarithm of the biomass curve versus time. The biomass volumetric rate (r_x) was calculated as (X_f − X_0)/t, where X_f is the biomass concentration determined at the end of the assay and X_0 is the biomass concentration at t = 0. The substrate consumption volumetric rate (r_s) was calculated as (S_f − S_0)/t, where S_f is the substrate concentration determined at the end of the assay and S_0 is the substrate concentration at t = 0 h. Error bars show the standard deviation (n = 2).)
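The kinetic definitions quoted above (μ as the slope of ln biomass versus time; volumetric rates as the concentration change over elapsed time) can be expressed directly. The following is a minimal sketch with illustrative values, not the paper's raw measurements; taking the absolute value in the uptake rate is an assumption, so that substrate consumption is reported as a positive rate:

```python
import math

def specific_growth_rate(times, biomass):
    """Slope of ln(biomass) versus time over the exponential phase
    (ordinary least squares)."""
    ln_x = [math.log(x) for x in biomass]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(ln_x) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ln_x))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

def volumetric_rate(initial, final, elapsed_hours):
    """Magnitude of the concentration change per unit time, as in the
    r_x and r_s definitions (abs() is an assumption for consumption rates)."""
    return abs(final - initial) / elapsed_hours

# Illustrative values only: exponential growth at mu = 0.02 / h
t = [0, 24, 48, 72]
X = [0.5 * math.exp(0.02 * ti) for ti in t]
mu = specific_growth_rate(t, X)
# 10 g/L of substrate exhausted in 80 h
r_s = volumetric_rate(10.0, 0.0, 80.0)
print(round(mu, 4), r_s)  # 0.02 0.125
```

For comparison, the acetate assay's reported r_s of 0.125 g L−1 h−1 corresponds to this kind of calculation over the consumption interval.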
Two factors might have contributed to this situation: since molasses could have increased the broth viscosity, the oxygen transfer rate could be even lower in the medium containing molasses than in the medium containing glucose. On the other hand, it is well known that sugarcane molasses contains anti-microbial compounds such as phenolics (Takara et al. 2007), which may explain the higher proportion of cells with injured membranes observed during this assay. The cell percentage profiles observed during the glycerol and crude glycerol experiments are depicted in Fig. 5c, d, respectively. It can be seen that both profiles are similar. At t = 24 h, both cultures displayed a high proportion of metabolically active cells (subpopulation B, 88.0 and 85.1% for glycerol and crude glycerol, respectively). As the cultures developed, the proportion of subpopulation B decreased to 37.1 and 21.1%, respectively, at t = 72 h, with a concomitant increase in the proportion of the stressed subpopulation A (intact cells without enzymatic activity) to 56.8 and 74.3%, respectively, as a result of oxygen-limiting conditions. After t = 72 h, the proportion of subpopulation B increased in both cultures, reaching 40.0 and 60.0% at t = 189 h for glycerol and crude glycerol, respectively. The increase in the subpopulation B proportion observed for crude glycerol at t > 166 h, higher than that observed for the pure glycerol assay, demonstrated that this substrate was not toxic for the alga. Concerning the acetate assay, subpopulation B followed the same trend as in the previous assays (Fig. 5e). After t = 87.2 h, an abrupt decrease in this subpopulation was observed, with a concomitant increase in the stressed-cell subpopulations C and D. This was attributed to the marked increase in medium pH observed for the acetate assay broth at t > 92 h, reaching pH = 9.4 at t = 189 h. Indeed, the medium pH showed a higher variation during the acetate experiment, compared to the other assays.
Since no algal growth was observed during the vinegar effluent cultivation, the assay was concluded at t = 96 h. At that time, the proportion of subpopulation B (intact cells with enzymatic activity) was only 4.4% (Fig. 5f). Most of the microalgal cells had an intact membrane but showed no enzymatic activity (59.7%, subpopulation A); 33.1% of the cells had an injured membrane and no enzymatic activity (subpopulation D) and 2.9% displayed enzymatic activity but had an injured membrane (subpopulation C).

Reactive oxygen species (DHR)

Microbial cells produce ROS through oxygen reduction by the action of reducing agents such as NADH and NADPH, with the support of electron-transfer enzymes, or through redox-active chemical species such as quinones and transition metals. Oxidative stress is a physiological response detected in microorganisms exposed to adverse conditions that occurs when ROS generation exceeds the capacity of their antioxidant defences. Low concentrations of ROS facilitate signal transduction, enzyme activation and other cellular functions, but high concentrations of ROS damage DNA, proteins or lipids and can lead to irreversible cellular damage (Amariei et al. 2020). Figure 6 shows the percentage of cells that produced ROS during all the C. cohnii cultivations. The highest proportions of cells with ROS were observed for the glycerol and crude glycerol assays, which attained 24.2 and 17.3% at t = 45 h, respectively. Again, the crude glycerol assay displayed a lower proportion of stressed cells than the pure glycerol assay. These results are consistent with the CFDA/PI results for the glycerol and crude glycerol cultivations (Fig. 5c, d), since the production of ROS coincided with the reduction of esterase activity (subpopulation A increase). The remaining cultures showed less than 7% of cells with ROS throughout the experiments.

Lipid and DHA production

Table 2 shows C. cohnii lipid production assessed at the end of all assays.
The highest lipid and DHA contents were observed for the glucose assay (14.7% and 7.15% w/w DCW, respectively) and for the crude glycerol assay (14.7% and 6.56% w/w DCW, respectively). The remaining assays yielded a lipid content around 12% (w/w DCW) and a DHA content varying between 2.23 and 5.51% w/w DCW, while the lipid productivity was highest for the acetate assay (3.89 mg L−1 h−1) and crude glycerol (3.19 mg L−1 h−1). The acetate assay displayed 12.43% w/w DCW of lipids and a DHA content of 5.24 mg g−1, despite the high proportion of stressed cells (subpopulations A and C) found at t > 96 h; nevertheless, at that time, the microalgal culture had already reached the stationary phase, during which the cells synthesize storage lipids. Since there was no cell growth in the vinegar effluent assay, no lipid or DHA production was detected for this assay, which was in accordance with the significant proportion of C. cohnii cells without enzymatic activity (subpopulation A) at the end of the assay, as revealed by FC (Fig. 5f). The highest DHA percentage of total fatty acids (TFA) was detected for the molasses assay (49.58% w/w TFA), followed by the glucose assay (48.45%). Glycerol showed the lowest proportion of DHA (25.04% w/w TFA). The lower lipid productivity observed for the glucose assay (1.89 mg L−1 h−1), despite the higher lipid content (14.7%), resulted from the lower biomass concentration observed at the end of this assay (2.4 g L−1, Fig. 1). The highest DHA productivity was detected for the acetate and crude glycerol assays (1.64 and 1.43 mg L−1 h−1, respectively). The highest DHA concentrations were observed for the acetate (31.18 mg L−1) and crude glycerol (26.96 mg L−1) assays.

Fatty acid profiles and lipid classes

Concerning the fatty acid profiles (Fig. 7a), the dominant fatty acids present in C.
cohnii 30772 biomass collected at the end of the assays were lauric (12:0), myristic (14:0), palmitic (16:0), oleic (18:1ω9) and DHA (22:6ω3). In most of the assays, the major fatty acid was DHA, comprising more than 40% w/w of total fatty acids (TFA), except for the pure glycerol assay, in which the microalgal biomass contained only 25% DHA w/w of TFA. The biomass produced at the end of the sugarcane molasses assay showed the highest DHA percentage (49.57% w/w TFA). The microalgal biomass collected from the assays containing the pure carbon sources (glucose, acetate, pure glycerol) had higher proportions of myristic acid (14:0) (> 18% w/w TFA), while the biomass collected from the assays containing complex carbon sources (sugarcane molasses and crude glycerol) contained 11.27 and 16.05% w/w TFA of 14:0, respectively. For the assays containing sugars (sugarcane molasses and glucose), palmitic acid (16:0) attained lower proportions (< 16%) compared with the remaining assays, which reached ~22% w/w TFA. In all the assays, the oleic acid (18:1ω9) proportion varied between 6 and 9% w/w TFA and the lauric acid (12:0) proportion varied between 2.88 and 6.81% w/w TFA. Concerning the unsaturation level of C. cohnii lipids, the biomass collected from the pure glycerol assay contained the highest proportion of saturated fatty acids (SAT) (65.68% w/w of TFA) (Fig. 7b). In all the assays, the final biomass contained low percentages of monounsaturated fatty acids (MONO-UNSAT) (~8% w/w TFA). As expected, the biomass obtained from the sugarcane molasses assay showed the highest proportion of polyunsaturated fatty acids (PUFA) (50.84% w/w of TFA), since it contained the highest proportion of DHA, the major fatty acid.

Discussion

Crypthecodinium cohnii growth on several pure and complex substrates has been previously reported, in order to reduce DHA production process costs.
In this work, pure (glucose, glycerol and acetate) and complex substrates containing those carbon sources (sugarcane molasses, crude glycerol and vinegar effluent) were tested. The acetate and pure glycerol assays produced the highest biomass concentrations. This result is supported by De Swaaf et al. (2003), who reported a superior performance of acetic acid-grown C. cohnii 30772 cultures relative to glucose-grown cultures. This could be due to C. cohnii glucose metabolism, which involves a number of steps (glucose uptake, glycolysis, transport of pyruvate into the mitochondria, and conversion of pyruvate through the citric acid cycle), contrary to acetate assimilation, which directly feeds the pool of acetyl-CoA, which in turn feeds the citric acid cycle, crucial for the energy production needed for cell growth and lipid synthesis. Hosoglu and Elibol (2017a) reported better C. cohnii CCMP 316 growth and lipid production on glycerol than on glucose. Moreover, in a study in which C. cohnii was grown on different carbon sources including glucose and glycerol, Safdar et al. (2017) also reported the highest microalgal biomass concentration for the assay containing glycerol. Crypthecodinium cohnii glycerol assimilation is comparable to glucose assimilation, since the pathway is similar, with pyruvate directly yielded by the conversion of 3-phosphoglycerol and further converted to acetyl-CoA, although glycerol metabolism saves a few steps in the glycolysis pathway compared to glucose assimilation (Hilling 2014). This may explain C. cohnii's better performance when grown on acetate and glycerol than on glucose. The specific growth rate observed for C. cohnii cultivations on glucose (0.02 h−1) was lower than that reported by De Swaaf et al. (1999) (~0.05 h−1), who also cultivated C. cohnii ATCC 30772 cells in shake flasks, but used higher glucose concentrations (25-75 g L−1), which can explain the higher reported specific growth rate. Safdar et al.
(2017) also reported a specific growth rate of 0.03 h−1 for C. cohnii cultivations on glucose, carried out in a 1-L fermenter. The higher specific growth rate reported by these authors is probably due to the use of the fermenter, which usually allows better microalgal performance due to the higher mass transfer and the control of medium pH, stirring and aeration rates ensured by these systems, avoiding the drastic medium pH variations and mass transfer limitations that exist in shake flask cultivations and negatively affect cell growth. The alga did not grow on the vinegar effluent, probably due to the presence of polyphenols, known to be present in vinegars and to inhibit microbial cells (Sengun et al. 2019). During the cultivations, all the substrates were exhausted, except in the molasses assay. It has been previously reported that C. cohnii CCMP 316 did not consume fructose when carob pulp syrup (containing glucose and fructose) was used as carbon source (Mendes et al. 2007). Nevertheless, Okuda et al. (2013) reported that D-fructose promoted the growth of strain D31, a species related to C. cohnii. In the present work, during the sugarcane molasses assay, glucose was completely exhausted at t = 96 h, but fructose was not consumed. Since the molasses was previously hydrolysed to glucose and fructose, the microalga's glucose uptake might have inhibited fructose assimilation, as reported for Saccharomyces cerevisiae and Saccharomyces uvarum (carlsbergensis) grown on glucose and fructose (D'Amore et al. 1989). As far as the authors know, this is the first work reporting the use of FC, coupled with the stains CFDA and DHR, to analyse C. cohnii enzymatic activity and ROS production as a measure of C. cohnii cell stress levels when grown on different carbon sources. Indeed, Prado et al.
(2012) used DHR to study intracellular ROS levels in the freshwater green microalga Chlamydomonas moewusii after 96 h of exposure to different concentrations of the herbicide paraquat. The FC results (Figs. 5 and 6) showed variations in the proportions of C. cohnii subpopulations as a result of the environmental conditions they were experiencing, in near real time. The most notable fall in the proportion of intact cells with enzymatic activity (presumably "healthy cells", subpopulation B) during the C. cohnii cultivations was observed during the acetate assay after t = 96 h (Fig. 5e), and might be related to the C. cohnii cell morphology change observed at that time, as a result of the medium pH increase. Indeed, according to Tuttle and Loeblich (1975), the optimal pH for growth of the C. cohnii (Seligo) strain is 6.6. The medium pH increase during the acetate assay resulted from the gradual removal of CH3COO− anions and their replacement by OH− and other anions, resulting in the generation of NaOH, a stronger base than CH3COONa, which was responsible for the medium pH increase, according to Chalima et al. (2019). At high medium pH values, the microalga cells tend to aggregate and, consequently, to precipitate, which explains the pronounced loss of enzymatic activity and membrane integrity for a significant proportion of C. cohnii cells during this assay, detected by FC analysis. In fact, aggregated cells are exposed to nutrient and oxygen starvation due to the nutrient diffusion limitations that exist inside the aggregates, reducing cell viability. Therefore, when C. cohnii cells grow on acetate, the medium pH should always be maintained near the optimal pH (Ratledge et al. 2001; Chalima et al. 2019), which is not so crucial for the remaining substrates studied, since cell viability did not fall as it did in the acetate assay. However, the requirement to maintain the medium pH implies additional costs and equipment. Ratledge et al.
(2001) used an efficient pH-auxostat bioreactor system in which a low initial concentration of sodium acetate was used in the initial growth medium, and acetic acid was used both to maintain a constant medium pH value and to supply a further carbon source for growth. Comparing the CFDA/PI and DHR results obtained by FC, the CFDA/PI double staining method seemed more sensitive to variations in C. cohnii cell physiological states than the DHR staining method. This observation was supported by the flow cytometric controls, which showed a stronger microalgal stress response to age when cells were stained with CFDA/PI than when they were stained with DHR (Fig. 4). The FC results described in this work characterise C. cohnii behaviour when grown on different carbon sources, allowing an understanding of the physiological response of the microalga to the different environments. Concerning C. cohnii lipid and DHA production, and comparing the pure with the complex substrates, the glucose assay displayed higher lipid and DHA contents than the molasses assay (Table 2), possibly due to the presence of a higher proportion of cells from subpopulation B (intact cells with enzymatic activity) throughout the cultivation time course, contrary to the molasses assay (Fig. 5a, b), in which a higher proportion of permeabilised cells (subpopulation C) might have negatively affected microalgal lipid and DHA synthesis during that cultivation. Indeed, dead or stressed cells cannot participate in the biotransformation in the same way as metabolically active cells do. The crude glycerol assay displayed higher lipid and DHA contents than the glycerol assay, possibly due to the higher proportion of metabolically active cells (subpopulation B) during the crude glycerol assay, particularly during the stationary phase, when the cells produce storage lipid materials.
Since the proportion of stressed cells was always high during the vinegar effluent assay time course (92.8%, sum of subpopulations A and D), no biomass and no lipids were produced by the microalga. In contrast, the acetate assay displayed a high proportion of metabolically active cells until the stationary phase, allowing cell growth and lipid synthesis. Gong et al. (2015) studied DHA production by the marine dinoflagellate C. cohnii ATCC 30772 in shake flasks, using the low-cost substrates rapeseed meal hydrolysate (RMH) and molasses as alternative feedstocks, added to a basal medium similar to that used in this work. They found that, in batch fermentations using media composed of diluted RMH (7%) and 1-9% waste molasses, the highest biomass concentration and DHA yield reached 3.43 g L−1 and 8.72 mg L−1, respectively. The algal biomass produced from the RMH and molasses medium also contained 22-34% DHA in total fatty acids. These results, obtained under conditions similar to those described in the present work, are lower than those reported here. The high proportions of SAT and PUFA in C. cohnii biomass, observed in all the assays, suggest that the lipid fraction containing PUFA (ω-3 fatty acids) could eventually be separated from the remaining microalgal lipids (SAT + MONO-UNSAT) to obtain an ω-3-rich fraction with applications in the pharmaceutical/food areas, while the remaining fraction could be directed towards bioenergy or biodiesel production. Such an approach would take advantage of the various lipidic products synthesized by the microalga, therefore maximizing the value derived from the whole process, with a desired minimal environmental impact, as all fractions are valorised. In this way, the economics of the process may be greatly improved, as the high value-added product (DHA) may sustain the microbial biodiesel production.
Conclusions

Among the studied low-cost substrates, the highest lipid content, DHA content and lipid productivity were observed for crude glycerol, while molasses induced the highest proportion of DHA in total fatty acids (49.58% w/w TFA). As these substrates are widely available as by-products of the sugar and biodiesel industries, they can be further converted and valorised towards the production of DHA, with the additional possibility of co-producing a lipidic fraction composed of the remaining saturated and monounsaturated fatty acids that can be directed towards biodiesel purposes. In this way, the overall process tends to produce zero waste. Moreover, the higher DHA proportions observed for the molasses and crude glycerol assays represent an advantage for the DHA purification step. The present work evaluated C. cohnii growth, lipid and DHA production on pure and low-cost complex carbon sources, using FC to evaluate the microalga's cell stress response. The CFDA/PI double-staining method was more sensitive to variations in C. cohnii cell physiological states than the DHR staining method. This information, obtained in near real time, allows an understanding of the microalga's response to the environmental conditions, and also allows changing the process control strategy during process development, facilitating the scale-up step.
Quaia, the Gaia-unWISE Quasar Catalog: An All-sky Spectroscopic Quasar Sample

We present a new, all-sky quasar catalog, Quaia, that samples the largest comoving volume of any existing spectroscopic quasar sample. The catalog draws on the 6,649,162 quasar candidates identified by the Gaia mission that have redshift estimates from the space observatory's low-resolution blue photometer/red photometer spectra. This initial sample is highly homogeneous and complete, but has low purity, and 18% of even the bright (G < 20.0) confirmed quasars have discrepant redshift estimates (|Δz/(1 + z)| > 0.2) compared to those from the Sloan Digital Sky Survey (SDSS). In this work, we combine the Gaia candidates with unWISE infrared data (based on the Wide-field Infrared Survey Explorer survey) to construct a catalog useful for cosmological and astrophysical quasar studies. We apply cuts based on proper motions and colors, reducing the number of contaminants by approximately four times. We improve the redshifts by training a k-Nearest Neighbor model on SDSS redshifts, and achieve estimates on the G < 20.0 sample with only 6% (10%) catastrophic errors with |Δz/(1 + z)| > 0.2 (0.1), a reduction of approximately three times (approximately two times) compared to the Gaia redshifts. The final catalog has 1,295,502 quasars with G < 20.5, and 755,850 candidates in an even cleaner G < 20.0 sample, with accompanying rigorous selection function models. We compare Quaia to existing quasar catalogs, showing that its large effective volume makes it a highly competitive sample for cosmological large-scale structure analyses. The catalog is publicly available at 10.5281/zenodo.10403370.

1. INTRODUCTION

Quasars are powerful tools for many fields of astrophysics. They are key probes of accretion physics (e.g. Sunyaev & Zeldovich 1970; Yu et al.
2020), which informs the evolution of active galactic nuclei (AGNs). The evolution of quasars and their host galaxies is intertwined, giving insight into supermassive black hole growth (e.g. Hopkins et al. 2006) as well as massive galaxy formation (e.g. Kormendy & Ho 2013). Studies of the quasar distribution can also be used to understand black hole evolution (e.g. Powell et al. 2020) and halo masses and environmental effects (e.g. DiPompeo et al. 2017). Quasars can also be utilized as background sources for cosmic phenomena such as gravitational lenses (e.g. Claeskens & Surdej 2002), and quasar spectra encode the properties of the intergalactic medium via the Lyα forest (e.g. Rauch 1998).

Corresponding author: Kate Storey-Fisher <EMAIL_ADDRESS>

Quasars are key tracers for large-scale structure cosmology. They reside in peaks of the dark matter distribution and their clustering can be used to measure cosmological parameters, including the growth rate of structure f σ8 (e.g. García-García et al. 2021; Alonso et al. 2023), the Hubble distance D_H (e.g. Hou et al. 2020), primordial non-Gaussianity (e.g. Leistedt et al. 2014; Castorina et al. 2019; Krolewski et al. 2023), and the baryon density Ω_b (e.g. Yahata et al. 2005). Cross-correlations between quasars and other tracers provide measurements of key cosmological quantities, such as with photometric galaxy samples to measure the baryon acoustic feature (e.g. Ata et al. 2018), with cosmic microwave background (CMB) lensing to constrain quasar bias and the growth of structure (e.g. Sherwin et al. 2012), and with foreground galaxies as a probe of weak lensing (e.g. Ménard & Bartelmann 2002; Scranton et al. 2005; Zarrouk et al. 2021). They can also be used as standardizable candles to measure the expansion rate of the universe (e.g. Setti & Woltjer 1973; Risaliti & Lusso 2015; Lusso et al.
2020). Finally, given the large volume typically covered by quasar samples, the quasar distribution provides a test of the cosmological principle of isotropy and homogeneity (e.g. Secrest et al. 2021; Dam et al. 2022; Hogg et al. 2024). Many surveys have observed and cataloged quasars, with around 1 million spectroscopically identified and several million when including photometric samples. The Sloan Digital Sky Survey (SDSS) Data Release 16 includes a highly complete catalog of 750,414 quasars with spectroscopic redshifts (Lyke et al. 2020). Photometric surveys observe a much larger number of quasars, at the expense of low redshift accuracy; nearly 3 million quasars with reliable photometric redshifts have been cataloged (Kunsági-Máté et al. 2022), including with the Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010), which imaged the entire sky, and Pan-STARRS (Chambers et al. 2019), which observed three-quarters of the sky. Shu et al. (2019) combined photometry from Gaia DR2 and unWISE (Lang 2014) to identify 2.7 million AGN candidates and estimate their photometric redshifts. Upcoming surveys will observe even more quasars: the Dark Energy Spectroscopic Instrument (DESI; Aghamousa et al. 2016) expects to obtain spectra for 3 million quasars, and the Rubin Observatory's LSST will photometrically observe upward of 10 million quasars (Ivezić 2016). However, none of these quasar catalogs is both all-sky and contains precise redshift information. The recently released Gaia DR3 quasar candidates (Gaia Collaboration et al. 2023a) constitute a new sample that promises to fill this gap. The Gaia quasar sample presents a new opportunity to explore these science topics. While the Gaia satellite was designed to map stars in the Milky Way (Gaia Collaboration et al.
2016), it broadly observes bright objects in the sky, which includes many extragalactic sources. Previous work identified a small number of quasars in earlier Gaia data releases, including identification based solely on their astrometric properties (Heintz et al. 2018, 2020). In DR3, the Gaia collaboration released a sample of 6,649,162 quasar candidates that were incidentally observed during the survey (Delchambre et al. 2023; Gaia Collaboration et al. 2023a,b). The sources cover the entire sky and have Gaia blue photometer (BP)/red photometer (RP) spectra, low-resolution spectra covering the wavelength range of 330-1050 nm. These spectra allow for redshift estimates of the sources, with 86% having a precision of |Δz/(1 + z)| < 0.01 compared to SDSS redshifts when no processing issues affect the redshift estimation (flags_qsoc = 0 or flags_qsoc = 16), which is the case for 20% of the sample; for the full sample, including sources with redshift warning flags set, this percentage of high-precision redshifts decreases to 53%. While not as precise as high-resolution spectroscopic redshifts, they are significantly better than photometric redshifts. The median redshift of the sample is z = 1.67. The Gaia quasar candidate sample was constructed for completeness over purity, and has an estimated purity of 52%; the Gaia Collaboration also suggests criteria for a higher-purity (∼95%) subcatalog of ∼1.9 million quasars. Overall, the sample presents an unprecedented resource for quasar science and cosmology.
There are two main issues with this raw Gaia sample. First, the sample contains a large number of non-quasar contaminants. Second, a significant fraction of the redshift estimates are catastrophic errors, due to emission line misidentification given the limitations of the low-resolution spectra. Understanding and eliminating sample contaminants matters greatly in identifying the most extreme (e.g. brightest or most luminous) quasars, which has been addressed in the AllBRICQS catalog (Onken et al. 2023) that draws on Gaia quasar candidates. In this work, we construct a clean quasar catalog across the full magnitude range with lower contamination and improved redshift estimates, with the particular goal of building a catalog appropriate for large-scale structure analyses as well as other quasar science. For both of these, we rely on crossmatches with WISE observations of the quasars (Wright et al. 2010), which adds key infrared (IR) information. To filter out contaminants, we apply color cuts based on the Gaia and WISE photometry, as well as a proper motion cut. To improve the redshifts, we identify quasars that are also observed by SDSS, for which we have highly precise spectroscopic redshifts, and train a k-Nearest-Neighbors (kNN) model based on their photometry and Gaia redshift estimates. Further, the Gaia quasar candidate sample has strong systematic imprints from various observational effects, such as Galactic dust. To model these systematics so that their effects can be mitigated in analyses of the catalog, we fit a model for the selection function based on observational templates using a Gaussian process. We release both the catalog and selection function as publicly accessible data products.
This paper is organized as follows. In §2, we describe the initial data sets used in the construction of the catalog. The construction of the catalog is detailed in §3. In §4, we present the final catalog, perform verification and comparisons to other samples, and outline the data format. We summarize the catalog and describe the access to the data in §5.

2. INITIAL DATA SETS

2.1. Gaia DR3 quasar candidate sample

While performing its all-sky survey of the Milky Way, the Gaia satellite (Gaia Collaboration et al. 2016) also observed millions of extragalactic objects. These sources, both quasar and galaxy candidates, were first released in Gaia DR3 (Gaia Collaboration et al. 2023a,b). Gaia obtained BP/RP spectra of the sources, which are low-resolution spectra with relatively narrow wavelength ranges; BP covers 330-680 nm with 30 ≤ R ≤ 100, and RP covers 640-1050 nm with 70 ≤ R ≤ 100 (Carrasco et al. 2021). The raw spectra are not released by Gaia (besides a small subsample; the rest will be released in Gaia DR4), but redshift estimates and other derived information are contained in the catalogs.

The quasar candidates were selected based on multiple classifiers and criteria, described in detail in Gaia Collaboration et al. (2023a). The majority (5.5 million) of the quasar candidates were identified with the Discrete Source Classifier (DSC) module (detailed in Delchambre et al.
(2023)), a machine-learning model that takes as input the source's BP/RP spectrum, G-band magnitude, G-band variability, parallax, and proper motion, and outputs a class label trained on SDSS spectroscopic classifications. Given these SDSS labels, the results of this module will inherit many of the same selection effects as SDSS. DSC is estimated to have a completeness of over 90% and a purity of around 24% for quasars. Another machine-learning model selected over 1 million sources based on their variability, as active nuclei have time-variable accretion; the model inputs were statistics of time series data in all Gaia bands as well as photometric and astrometric quantities, as detailed in Rimoldini et al. (2023). Additionally, a set of nearly 1 million sources was selected based on their surface brightness profile; this selection used existing major quasar catalogs to compile an initial list of sources, which were then processed by the Gaia surface brightness profile module (Ducourant et al. 2023). This module included quasars in the candidates catalog that passed certain criteria, including having Gaia observations covering > 86% of the source's surface area and a confident assessment (positive or negative) of host galaxy presence. Finally, the 1.6 million sources used to define the Gaia-CRF3 celestial reference frame were contributed, which are based on crossmatches of Gaia to external quasar catalogs. A large fraction of sources are identified as quasars by multiple of these methods; the overlapping contributions are shown in Figure 3 of Gaia Collaboration et al. (2023a). The full quasar candidate sample contains 6,649,162 sources, selected for high completeness, but with a low purity estimated to be around 52% (Gaia Collaboration et al. 2023a). We show the overlaps between this Gaia quasar candidate sample and other samples and subsamples used and constructed in this work in Figure 1.
Most of the quasar candidates (6,375,063) are assigned redshifts using the Quasar Classifier (QSOC) module, which uses a chi-squared approach on the quasars' BP/RP spectra compared to composite spectra from SDSS DR12Q (Delchambre et al. 2023). We refer to these Gaia redshift estimates as z_Gaia. Many of these redshifts are determined by a single line due to the narrow spectral range, resulting in aliasing issues when lines are misidentified (see Figure 15 in Delchambre et al. 2023). An estimated 63.7% of the redshifts have |∆z| < 0.1, increasing to 97.6% for quasar candidates with no redshift warning flags (this is the case for nearly 80% of quasars with G < 18.5, but decreases to less than 20% for G > 19.5).

Gaia Collaboration et al. (2023a) provide a query to select a purer subsample of the quasar candidates. It requires higher quasar probability thresholds from the various classifiers and excludes surface-brightness-selected galaxies that have close neighbors. This results in 1,942,825 sources with an estimated purity of 95%; 1.7 million of these have Gaia redshifts. The sky distribution of this sample, which we call the Gaia DR3 'Purer' sample, is shown in Figure 2. The Gaia DR3 'Purer' sample has a low density in the Galactic plane; we speculate that this is largely due to dust extinction making sources too faint to observe at low Galactic latitudes. Gaia DR3 'Purer' also has significant overdensities around the LMC and SMC, as the sample still contains stellar contaminants.

For our analysis, we start with the full quasar candidate sample, rather than the Gaia DR3 'Purer' sample or cutting on other Gaia pipeline flags, to allow greater completeness and minimize reproducing biases; we compare our catalog with the Gaia Collaboration et al.
(2023a) Gaia DR3 'Purer' subsample in §4.3. We construct a superset of our catalog (which is a subset of the Gaia quasar candidates sample) that contains all the information needed for catalog construction: we require that sources are in the Gaia quasar candidates table, have Gaia G, BP, and RP measurements, unWISE W1 and W2 observations (described in §2.2), Gaia-estimated QSOC redshifts, and a maximum G magnitude of G < 20.6. This magnitude cut was chosen to be slightly deeper than our desired catalog magnitude limit of G < 20.5, in order to provide a buffer for redshift estimation. This results in a superset with 1,518,782 sources. We call our final catalog Quaia, so we refer to this as the Quaia superset.

2.2. unWISE Quasar Sample

We use the unWISE reprocessing (Lang 2014; Meisner et al. 2019) of WISE (Wright et al. 2010) to contribute IR photometry to Gaia sources. The unWISE coadds combine data from NEOWISE (Mainzer et al. 2011) with the original WISE survey, providing a time baseline 15 times longer. Compared to the original AllWISE catalog, unWISE has deeper imaging and improved modeling of crowded fields. The unWISE catalog (Schlafly et al. 2019) contains measurements in the W1 (3.4 µm) and W2 (4.6 µm) bands for over 2 billion sources. We do not use the W3 and W4 bands as these do not go as deep as we need. We perform a crossmatch of the Gaia quasar candidate sample to unWISE sources within 1″. We also crossmatch the SDSS training and validation samples (§2.3, §2.4) to unWISE.

When combined with optical photometry, unWISE IR color information is very useful to identify quasars and distinguish them from contaminants. This photometry also contains useful redshift information; recent approaches to estimate redshifts from photometry with neural networks achieve a mean |∆z| ∼ 0.22 (Yang et al. 2017; Jin et al. 2019; Kunsági-Máté et al.
2022). In our case of redshift estimates from narrow-range BP/RP spectra, we expect IR photometry to add information that can break line identification degeneracies in order to improve estimates. We incorporate the W1 and W2 bands into both our quasar selection (§3.1) and redshift estimation (§3.2) procedures.

2.3. SDSS DR16 quasar sample

The Sloan Digital Sky Survey released the largest spectroscopic quasar catalog in DR16 (Lyke et al. 2020). It combines new sources from the extended Baryon Oscillation Spectroscopic Survey (eBOSS), part of SDSS-IV, with previously observed sources from earlier SDSS campaigns. The catalog contains 750,414 quasars, with an estimated 99.8% completeness (compared to the SDSS-III/SEQUELS sample of Myers et al. 2015, which has higher signal-to-noise spectra) and 98.7-99.7% purity. We remove sources with redshift warnings (ZWARNING != 0), as well as a handful of sources with unreasonably low or negative redshift estimates (z < 0.01). This results in 638,083 sources, which is the sample shown in Figure 1. We crossmatch these with the Gaia catalog, as well as unWISE (§2.2), using a maximum separation of 1″ on the sky. We remove sources with fewer than five observations in BP (phot_bp_n_obs) or RP (phot_rp_n_obs), following Bailer-Jones (2021), as well as sources that are duplicated in the SDSS star or galaxy samples (§2.4). This results in 343,074 sources with both Gaia and unWISE observations that pass these criteria.
We use these to calibrate the cuts to decontaminate our sample (§3.1); for this purpose, we only keep sources that are also in the Quaia superset (sources that are in the Gaia quasar candidates table, have all necessary Gaia and unWISE photometry, Gaia-estimated QSOC redshifts, and G < 20.6). This sample contains 246,122 quasars. We also use this sample (after applying the cuts described in §3.1) to train our redshift estimation model (§3.2). While this spectroscopic sample has quite high completeness and accurate redshift information, we note that it is still imperfect, contains selection effects, and represents only a particular definition of a quasar; these issues will propagate to our catalog.

2.4. Contaminant samples: galaxies and stars

To guide the decontamination of our catalog (§3.1), we compile known contaminant samples, namely galaxies and stars. For the galaxy sample, we use SDSS spectroscopic galaxies from DR18. Following Bailer-Jones (2021), we include all galaxies with class label GALAXY in the SpecObj table, exclude galaxies with subclass labels AGN or AGN BROADLINE, and exclude sources with redshift warnings (keeping only zWarning = 0). We crossmatch these with Gaia DR3 and unWISE with a 1″ radius, and remove sources with fewer than five observations in BP or RP, as for the SDSS quasars. We also remove apparent stellar contaminants from the galaxy sample with the cut in G−RP and BP−G from equation (1) of Bailer-Jones et al. (2019), and additionally remove sources duplicated in the SDSS quasar or star samples. This leaves 600,897 crossmatched SDSS galaxies in our sample; 1,316 of these are in the Quaia superset.
For the star sample, we also use SDSS DR18 sources, selecting objects with class label STAR in the SpecObj table. As for the quasars and galaxies, we crossmatch these with Gaia DR3 with a 1″ radius, remove sources with fewer than five observations in BP or RP, and remove sources duplicated in the other samples. This results in a stellar sample with 482,080 crossmatched SDSS-Gaia stars, with 2,276 of these in the superset.

For the decontamination procedure, we also compile a sample of sources in or near the LMC or SMC, as most of these will be stellar contaminants but have different properties than the SDSS star sample. To do this, we select all sources in the Gaia quasar candidates table that are within 3° of the center of the LMC or 1.5° from the center of the SMC. While this may include stars not actually in the LMC or SMC, we have chosen these fairly narrow radii in order to capture mostly LMC and SMC stars and few potential quasars. Additionally requiring that these have unWISE photometry, this gives 11,770 LMC- and SMC-adjacent stars; 9,927 are in the superset.

3. CATALOG CONSTRUCTION

3.1.
Decontamination with proper motions and unWISE colors

The full Gaia quasar candidate sample is known to contain a significant fraction of contaminants (stars and other non-quasars, such as galaxies). The stellar contaminants might include sources such as brown dwarfs, which have similar colors as high-redshift quasars, and potentially blue horizontal branch stars, blue stragglers, and white dwarfs, which are UV bright like lower-redshift quasars. To remove stellar contaminants, we make an initial cut on proper motion µ, as quasars should have negligible proper motions due to their large distances. The value of µ has a dependence on G, so we make a cut in this space. To guide this cut, we use labeled sources: SDSS quasars, SDSS galaxies, SDSS stars, and Gaia LMC- and SMC-adjacent stars, as described in §2.3 and §2.4. The G-µ distributions of these sources are shown in the top panel of Figure 3. In the middle panel, we show the intersection of these labeled sources with our Quaia superset, which consists of sources in the Gaia quasar candidates table that have Gaia redshift estimates, complete Gaia and unWISE photometry, and are below G < 20.6. We see that the SDSS quasars tend to have much smaller proper motions than the other types of sources, with a very linear edge to the G dependence at the high proper motion side of the distribution. Based on this, we choose the cut

µ < 10^{0.4 (G − 18.25)} mas/yr. (1)

At G = 18.25, this corresponds to µ ≲ 2.5 mas/yr, and allows for less severe cuts at deeper magnitudes given the typically less precise astrometry. This is related to the proper motion uncertainty as a function of G, which has been quantified by Gaia (Gaia Collaboration et al. 2021). We show this cut overlaid on the Quaia superset in the lower panel of Figure 3; based on the labeled data, we can clearly pick out the populations. The proper motion cut excludes 39,470 sources, 2.6% of the superset.
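The magnitude-dependent cut of equation (1) amounts to a one-line vectorized mask; a minimal sketch with toy values (the function name is ours, not the pipeline's):

```python
# Minimal sketch of the magnitude-dependent proper motion cut of equation (1):
# keep sources with mu < 10**(0.4 * (G - 18.25)) mas/yr.
import numpy as np

def passes_pm_cut(G, pm_mas_yr):
    """True where the proper motion lies below the G-dependent threshold."""
    return pm_mas_yr < 10.0 ** (0.4 * (np.asarray(G) - 18.25))

# Fainter sources are allowed larger proper motions before being cut.
print(passes_pm_cut(20.0, 4.0))   # threshold is ~5.0 mas/yr at G = 20.0
print(passes_pm_cut(21.0, 15.0))  # threshold is ~12.6 mas/yr at G = 21.0
```

The function accepts scalars or arrays, so it can be applied directly to catalog columns.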
Next, we determine the color cuts based on Gaia and unWISE photometry. Generally, stars and galaxies are dim in redder, IR wavelengths compared to AGN. For instance, the eBOSS quasar target selection (Myers et al. 2015) involved linear cuts in the optical-IR, involving the SDSS g, r, and i bands and the WISE W1 and W2 bands.

In Figure 4, we show color-color distributions for the same samples as in Figure 3. The left panel shows W1−W2 vs. G−W1 color, and the right column shows G−RP vs. BP−G color. The top row, with the full labeled samples, shows that different types of sources tend to be localized to different areas of this parameter space (we show only a subset of each type for clarity). In particular, the colors involving unWISE (left panel) separate out the source types relatively clearly, demonstrating the importance of the unWISE crossmatch: SDSS quasars have very red W1−W2 and intermediate G−W1 color, while galaxies have bluer W1−W2 and redder G−W1 compared to quasars, and stars (both SDSS stars and stars near the LMC and SMC) are bluer in both colors. In Gaia color-color space, galaxies tend to have bluer BP−G and redder G−RP colors than the other types of sources. In the middle row of Figure 4, showing the intersection of the labeled sources with the Quaia superset, we see that the superset restrictions have eliminated many of the sources, especially SDSS galaxies and stars, though a significant number remain. (We note that it is possible that some of these SDSS galaxies do host AGN, though they were not classified as such by SDSS.) The Quaia superset is shown in the bottom panel; we can see clear populations of quasars, stars, and galaxies lining up with the labeled sources. Importantly, we can see the effect of the stricter SDSS color selection in the red (high G−W1) region of parameter space, into which the Gaia quasar candidates extend but are not represented in the SDSS sample in the above panels.
We choose to apply linear cuts in these colors to decontaminate the sample. While other works (e.g. Hughes et al. 2022) train classifiers to determine which objects are true quasars using SDSS-classified quasars as labels, we opt for simpler cuts for ease of reproducibility and to mitigate the propagation of SDSS selection effects, which may include color- and magnitude-dependent effects. We choose four cuts based on the distribution of sources in color-color space. The first is in W1−W2, which has been shown to be useful for distinguishing quasars; for instance, Nikutta et al. (2014) demonstrated that a small crossmatched SDSS quasar sample has very red W1−W2 = 1.2 ± 0.16, while other types of objects, namely star-forming and AGN galaxies, luminous red galaxies, and stars, have bluer W1−W2. Stars tend to have the bluest W1−W2, with a mean of W1−W2 = −0.04 ± 0.03, so a cut in W1−W2 is a reliable way to filter out stellar contaminants. We add a cut in G−W1 to filter out the bulk of the stars (including the LMC and SMC), and another in BP−G to cut out the galaxy contaminants. Finally, we find that these single color cuts were not sufficient to remove all of the LMC and SMC, so we add an additional diagonal cut in W1−W2 and G−W1, choosing a reasonable slope.

We optimize the values (intercepts) of these four cuts with a grid search, trying values spaced by 0.1 mag. We note that while we show the full samples in Figure 4, in practice we make the proper motion cut before optimizing the color cuts. We choose the color cuts that maximize our objective function

L = N_q − λ_s N_s − λ_g N_g − λ_m N_m,

where N_q is the number of true quasars that make it into the catalog, N_s SDSS stars, N_g SDSS galaxies, and N_m LMC and SMC stars, and the λ parameters balance the relative ratios of each. We choose λ_s = 3, λ_m = 5, and λ_g = 1.
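The grid search over cut intercepts can be sketched as follows, assuming a weighted-difference objective of the form L = N_q − λ_s N_s − λ_g N_g − λ_m N_m; the single toy color cut and all data here are illustrative stand-ins for the paper's four cuts.

```python
# Sketch of optimizing a cut intercept on a 0.1 mag grid against a weighted
# objective. The single-color cut and the synthetic data are toy stand-ins.
import numpy as np

def objective(n_q, n_s, n_g, n_m, lam_s=3.0, lam_g=1.0, lam_m=5.0):
    """Weighted count of kept quasars minus kept contaminants (assumed form)."""
    return n_q - lam_s * n_s - lam_g * n_g - lam_m * n_m

def counts_after_cut(colors, labels, intercept):
    """Toy single-color cut: keep sources with color > intercept."""
    keep = colors > intercept
    return tuple(int(np.sum(keep & (labels == k))) for k in ("q", "s", "g", "m"))

rng = np.random.default_rng(0)
labels = np.array(["q"] * 50 + ["s"] * 30 + ["m"] * 20)
# "Quasars" are red (color ~ 1.0); contaminants are blue (color ~ 0.0).
colors = np.where(labels == "q",
                  rng.normal(1.0, 0.2, labels.size),
                  rng.normal(0.0, 0.2, labels.size))

best = max((round(c, 1) for c in np.arange(-1.0, 2.0, 0.1)),
           key=lambda c: objective(*counts_after_cut(colors, labels, c)))
print(best)  # an intercept between the two populations
```

In practice the four intercepts would be scanned jointly, but the structure of the search is the same.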
The optimal cuts for the objects to keep in the catalog are shown as the black lines in all panels of Figure 4, with the gray shading indicating exclusion regions. These cuts, as well as the proper motion cuts described above, exclude ∼7% of the superset, resulting in 1,414,385 quasars in our decontaminated sample. We apply an additional magnitude cut of G < 20.5 to reduce edge effects in our redshift estimation; this constitutes our deep sample, with 1,295,502 sources. We refer to this as Quaia in the rest of this work. However, the catalog becomes less clean and reliable as we push to deeper magnitudes, due to less precise measurements and stronger systematics, notably the Gaia scanning pattern, so we produce a version of the catalog with G < 20.0 to ensure a cleaner sample. This brighter catalog has 755,850 sources, and we report most of our results on this sample throughout the rest of this work.

(Figure 4 caption fragment: ... BP−G color. The black lines show the cuts we make; the shaded gray region is excluded from the catalog. The rows show the same samples as in Figure 3, except that in the top row only 20,000 of each type of SDSS source are shown for clarity. In both color-color projections, the labeled sources are mostly localized in particular regions of parameter space, and these populations are somewhat clearly visible in the Quaia superset.)
3.2. Spectrophotometric redshifts with unWISE and SDSS

We use unWISE and SDSS data to improve the redshift estimation of the sources. Figure 5a shows the redshifts estimated by the Gaia QSOC pipeline, z_Gaia, compared to the SDSS redshifts z_SDSS for a test sample of sources from Quaia with G < 20.5; note that the 2D histogram is plotted in log-space to show the outliers more clearly. We find that of the Gaia redshifts z_Gaia, 82% (81%) agree to |∆z/(1 + z)| < 0.2 (0.1). A significant fraction of z_Gaia are highly precise: 75% agree with SDSS to |∆z/(1 + z)| < 0.01. We also clearly see bands of incorrect estimation due to line aliasing issues. Additionally, in the crossmatched sample, nearly all of the very high z_Gaia estimates (z > 4.5) are shown to be incorrect in comparison to SDSS. We note that the redshift estimation is much more accurate for sources that have no redshift warning flags set (flags_qsoc = 0), as discussed in §2.1, but this is only true for 21% of the sources in Quaia (G < 20.5), and even including sources with flags_qsoc = 16 this leaves only 39% of sources.

We train a kNN model on Quaia sources to estimate improved redshifts. (We also tested other models, including XGBoost and a multilayer perceptron, and found that the kNN outperformed both by a small margin.) We include all sources in our decontaminated catalog (§3.1), which goes out to G < 20.6, in order to have a buffer beyond our desired G < 20.5 sample to reduce edge effects from the training set. (We find that including the rest of the photometry does not make a difference in the results.) The reddening is determined with the Corrected Schlegel, Finkbeiner, & Davis (SFD) dust map introduced by Chiang (2023), which corrects the standard Schlegel et al. (1998) dust map by subtracting off the contribution from the cosmic infrared background (CIB). (We also include the appropriate correction factor given by Schlafly & Finkbeiner 2011.) The labels are the SDSS redshifts, z_SDSS.
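The |∆z/(1 + z)| agreement fractions quoted throughout this section reduce to a one-line helper; a sketch with toy arrays (the function name is ours):

```python
# Sketch of the accuracy metric used in the text: the fraction of sources
# whose normalized redshift error |dz/(1+z)| falls below a threshold.
import numpy as np

def fraction_within(z_est, z_true, threshold):
    dz = np.abs(z_est - z_true) / (1.0 + z_true)
    return np.mean(dz < threshold)

z_true = np.array([0.5, 1.0, 2.0, 3.0])
z_est = np.array([0.5, 1.05, 2.0, 0.7])  # one catastrophic outlier
print(fraction_within(z_est, z_true, 0.1))  # → 0.75
```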
We use as our labeled data sources from the crossmatched SDSS DR16Q sample (§2.3) that are also in our decontaminated catalog Quaia, so that we train on sources drawn from the same distribution to which we will apply the model; this is 243,206 sources. We apply a 70%/15%/15% train/validation/test split. We build a k-d tree on the training set features using the KDTree implementation of sklearn. At the prediction stage, we access the K nearest neighbors of each input feature vector, first excluding neighbors with zero distance in feature space (i.e. neighbors that are in the training set). We assign the predicted label to be the median z_SDSS of the K nearest neighbors, and the uncertainty to be the symmetrized inner 68% error of those neighbors. We use the validation set to tune K, and choose the value that maximizes the fraction of predicted redshifts with |∆z/(1 + z)| < 0.1, which is K = 27; we note that this value only varies at the ∼1% level for values 15 < K < 50, and is similar for other choices of |∆z/(1 + z)|. Finally, we apply the model to the full Quaia catalog and output kNN redshift estimates, z_kNN, for each source.
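The prediction step can be sketched with sklearn's KDTree on synthetic features; the feature vectors, labels, and scales below are toy stand-ins, while K = 27 and the median/inner-68% rule follow the text.

```python
# Sketch of the kNN redshift estimator: a KDTree over features, with the
# prediction taken as the median spectroscopic redshift of the K nearest
# training neighbors and the uncertainty as their symmetrized inner 68% range.
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 3))      # toy feature vectors (colors, z_Gaia, ...)
z_train = 1.0 + 0.1 * X_train[:, 0]      # toy labels correlated with one feature

tree = KDTree(X_train)
K = 27

def predict_z(x):
    _, idx = tree.query(x.reshape(1, -1), k=K)
    neighbors = z_train[idx[0]]
    z_pred = np.median(neighbors)
    lo, hi = np.percentile(neighbors, [16, 84])
    return z_pred, (hi - lo) / 2.0       # symmetrized inner-68% uncertainty

z_pred, z_err = predict_z(np.zeros(3))
print(z_pred, z_err)  # near 1.0 with a small spread, given the toy labels
```

The real model would additionally exclude zero-distance neighbors to avoid a source matching itself.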
The results are shown in Figure 6, which shows the cumulative distribution of errors |∆z/(1 + z)| for z_kNN compared to that of z_Gaia (with z_SDSS as the truth) for the test set with G < 20.0. (The shapes are similar for G < 20.5, just shifted to somewhat lower accuracy.) We find that the z_kNN estimates have far fewer outliers than z_Gaia. However, the z_Gaia estimates tend to be more precise, as they use the full spectral information, while the kNN is essentially smoothing over the likeliest neighboring sources in feature space. We thus choose to combine the properties of both of these redshift estimates to obtain our final spectrophotometric (SPZ) redshifts z_Quaia in the following way. For sources for which z_kNN and z_Gaia agree to |∆z/(1 + z)| < 0.05, we assign z_Quaia = z_Gaia to preserve the precision of the Gaia estimate. For sources for which z_kNN and z_Gaia differ by |∆z/(1 + z)| > 0.1, we assign z_Quaia = z_kNN to preserve accuracy. In between these thresholds, we apply a smooth, linear transition to avoid hard features in our estimates. These z_Quaia estimates are also shown in Figure 6 compared to the "true" (spectroscopic, taken as truth for our purposes) SDSS redshifts, and we can see that these achieve nearly as high precision as z_Gaia while maintaining the high accuracy of z_kNN.
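The combination rule can be written as a weight that ramps linearly between the two agreement thresholds; a sketch, assuming a simple linear interpolation in |∆z/(1 + z)| (the paper's exact transition may differ in detail):

```python
# Sketch of blending the Gaia and kNN redshift estimates: keep z_Gaia where
# the two agree, switch to z_kNN where they strongly disagree, and ramp
# linearly in between (thresholds 0.05 and 0.1 in |dz/(1+z)|, as in the text).
import numpy as np

def blend_redshift(z_gaia, z_knn, lo=0.05, hi=0.1):
    dz = np.abs(z_gaia - z_knn) / (1.0 + z_knn)
    w = np.clip((dz - lo) / (hi - lo), 0.0, 1.0)  # weight on the kNN estimate
    return (1.0 - w) * z_gaia + w * z_knn

print(blend_redshift(1.50, 1.52))  # agreement: keeps the Gaia value, 1.5
print(blend_redshift(3.80, 1.50))  # strong disagreement: kNN value, 1.5
```

This vectorizes directly over catalog columns, since `np.clip` and the arithmetic all broadcast.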
Our z_Quaia results for the test set are shown in Figure 5b compared to z_SDSS, shown here for the full catalog depth G < 20.5. We find that 91% (84%) of our SPZ redshifts agree to |∆z/(1 + z)| < 0.2 (0.1), and 62% highly agree to |∆z/(1 + z)| < 0.01. We also give the bias (mean redshift error) and scatter (σ68, the symmetrized inner 68% region of the redshift errors) of |∆z/(1 + z)| in the figure; our SPZ redshifts significantly decrease the bias and scatter. The SPZ estimation corrected all of the very high z_Gaia estimates, and some of the intermediate outlying aliasing effects. We still have some catastrophic outliers due to line aliasing, but with our SPZ redshifts, we find a reduction in the number of |∆z/(1 + z)| > 0.2 (0.1) outliers by ∼3× (∼2×) compared to the Gaia redshift estimates.

We investigate the dependence of the redshift error on the G-band magnitude in Figure 7. The fraction of redshifts with an error above various thresholds is shown as a function of samples with the given cut on G. The errors are lowest at a bright magnitude cut of G ≲ 19.0; in this sample, sources with SPZ redshift estimates inaccurate to |∆z/(1 + z)| > 0.2 (0.1) comprise only 3% (4%) of the sample, and to the more stringent requirement of |∆z/(1 + z)| > 0.01, 12%. This outlier fraction increases steadily as fainter sources are included; the appropriate magnitude cut will depend on the nature of the analysis and its sensitivity to outliers. We note that our finding that the unWISE IR information significantly improves redshift estimates, compared to only the optical information used in the Gaia QSOC estimates, is consistent with other photometric redshift work. For instance, DiPompeo et al.
(2015) showed that including WISE mid-IR photometry in the redshift estimation of SDSS-imaged quasars results in a significant improvement on the estimates, even more so than including both GALEX near- and far-UV data and UKIDSS near-IR data. More recently, Yang & Shen (2023) compiled a photometric quasar catalog from the Dark Energy Survey (DES) DR2, combining DES optical photometry with near-IR photometry as well as unWISE mid-IR photometry; they obtained photo-zs with 92% having |∆z/(1 + z)| < 0.1 when all IR bands are used, compared to 72% with only optical data. Additional photometric information at other wavelengths could further improve our estimates (as well as catalog decontamination), but is not currently available for enough sources in our Quaia catalog to be worthwhile. For instance, for the UV all-sky survey GALEX (Martin et al. 2005), crossmatches to Quaia sources are only available for 32% of the Quaia objects for near-UV observations, and when including far-UV, only 16%; this significant discrepancy is largely due to the faint end of Quaia, where GALEX observations do not reach deep enough. The Pan-STARRS1 survey (Chambers et al. 2019) covers only three-quarters of the sky, with crossmatches to 75% of Quaia sources. We tested adding Pan-STARRS1 data to the redshift estimation feature set and found only a small improvement, and thus chose to prioritize keeping the full sky span of Quaia, though we note that incorporating Pan-STARRS1 may be useful for certain applications.

3.3. Selection Function Modeling

Observational and astrophysical effects impact which sources we observe and their properties; this is known as the selection function. As Gaia is a space-based mission, it avoids many of the observational issues of ground-based surveys, such as seeing and airmass. However, there are still significant selection effects: for our model, we consider dust, the source density of the parent surveys, and the scan patterns of the parent surveys.
We fit a selection function model to a particular version of the catalog, namely, a particular maximum G. For the fiducial selection function we work only in terms of sky position. We make a healpix map of the catalog with NSIDE = 64 and count the number of observed catalog sources in each healpix pixel. We choose this NSIDE, which results in 49,152 pixels each with an area of ∼0.84 deg², to balance constructing a map with reasonably high resolution against ensuring a sufficient number of sources in the pixels for stable fits, as well as fitting within memory limitations for the Gaussian process fit. In the case of no selection effects (and under the assumption of isotropy), we would expect each pixel to contain roughly the same number of sources. Our goal is to model the dependence between the number of sources per pixel and the various systematics.

The systematics maps (templates) we use are shown in Figure 8. We use the dust map of Chiang (2023), and convert it to a healpix map of NSIDE = 64. To do this, we evaluate the reddening E(B − V) at the centers of pixels of a high-resolution NSIDE = 2048 healpixelization of the sphere, and apply the 0.86 correction factor proposed by Schlafly & Finkbeiner (2011). We convert these to extinction values by multiplying by R_V = 3.1, and then take the mean of all of these values within each healpixel of the target NSIDE = 64 map. This produces a smoothed dust extinction map on the desired scale. The result is shown in Figure 8a; the extinction is highest around the Galactic plane, with structure extending outward.
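The mean-downsampling step can be sketched with plain numpy, assuming NESTED healpix ordering, in which each coarse pixel corresponds to a contiguous block of (nside_hi / nside_lo)² fine pixels; we use a smaller toy NSIDE and a constant map here to keep the example light.

```python
# Sketch of mean-downsampling a high-resolution healpix map (NESTED ordering)
# to a coarser NSIDE, as done for the extinction map (NSIDE 2048 -> 64 in the
# text; a toy NSIDE 256 -> 64 and a constant E(B-V) are used here).
import numpy as np

def downsample_nested(map_hi, nside_hi, nside_lo):
    ratio = (nside_hi // nside_lo) ** 2
    return map_hi.reshape(-1, ratio).mean(axis=1)

nside_hi, nside_lo = 256, 64
ebv_hi = np.full(12 * nside_hi**2, 0.03)  # toy constant reddening map
ebv_hi *= 0.86                            # Schlafly & Finkbeiner (2011) factor
av_lo = downsample_nested(ebv_hi * 3.1, nside_hi, nside_lo)  # A_V = R_V E(B-V)
print(av_lo.shape)  # one value per NSIDE=64 pixel, i.e. (49152,)
```

With a real map, the fine-pixel values would come from evaluating the Chiang (2023) map at pixel centers (e.g. via healpy) before averaging.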
For the stellar distribution, we randomly select ∼10.6 million Gaia sources with 18.5 < G < 20, the magnitude range of most of our quasar sample. The vast majority of these will be true stars. (While this sample will contain some other types of objects, including possibly some quasars and other extragalactic sources, these will be orders of magnitude less numerous than stars.) We count the number of stars per NSIDE = 64 healpixel; this is shown in Figure 8b. We also include a template of the unWISE source distribution, for which we randomly selected ∼10.6 million unWISE sources (1% of the catalog) that have flux in both W1 and W2, and have primary status (Prim = 1). We count the number of these sources per NSIDE = 64 healpixel, as shown in Figure 8c.

In initial fits we found that the regions of the LMC and SMC are particularly poorly modeled, and that the fit is improved by including separate templates of just the LMC and SMC source density for both the Gaia and unWISE sources; this gives the model the freedom to assign different coefficients to these regions than to the overall survey source density. (The need for different coefficients could be for a number of reasons, such as a difference in stellar density, contamination, or magnitude distribution; we leave a deeper investigation of this to future work and just use this empirical finding to improve our model.) For the LMC/SMC templates, we cut out a wide region around the LMC and SMC (9° in radius around the LMC and 5° around the SMC), and subtract the background, which we approximate using the region at the same latitude but opposite longitude (mirrored across the l = 0° line) of the given source distribution map. We don't show these maps here as they are visually similar to the stellar and unWISE source density maps in the LMC and SMC regions (though with the background subtracted).

For the Gaia completeness, we use the quantity M10 introduced by Cantat-Gaudin et al.
(2023). M10 is the median magnitude in a given sky patch of the Gaia sources with ≤ 10 transits across the Gaia field of view; it incorporates the effects of both the scanning law and source crowding. The actual completeness map derived by Cantat-Gaudin et al. (2023) depends on both M10 and G-band magnitude; this completeness is very close to 1 for nearly all of the sky for G = 20.0, with some non-negligible incompleteness for G = 20.5. However, this completeness model is based on the full Gaia source catalog, while we expect the selection function of our quasar sample to be different. We thus use the M10 map directly in our fit to capture the effects of the Gaia scanning law and source crowding specific to Quaia. We downsample the map to NSIDE = 64; this is shown in Figure 8d.

For the unWISE scanning law, using the ∼10.6 million unWISE sources described above, we take the mean number of single-exposure images in the coadd in W1 for the sources in each NSIDE = 64 healpixel. This is shown in Figure 8e; we can see that the scan is in strips of constant ecliptic latitude, and that there is a significant increase in observations at the ecliptic poles.
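Building per-pixel count templates like those in Figures 8b and 8c reduces to a bincount over healpix indices; a sketch assuming the index computation (e.g. with healpy.ang2pix) has already been done for each source.

```python
# Sketch of a source density template: count sources per healpixel with
# np.bincount, given precomputed per-source healpix indices (toy values here).
import numpy as np

nside = 64
npix = 12 * nside**2                          # 49,152 pixels at NSIDE = 64
rng = np.random.default_rng(3)
pix = rng.integers(0, npix, size=1_000_000)   # stand-in per-source pixel indices
density_map = np.bincount(pix, minlength=npix)
print(density_map.shape, density_map.sum())
```

The same pattern, with a per-source quantity and `np.bincount(..., weights=...)` divided by the counts, gives mean-per-pixel maps such as the W1 coadd-depth template.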
To model the selection function we use a Gaussian process, a flexible machine-learning method for regression; for a detailed treatment, see Rasmussen & Williams (2005). (We first tried a linear model and found that it gave a very poor fit, because there are significant nonlinearities between the systematics and the catalog number density.) We first scale the data: for the labels (the number of Quaia sources per pixel) we work in their logarithm, and only fit the pixels with a nonzero number of sources. For the Gaia stellar distribution, the unWISE source distribution, the unWISE scan pattern, and the LMC/SMC map templates, we also take the log of the number of sources per pixel; for the LMC/SMC maps, we first replace zeros with a very small value. For all of the input feature maps, we use the mean-subtracted systematics values. We assume a Poisson error on the labels (and apply the appropriate log transformation). For the Gaussian process, we use the george software package (Ambikasaran et al. 2016). We use an exponential squared kernel k of the form k(r) = σ² exp(−r²/2), where r is the (metric-scaled) distance between points in feature space.

We train the Gaussian process on all of the data, optimizing the parameter vector using the BFGS solver (Fletcher 1987); this includes fitting for the mean of the labels. We finally evaluate the predicted number of sources in each pixel. Where there were no Quaia sources in the label map, we fix the prediction to zero.
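Gaussian process regression with an exponential squared kernel can be sketched in pure numpy (a minimal stand-in for the george-based fit described above; the amplitude and length-scale values below are illustrative toy choices, not the fitted hyperparameters, and the hyperparameter optimization step is omitted):

```python
import numpy as np

def exp_squared_kernel(X1, X2, amp=1.0, scale=1.0):
    """k(r) = amp^2 * exp(-r^2 / (2 scale^2)), r = Euclidean distance in feature space."""
    r2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return amp ** 2 * np.exp(-r2 / (2.0 * scale ** 2))

def gp_predict(X_train, y_train, yerr, X_test, amp=1.0, scale=1.0):
    """Mean GP prediction (Rasmussen & Williams 2005, ch. 2) with a constant mean."""
    mean = y_train.mean()
    K = exp_squared_kernel(X_train, X_train, amp, scale) + np.diag(yerr ** 2)
    K_star = exp_squared_kernel(X_test, X_train, amp, scale)
    alpha = np.linalg.solve(K, y_train - mean)
    return mean + K_star @ alpha

# Toy 1D demo: recover a smooth function from noisy samples
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=20)
pred = gp_predict(X, y, np.full(20, 0.05), X, scale=0.2)
```

In the real fit, the features X are the per-pixel template values, y is the log source count, and the kernel hyperparameters are optimized by maximizing the GP log-likelihood with BFGS.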
To convert this to a selection function in terms of the relative completeness, we first identify "clean" pixels in the map: those having low dust extinction (A_V < 0.03 mag), low star counts (N_stars < 15), low unWISE source counts (< 150), no stars or unWISE sources in the LMC or SMC, high M10 (M10 > 21 mag), and deep unWISE coadds (> 150 exposures); this results in 479 pixels. We take the mean predicted number of quasars in these clean pixels, and add two times the standard deviation in these pixels to encompass the scatter. We then normalize the predicted source numbers by this value, which ensures that essentially all final values end up below 1. The result is a selection function map in terms of the relative probability of a source at a given location being included in the catalog. We emphasize that this is relative; we have not normalized it to an absolute probability, so as not to make the selection function map extremely sensitive to the maximum value. We also note that this fit must be redone for each version of the catalog, because it depends on the particular number density and distribution of sources.

There will be a dependence of the selection function on the G-band magnitude, as well as on other quantities such as redshift. While we do not include these in our modeling or fiducial selection function map, we do release selection functions for a redshift-split version of the catalog, using two redshift bins, which is important for certain cosmological analyses. The code to generate the selection function for any input catalog is also provided, so that users can construct maps that meet their needs.

4. CATALOG: RESULTS AND VERIFICATION

4.1.
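The normalization step above can be written compactly. A sketch with toy values; the final clip to 1 is a guard added here, since a prediction above the clean-pixel mean + 2σ would otherwise slightly exceed unity:

```python
import numpy as np

def relative_selection_function(pred_counts, clean_mask):
    """Normalize predicted per-pixel counts to a relative completeness in [0, 1].

    The normalization is the mean plus two standard deviations of the
    predictions in the 'clean' pixels, as described in the text.
    """
    clean = pred_counts[clean_mask]
    norm = clean.mean() + 2.0 * clean.std()
    return np.clip(pred_counts / norm, 0.0, 1.0)

# Toy map: pixel 0 had no sources, so its prediction was fixed to zero
pred = np.array([0.0, 80.0, 95.0, 100.0, 90.0, 40.0])
clean = np.array([False, False, True, True, True, False])
sel = relative_selection_function(pred, clean)
```

Pixels with zero predicted sources stay at zero relative completeness, and the cleanest pixels end up just below 1.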
Properties of the catalog

Quaia, the Gaia-unWISE Quasar Catalog, consists of 755,850 (1,295,502) quasar candidates with G < 20.0 (20.5). The sky distribution of Quaia for each of these magnitude limits is shown in Figure 9. The catalog covers the full sky apart from the Galactic plane, including the southern sky, most of which is not well covered by other surveys (discussed further in §4.3). The sky distribution is remarkably uniform, and the nonuniform imprints visually follow the selection effects that we incorporated into our selection function map, most notably the dust distribution (Figure 8a). Quaia also does not show an obvious overdensity around the LMC and SMC (as the Gaia DR3 'Purer' sample does), because we have removed these with our decontamination procedure. In fact, there is now a slight underdensity of sources near the LMC; this makes sense because some quasars in that sky region are obscured by dust and confusion in the LMC, though it is possible we have also somewhat overcorrected for this and removed some true quasars.

The dearth of quasars in the Galactic plane is due largely to dust extinction and stellar crowding, as well as to the fact that the SDSS training set quasars (for both the original Gaia DR3 quasar candidates sample and our decontamination procedure) are not representative of quasars in this dust-reddened region. If we exclude the regions with very high extinction, A_V > 0.5 mag, the quasars nearly uniformly cover the remaining sky area, which comprises 30,277.52 deg² (f_sky = 0.73). Based on this area we can also compute the effective volume V_eff covered by the quasars, which depends on the number density as a function of redshift and the power spectrum value P(k), integrated over the physical volume. We assume a P(k) of 4 × 10⁴ (h⁻¹ Mpc)³, based on the value for the eBOSS quasar clustering catalog at around k ∼ 0.01 (Mueller et al.
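The effective volume combines the number density and P(k) with the standard weighting [nP/(1+nP)]² integrated over comoving volume shells. A sketch with toy inputs (the shell volumes dV would come from a fiducial cosmology and n(z) from the catalog's redshift distribution; the values below are illustrative):

```python
import numpy as np

def effective_volume(n_z, dV_z, P=4.0e4):
    """Effective volume: sum over redshift shells of [nP/(1+nP)]^2 dV.

    n_z  : comoving number density per shell [(h^-1 Mpc)^-3]
    dV_z : comoving volume of each shell [(h^-1 Mpc)^3]
    P    : fiducial power spectrum amplitude [(h^-1 Mpc)^3]
    """
    nP = n_z * P
    return np.sum((nP / (1.0 + nP)) ** 2 * dV_z)

# In the sample-variance-dominated limit (nP >> 1), V_eff -> total volume
dense = effective_volume(np.array([1.0, 1.0]), np.array([1e9, 1e9]))
# In the shot-noise-dominated limit (nP << 1), V_eff is strongly suppressed
sparse = effective_volume(np.array([1e-6, 1e-6]), np.array([1e9, 1e9]))
```

The two limits illustrate why a sparse quasar sample can have an effective volume far below the physical volume it spans.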
2021). This gives an effective volume of 7.67 (h⁻¹ Gpc)³ (3.19 (h⁻¹ Gpc)³) for the G < 20.5 (G < 20.0) sample.

We show a 3D map of the Quaia catalog in Figure 10, using our z_Quaia redshift estimates converted to spatial coordinates with a fiducial Planck cosmology. We also show a 3D map of the full SDSS quasar sample for comparison; Quaia spans a much larger volume than SDSS. We note that for SDSS large-scale structure analyses, the eBOSS quasar clustering catalog is used, which contains fewer sources than the full SDSS catalog, as it spans only the intermediate (UV-excess) redshift range and is designed to be uniform across the sky (described in more detail in §4.3).

Figure 10. Left: a projection of the 3D map of the full Quaia catalog (G < 20.5). Right: the same projection for the quasars in SDSS DR16Q, the largest spectroscopic quasar catalog (note that it is a superset of SDSS quasars from multiple campaigns and as such is not intended to be uniform). The color bar shows the redshifts of the quasars (z_Quaia for Quaia, z_SDSS for SDSS), which have been converted to distances with a fiducial cosmology. Quaia spans a significantly larger volume than the SDSS sample. A rotating animation of this image is available in the online journal, and at the link in the arXiv comment field.
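Converting the redshift estimates to 3D positions requires a comoving distance under a fiducial cosmology. A self-contained numpy sketch (the flat ΛCDM parameters below approximate a Planck cosmology and are illustrative assumptions):

```python
import numpy as np

C_KM_S, H0, OMEGA_M = 299792.458, 67.7, 0.31  # fiducial flat LCDM, Planck-like

def comoving_distance(z, nstep=1000):
    """Comoving distance [Mpc]: (c/H0) * integral of dz'/E(z'), trapezoid rule."""
    zz = np.linspace(0.0, z, nstep)
    inv_E = 1.0 / np.sqrt(OMEGA_M * (1.0 + zz) ** 3 + (1.0 - OMEGA_M))
    integral = np.sum((inv_E[1:] + inv_E[:-1]) * np.diff(zz)) / 2.0
    return (C_KM_S / H0) * integral

def to_cartesian(l_deg, b_deg, z):
    """Galactic (l, b) in degrees plus redshift -> Cartesian position [Mpc]."""
    r = comoving_distance(z)
    l, b = np.radians(l_deg), np.radians(b_deg)
    return r * np.array([np.cos(b) * np.cos(l), np.cos(b) * np.sin(l), np.sin(b)])
```

For these parameters the comoving distance to z = 1 is roughly 3.4 Gpc, consistent with standard cosmology calculators.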
We show the redshift distribution of Quaia in Figure 11. The distribution of our Gaia-unWISE-SDSS spectrophotometric redshift estimates, z_Quaia, for the full G < 20.5 catalog is shown in black. We compare this to other samples, cut to the same G limit where relevant: the Gaia redshifts z_Gaia for the same sample; z_Gaia for sources in the full Gaia quasar candidates sample with G < 20.5 (that have redshift estimates); z_Gaia for sources in the Gaia DR3 'Purer' sample with G < 20.5 (that have redshift estimates); and z_SDSS for the SDSS DR16Q sources that have Gaia crossmatches, with G < 20.5. We see that the Quaia SPZ redshifts have a smoother distribution than the others, with a clear peak around z = 1.5; the median value is 1.47. These SPZ estimates have also greatly reduced the high-z tail present in the Gaia redshifts. There is still a significant number of intermediate-z objects; 10% (N = 132,417) of the sources in the full G < 20.5 Quaia catalog have z > 2.5 (for the G < 20.0 catalog, this is also 10% (N = 77,337) of sources). We note that the z_Gaia redshift distribution for the Gaia DR3 'Purer' sample is very similar to those same redshift estimates for Quaia; this is partially because a very high fraction of the objects in Quaia are also in the larger Gaia DR3 'Purer' sample (see Figure 1).

We see a slight bump in the z_Quaia distribution around z ∼ 2.3, the same location as the peak in the SDSS DR16Q quasar distribution. In the SDSS distribution this feature is most prominent in the SDSS-III campaign quasars (see Figure 6 of Lyke et al. 2020), which targeted higher-redshift sources. To check the robustness of our redshift estimation, we reconstruct the sample and retrain the redshifts using the eBOSS quasar clustering catalog (Ross et al. 2020). This is the sample used for large-scale structure clustering analyses (e.g., Mueller et al. 2021; Rezaie et al.
2021), which has a smooth redshift distribution peaked around z = 1.5. It does still have a slight step around z ∼ 2.3. We find that the z_Quaia redshift distribution does not change significantly when trained on this sample, and that the feature at z ∼ 2.3 remains. We hypothesize that this feature is thus a real feature of Gaia-selected quasars, rather than an imprint from the training set, likely related to details of the optical color selection around that redshift. We also find that, compared to the full SDSS-trained sample, the sample trained on the eBOSS quasar clustering catalog produces a redshift distribution that is less smooth at low redshifts, possibly because of the lower number of low-z eBOSS quasars; similarly, the high-z tail is shorter. For these reasons, we choose to use the full SDSS sample (as described in §2.3) as the spectroscopic quasar training sample for our fiducial Quaia catalog, but confirm that the redshift distribution (and the source selection) is broadly robust to this choice.

We show the G-band magnitude distribution of Quaia in Figure 12, in comparison to the other Gaia and SDSS quasar samples described above. We see that our catalog (as well as the Gaia DR3 'Purer' sample) has removed all of the sources with excessively bright (for quasars) magnitudes G < 12.5 that are present in the full Gaia sample, as well as many sources with 12.5 < G < 16. For the Gaia DR3 and SDSS samples, the number of quasars drops off sharply after G ∼ 20.75; to avoid the complicated selection effects at these depths, we limit our catalog to G < 20.5 as shown. We also note that the SDSS DR16 quasars do not extend as bright as Quaia, and this extrapolation past the training set could bias the results in this regime, though in practice this affects very few sources.
We note that some of the Quaia sources may technically be considered lower-luminosity AGNs, or Seyfert-like galaxies, rather than quasars. We estimate the fraction of these sources using the criterion of Schneider et al. (2010): sources are considered true quasars if they have an SDSS i-band luminosity M_i brighter than M_i = −22.0. To estimate the i-band magnitude for our Gaia sources, we compute the median G − i color for the subset of Quaia sources with SDSS crossmatches, where G is the Gaia G band, and then subtract this value from the G-band magnitudes to obtain an effective i-band magnitude for all Quaia sources. We convert these to absolute magnitudes M_i assuming a flat ΛCDM cosmology with H₀ = 70 km s⁻¹ Mpc⁻¹, Ω_m = 0.3, and Ω_Λ = 0.7, following Schneider et al. (2010), and assuming a dust reddening coefficient of A_i/E(B − V) = 1.698, corresponding to the SDSS i band with R_V = 3.1. We find that a small fraction, 8%, of Quaia sources have effective M_i fainter than −22.0 and thus do not meet this standard luminosity criterion for being true quasars. This distinction may be important for certain studies, though it may not be relevant for others, and should be kept in mind in analyses of Quaia.
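The luminosity cut above can be sketched as follows. The median G − i color value and the neglect of the K-correction are simplifying assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

C_KM_S, H0, OMEGA_M = 299792.458, 70.0, 0.3  # flat LCDM per Schneider et al. (2010)

def distance_modulus(z, nstep=1000):
    """Distance modulus m - M for a flat LCDM cosmology (trapezoid integration)."""
    zz = np.linspace(0.0, z, nstep)
    inv_E = 1.0 / np.sqrt(OMEGA_M * (1.0 + zz) ** 3 + (1.0 - OMEGA_M))
    d_c = (C_KM_S / H0) * np.sum((inv_E[1:] + inv_E[:-1]) * np.diff(zz)) / 2.0
    d_l = (1.0 + z) * d_c                     # luminosity distance [Mpc]
    return 5.0 * np.log10(d_l * 1e6 / 10.0)   # d_l converted to pc, over 10 pc

def meets_quasar_luminosity(G, z, median_G_minus_i):
    """True if the effective M_i is brighter than -22 (K-correction neglected)."""
    m_i = G - median_G_minus_i        # effective apparent i-band magnitude
    M_i = m_i - distance_modulus(z)
    return bool(M_i < -22.0)
```

A moderately faint source at z ∼ 1 easily passes the cut, while the same apparent magnitude at z ∼ 0.1 corresponds to a Seyfert-like luminosity and fails it.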
4.2. Selection function model

We show the results of our selection function modeling (§3.3) for the G < 20.0 catalog in Figure 13. Around the edges of the Galactic plane, the residuals show a slight bias to positive values (meaning the completeness there was predicted to be higher than it actually is); in the region around zero Galactic longitude just above the Galactic plane, the residuals are slightly biased to negative values (meaning the completeness there was predicted to be lower than it is). These discrepancies indicate that our templates are not fully capturing the selection effects in these regions. As these are largely limited to the region around the Galactic plane, the issue could be circumvented by applying a latitude cut for sensitive analyses. The underdensity around the LMC is well modeled by the selection function, with no clear residual in that region. The selection function map for the G < 20.5 catalog (not shown) is similar, with some moderate differences, and is also provided as a data product.

The selection function may change more significantly for different subsets of the catalog, such as redshift bins. The selection function should be re-fit for a given sample to be analyzed; we provide code to fit the selection function for any other subset of the catalog. We note that, depending on the subsample, certain regions may be more poorly modeled, in particular the regions around the LMC and SMC; users should check the residuals and may choose to mask the regions around the LMC and SMC to be more conservative.
4.3. Comparison to existing quasar catalogs

We compare Quaia to other existing quasar catalogs; projections of these catalogs are shown in Figure 14. We show the Gaia DR3 'Purer' sample (Figure 14a); a crossmatched catalog of WISE and Pan-STARRS (WISE-PS1), a current leading large-area photometric-redshift quasar sample (Figure 14b); the SDSS DR16Q catalog, the current best spectroscopic sample of quasars (Figure 14c); the eBOSS quasar clustering catalog, the subsample of SDSS DR16Q intended for clustering analyses (Figure 14d); and Milliquas, a meta-catalog compiling confirmed quasars from the literature (Figure 14e).

The Gaia DR3 'Purer' sample is described in §2.1; here we include only sources with QSOC redshift estimates (z_Gaia). The WISE-PS1 sample was constructed by Beck et al. (2022), based on the Source Types and Redshifts with Machine learning (STRM) algorithm by Beck et al. (2020). The quasar catalog with updated photometric redshifts is presented by Kunsági-Máté et al. (2022); here we include only those quasars with redshifts labeled "reliable", which is 59% of the sample. The SDSS DR16Q quasar catalog is the one described in §2.3, from Lyke et al. (2020), which compiles sources from eBOSS as well as previous SDSS campaigns (and is intended as a superset of SDSS quasars rather than a uniform sample). The eBOSS quasar clustering catalog is detailed in Ross et al.
(2020); it is a subsample of SDSS DR16Q selected for large-scale structure clustering analyses, and as such is much more homogeneous than the full catalog. For the eBOSS clustering catalog, we have included both eBOSS and legacy SDSS quasars (IMATCH=1 or 2) and applied the clustering cuts of requiring sectors to have > 0.5 completeness (COMP_BOSS) and redshift success rate (sector SSR); we have additionally removed sources with ZWARNING!=0. The Milliquas catalog was compiled by Flesch (2021); a significant portion of its sources are from SDSS and AllWISE. For each of these samples, we have shown quasars brighter than a limiting magnitude of G ∼ 20.5; for the non-Gaia catalogs we convert to G from the survey's r-band magnitude using the conversion in equation (2) of Proft & Wambsganss (2015), which is based on the SDSS r′ band. While this should give a reasonable estimate for the SDSS sample (using r_SDSS) and the WISE-PS1 sample (using r_PS1, which is very similar to r_SDSS), it may not be as reliable for Milliquas, which catalogs "red" magnitudes from various sources, or for sources with z > 3, which were not included in the Proft & Wambsganss (2015) fit.

A summary of the catalogs is shown in Table 1, for the full catalogs (limited to sources with reliable redshifts) as well as the G_eff < 20.5 subsamples. We exclude Milliquas from this comparison given its very heterogeneous nature; we do include SDSS DR16Q, though it is also not intended to be uniform, to show the comparison of Quaia to this large spectroscopic catalog of quasars. For these quantifications, we exclude areas that have A_V > 0.5 mag, as well as healpixels with no quasars. For the sky fraction f_sky, we see that Quaia and Gaia DR3 'Purer' are limited only by the dusty regions, and cover over 30% more area than WISE-PS1 (which is limited by Pan-STARRS), nearly 3× that of SDSS DR16Q, and over 5× that of the eBOSS quasar clustering catalog. Compared to the Gaia DR3 'Purer' sample, Quaia has a
slightly smaller number of sources, but due to its redshift distribution gives a slightly higher effective volume. The on-sky number density is similar for all of the catalogs when limiting them to similar magnitudes, with WISE-PS1 slightly higher, because it has a similar number of objects to the Gaia catalogs but over a smaller area, and SDSS DR16Q and the eBOSS clustering catalog slightly lower. When including faint sources, WISE-PS1 has 2.5× the on-sky number density of Quaia, and SDSS DR16Q and the eBOSS clustering catalog have 1.5–2×.

For the volume comparison, we compute two different volumes. The first is a simple "spanning" volume, V_span, which is just the comoving volume in the sky area of the survey (as given by f_sky of the full sky area) in a redshift range 0.8 < z < 2.2, a typical redshift range for clustering analyses (taken from the range of the eBOSS quasar clustering catalog). It thus compares in the same way as the survey areas, but gives an idea of the physical volume the catalogs span. The second is the effective volume, described in §4.1; we use that same P(k) = 4 × 10⁴ (h⁻¹ Mpc)³ for the volume calculation for all catalogs. We see that the effective volume of WISE-PS1 is much larger (nearly 3×) than that of Quaia as a result of its larger number of sources, though when considering samples with the same limiting magnitude, WISE-PS1 and Quaia have comparable effective volumes. The effective volume of Quaia is nearly twice as large as that of SDSS DR16Q, and 6× for the magnitude-limited sample; compared to the eBOSS quasar clustering catalog, the effective volume of Quaia is over twice as large, and 7× for the magnitude-limited sample.
The catalogs all have a similar median redshift, of around 1.4 < z < 1.7, extending to 1.77 for SDSS DR16Q when including faint sources. However, they have significantly different redshift precision; in Table 1 we show outlier fractions estimated from comparisons to spectroscopic redshifts. We see that both of the Gaia catalogs have a similar fraction of high-precision redshifts (|∆z/(1 + z)| < 0.01), but Quaia has a much higher fraction of redshifts that are not strong outliers (|∆z/(1 + z)| < 0.1) compared to Gaia DR3 'Purer'. WISE-PS1 falls between Quaia and Gaia DR3 'Purer' in terms of strong outliers, but has an extremely low fraction of high-precision redshifts, as it is a photometric survey. We note that for both Gaia DR3 'Purer' and WISE-PS1, the redshift precision is significantly lower when considering the full catalog compared to samples limited to G_eff < 20.5 like Quaia; we show both for a fair comparison. The SDSS DR16Q catalog and the eBOSS quasar clustering catalog have spectroscopic redshifts, so these are almost all very high precision; Lyke et al. (2020) estimated from a visual inspection that less than 1% of the SDSS DR16Q redshifts are outliers with ∆v > 3000 km s⁻¹ (|∆z| > 0.01), independent of redshift. Note that this is a slightly different sample than the eBOSS clustering catalog, but we can expect it to be similar. The SDSS DR16Q quasar sample has typical statistical redshift errors of |∆z| ∼ 0.001.
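The outlier fractions quoted in Table 1 are simple threshold counts on the normalized redshift error; with toy values:

```python
import numpy as np

def outlier_fractions(z_est, z_spec, thresholds=(0.01, 0.1)):
    """Fraction of sources with |dz| / (1 + z_spec) above each threshold."""
    dz = np.abs(z_est - z_spec) / (1.0 + z_spec)
    return {t: float(np.mean(dz > t)) for t in thresholds}

z_spec = np.array([1.0, 1.5, 2.0, 2.5])
z_est = np.array([1.03, 1.5, 2.8, 0.5])  # one mild and two larger errors
frac = outlier_fractions(z_est, z_spec)
```

Here three of four sources exceed the 0.01 threshold, but only the two catastrophic errors exceed 0.1.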
To give more of an idea of the redshift precision of Quaia, we compare it to existing all-sky photometric galaxy catalogs. A common statistic used to summarize photometric redshift uncertainty, robust to outliers, is the SMAD, or scaled median absolute deviation, defined as 1.4826 × med(|∆z − med(∆z)|), where ∆z = z_phot − z_spec (the scaling factor adjusts the MAD such that the SMAD is approximately equal to the standard deviation for normally distributed data). The SMAD of the full Quaia catalog (G < 20.5) is SMAD(∆z) = 0.023, and the normalized SMAD of the redshift errors with the (1 + z) factor divided out is SMAD(∆z/(1 + z)) = 0.008. For comparison, the WISE × SuperCOSMOS catalog of 20 million galaxies with z_med = 0.2 (Bilicki et al. 2016) has an SMAD(∆z) of ∼0.04 and an SMAD(∆z/(1 + z)) of ∼0.035. The Two Micron All Sky Survey Photometric Redshift (2MPZ) catalog has around 1 million galaxies with a similar median redshift (Bilicki et al. 2013), which have an SMAD(∆z) of ∼0.015. Quaia thus falls in between these common photometric galaxy samples in terms of overall redshift precision; however, we note that it is difficult to capture the redshift error of Quaia in a single statistic, given both its large number of highly precise redshifts and its non-negligible number of outliers.

We also note that the ongoing DESI survey (Aghamousa et al. 2016; DESI Collaboration et al.
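The SMAD statistic is a one-liner; a sketch with a deterministic check and a demonstration of its robustness to a single catastrophic error:

```python
import numpy as np

def smad(dz):
    """Scaled median absolute deviation: 1.4826 * med(|dz - med(dz)|)."""
    dz = np.asarray(dz, dtype=float)
    return 1.4826 * np.median(np.abs(dz - np.median(dz)))

errors = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
with_outlier = np.array([0.0, 1.0, 2.0, 3.0, 100.0])  # one catastrophic error
```

Replacing the largest error with a catastrophic one leaves the SMAD unchanged, whereas the ordinary standard deviation would blow up; this is exactly why the SMAD is used to compare catalogs with different outlier rates.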
2023) will observe a high density of quasars over a large sky area (Chaussidon et al. 2023), which will be competitive with and complementary to Quaia.

Table 1. Comparison between Quaia and other existing quasar catalogs, detailed in the text. We show the quantities for the full catalogs (for sources with reliable redshifts) as well as the catalogs limited to G < 20.5 or the rough equivalent converted from another band. For all quantities and catalogs shown, we exclude areas with high dust extinction (A_V > 0.5 mag); this excludes ∼5% of sources for Quaia and Gaia DR3 'Purer', ∼18% of the full WISE-PS1 sample, and a negligible number of sources for SDSS DR16Q and the eBOSS clustering catalog. We note that the SDSS DR16Q catalog is a superset of quasars from many SDSS campaigns and is not intended to be uniform, which should be considered in particular for the sky fraction and spanning volume quantities. We show the number of sources N, the fraction of sky area covered f_sky, the mean number density per square degree n, the spanning volume between 0.8 < z < 2.2 V_span, the effective volume V_eff, the median redshift z_med, and the fraction of objects with |δz| ≡ |∆z/(1 + z)| < 0.01 and < 0.1 (where applicable).
4.4. Catalog format

The complete Quaia catalog contains our decontaminated quasar sample with computed redshift information, relevant Gaia properties, and crossmatched catalog information. The complete catalog format, with column names, units, column descriptions, and an example entry, is shown in Table 2. Additional information for the sources can be obtained by joining the catalog with the relevant data source via the associated identifier (Gaia or unWISE). We include only sources with G < 20.5 in the catalog; we also publish a version limited to G < 20.0, along with the selection function models fit to each (§4.2) and "random" catalogs generated from the selection functions. The catalog includes our SPZ redshifts z_Quaia along with 1σ redshift errors, sky position, Gaia photometry, unWISE photometry, and proper motion information. The catalog is in FITS format (Wells et al. 1981), and units and descriptions are provided for each column.

4.5. Limitations

While the Quaia catalog presents a highly useful quasar sample, it does have various limitations. We reiterate and discuss the main ones here.

We estimate spectrophotometric redshifts for the quasars, which are generally more accurate than the Gaia estimates, but which are still low precision compared to spectroscopic redshifts. The uncertainties on these redshifts should be taken into account for any measurements, and the rate of catastrophic redshift errors (not necessarily captured by the redshift uncertainty) should be considered when thinking about possible uses of the catalog.
The selection function model has multiple potential limitations. While it broadly captures the selection effects that affect the quasar sample, it has significantly lower accuracy around the Galactic plane; precision measurements may require masking this region. The regions around the LMC and SMC are also more poorly modeled; users may want to mask this area. We also note that we do not fit the healpixels with zero quasars, which may result in a slight bias toward populated regions, and which fixes the zero-probability region of the selection function. Our selection function map depends only on sky position and not on other properties such as magnitude or redshift (besides fitting it to the appropriate subsample); a treatment incorporating these dependencies may be important for certain uses. The gold standard for completeness estimation is data injection and recovery tests. Unfortunately, the Gaia instrumentation has black-box elements, such as onboard image segmentation, onboard object detection, and onboard downlink prioritization, that make it impossible to perform end-to-end injection tests, so we rely on a data-driven approach, which may be less robust and more sensitive to modeling choices. Given this, it is possible that we are overfitting the selection function. Finally, the selection function depends on the assumption of isotropy, which we know to be broken to some extent by the kinematic dipole (Stewart & Sciama 1967; Secrest et al. 2021); we will explore and measure this in an upcoming work (see §4.6). Users employing the selection maps or generating their own selection function for some subset of the catalog should take note of these potential issues.

Generally, Quaia has a relatively low number density (e.g., compared to the SDSS sample). This means that it may not be ideal for certain cosmological measurements, which may be shot-noise dominated.
Finally, we note that this catalog is based on the Gaia quasar candidates sample, and it will inherit many of the limitations of that sample (Gaia Collaboration et al. 2023a). We are also limited to the Gaia-derived properties (e.g., the Gaia redshifts that are a feature for our estimates). In upcoming Gaia data releases, the collaboration will release more BP/RP spectra, and we will have the opportunity to work directly from the spectral data to improve the catalog.

4.6. Potential applications

Quasars are highly biased tracers of the cosmic web that trace the matter distribution at higher redshift than galaxies and in the mildly nonlinear regime. Given the Quaia catalog's sampling of quasars to deep magnitudes and across a large volume, and its reduced systematic contamination allowed by space-based observations, Quaia lends itself to large-scale structure analyses, many of which are currently ongoing.

Thanks to its large volume and well-characterized selection function, Quaia is perhaps the best current sample for testing homogeneity and isotropy in the Universe (Hogg et al. 2024), and relatedly for measuring the dipole in the quasar distribution (Williams et al. 2024), which recent measurements have consistently found to be in mild tension with the kinematic interpretation in the ΛCDM model. Quaia's volume also makes it a good sample for a measurement of the matter-radiation equality scale, k_eq (e.g., Bahr-Kalus et al. 2023).

The catalog is particularly well suited for cross-correlations with other all-sky observations of projected tracers of the large-scale structure, which are less sensitive to redshift errors than 3D analyses. Examples of these are the CMB, the CIB, and maps of the thermal Sunyaev-Zel'dovich effect. Alonso et al.
(2023) used the cross-correlation between CMB lensing and Quaia to constrain the growth of matter fluctuations via the parameter S_8, achieving competitive constraints as well as showing that Quaia can break the degeneracy between Ω_m and σ_8. An analysis of primordial non-Gaussianity (parameterized by f_NL) from this cross-correlation with CMB lensing is also underway. Analyses of the cross-correlation with the CMB temperature to measure the Integrated Sachs-Wolfe effect, and with the CIB to constrain the star formation history at high redshifts (e.g., Jego et al. 2023), are currently under investigation. Another measurement enabled by the catalog is the cross-correlation of quasar proper motions with the large-scale structure, which gives a direct estimate of the cosmological quantity Hfσ_8 (Duncan et al. 2023). Additionally, cross-correlations of Quaia with galaxy surveys may allow for measurements of the baryon acoustic feature (Patej & Eisenstein 2018; Zarrouk et al. 2021) and quasar environments (Padmanabhan et al. 2009; Shen et al. 2013). Quaia is also useful for void studies, including constraining core cosmological parameters with the void size distribution; this investigation is underway (Arsenov et al.
2024). The catalog is additionally relevant to astrophysical analyses of quasar properties, given its large sky coverage and multiband photometry, such as the role of galaxy interactions in AGN activity. Quaia sources may also be used to study the potential of quasars as standard candles. Further, Quaia provides perhaps the best quasar coverage of the southern sky, which may be important for a variety of applications, such as identifying interesting sources there, adding new information to known sources, or calibrating surveys in that sky region. Finally, while a 3D clustering analysis of Quaia may be limited by the catalog's relatively low number density and moderate redshift precision, a careful analysis may yield useful constraints, especially using techniques targeted at wide-field surveys (e.g., Lanusse et al. 2015). The latter is comparable to or better than that of other state-of-the-art galaxy and quasar samples used in large-scale structure analyses, but not necessarily enough to allow an accurate interpretation.

5. SUMMARY AND DATA ACCESS

We have constructed a new quasar catalog, Quaia, the Gaia-unWISE Quasar Catalog, designed for cosmological studies, derived from the Gaia DR3 quasar candidates sample and using unWISE photometry to remove contaminants and derive precise redshifts. Our key contributions and the features of the catalog are as follows:

• We have decontaminated the Gaia DR3 quasar candidates sample with proper motion cuts and optimized color cuts based on Gaia and unWISE photometry. This reduced the number of known contaminants by ∼4×, while only excluding 1.2% of known quasars with respect to the superset of Gaia quasar candidates (that have unWISE photometry).

Figure 1. A summary of the overlaps between the various data sets and subsamples used in this work. The values describe the fraction of objects in each column's sample that are in each row's sample. Note that we only list unWISE as a row because the inverse is not relevant to this work.

Figure 2.
Sky distribution of the quasar candidates in the Gaia DR3 'Purer' quasar sample, in Galactic coordinates and displayed using a Mollweide projection.

Figure 3. Proper motion µ vs. G magnitude for three different sets of sources. The black line shows the cut we make; the shaded gray region is excluded from the catalog. Top: the sources for which we have labels (SDSS data as well as sources near the LMC and SMC in the Gaia quasar candidates sample). Middle: sources in the top row that are also in the Quaia superset (Gaia DR3 quasar candidates that have all necessary photometry, Gaia redshift estimates, and G < 20.6). Bottom: the superset of quasar candidates from which Quaia is constructed. The proper motion cut includes nearly all SDSS quasars in the superset while excluding a large number of stars.

Figure 4. Color-color plots of three different sets of sources. The left column shows W1 − W2 vs. G − W1 color, and the right column shows G − RP vs. BP − G color. The black lines show the cuts we make; the shaded gray regions are excluded from the catalog. The rows show the same samples as in Figure 3, except that in the top row, only 20,000 of each type of SDSS source are shown for clarity. In both color-color projections, the labeled sources are mostly localized in particular regions of parameter space, and we can see these populations somewhat clearly in the Quaia superset.

Figure 5. (a) Gaia redshift estimate z_Gaia vs. SDSS ("true") redshift z_SDSS for a test set of sources in our quasar catalog Quaia with G < 20.5. (b) Our estimated spectrophotometric (SPZ) redshifts z_Quaia, which are based on a kNN model, vs.
z_SDSS for the same sample. The bias (mean redshift error) and scatter (σ_68, the symmetrized inner 68% region of the redshift errors) of the redshift estimates compared to z_SDSS are shown in the panels. The z_Quaia redshifts significantly decrease both the bias and the scatter, as well as catastrophic outliers and unreasonably high redshift estimates. The one-to-one line (perfect accuracy) is shown in gray; note that the color bar is on a log scale, and that a majority of the sources in both cases lie along this line.

Figure 6. The cumulative distribution of redshift errors for Quaia test set sources with G < 20.0, considering SDSS spectroscopic redshifts z_SDSS as the ground truth, for estimates directly from our kNN model (gray), the original z_Gaia redshifts (purple), and our final z_Quaia estimates (black) based on a combination of the other two. Our SPZ redshifts have far fewer outliers and similar precision compared to the Gaia estimates.

Figure 7. The fraction of outlying redshifts with |∆z/(1 + z)| > (0.01, 0.1, 0.2), as a function of G magnitude, for our redshift estimation test set. The SPZ redshifts are shown in black, and the Gaia redshifts in purple. The fraction of outliers increases steeply with increasing G for G > 19.5 for both z_Quaia and z_Gaia, though the fraction of catastrophic outliers for z_Quaia is significantly lower (and the dependence less steep) compared to z_Gaia.
Figure 8. The systematics maps used in the selection function model: (a) dust extinction from Chiang (2023); (b) the stellar distribution based on ∼10.6 million randomly selected Gaia sources with 18.5 < G < 20; (c) the unWISE source distribution based on ∼10.6 million randomly selected unWISE sources; (d) the quantity M10, the median magnitude of sources with ≤ 10 Gaia transits, which encodes the Gaia scanning law and source crowding; and (e) the unWISE scan pattern, given by the mean number of single-exposure images of the sky region in the coadd. Note that the color bars on the M10 and unWISE scanning-law maps are reversed, as high values indicate a cleaner region, the inverse of the other maps. We also include separate templates for sources in the LMC and SMC regions for both the stellar and unWISE source densities, with the background subtracted. All templates are discussed in more detail in the text.
Figure 9. Sky distribution of the Quaia quasar catalog, in Galactic coordinates and displayed using a Mollweide projection. Panel (a) shows sources with G < 20.0, the cleaner version with more reliable redshifts, and panel (b) shows the catalog down to its magnitude limit of G < 20.5.
Figure 11. Redshift distribution of Quaia for our spectrophotometric redshift estimates zQuaia (black), normalized to the total number of objects. For comparison, we also show the normalized distributions of other samples, cut to the G < 20.5 limiting magnitude of Quaia where relevant: the Gaia redshift estimates zGaia for the same Quaia sources (purple); zGaia for the sources in the full Gaia quasar candidate sample with G < 20.5 (gray); zGaia for the Gaia DR3 'Purer' subsample with G < 20.5 (green); and the SDSS redshifts zSDSS for the SDSS DR16Q quasar sources that have Gaia crossmatches, with G < 20.5 (blue). The median redshift of each distribution is shown by the diamond and vertical line in the respective color.
Figure 13. (a) The selection function map for the G < 20.0 subset of Quaia, based on a Gaussian process model of the dust, stellar distribution, and M10. (b) The fractional residuals between a random catalog downsampled by the modeled selection function and the true Quaia G < 20.0 catalog.
Figure 14. Other current quasar catalogs for comparison with Quaia. All are shown for sources with G < 20.5 or the equivalent converted from another band, in Galactic coordinates and displayed using a Mollweide projection. The catalogs are (a) the Gaia DR3 'Purer' sample, (b) the WISE-PS1-STRM catalog, (c) the SDSS DR16Q catalog, (d) the eBOSS quasar clustering catalog, and (e) the Milliquas catalog. Note that the color bars have different scales in each panel.
Table 2. Descriptions of Quaia, published as a FITS data file (Wells et al. 1981). For the example entry, we show the first catalog row.
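The redshift-quality metrics quoted in the captions above (bias, σ68, and the outlier fraction |∆z/(1 + z)| > threshold) can be sketched as follows. This is a minimal illustration, not the paper's code: the toy redshift arrays are synthetic, and the σ68 implementation (the 68th percentile of |dz − median(dz)|) is one common reading of "symmetrized inner 68% region".

```python
import numpy as np

def redshift_metrics(z_est, z_true, outlier_thresholds=(0.01, 0.1, 0.2)):
    """Bias, sigma_68, and outlier fractions for redshift estimates.

    dz = (z_est - z_true) / (1 + z_true) is the scaled redshift error;
    sigma_68 is taken as the 68th percentile of |dz - median(dz)|.
    """
    dz = (z_est - z_true) / (1.0 + z_true)
    bias = np.mean(dz)
    sigma68 = np.percentile(np.abs(dz - np.median(dz)), 68.0)
    # Fraction of catastrophic outliers at each threshold
    outlier_frac = {t: float(np.mean(np.abs(dz) > t)) for t in outlier_thresholds}
    return bias, sigma68, outlier_frac

# Toy example (synthetic values, not real catalog data):
rng = np.random.default_rng(0)
z_true = rng.uniform(0.5, 3.0, size=10_000)
z_est = z_true + 0.02 * (1 + z_true) * rng.standard_normal(10_000)
bias, sigma68, outliers = redshift_metrics(z_est, z_true)
```

With Gaussian errors of scale 0.02 in (1 + z), σ68 recovers roughly 0.02 and essentially no source exceeds the 0.2 outlier threshold, mirroring how Figures 5-7 summarize estimator quality.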
17,372.6
2023-06-30T00:00:00.000
[ "Physics" ]
First report of Giardia duodenalis and Enterocytozoon bieneusi in forest musk deer (Moschus berezovskii) in China Background Giardia duodenalis and Enterocytozoon bieneusi are widespread pathogens that can infect humans and various animal species. Thus far, there are only a few reports of G. duodenalis and E. bieneusi infections in ruminant wildlife. The objective of this study was therefore to examine the prevalence of G. duodenalis and E. bieneusi in forest musk deer in Sichuan, China, and to identify their genotypes. Results In total, we collected 223 faecal samples from musk deer at the Sichuan Institute of Musk Deer Breeding in Dujiangyan (n = 80) and the Maerkang Breeding Institute (n = 143). Five (2.24%) faecal samples were positive for G. duodenalis; three belonged to assemblage E and two to assemblage A based on sequence analysis of the β-giardin (bg) gene. One sample each was found to be positive based on the glutamate dehydrogenase (gdh) and triosephosphate isomerase (tpi) genes, respectively. Thirty-eight (17.04%) faecal samples were found to be E. bieneusi-positive based on the internal transcribed spacer (ITS) sequence, and only the SC03 genotype was identified, which belonged to the zoonotic group 1 according to the phylogenetic analysis. The infection rates were significantly different among the different geographical areas and age groups but had no apparent association with gender or clinical symptoms. Conclusions To our knowledge, this was the first molecular characterisation of G. duodenalis and E. bieneusi in musk deer. Identification of zoonotic genotypes indicated a potential public health threat, and our study suggested that the forest musk deer is an important carrier of these parasites. Background Giardia spp. are parasites with a broad host range comprising livestock, companion animals, and wildlife, ranging from mammals to amphibians and birds, as well as humans [1,2].
These parasites can have various clinical manifestations such as diarrhoea and abnormalities in growth and development, particularly in young hosts. For example, giardiasis can develop into malabsorption syndromes and other chronic diseases, resulting in stunted growth or emaciation in children [3]. According to the WHO, approximately 200 million people in Africa, Asia, and Latin America have symptomatic Giardia infection [4]. Enterocytozoon bieneusi is another common intestinal parasite that infects the host's enterocytes, causing gastrointestinal illness such as chronic diarrhoea in animals and humans, particularly in immunosuppressed groups, including organ-transplant recipients, children, the elderly, and patients with cancer, diabetes, or AIDS [5,6]. Ingestion of water and food contaminated with oocyst-containing faeces is the principal route of transmission for these species [7]. Giardia duodenalis is the only species of Giardia infecting humans and comprises eight assemblages (A-H). Among them, assemblages A and B have a broad host range and zoonotic potential. In particular, subtypes A1, A2, A3, A4, B1, and B4 are closely associated with human infections. In contrast, assemblages C-H have strong host specificity and a narrow host range [1,8,9]. Approximately 90% of human microsporidiosis cases are caused by E. bieneusi [10,11]. In addition to its detection in humans, E. bieneusi has been reported in various economic animals and wildlife, including snakes and birds [12][13][14]. Currently, over 240 genotypes of E. bieneusi have been identified and divided into eight groups (groups 1-8). Most genotypes in group 1 have zoonotic potential, whereas the other groups have a narrow host range and higher host specificity [15]. As endangered species, musk deer (Moschus spp.) are currently considered class I-protected animals in China.
Forest musk deer (Moschus berezovskii) is the largest species of musk deer and is mainly found in the Sichuan and Guizhou provinces of China [16,17]. Musk, which has a remarkably high economic and medicinal value, is secreted by the musk gland located in the groin of male forest musk deer [18]. Because of the pathological effects of these parasites on forest musk deer, infection with G. duodenalis or E. bieneusi can result in a significant loss of musk yield. This study aimed to investigate the presence of these parasites in musk deer. Fecal sample collection In February 2017, 223 faecal samples were collected from forest musk deer at the Sichuan Institute of Musk Deer Breeding located in Dujiangyan and Maerkang, in the Sichuan Province of China. Immediately after defecation, fresh faecal samples were collected using sterile disposable latex gloves, numbered, and placed in individual plastic bags. During specimen collection, we only gathered the top layer of the faeces to ensure that there was no contamination. All samples were placed on ice in separate containers and immediately transported to the laboratory. Specimens were stored in 2.5% potassium dichromate at 4°C in a refrigerator until analysis. DNA extraction and nested PCR amplification Faecal samples were washed with distilled water and centrifuged at 3000× g for 3 min. This process was repeated three times. Genomic DNA was then extracted from approximately 200 mg of each semi-purified product, using the E.Z.N.A. Stool DNA Kit (D4015-02; Omega Bio-Tek, Norcross, GA, USA). DNA samples were stored in 200 μl of the kit Solution Buffer at -20°C until use. G. duodenalis and E. bieneusi were identified using nested PCR amplification of the β-giardin (bg) gene and internal transcribed spacer (ITS) sequence, respectively.
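As a quick worked example for the PCR protocol in this section: each 25-μl reaction comprises 12.5 μl 2× Taq master mix, 8.5 μl deionized water, 2 μl template DNA, and 1 μl of each primer. A minimal sketch for scaling a shared master mix to a batch of samples; the batch size and the 10% pipetting overage are illustrative assumptions, not from the paper.

```python
# Per-reaction volumes (in μl) as described in the text
PER_REACTION = {
    "2x Taq PCR Master Mix": 12.5,
    "deionized water": 8.5,
    "template DNA": 2.0,
    "forward primer": 1.0,
    "reverse primer": 1.0,
}

def master_mix(n_samples, n_controls=2, overage=0.1):
    """Scale per-reaction volumes for n_samples plus positive/negative
    controls. The 10% overage is a common lab convention (an assumption
    here, not stated in the paper). Template DNA is added per tube, so
    it is excluded from the shared mix."""
    factor = (n_samples + n_controls) * (1 + overage)
    return {reagent: round(volume * factor, 1)
            for reagent, volume in PER_REACTION.items()
            if reagent != "template DNA"}

total_per_tube = sum(PER_REACTION.values())  # 25.0 μl, matching the text
mix = master_mix(n_samples=38)  # e.g., one plate of the ITS-positive samples
```

The per-tube total confirms the 25-μl reaction volume given in the methods; the shared-mix dictionary is simply each remaining reagent scaled by the number of reactions plus overage.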
bg-positive specimens were subjected to further amplification of the glutamate dehydrogenase (gdh) and triosephosphate isomerase (tpi) genes, whereas ITS-positive specimens were subjected to amplification of three microsatellites (MS1, MS3, and MS7) and one minisatellite (MS4). The primers and amplification conditions were as previously described [19][20][21] (Table 1). Each reaction was performed in a total volume of 25 μl that included 12.5 μl 2× Taq PCR Master Mix (KT201-02; Tiangen, Beijing, China), 8.5 μl deionized water (Tiangen), 2 μl DNA, and 1 μl each of upstream and downstream primers. Positive and negative controls were included in each test. Secondary PCR products were subjected to 1% agarose gel electrophoresis. Nucleotide sequencing and analysis Products of the expected size were sent for two-directional sequencing analysis (Invitrogen, Shanghai, China). Assemblages and subtypes were determined by the alignment of the nucleotide sequences with known reference sequences for the bg, tpi, and gdh genes of G. duodenalis, and for the ITS, MS1, MS3, MS4, and MS7 sequences of E. bieneusi available in the GenBank database, using BLAST and Clustal X. Neighbor-joining phylogenetic analysis of the aligned G. duodenalis and E. bieneusi sequences was utilised to assess genetic clustering of the genotypes. A total of 1000 replicates were used for the bootstrap analysis. Nucleotide sequence GenBank accession numbers All nucleotide sequences of the bg, gdh, and tpi genes of G. duodenalis, and ITS, MS1, MS3, MS4, and MS7 of E. bieneusi isolated from forest musk deer in this study were deposited in the GenBank database under accession numbers MF497406-MF497412 and MF942581-MF942596, respectively. Results and discussion G. duodenalis and E. bieneusi are emerging zoonotic pathogens. To our knowledge, this study is the first to report the presence of G. duodenalis and E. bieneusi in musk deer, with infection rates of 2.24% (5/223) and 17.04% (38/223), respectively. G.
duodenalis infection rate in the Dujiangyan breeding centre (3.75%) was slightly higher than that in the Maerkang breeding centre (1.40%), whereas the E. bieneusi infection rate was much lower in Dujiangyan than in Maerkang (7.5% and 22.38%, respectively) (Table 2). This may be due to differences in the source of food and water used for feeding, or to other environmental factors. In this study, the infected forest musk deer ranged from less than one to eight years of age. Young individuals (≤ one year old) accounted for more than half of the G. duodenalis- and E. bieneusi-positive samples (60% and 57.89%, respectively), which may be caused by the incomplete development of the immune system in young animals compared with adults. The proportions of infected females and males were similar. Several infected animals had obvious diarrhoea (two and nine for G. duodenalis and E. bieneusi, respectively), which may be due to the individual's low resistance to infection. There was no apparent age- or gender-associated difference for the infections in this study, in agreement with the findings of Zhang et al. [22]. Here, assemblage A of G. duodenalis was obtained only from young forest musk deer in the Maerkang breeding location, whereas assemblage E was obtained from adult forest musk deer in the Sichuan Institute of Musk Deer Breeding in Dujiangyan (Table 3). Although the distribution of G. duodenalis in musk deer has not been reported, there are a few reports of these parasites infecting other species in the families Cervidae and Bovidae, in the same suborder as the forest musk deer. G. duodenalis identified in these animals was mainly assemblage A, and in several studies, the rate of infection in these species was higher than that in forest musk deer in our study. For example, Lalle et al. [23] reported that the prevalence of G. duodenalis was 11.5% in fallow deer (Dama dama), which was also higher in fawns than in older deer, and the genotype was assemblage A. García-Presedo et al.
[24] reported that 8.9% of roe deer (Capreolus capreolus) samples were positive for G. duodenalis, and the genotype was AII. In Norway, 12.3% of moose (Alces alces), 1.7% of red deer (Cervus elaphus), 15.5% of roe deer, and 7.1% of reindeer (Rangifer tarandus) were found to be infected with G. duodenalis [25]. In the United States, one white-tailed deer (Odocoileus virginianus) was found positive for G. duodenalis, and assemblage A was identified [26]. Solarczyk et al. [27] reported that the sub-assemblages of G. duodenalis found in red deer and roe deer were AIII and zoonotic AI, respectively. Also, sheep faecal specimens from China were found to be positive for the G. duodenalis assemblage A genotype [28]. Given that G. duodenalis assemblage A was previously identified in humans, forest musk deer can play a role in transmitting G. duodenalis to humans. In the E. bieneusi analysis, ITS sequencing showed that all E. bieneusi isolates from Maerkang and Dujiangyan were characterised as SC03 (n = 38), which had been previously found in sika deer (Cervus nippon) at zoological gardens in China [29]. Other reference sequences in the same phylogenetic branch were from parasites isolated from racoons in eastern Maryland in the United States, goats in China, and patients with HIV/AIDS in the Henan Province of China [20,30]. Based on the phylogenetic analysis of the ITS sequence, E. bieneusi isolated in this study belonged to group 1 (subgroup 1d) (Fig. 1), which suggested their zoonotic potential. (Fig. 1: Phylogenetic relationships of the ITS gene nucleotide sequences of the E. bieneusi genotypes identified in this study and other reported genotypes. The genotypes in this study are indicated by triangles for Maerkang and squares for Dujiangyan.) From the 38 ITS-positive specimens, nine, five, one, and three isolates were successfully sequenced at the MS1, MS3, MS4, and MS7 loci, respectively.
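The site-level difference in E. bieneusi infection rates reported above (7.5% at Dujiangyan vs. 22.38% at Maerkang) can be checked with a Pearson chi-square test on the 2×2 counts implied by those percentages (6/80 vs. 32/143). This is an illustrative pure-Python sketch, not the authors' analysis; the 3.841 cutoff is the standard critical value for p < 0.05 at 1 degree of freedom.

```python
def chi_square_2x2(pos_a, n_a, pos_b, n_b):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 table comparing positive counts between two groups."""
    table = [[pos_a, n_a - pos_a], [pos_b, n_b - pos_b]]
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# E. bieneusi counts implied by the reported rates (7.5% of 80; 22.38% of 143)
chi2 = chi_square_2x2(6, 80, 32, 143)
# chi2 > 3.841 corresponds to p < 0.05 at 1 degree of freedom
site_difference_significant = chi2 > 3.841
```

The statistic comes out near 8, well above the 3.841 cutoff, consistent with the paper's report of a significant difference between the two breeding centres.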
Analysis of sequence polymorphisms and single nucleotide polymorphisms (SNPs) at the MS3 locus revealed two distinct types (types I and II) (Table 4). Zhang et al. [31] reported that 7.06% (23/326) of sika deer were positive for E. bieneusi, with eight genotypes detected. Also, 34.0% (16/47) of faecal samples from Père David's deer (Elaphurus davidianus) in China were E. bieneusi-positive [32]. Another report showed that 29 deer were infected with E. bieneusi, including 28 sika deer and one red deer [33]. Zhao et al. [34] reported seven genotypes of E. bieneusi in golden takins (Budorcas taxicolor bedfordi) in China, and Shi et al. [20] found E. bieneusi in 28.8% (176/611) of goats and 42.8% (177/414) of sheep, with 42 genotypes identified. Twenty-three (7.0%) yaks in China were E. bieneusi-positive; three genotypes (BEB4, I, and J) from group 2 that had previously been reported in humans and two group 1 genotypes were identified [11]. Seventeen E. bieneusi genotypes were identified in 26 (32.5%) white-tailed deer in the United States [26]. Therefore, the E. bieneusi genotype we identified in forest musk deer, and most E. bieneusi genotypes reported in the Cervidae and Bovidae, can infect both humans and animals. However, E. bieneusi isolated from forest musk deer appeared to be of a single genotype, in contrast to those found in other deer species, yaks, and goats. Although the genetic heterogeneity of G. duodenalis and E. bieneusi is well described, their mode of transmission is still not clear. Investigations of their epidemiology, detection methods, and diagnosis are required to provide an experimental basis for ensuring the health and safety of both animals and humans. Conclusions This study demonstrated the prevalence of G. duodenalis and E. bieneusi in forest musk deer in China. Furthermore, to our knowledge, this is the first report of G. duodenalis and E.
bieneusi infections in musk deer, demonstrating that the host range of these parasites is wider than previously reported. The zoonotic genotypes identified in this study showed the transmission potential of G. duodenalis and E. bieneusi from forest musk deer to humans or other animals. Currently, there is no known effective vaccine or drug to treat infection with these parasites. Hence, measures should be taken to prevent humans and animals from being infected by these parasites.
3,012.8
2018-03-26T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Prevention of “dangerous games” on the Internet: the experience of the DimiCuida Institute line of action in digital environments “Dangerous games” (“online challenges”) are part of digital culture, attracting thousands of children and adolescents and causing severe damage to their health. This study aimed to analyze the experience of the DimiCuida Institute (DCI) in Fortaleza, state of Ceará, Brazil, a pioneering initiative to prevent online challenges disseminated in different Web 2.0 environments that can harm young people’s health. The specific objectives of this study were to analyze the emergence of the Institute; identify the main partners involved in its network; assess the digital and analog environments in which it operates; and understand the prevention strategies it develops. This is a case study based on document analysis. Our data, extracted from several digital platforms, were processed with Atlas Ti and subjected to thematic analysis. The DCI emerged from the existential and political resignification of a bereavement experience and is characterized by emphasizing alternatives to online activities, especially focusing on schools and their agents. Since this is still a field under construction and without previous references for lines of action, the prevention carried out in digital environments scarcely explores the language and resources of the Internet, betting on parental control and scarcely considering the experiences of the body and the identity performances involved in the challenges. Introduction "Dangerous games" or, in their emic description, "online challenges" are a very specific type of digital content which has increasingly attracted the attention of young people and adolescents on social networks. Often featured as a form of play or game, these challenges are shown in videos usually hosted on YouTube, although they also appear on other platforms.
Challenges can involve performing unusual tasks, such as dipping one's head in a bag full of charcoal, or repulsive ones, such as eating large doses of cinnamon. Some challenges even consist of illegal acts, such as plundering and ransacking one's own school. Still, dangerous challenges gain the most recognition in the digital space. The literature portrays these games and challenges (which several generations before the advent of the Internet experienced as "extreme sports") as competitions which may involve violence and other practices enabling adolescents and young people to undergo risky experiences that adults would rebuke and to show their courage and irreverence. From the perspective of the sociology of the body, such sentient-bodily experiences are in line with younger people's search for identity construction, body self-knowledge, and emotional development (Le Breton, 2012). These practices show the impossibility of a society in which the institutions and agents dealing with adolescents fully control the risks to which the latter are exposed (Le Breton, 2000, 2010). Recent research (Deslandes et al., 2021; Miranda; Miranda, 2021) found that YouTube broadcasts dozens of challenges which are popular among children and adolescents and whose online enactment is a media strategy aimed at turning young people into celebrities among their peers, albeit at the expense of potential damage to their health. Their main objective is to incite their audiences' participation and adherence via the performance of tasks which can lead to self-harm, injuries to third parties and, in some cases, death, depending on the proposal of the challenge.
Among these, "choking games" stand out for two reasons: first due to their enormously harmful potential since they can kill or cause temporary or permanent brain damage; and second because they cause intense body effects resembling those due to the consumption Choking challenges have widely spread in the digital world and thousands of young followers carry them out (Defenderfer;Austin, Austin, Texas. Davies, 2016;Linkletter;Gordon, Dooley;2010). Still, several professionals and institutions have offered content aimed at preventing such practices, bringing information and examples of life stories which were negatively marked by the damage caused by these challenges. Moreover, this material draws parents' attention to the risks of lacking control over the content their children consume on the Internet. Since the 1990s, parental involvement regarding the Internet has gained prominence with the Convention on the Rights of the Child (UN; Unicef, 1990), which defended the idea that parents should be held accountable for guiding and monitoring children and adolescents' use of the Internet for any purpose until they reach the age of majority, thus highlighting the exercise of parental control by considering it a fundamental practice in mediating children and adolescents' Internet access. Adopting a more conciliatory tone than what "control" would suggest, the literature also uses the category "parental mediation." Grizólio and Scorsolini-Comin's (2020) survey found no consensus regarding mediation models, with a considerable diversity of terminologies: "monitoring," "communication quality," "restrictive mediation," "authoritarian mediation," "authoritarian mediation and laissez-faire style," "active mediation," "supervision," among others. Thus, internet access mediation raises many more doubts than certainties in how to guide children and adolescents. 
The idea that the Internet offers many opportunities for young people's development while representing risks and potential damages unites a set of institutions which defend a safer and more violence-free online reality. In this context, this study aims to undertake an exploratory analysis of the experience of the DimiCuida Institute (DCI), a pioneering initiative to prevent online challenges which can damage young people's health. Our specific objectives sought to evaluate the emergence of the Institute, its main partners, the digital and analog environments in which it operates, and the prevention strategies it develops. We believe that this study can contribute to improving initiatives that use digital spaces to prevent similar phenomena in children and adolescents. Methodology This research was conducted from the case study perspective, enabling the production of inferences about certain social practices in light of our analysis of actors' history and experiences. A single case was selected not only for its intrinsic interest (its performance specificity) but also for its instrumental interest, since its analysis enables inductions and insights which can be used in other similar institutional actions (Stake, 2000). Since it is the institution which produced the most content on this theme, the DCI was chosen for its pioneering line of action to prevent choking games in Brazil. A strategy, consisting of two stages and articulating several methods, was adopted to identify and analyze the connections and materials in the DCI network. First, the DCI digital environments hosted on Facebook, Instagram, and YouTube were mapped to identify its partners. We worked with the "ego-centered network" described in Recuero's (2020, p. 70) synthesis: "that which weaves a network by setting a given node and following connections which are separated by certain degrees of separation."
The degrees of separation between network actors (the distance between the central actor and others) are defined by the research. Thus, we included neither second-level connections (partners of partners) nor interaction reciprocities. By visiting partner institutional websites, new elements in the DCI network were identified, i.e., professionals and organizations supporting the Institute and helping it disseminate its content. Main partners are understood as profiles which like, share, or comment on the material produced on the DCI website. The collaborative software DRAW (for electronic publishing and vector drawing) was used to visually represent partner connections. It is an open-source, multiplatform tool whose main feature is its versatile connectors between figures (available in a range of line styles), which facilitate diagram construction. In the second stage of this research, the strategies the DCI and its direct partners proposed to prevent choking games were collected and qualitatively analyzed. Data were manually extracted from Facebook, Instagram, and YouTube during the first half of 2019. Captures were saved as .jpeg files and stored in folders named according to the platform from which the material was extracted. In addition to these images, other items were inserted, such as videos and texts. This stage was supported by Atlas Ti, software developed to process qualitative data that is widely used by researchers for its functionalities and its file and digital platform compatibility. At first, 301 items were found in various file formats. A substantial number of repeated items was found by screening this material, since the same content was posted on different digital platforms. Thus, those with greater reach, i.e., the most liked, shared, or reposted, were chosen, and identical ones were excluded. Our final sample consisted of 212 texts, images, and videos.
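The ego-centered mapping described above (a focal node plus its connections out to a chosen degree of separation) can be sketched as a depth-limited breadth-first traversal. The adjacency list and organization names below are hypothetical illustrations, not the study's actual network data.

```python
from collections import deque

def ego_network(adjacency, ego, max_depth=2):
    """Return {node: depth} for all nodes within max_depth hops of ego.

    adjacency: dict mapping each node to the nodes it connects to.
    Depth 1 = direct partners; depth 2 = partners of partners.
    """
    depths = {ego: 0}
    queue = deque([ego])
    while queue:
        node = queue.popleft()
        if depths[node] == max_depth:
            continue  # do not expand beyond the chosen degree of separation
        for neighbor in adjacency.get(node, ()):
            if neighbor not in depths:
                depths[neighbor] = depths[node] + 1
                queue.append(neighbor)
    return depths

# Hypothetical example: an ego ("DCI") with two direct partners, one of
# which connects onward to a larger network
graph = {
    "DCI": ["SaferNet", "ChildFund Brasil"],
    "SaferNet": ["DCI", "OrgA", "OrgB"],
    "ChildFund Brasil": ["DCI"],
    "OrgA": ["SaferNet", "OrgC"],  # OrgC sits 3 hops out: excluded at max_depth=2
}
levels = ego_network(graph, "DCI", max_depth=2)
level1 = [n for n, d in levels.items() if d == 1]
level2 = [n for n, d in levels.items() if d == 2]
```

Setting `max_depth=1` reproduces the study's restriction to direct partners only, while `max_depth=2` adds the "partners of partners" level shown in its diagram.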
This material was independently transcribed and coded by two researchers and classified into two subject groups with four categories capable of classifying the contents. Each classified material could be inserted into more than one subject group (Chart 1; its columns list the subject groups, the categories and subcategories, and the number of files). Passages which could contain institutions' or professionals' personal information were redacted. The thematic analysis proposed by Mieles Barrera, Tonon, and Alvarado Salgado (2012) was applied to our qualitative data, acknowledging that the reported experiences manifest structures to which subjects attribute relevant meanings and can be explained by interpreting them via orderings and typologies. The analysis process was guided by three methodological postulates: logical consistency, evinced in the theoretical-methodological clarity of this study and the acuity of its categories; subjective interpretation, defending the focus of analysis as the sense actors make of actions; and adequacy, seeking a coherence horizon between the typifications of research and the meanings attributed by its subjects. Data were successively read so we could familiarize ourselves with the text, followed by theme identification, content codification, groupings, descriptive reports, and, finally, interpretative inferences. After analysis, our interpretations were submitted to DCI managers and professionals, who validated the produced inferences. Results We organized our results as a sequence which begins with the DCI history and ethos, describes its partnership networks, analyzes its main action strategies, and highlights the initiatives of some of its partners. The DimiCuida Institute, a specialized, professional, testimony-based line of action: mourning as a context for resignification The DCI was created in 2014 in Fortaleza, Ceará, Brazil, after a 16-year-old lost his life practicing the fainting game seen on the Internet.
As his parents grieved, they met other families who had suffered the same loss, setting ties with two international institutions -the American Erik's Cause and the French Association des Parents d'Enfants Accidentés par Strangulation (Association of Parents of Child Victims of Strangulation -Apeas) -founded by parents whose children also died in this way. Since then, these institutions have dedicated themselves to raising awareness on the subject and undertaking prevention actions. Later on, the DCI professionalized itself and now has a small team with a psychologist, an educator, and a legal advisor. Thus, in a journey which began with pain and mourning, resignified by reconstructing a narrative to preserve the memory of victims, the DCI established itself as the first Brazilian organization dedicated to online dangerous games. Its stated initiatives are: i) research and studies involving trained health, education, and safety professionals; ii) exchange of information and experiences with similar entities in Brazil and the world; iii) prevention aimed at education, health, and public-safety professionals and parents; iv) a qualified prevention methodology for children and adolescents; and v) clarification and support for families who have experienced similar problems. In recent decades, movements linked to victims of violence have constituted one form of mobilization which has gained great visibility and produced a new political subjectivity (Fassin; Rechtman, 2007). Contemporary scientific production on various types of activism and social movements has pointed to an individualization process of collective causes which has spread and multiplied throughout the world. This literature emphasizes that the body has been used as a source of identification of the causes/mobilizations characterizing victim activism (Mahoney, 1994; Janoff-Bulman, 1985). According to From Pain to Power: Crime Victims Take Action (U.S.
Department of Justice, 2007), the directly affected led movements for civil rights, the rights of the elderly, welfare, environmental protection, and AIDS research and treatment in the U.S. The document states that these movements arose from victimization and neglect, persecution, or marginalization. In all these cases, the involvement of victimized individuals legitimized their cause. In The Empire of Trauma: An Inquiry into the Condition of Victimhood, Fassin and Rechtman (2007) propose that the use of the term "trauma" is not a given, but a construction: (…) it (…) is produced through mobilizations of mental health professionals and defenders of victims' rights, and more broadly by a restructuring of the cognitive and moral foundations of our societies that define our relationship to misfortune, memory, and subjectivity. (Fassin; Rechtman, 2009, p. 6-7) Thus, victimization assumes an important role as a producer of subjectivity in contemporary times. Drawing on Fassin and Rechtman's (2007) ideas, Sarti (2011) states that the contemporary construction of persons as victims is seen as a way of conferring social recognition to suffering, circumscribing it and giving it intelligibility. In Brazil, studies corroborate the production of victim activism (Araújo, 2019). People who have lost relatives to police, urban, traffic, self-inflicted, and other forms of violence can transform what would be a private loss into a public cause by fighting for space in public policy agendas, aiming to produce campaigns, restitute rights, and change legislation, among other demands. Institutions such as the DCI act, thus, at the intersection between activism (guided by an ethical-ideological commitment to a cause) and prevention (led by strategies which use professional knowledge).
Protection networks against online "dangerous games" Starting from the affinity and approximation relations the DCI has with other agents on the main digital platforms, we sought to visualize its connections with the different actors in this dangerous-game prevention network. The diagram in Figure 1 contains two connection levels: level 1 corresponds to the direct partners of the DCI and level 2 to its indirect partners (partners of partners). We also found partners (especially public agencies in their states of origin) who act indirectly in their digital networks, developing joint actions. As per the diagram, the DCI has few direct partners and a low connection degree (Recuero, 2020). However, some connect to larger and more complex networks, enabling approximation interactions and thus an expansion of associative bonds, reinforcing Granovetter's (1973) strength of weak ties: bonds with less binding which can connect different groups, expanding relationship, opportunity, and belonging networks. Connections with ChildFund Brasil, SaferNet, and Nethics Digital Education offer the DCI potential access to at least 60 other national and international institutions. In this set, we found agendas which converge with the DCI line of action, among them activism triggered by mourning. Digital operation and the content produced by the DCI and its partners On both Facebook and Instagram, we see the low DCI post volume and reach. Its most disseminated posts shared links to videos made by television stations or other large communication vehicles. Prevention posts hardly reached 15 likes or views and rarely received any comments. We found they showed little digital interaction, i.e., information dissemination predominated rather than the formation of associative bonds. After analyzing the DCI digital environments, we realized that its partners and "followers" failed to establish actual network interactions.
Content posted on the major digital platforms failed to bond strongly with other profiles via sharing, commenting, or likes. The DCI performs better on YouTube than on any other platform, with a good base of subscribers but a modest number of views (Chart 2). Unlike this modest dynamic, SaferNet, a partner, has far-reaching posts receiving more than 100,000 views. However, we should emphasize that SaferNet deals with internet security from a much broader and more diverse point of view, including, in addition to online challenges, topics such as racism, pedophilia, gender bias, and many others. DCI content is designed to detail what online challenges are and how to avoid them. A more expanded awareness strategy presents information in simple and direct language, most often accompanied by images which illustrate the broadcast message. Our mapping showed that most of this content targets parents and especially educators. Posts show a predominance of formal language and scarcely explore the platforms' algorithm-driven features (stories, lives, new applications, etc.). The elaborated themes indicate that the content in this network barely targets challenge consumers, i.e., children and adolescents.

Chart 2 - DCI performance on social networks, 2018/2019

The produced content also greatly concerns itself with young people's Internet abuse. Several posts urge parents to pay attention to their children's excessively long connection to the Internet. These messages reinforce the idea that the Internet is a space of risks which should be accessed and experienced under adult supervision, emphasizing somewhat constantly that, even in moments of fun, such as playing games or browsing social media, users may be in danger, even if unconsciously. Some publications encourage parents to use applications and tools to monitor their children's browsing.²

² Available at: https://pt-br.facebook.com/dimicuida/. Accessed on: June 7, 2022.
Several posts present the advantages of parental control software in avoiding forms of violence and other dangers. The general idea is that managing internet risks is mainly up to parents, who need to invest more in digital literacy and receive support so they can search for, download, and use applications and devices which control access to certain content. When they convey the operational details of such applications, these posts use a didactic language, recognizing that children and adolescents understand more about technology than their parents do. Despite the importance of these strategies, the content in these posts emphasizes that "nothing replaces dialogue," recognizing the limits of a merely technological control initiative, as Figure 2 shows. The main argument corroborating the thesis that the Internet is an unsafe environment for this population consists of personal experiences of loss due to such challenges. The content this partner network produces constantly evokes examples of life trajectories marked by loss (especially those of the DCI, Erik's Cause, and Apeas). Throughout this research, the case which most sensitized the DCI network was that of a seven-year-old girl who died after performing a challenge proposed by YouTubers, which consisted of spraying aerosol deodorant into her mouth. She suffered a cardiac arrest a few minutes after inhaling a large amount of the spray. Her relatives and family friends posted several appeals on social networks, asking parents and guardians to pay attention to the risks that these and other challenges circulating on digital platforms bring, further reinforcing the association between danger and the Internet. A striking feature of the DCI digital line of action is that the published material constantly invites its audience to leave the Internet and seek analog forms of prevention via lectures, courses, and face-to-face workshops.
The DCI promotes several meetings with students, parents, and experts to discuss the topic through conversations and recreational activities. We also found that this prevention network aims, as one of its goals and areas of expertise, to train education professionals in combating the damage such games can bring. The analyzed data quite often refer to schools as an analog counterpoint to the danger the internet can bring to children and adolescents (Figure 3). Much of the dissemination material highlighted the importance of the information specialists conveyed in this space of conviviality. Data also often referred to pedagogues and psychologists in the offered content, courses, and lectures, as well as in the dissemination of these meetings. These professionals' information functions discursively as a source of scientific truth opposing the possible untruths the Internet can bring. The most recurrent item in these posts was the term "lecture and prevention of dangerous games in school," found among the most liked and commented posts from the prevention network.³ In some cases, the network used large media outlets to promote face-to-face events in schools in the state of Ceará, disseminating content more widely. Data analysis showed four types of strategies to encourage children to avoid engaging in these challenges: music, theater, sports, and food. Music therapy is offered in courses and workshops with musicians and therapists and aims to stimulate creative activities which would reduce Internet abuse. Theater is another coping mechanism. These meetings seek to stimulate sociability and relationships among young people; posts on such activities show that work is done to "reconnect" them to their bodies. The DCI also promotes, though less often, workshops and courses on sports and food, aiming to minimize exposure to social networks.

³ Available at: https://pt-br.facebook.com/dimicuida/. Accessed on: June 7, 2022.
In parallel to the content produced for the main Web 2.0 platforms, this online challenge prevention network has especially aimed to produce analog content, such as books, booklets, and printed reports, usually disseminating it in classrooms. An example of this type of material widely disseminated in digital environments was the book Tem perigo no ar (There is danger in the air), published by the DCI and released in several Brazilian cities. According to posts, it targets children aged five to 11 and playfully shows how the brain needs oxygen to function well. This material aims to warn children and adults about the danger of choking games. Thus, we again find that analog content is of fundamental importance for the DCI in preventing the possible harm online challenges can bring. As its main objective is hindering adolescents' access to violent and dangerous content, the DCI shows great concern about the legal resources which such violence can trigger. The network promotes meetings and lectures with jurists and lawyers to discuss legislation on digital practices which parents and educators can use to press for the removal of videos encouraging violence against themselves or third parties from the platforms hosting them. At the time of this study, discussions pointed to efforts and a partnership with YouTube toward that end. Unlike pedagogues and psychologists, who are a constitutive part of this prevention network, lawyers and jurists serve as consultants who advise prevention actions. However, we found no legal action toward formulating policies to control these platforms. The analyzed data showed a great affinity between challenge and suicide prevention networks. Death by challenge - as much as we recognize the collective character of these games and the number of people involved, even if digitally - is, in the vast majority of cases, considered a suicidal act.
Thus, in several face-to-face events and publications, technically trained psychologists and pedagogues addressed suicide prevention.

Discussion

After the COVID-19 pandemic, the Internet became a fundamental tool for managing contemporary daily life. The literature addressing its consumption among young people and adolescents points to tensions between the positive and negative aspects permeating these new relations. On the one hand, a set of authors (Arab; Diaz, 2015; Simons, 2010) emphasizes the opportunities the Internet can bring, such as educational learning and digital literacy, civic participation and commitment, creativity, self-expression, social identity and connections, socialization, entertainment, skill development, and learning motivation. On the other hand, associated negative aspects enter the debate, such as the risks of commercial damage, sexual harassment, violence, attacks on values, affective distancing, and hearing loss. In a context in which tension between opportunity and risk characterizes the internet, the DCI emerges as the center of a network producing content against the damage online challenges can bring. Dangerous games, especially suffocation ones, have existed for a few decades and have always been a cause for concern among parents and educators (Bada; Clayton, 2020). However, with the creation of YouTube in 2005, millions of young people could watch videos which propagated and trivialized this type of behavior (Miranda; Miranda, 2021; Linkletter; Gordon; Dooley, 2010). Thus, it is not by chance that DCI activism emerges from family loss and its resignification (Fassin; Rechtman, 2007; Sarti, 2011). The DCI is a small non-governmental organization (NGO) with budgetary limitations and a limited group of hired professionals, which somewhat restricts the scope of its activities.
However, it distances itself from the voluntary and religious model which predominates in most third-sector organizations (Cazzolato, 2009), basing its actions instead on some of the hegemonic scientific knowledge on contemporary childhood management, psychology, and pedagogy. DCI content shows lower engagement than the online challenges themselves but has a wide variety of actors who share, like, and comment on it. According to Granovetter (1973), so-called weak ties are fundamental to disseminating innovation, since individuals with diverse experiences and backgrounds make up these networks. They are important because they connect us to several other groups, breaking the configuration of "isolated islands" (clusters) and configuring a social network. We observe, thus, that the DCI has a potential space for expansion, given its efforts toward greater connectivity with the partners of its partners, further developing its own network (Recuero, 2020). Because it is a field under construction and without previous references for action, face-to-face spaces still greatly influence the prevention of digital dangerous games. We found that one solution to avoid the dangers of the Internet was developing face-to-face activities which attracted the attention of the population most at risk. Data showed that this form of prevention especially explored schools, trying to get children away from computers and inviting them to face-to-face sociability. The strong link between prevention content and the school community raises a paradox much discussed since the arrival of Web 2.0. This prevention content finds some limits in its form of communication by insisting on an online-versus-offline division in which the offline space refers directly to schools and the security its specialists (teachers, pedagogues, and psychologists) guarantee, while the internet consists of an unsafe space with numerous potential damages. As stated earlier (Deslandes et al.
2021), our everyday lives have definitely incorporated and embodied technology. As Hine (2015, p. 15) summarizes, it is "embedded, embodied, and everyday." Any binary opposition between real/virtual and digital/analog loses its meaning the moment smartphones, tablets, and watches incorporate applications, with digital algorithms making several platforms present on mobile devices. Thus, the idea of offering analog activities in place of the Internet may be ineffective today due to the blurred boundaries between online and offline spaces. It is currently no longer possible to leave the internet, even if it offers risks. We must consider that play culture itself has migrated to "digitality" (Fortuna, 2014), in which young people find an important space for bodily and identity experimentation (Fassin; Rechtman, 2007). Thus, online challenges become experiences marking their digital identities, since they explore one of the main characteristics of digital sociability: overexposure. Under the pretense of being loved, appreciated, and applauded, individuals are subjected to what Sibilia (2008) called the "tyranny of visibility," having to style and cultivate their images along the lines of audiovisual media characters. We found that the sociability the digital world mediates depends on how the "I" presents itself to "others," who in turn make present, in various ways, the discourse this digital "I" builds. This constantly constructed and mediated online world evokes a very particular form of corporality through its digital sociability: the more dangerous and damaging to the body the challenge is, the greater its chance of becoming popular (Deslandes et al., 2020). Another point this form of prevention explores (widely discussed in the online challenge literature) recalls the concept of technopanic, i.e., technology phobia, observed when certain (real or unreal) risks technology can bring are amplified (Bada; Clayton, 2020).
We find another paradox in the proposal of parental control as a measure to prevent the risks of online challenges. Even recognizing that the Internet is a means of contemporary sociability and that parents and children have different digital literacies, in practice, parents' self-taught actions fail to offer a solution to this impasse. Understanding digital literacy as the knowledge necessary to handle technological resources, to read and write in the digital environment, and to participate in the social practices of this culture (Ribeiro; Freitas, 2011), we can claim that children and adolescents are "digital natives," born and socialized in the digital culture. Their parents acquired digital literacy after spending much of their lives in a world of analog devices and logics. Thus, the fact that the controlled understand the environment much better than the controllers calls into question the effectiveness of parental control in the digital universe. The gap between parents' and their children's digital literacy can lead to a false sense of security and of control over children and adolescents' attitudes. We should also emphasize that analog actions are very important to prevent the possible damages online challenges can bring, as long as they complement digital initiatives. This is due to the countless digital resources still to be explored and to the fact that children are exposed to risks in the digital space, which is precisely where prevention resources should also be found. For this complementarity to be effective, digital content must attract its target audience through a temporality and language compatible with their digital experience. We need a digital education which discusses length of exposure, website browsing, and adherence to play, constantly supported by parental dialogue - considering that transgressions will always occur.
Thus, it seems important to bet on a digital literacy which emphasizes individuals' creativity and emancipation (Maidel; Vieira, 2015; Ribeiro; Freitas, 2011) as a way to prevent internet practices from harming their health. We conclude that analyzing the DCI experience enables us to extend our reflection to other institutions dedicated to preventing self-inflicted and other forms of digitally produced or disseminated violence. Most prevention institutions which concentrate their initiatives in institutional spaces (schools, health services, etc.) face the challenge of acting on the Internet and mastering its aesthetics, language, culture of use, and interactional logics. Moreover, they also show unequal digital literacies, including among their own professionals and their interlocutors (parents, educators, and healthcare providers). Thus, they should invest in partnerships with actors who have such skills and can help them define the appropriate actions for digital expression and take advantage of its resources. Finally, we should note that, in addition to information and communication technology professionals, children and adolescents themselves can be extremely skilled partners in this regard.
Optical sensitising of insensitive energetic material for laser ignition

An experimental investigation into optical sensitisation for laser ignition of the insensitive explosive 1,1-diamino-2,2-dinitroethene (FOX-7) has been carried out using a near-infrared diode laser at a wavelength of 808 nm. In this study, carbon black as the optical sensitiser was mixed at 5 wt% with the explosive using two different mixing techniques, tumble mixing and ground mixing. The mixture samples were characterised by microscopy to examine the dispersion of carbon black within the mixtures and to analyse the effects of the mixing techniques on their laser ignitability. Laser ignition maps were developed for both mixing techniques, and varying sample densities were also examined to determine the density effect on laser ignition under various laser parameters of beam width, laser duration, and laser power. The results show that the ground mixing method provides a more uniform dispersion of carbon black in the mixture samples and therefore allows a lower laser ignition threshold than the tumble mixing method.
Introduction

Safety is an increasingly important driver in the development of explosive initiation systems. A laser's ability to directly initiate insensitive secondary explosives allows the removal of sensitive primary explosives and increases resistance to accidental initiation. It is also environmentally friendly, as current primaries include toxic heavy metals. Laser initiation systems are furthermore not susceptible to electromagnetic interference and can provide multiple initiation sources through their use of optical fibres. Laser ignition has been studied worldwide since as early as the 1960s using lasers, e.g., Nd:YAG and ruby lasers, at wavelengths such as 355 nm, 532 nm, 1064 nm, and 694 nm respectively [1-3]. The research showed that bare cyclotrimethylenetrinitramine (RDX) could not be initiated with a Nd:glass laser at 1060 nm; however, lasers at ultraviolet wavelengths were able to initiate the unconfined explosive, indicating the greater ignition efficiency of UV lasers, whose higher photon energy may allow direct breaking of molecular bonds. In his research [4], Paisley also suggested that there are two separate mechanisms for optical ignition at UV and NIR wavelengths, photodissociation and thermal decomposition respectively, and that the addition of graphite (5-10 %) does not decrease the laser initiation power threshold. This was further explored by Östmark, who utilised a tuneable CO2 laser at 900 nm - 1100 nm to initiate RDX [5]. That research showed that the laser absorption in the material followed the Lambert-Beer law, and hence the absorption depth of the irradiated material is controlled mainly by the material's absorption at the given wavelength. Therefore, the optical absorption of a specific material is significant for its laser ignition, as was also shown by the work on Laser Ignition in Guns, Howitzers and Tanks (LIGHT) [6]. More recently, a laser ignition method was experimentally investigated by using energetic nano-aluminium (n-Al) and
polyvinylidene fluoride (PVDF) particles as optical sensitisers, and sustained ignition of ammonium perchlorate (AP)/hydroxyl-terminated polybutadiene (HTPB) composite propellants was achieved at relatively low laser energy levels (less than 5 J/cm2) [7]. Pentaerythritol tetranitrate (PETN) and RDX were studied for their ignition with aluminium nano-powder as an optical sensitiser by a pulsed neodymium laser [8]. Also, using a low-power diode laser, insensitive gun propellants based on RDX and nitrocellulose were investigated for the effects of laser power on their ignition and combustion characteristics [9]. 1,1-Diamino-2,2-dinitroethene (DADNE), commonly referred to as FOX-7, is a modern example of an insensitive secondary explosive. It is less sensitive than RDX yet exceeds its performance, though it requires a greater stimulus to initiate. Thus, it is a widely researched material for insensitive munition applications [10-12]. Optical sensitisation of the material, by mixing it with a laser-absorbing additive to increase its optical absorption, would therefore allow its initiation by the laser to be achieved reliably. Whilst metallic powders and other materials have been utilised as additives, carbon black has proven to be an effective additive and is the most widely studied, especially in secondary explosives and pyrotechnics [13,14]. This paper investigates the direct ignition of FOX-7, optically sensitised with carbon black, utilising a near-infrared (NIR) diode laser at a wavelength of 808 nm. Following our previous research [15-17], it specifically examines the nature of the dispersion of carbon black within its mixture with the explosive and analyses two different mixing techniques. Laser ignition maps have been developed for both mixing techniques, and varying sample densities were examined to determine their effect on laser ignition with various laser parameters of beam width, laser duration and laser power.
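The Lambert-Beer dependence cited in the introduction can be sketched numerically. This is a minimal illustration only; the absorption coefficient used below is a hypothetical placeholder, not a measured property of FOX-7 or its mixtures:

```python
import math

def transmitted_intensity(i0, alpha, z):
    """Beer-Lambert law: I(z) = I0 * exp(-alpha * z).

    i0    -- incident irradiance (arbitrary units)
    alpha -- absorption coefficient of the material (1/cm)
    z     -- depth into the material (cm)
    """
    return i0 * math.exp(-alpha * z)

def absorption_depth(alpha):
    """Depth at which the intensity falls to 1/e of its surface value."""
    return 1.0 / alpha

# Hypothetical example: alpha = 100 cm^-1 gives an absorption depth of
# 0.01 cm (100 um), i.e. the laser energy is deposited near the surface.
alpha = 100.0
print(absorption_depth(alpha))
print(transmitted_intensity(1.0, alpha, 0.01))  # ~0.368 of the incident value
```

The point made by Östmark's result follows directly: a strong absorber like carbon black raises the effective alpha of the mixture, shrinking the absorption depth and concentrating the deposited laser energy in a thin surface layer.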
Sample preparation and characterisation

The FOX-7 used in this project was in its powder form and was examined utilising scanning electron microscopy (SEM), as shown in Fig. 1. The particles comprise flat plates a few microns thick, 20-40 µm wide and up to 80 µm long. This large flat-plate structure would lead to the formation of large voids in loose powder and light pressings, with particle fracture likely under higher pressing loads. FOX-7 displays high optical absorptance, in line with the amino and nitro groups of its molecule, at wavelengths of 2.94-3.12 µm and 6.06-7.40 µm [10,18]. Its lack of absorption in the near-infrared (NIR) range creates the requirement for the addition of an optical sensitiser to absorb the laser energy at NIR wavelengths for laser ignition of FOX-7.

The carbon black used for this study is a black aciniform powder, as can be seen in Fig. 2, where the material agglomerates to form larger clusters of up to 20 µm, with the majority in the 0.1-10 µm size range. Such large particles may cause poor uniformity when mixed with the explosive. Previous work at Cranfield [16,17] shows that the mixing ratio of carbon black required to increase absorptance varies considerably, and an addition of 5 wt% carbon black for mixing with FOX-7 gave good results in laser ignition studies. It was therefore decided to utilise this ratio in this study, although it should be noted that an optimum ratio may exist outside this value. The performance and sensitivity of an explosive mixture are affected by the addition of carbon black, which reduces the explosive volume and can act as a grit, sensitising the explosive to friction.
To mix carbon black with FOX-7, the materials were weighed on a balance and then transferred to a glass vial, and an initial mixing was conducted by carefully rolling and inverting the glass vial, producing an even mix of the carbon black and FOX-7. This sample type is referred to as Tumble Mix. A portion of this mixture was then transferred to a mortar, and water was added to a level just covering the surface of the mixture. Grinding with a pestle was conducted by hand for about 20 min until the mix no longer contained any grains that could be felt with the pestle. The mixture was then dried in an oven under vacuum at 100 °C for two hours to remove the water. This mixture is referred to as Ground Mix.

When mixed with carbon black utilising the tumble mix technique, the morphology of the FOX-7 particles remained the same, as shown in Fig. 3. When the mixture of FOX-7 and carbon black was ground, the morphology was considerably altered, as shown in Fig. 4, where the overall size of the particles has been reduced and they no longer exhibit the flat-plate shape.

For each ignition experiment the mixture sample powder was weighed and then placed into individual cells within an aluminium sample holder; this holder allowed ten samples to be prepared at a time. The powder was then pressed to a consistent volume using a Perspex hand press; this ensured a uniform surface and density between samples. The sample holder and the press are shown in Fig. 5. Each cell has a 3 mm diameter and 2 mm depth, and the press (the protruding part) has a 1.4 mm height and a diameter of around 2.99 mm. Each pellet has a diameter of around 3 mm and a height of 0.6 mm. Examples of the pressed samples of Ground Mix and Tumble Mix are shown in Fig. 6 and Fig.
7, respectively, for their optical microscopic images. The ground mixture appeared a homogeneous dark olive colour. The tumble mixture had the appearance of fine sand of black and yellow particles, with agglomerations of both carbon black and FOX-7 leading to large variance in appearance across the sample. Sample density was calculated from the mass-to-volume ratio. For the investigation of density effects, a set of sample densities was obtained by pressing various masses into a fixed volume within the cells of the sample holder, as listed in Table 1, where the sample pellets have the same dimensions of 3 mm diameter and 0.6 mm thickness.

Experimental set-up

2.2.1. Ignition testing

As shown in Fig. 8, ignition testing was carried out with the laser focussed onto the sample surface by a convex lens of 50 mm diameter and 50 mm focal length. Two photodiodes were used to detect both the laser pulse and the flame ignited from the sample. A successful ignition event occurs when the laser induces a flame (or visible burning) on the explosive. A laser filter centred at 800 nm was used to filter out the laser and enable detection of the flame. A diode laser at the wavelength of 808 nm (JOLD 45 CPXF 1L, Jenoptik) was used for all experimentation. This laser delivers a diverging beam through a 0.4 mm fibre core at a power of up to 45 W in continuous wave (CW) output. To control the power and timing of the laser, a laser diode controller (LDC 1000, Laser Electronics) was used, which also provided a trigger signal to a digital oscilloscope (Agilent Technologies DSO1024A). The diode laser system has a built-in pilot laser providing a visible red beam in line with the 808 nm laser output to assist with alignment. To vary the laser beam size on the sample surface, the sample holder was placed on a translation stage and moved away from the focal plane of the lens to obtain increased laser beam sizes.
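The mass-to-volume density calculation described above can be sketched from the pellet dimensions given in the text (3 mm diameter, 0.6 mm thickness); the example mass below is back-calculated to illustrate the 0.64 g/cm3 density used later in the ignition tests, not a value quoted in the paper:

```python
import math

def pellet_density(mass_g, diameter_mm=3.0, height_mm=0.6):
    """Density (g/cm^3) of a cylindrical pellet from its mass and dimensions."""
    radius_cm = (diameter_mm / 10.0) / 2.0
    height_cm = height_mm / 10.0
    volume_cm3 = math.pi * radius_cm**2 * height_cm  # ~4.24e-3 cm^3 per cell
    return mass_g / volume_cm3

# A pellet mass of about 2.7 mg pressed into the fixed 3 mm x 0.6 mm volume
# corresponds to roughly 0.64 g/cm^3.
print(round(pellet_density(0.0027), 2))
```

Pressing different masses into the same fixed cell volume, as Table 1 does, changes only the numerator of this ratio, which is why a set of target densities can be produced with a single press geometry.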
Ignition map development

The ignition map was developed by measuring ignition delay time versus laser power. As shown in Fig. 9 for the oscilloscope traces of the igniting laser and the ignited flame, the ignition delay is the time between the commencement of the laser pulse and that of the flame, and the full burn delay is the time from the commencement of the flame to 90 % of its signal maximum. The commencement of the ignited flame is measured at the time point after which the flame signal rises. The onset of the laser pulse was clearly defined in all tests, and this is used as the base reference for the ignition delay and full burn delay times. These two values allow the creation of ignition maps by varying the power and comparing the delays. These measurements were carried out in ignition tests under various laser powers, and the ignition threshold was determined when successful ignitions were achieved in over 5 out of 10 repeated tests. Once the threshold had been determined, experimentation was conducted above the threshold. Unlike the Bruceton method [19], the repeated ignition test method may directly indicate the ignition reproducibility of the samples produced with the tumble and ground mixing techniques. Data in the ignition maps are represented by mean values of the successful repeated tests at each laser power, and the error is calculated by equation (1) as the experimental standard deviation σ divided by the average value Avg.

Absorption testing

To determine the laser absorption and diffuse reflectance of the two mixture samples prepared using the different techniques, the experimental setup was used as shown in Fig.
10. Additional optical shielding (not shown, to simplify the diagram) was erected to reduce light reflection and scattering from the surrounding background affecting the result. Sample material was weighed and placed inside a washer on a glass slide to ensure an equal density and sample depth between samples. Measurements were taken with barium sulphate as a reference, since it has a high reflectance and is assumed to have negligible absorbance.

Using equation (2) it was possible to calculate the total intensity Io from the results of the barium sulphate reflected and transmitted signals, and using this value a relative value for the absorption of the explosive samples was obtained. This provides a coarse relative level of diffuse reflectance and absorption for the materials tested; more accurate analysis should utilise integrating sphere equipment.

Io: incident irradiance. It: transmitted irradiance. Ir: reflected irradiance. Ia: absorbed irradiance.

Microscopy

SEM was used to assess the structure of the FOX-7 particles and the distribution of carbon black with the two mixing techniques. In the tumble mixture (see Fig.
3) the FOX-7 particles exhibited large flat plates. These plates were on the order of a few microns thick and approximately 20-50 µm across the other axes. Whilst the bulk densities were not directly measured, it was noted when preparing samples of set densities that the tumble mix had a significantly larger volume at each mass. It is likely that this plate-like structure was the primary cause of the lower bulk density of the tumble mix, as packing of the large flat structures would lead to the formation of large voids within the material. It is recommended that future work investigating laser interaction with FOX-7 utilise a commercial grade of FOX-7 with its more spherical particle morphology. This is likely to lead to an improved density, and the rheological improvements of this morphology compared with large flat plates may improve mixing between the explosive and the absorbing particles. Also visible in the tumble mixture are carbon black particles, which appear a lighter colour due to their rough surface; a number of them can be seen adhered to the FOX-7 particles, whose rough surface appears to hold the carbon black particles. This fact could be utilised to design a FOX-7 particle shape that increases the trapping of carbon black powders in simple binary mixes; however, it is more likely that including the absorbing particles in a rubbery coating would improve the dispersion and also improve the sensitivity.

In the ground mixture (see Fig.
4), FOX-7 particles no longer exhibit a plate-like structure and are significantly smaller, with most particles being less than 20 µm across any axis. Additionally, there is an increased range of sizes visible in the samples, with a high proportion of smaller particles. This size distribution explains in part the higher bulk density of the ground mix. The particle shapes also vary significantly, with some rounded and some rhombic sharp-edged particles evident within the samples. The appearance of some particles suggests that partial recrystallisation may have occurred alongside mechanical break-up of the material under grinding. Water was chosen to wet the mixture for safety and is noted as a poor solvent for FOX-7; some recrystallisation may have occurred under the pressure induced by grinding.

Optical microscopy was used for identification of FOX-7 and carbon black through their colour and allowed samples to be prepared with similar physical properties. The images of both ground and tumble mixtures in Fig. 6 and Fig. 7 above enabled a larger area to be observed for the analysis of dispersion. Utilising ImageJ, a Java-based open-source image-processing software developed by the National Institutes of Health [20], particle analysis was conducted to determine the difference in the distributions of carbon particles between the two mixing techniques. An image such as the one in Fig. 6 was first processed to isolate the carbon particles, as shown in Fig. 11, and then the particle analysis module within the software was run, producing images such as Fig.
12 and data on the carbon particles. This allowed the number and size of the carbon particles to be summed and averaged. Ten sample pictures were taken for each mixing technique, allowing a broad comparison to be made. It was established that the quantitative results obtained from the standard software suite were sufficient to make an assessment of the carbon particles between the two mixing techniques.

As summarised in Table 2, ImageJ particle analysis shows that tumble mixing results in significantly fewer particles on average and that these particles are larger and cover less surface area of the sample. Additionally, the tumble mix results had a larger variation in both the number of particles and the area they covered; it was anticipated that this would result in greater variation in absorption and ignition testing. The fewer particles and smaller area of carbon black within the tumble mix would leave greater areas without carbon black.

Optical absorption

Barium sulphate powder was used as a reference sample for calibration of the total intensity, and it was assumed that this material had an absorbance of zero at the wavelength used. Equation (2) was used to establish I_0 from a summation of the outputs of both sensors in Fig. 10 of the above section; this was then used to reference the results of the FOX-7 and carbon black mixtures. The results from testing the tumble and ground mixes are shown in Fig. 13. Neither of these mixtures allowed the transmission of light; thus only barium sulphate has values recorded for I_t. FOX-7 was also tested and found to absorb slightly at the subject wavelength.
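Equation (2) itself is not reproduced in this excerpt, but the referencing procedure it implies can be sketched: the barium sulphate reference (absorbance ≈ 0) fixes the incident intensity I_0 as the sum of the reflected and transmitted sensor outputs, and each sample's absorbed fraction then follows by comparison. The function names and numbers below are a hypothetical illustration of that bookkeeping, not the authors' code.

```python
def incident_intensity(i_reflected_ref, i_transmitted_ref):
    """I_0 from the BaSO4 reference: assumed to absorb nothing,
    so its two sensor outputs account for all incident light."""
    return i_reflected_ref + i_transmitted_ref

def absorbed_fraction(i_reflected, i_transmitted, i0):
    """Fraction of incident light absorbed by a sample,
    referenced to the calibration intensity i0."""
    return 1.0 - (i_reflected + i_transmitted) / i0

# Consistency check against the figures quoted in the text: the ground
# mix reflects 56.9 % of I_oRef and transmits nothing, so it absorbs 43.1 %.
i0 = incident_intensity(95.0, 5.0)   # arbitrary reference split summing to 100
ground_absorbed = absorbed_fraction(56.9, 0.0, i0)
```

The same call with the tumble-mix reflection figure reproduces its quoted 24.3 % absorption.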
As predicted from the ImageJ analysis, there was a significant difference in the results from the two mixing techniques, as can be seen in Table 3, which includes the results for pure FOX-7 without any additive. The results also show that there is greater variation in the results from the tumble mix than from either the ground mix or the barium sulphate; this is to be expected from the results of the optical analysis. The tumble mix showed an error of 5 % in the sample I_r, the ground mix 2 % and the barium sulphate 0.7 %. The ground mix reflected 56.9 % of I_oRef, indicating that 43.1 % of I_oRef was being absorbed by the material; this is to be expected as there is a greater area of carbon within the samples tested. The ground mix had 133 % greater carbon area compared with the tumble mix, but the absorption comparison is 177 %, which suggests that the altered grain shape could also affect the absorption of the material.

The experimental method used for this analysis utilised a modified set-up of the ignition testing rig and as such is limited to a simple comparison between the materials. Future work should consider the use of an integrating sphere to obtain more accurate results. It is also recommended that alternative mixing techniques are investigated, such as Resonant Acoustic Mixing (RAM); this could improve mixing between the two materials without the physical impact on particle shape. Further research should also establish how the porosity and the percentage of carbon black affect the optical absorption.

Ignition delay

Initial investigations were conducted utilising a 1.0 mm beam width and a density of 0.64 g·cm⁻³, using a 'Go/No Go' assessment to identify the threshold power, which was found to be 17 W for the ground mix. A range of laser powers was then applied at this fixed beam width to determine a characteristic ignition map for the ground mix and the tumble mix. Fig.
14 shows the ignition delay from the threshold power of 17 W up to a maximum of 40 W. As expected, the ignition delay reduces as power is increased. Additionally, as power was increased, a general trend of reduced error in the results was also seen. The main values for this ignition map are listed in Table 4.

The characteristic ignition map for the tumble mix is shown in Fig. 15. The tumble mix was seen to be far less consistent in ignition tests, with more failures throughout the power range used. The threshold value for the tumble mix was found to be 20 W, higher than that of the ground mix. Of note is that the ignition delay of successful tests at 20 W approaches the value at the highest power of 40 W, showing that the tumble mix does not follow the expected trend. Additionally, the largest delays were seen at the higher power of 35 W. This clearly shows that the uniformity of the mixture is an important factor for the relatively small laser beam used in laser ignition. The main values for this ignition map are listed in Table 5.

Overall, the tumble mix produced larger errors compared with the ground mix. Further analysis was conducted at 40 W, and it was seen that the range and deviation were significantly larger for the tumble mix (Fig. 16) than for the ground mix (Fig. 17). These variations in ignition delay also indicate that the ignition energy (the product of laser power and pulse duration) is more consistent for ground mix samples than for tumble mix samples. Less variation in the dispersion of carbon black within the ground mix is attributed as the primary reason for this improved performance. However, ignition testing of pure FOX-7, without carbon black, found that even at the maximum laser power of 45 W no reaction or ignition was observed, despite its measurable optical absorption shown in Table 3. Its laser ignition threshold is therefore higher than 45 W; the addition of carbon black lowered the threshold significantly.
Effect of beam size

Testing of the effect of laser beam size was conducted across a number of beam widths at a density of 0.64 g·cm⁻³ and a power of 25 W, in order to determine whether the improved dispersion of the ground mix had any effect. As shown in Fig. 18, the average ignition delays of both ground and tumble mixes increase linearly with beam width. The full burn delay follows the same trend, as shown in Fig. 19. There is no significant difference between the average values of the two mixtures, other than at 1.5 mm beam width, which was not expected following the results of the ignition delay testing. The laser power density I is related to the power P and beam width d as I = P/(¼πd²) = 4P/(πd²), which indicates the effect of laser power density on ignition delay. The laser power densities for the beam widths used are listed in Table 6.

The error in the ignition delay was not seen to increase with decreasing beam width, nor to differ significantly between the ground and tumble mixes, as can be seen in Fig. 20. Analysis of these results in conjunction with the ignition delay results above suggests that the laser power of 25 W used for this testing is too close to the threshold power for these materials, and therefore the errors could be associated with approaching this power level rather than with changes in the beam width.

Pulse duration

Further experiments were conducted using samples with densities of 0.71 and 0.94 g·cm⁻³ to investigate the effect of the laser pulse length on the full burn start and end times. The results are shown in Fig. 21 and Fig.
22, where the times of burn start, burn end and laser pulse length are marked for each test. At 0.71 g·cm⁻³ it was evident that the flame output closely followed the length of the laser pulse, or did not exceed the length of the laser input by a significant margin. It was also observed that the reaction did not progress outside the cylinder of material illuminated by the laser, and that at shorter pulse lengths the consumed material did not reach the lower surface of the sample holder.

The results for both densities show that the ignited burning of FOX-7 was not self-sustaining: the flame output lasted only a short period after cessation of the laser, or ended before the end of the laser pulse. Sample material remained after burning ended, which indicates that the sample was not fully burnt and the combustion was not sustainable. It is suggested that heat is drawn away from the reaction by the significant mass and relatively low temperature of the sample holder, and that heat transport also depends strongly on the internal porosity of the samples. There is a significant difference in the outputs from the two densities tested, which could be attributed to loose powder versus compacted material. The shorter pulse lengths for the 0.71 g·cm⁻³ sample again support the view that the reaction is not sustainable, as the reaction proceeded to a high output level before the laser pulse ended, yet the flame output ended in line with the laser. At 0.94 g·cm⁻³ this is not as evident; however, the 20 ms pulse shows that the reaction at this higher density does not progress to the full burn condition of a 40 ms burn.

Table 6. Laser power densities for the beam widths used at a power of 25 W.
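The beam-width relation quoted in the beam-size section, I = P/(¼πd²) = 4P/(πd²), is easy to tabulate. The sketch below recomputes power densities for a few plausible beam widths at the 25 W used in those tests; the exact widths behind Table 6 are not listed in this excerpt, so the values here are illustrative only.

```python
import math

def power_density(power_w, beam_width_mm):
    """Laser power density I = 4P/(pi*d^2), in W/mm^2 for beam width d in mm."""
    return 4.0 * power_w / (math.pi * beam_width_mm ** 2)

# Illustrative beam widths (mm) at 25 W; halving the width quadruples I.
densities = {d: power_density(25.0, d) for d in (0.5, 1.0, 1.5, 2.0)}
```

At 1.0 mm this gives 100/π ≈ 31.8 W/mm², illustrating why ignition delay tracks power density rather than power alone.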
Conclusions

Ignition was achieved with FOX-7 and carbon black ground mixtures at a lower laser power threshold of 17 W, with an average ignition delay of 13.33 ms at this power level. Across most of the experiments conducted, this ignition was followed closely by a sharp rise to a high-level flame output. As laser power is increased, the ignition delay and full burn delay decrease in a predictable manner, with the ignition delay reducing to 4.88 ms at the highest laser power of 40 W. However, tumble-mixed samples of FOX-7 and carbon black had a significantly higher power threshold of 20 W, with the limited results not displaying a uniform relationship between power and ignition delay. This was in part due to the inconsistent nature of the mixture.

Dispersion of the carbon black particles was assessed qualitatively and found to be more consistent in ground mix samples; large areas of low carbon black particle density were common within tumble-mixed samples. This was also supported by quantitative analysis of the optical microscopy, which showed that ground-mixed samples had a greater number of carbon black particles covering a greater area of the sample surface in comparison to the tumble mix. Predictably, this led to a significant difference in the absorption of laser light between the two FOX-7 and carbon black mixtures, and additionally both mixtures showed an improvement over pure FOX-7. The ground mix was shown to absorb 43.1 % of incident laser power, in comparison to 24.3 % for the tumble mix. The ground mix also displayed less variation in the absorption testing results.
Testing across various beam widths showed that the ignition delay increased with a reduction in power density, highlighting that, whilst there is a threshold power, there is also a threshold energy for laser ignition. Experiments conducted across various densities showed two distinct behaviours between low-density loose powders and compressed higher densities. Thermal conductivity improves as the powder density increases, and this was seen to increase the rate at which FOX-7/carbon black burns. A density of 0.94 g·cm⁻³ to 1.17 g·cm⁻³ produced the quickest burn.

Sustained burning was not witnessed in the tests conducted, with significant volumes of the sample material remaining after ignition. The volume of sample illuminated by the laser was the only material to react. Thermal conduction to the sample holder is suggested as the primary cause; however, the effect of pressure was not examined, as all experiments were conducted unconfined. Reducing the laser pulse length at various densities showed that the burn length, whilst exceeding the shorter pulses to some extent, followed the reduction in laser pulse length.

Overall, FOX-7 with 5 % carbon black can be ignited with a diode laser at 808 nm with a minimum power of 17 W. The ground mixing method gives a more uniform dispersion of carbon black in the mixture and therefore allows a lower laser ignition threshold than the tumble mixing method. However, under the conditions tested, the ignition reaction was not sustainable.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 9. Oscilloscope traces of the light signals from the laser and the ignited flame, defining the ignition delay time and the full burn delay time.
Fig. 17. Ground mix ignition delay at 40 W and 1.0 mm beam width.
Table 1. Sample densities examined.
Table 2. ImageJ particle analysis summary.
Table 3. Absorption testing results (% I_oRef).
Table 4. Ground mix ignition delay parameters (1.0 mm beam width), based on 10 shot tests.
Table 5. Tumble mix ignition delay parameters (1.0 mm beam width), based on 10 shot tests.
Recursive Asymptotic Hybrid Matrix Method for Acoustic Waves in Multilayered Piezoelectric Media

This paper presents the recursive asymptotic hybrid matrix method for acoustic waves in multilayered piezoelectric media. The hybrid matrix method preserves numerical stability and accuracy across large and small thicknesses. For discussion and comparison, the scattering matrix method is also presented in physics-based form and coherent form. The latter form resembles closely that of the hybrid matrix method and helps to highlight their relationship and distinction. For both the scattering and hybrid matrix methods, their formulations in terms of the eigenwaves solution are provided concisely. Making use of the hybrid matrix, the recursive asymptotic method without eigenwaves solution is described and discussed. The method bypasses the intricacies of the eigenvalue-eigenvector approach and requires only elementary matrix operations along with a thin-layer asymptotic approximation. It can be used to determine the Green's function matrix readily and facilitates the trade-off between computation efficiency and accuracy.
Introduction

For many years there has been considerable interest in the study of acoustic wave propagation in multilayered piezoelectric media. Many techniques have been developed for the analysis of such media, including the transfer matrix method [1], the impedance/stiffness matrix method [2][3][4], the scattering/reflection matrix method [5][6][7] and the hybrid matrix method [8]. A comprehensive review of these methods has been provided in [9], along with their variants, numerical stability, computational efficiency, usefulness and deficiencies. Since the transfer matrix method becomes unstable toward large thicknesses, while the impedance matrix method is inaccurate toward small thicknesses, they are not discussed further below. On the other hand, owing to their numerical stability and accuracy, both the scattering and hybrid matrix methods deserve to be exploited further, as mentioned or demonstrated in some recent works [10][11][12]. In particular, the scattering matrix methods have so far been presented more in physics-based form (in terms of reflections and transmissions), which motivated the unified matrix formalism in [10]. However, even with the unified formalism therein, the relationship and distinction with respect to the hybrid matrix method remain unclear. Moreover, most matrix methods thus far rely on the eigenwaves solution in their basic building blocks. Since an eigensolver often takes substantial computation, it is useful to consider methods without the need for the eigenwaves solution [9].
In this paper, we present the recursive asymptotic hybrid matrix method for acoustic waves in multilayered piezoelectric media. The method is extended from the non-piezoelectric case [8] and exploits the hybrid matrix, which preserves numerical stability and accuracy across large and small thicknesses (unlike the stiffness matrix [13], which may become inaccurate). For discussion and comparison, we also present the scattering matrix method in physics-based form and coherent form. The latter form resembles closely that of the hybrid matrix method and helps to highlight their relationship and distinction. For both the scattering and hybrid matrix methods, their formulations in terms of the eigenwaves solution are provided concisely. Making use of the hybrid matrix, the recursive asymptotic method without eigenwaves solution is described and discussed. The method bypasses the intricacies of the eigenvalue-eigenvector approach and requires only elementary matrix operations along with a thin-layer asymptotic approximation. It can be used to determine the Green's function matrix readily and facilitates the trade-off between computation efficiency and accuracy.

Acoustic Waves in Multilayered Piezoelectric Media

Problem Formulation

Figure 1 shows a planar multilayered structure comprising N piezoelectric layers stratified along the ẑ direction (within optional external layers 0 and N+1). Each layer f has thickness h_f, with upper and lower interfaces/boundaries denoted by Z_f^+ and Z_f^-, respectively. Let the fields in each layer f be described by the field vector f_f. Assuming a plane harmonic wave with exp(jωt) time dependence and transverse wavenumber k_t = ω s_t, the field vector f_f satisfies a first-order differential equation in z, in which the layer system matrix A_f consists of the material parameters of layer f, specified in terms of the mass density ρ_f and the various stiffness constants, piezoelectric stress constants and permittivities (see [1]).
Solution with Eigenwaves

Equation (3) can be written as an eigenvalue problem whose solutions represent the eigenwaves within each layer f. For convenience, the normal wavenumbers k_zf and their associated eigenvectors can be grouped into matrices, cf. (6)-(7). The superscripts '>' and '<' stand for the "upward-bounded" and "downward-bounded" partitions, which correspond to upward-bounded and downward-bounded eigenwaves respectively (cf. the boundedness/radiation condition). In line with the field vector (1), the eigenwave matrix partitions into stress and velocity parts, and each of these may be further partitioned in accordance with their compositions in (2), cf. (7)-(8). (Note that our convention in (7)-(8) is that a notation without superscript '>' or '<' represents fields, while the same notation with such a superscript represents waves of the upward-bounded or downward-bounded type.) Using the matrices above, the field vector solution can be expressed in terms of the coefficient vector c_f (to be determined) and the wave amplitude vector w_f(z), which lumps the exponential terms together. Following the upward-bounded and downward-bounded associations above, these vectors can be partitioned accordingly; w_f(z) and its partitions are functions of z, while c_f and its partitions are not. Furthermore, the field vector f_f is continuous across the interface of two different layers, whereas the wave amplitude vector is not. Thus, it is important to specify exactly within which of the two adjacent layers the z location of an interface lies.

Scattering Matrix Method

Using the eigenwaves in each layer f, one can proceed to determine the solution for a stack of multilayered media.
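The eigenwave partitioning above can be illustrated generically. For a state equation of the form dF/dz = M F, the eigenvalues of M give the normal propagation constants and the eigenvectors the mode shapes; assuming at least slight loss, the upward-bounded and downward-bounded partitions can be separated by the sign of the real part of each eigenvalue (the decay direction). This is a hedged sketch of the sorting step only, not the paper's exact formulation in terms of the wavenumbers k_zf.

```python
import numpy as np

def split_eigenwaves(M):
    """Eigen-decompose M in dF/dz = M*F.  Modes vary as exp(lam*z):
      Re(lam) < 0  -> decays as z -> +inf ("upward-bounded"),
      Re(lam) > 0  -> decays as z -> -inf ("downward-bounded").
    Returns (lam_up, V_up, lam_dn, V_dn)."""
    lam, V = np.linalg.eig(M)
    up = lam.real < 0                 # boundedness/radiation condition with slight loss
    return lam[up], V[:, up], lam[~up], V[:, ~up]
```

Degenerate or purely propagating (lossless) modes need the more careful treatment the paper alludes to; with slight loss, this sign test suffices.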
To that end, we first define the local interface scattering matrix that describes the physics of wave scattering (reflection/transmission) at the interface of layers f and f+1. Its blocks r_{f,f+1} and t_{f,f+1} denote the local reflection and transmission matrices for waves incident from layer f onto layer f+1, while r_{f+1,f} and t_{f+1,f} denote those for incidence from layer f+1 onto layer f. These matrices can be derived directly in terms of the eigenwaves of both layers, cf. (12). Based on the local interface scattering matrix, one can determine the scattering matrix for additional layers of a stack (one at a time) using a recursive algorithm. In particular, consider the downward-bounded waves incident from layer f+1 toward layer 0. The stack reflection and transmission matrices r_{f+1,0} and t_{f+1,0} can be obtained from the local interface scattering matrix and the preceding r_{f,0} and t_{f,0} using the recursive algorithm (cf. (23) and (25) of [9]). Likewise, the stack reflection and transmission matrices r_{0,f+1} and t_{0,f+1} for incidence of upward-bounded waves from layer 0 toward layer f+1 can be determined via a similar recursive algorithm. The form of (13)-(16) facilitates the physics-based description of multiple wave reflections in the stack of multilayered media.

As an alternative, it is instructive to define a matrix relating the wave amplitude vectors directly. Such a matrix has been denoted a layer-interface scatterer [9], since it combines the layer scatterers with the interface scattering matrix, cf. (18). To be in coherent form, the stack scattering matrix S^[1:f] is also defined in place of r, t; it embeds (within layers 0 and f+1) the stack from layer 1 to f (denoted by the superscript [1:f]). In essence, these relations represent a full-matrix variant of algorithm A3 in Table 1 of [9].
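The stack recursions (13)-(16) have the structure of the Redheffer star product, which composes two scattering blocks while resumming all multiple reflections between them. The sketch below uses a generic (r, t, rp, tp) convention (left reflection, left-to-right transmission, right reflection, right-to-left transmission) rather than the paper's exact notation and normalisation, so treat it as illustrative.

```python
import numpy as np

def star(Sa, Sb):
    """Combine scatterers A (left) and B (right).  Each S = (r, t, rp, tp):
    r = left reflection, t = left-to-right transmission,
    rp = right reflection, tp = right-to-left transmission."""
    ra, ta, rpa, tpa = Sa
    rb, tb, rpb, tpb = Sb
    I = np.eye(ra.shape[0])
    m = np.linalg.inv(I - rpa @ rb)    # resums A<->B bounces for left incidence
    mp = np.linalg.inv(I - rb @ rpa)   # resums A<->B bounces for right incidence
    r = ra + tpa @ rb @ m @ ta
    t = tb @ m @ ta
    rp = rpb + tb @ rpa @ mp @ tpb
    tp = tpa @ mp @ tpb
    return r, t, rp, tp
```

Applying `star` layer by layer reproduces the "one at a time" stack build-up; combining with the identity scatterer (r = 0, t = 1) leaves the other block unchanged, a useful sanity check.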
Hybrid Matrix Method

The scattering matrix method of the previous section involves relations among the wave amplitude vectors w_f^> and w_f^<. As mentioned earlier, since these vectors are not continuous across interfaces, it is more convenient to work directly with field variables instead. In this respect, a variety of definitions and algorithms are possible, including the transfer and impedance matrix methods. These methods are not unconditionally stable, since they may suffer from numerical instability or inaccuracy for very large or very small layer thicknesses. Such problems can be overcome altogether by resorting to the layer hybrid matrix H_f, so called because it contains a mixture of impedance, admittance and transfer elements. Using the eigenwaves in each layer, the layer hybrid matrix can be determined in the block form (26), with blocks H_11, H_12, H_21, H_22. It can be shown analytically that H_f remains numerically stable even when the layer thickness tends to infinity or to zero. Indeed, assuming at least slight loss as in practice, the hybrid matrix stays bounded as the layer thickness tends to infinity; when the layer thickness tends to zero, the hybrid matrix in (26) reduces to a simple limiting form. Therefore, the hybrid matrix preserves numerical stability and accuracy across both large and small thicknesses.

For multilayered media, the stack hybrid matrix H^[1:f] for a stack from layer 1 to f (denoted by the superscript [1:f]) can be obtained by incorporating the layer matrix H_f into a recursive algorithm, cf. (30)-(33). Notice that the form of (30)-(33) resembles closely that of (21)-(24), which helps to highlight their relationship and distinction. In particular, the scattering matrix and hybrid matrix do not differ much in their recursive algorithms for a stack of multilayered media. However, besides relating different entities (waves w_f vs.
fields f_f), their basic matrices are distinct: S_f involves the eigenwaves of two layers in (12) and (18), while H_f involves the eigenwaves of an individual layer only, in (26).

Solution without Eigenwaves: Recursive Asymptotic Method

Thus far, both the scattering and hybrid matrix methods rely on the eigenwaves (of two layers or one) as input (for S_f and H_f) in each recursion to arrive at S^[1:f] and H^[1:f]. Such an eigenwaves solution involves various intricacies of solving for the eigenvalues and eigenvectors, including complex root searching, degeneracy treatment, and upward/downward eigenvector sorting or selection. To obviate the need for eigenwaves, we resort to the recursive asymptotic hybrid matrix method. The method bypasses the intricacies of the eigenvalue-eigenvector approach and requires only elementary matrix operations along with a thin-layer asymptotic approximation, as described below. For each individual layer f, we geometrically subdivide the layer into n+1 sublayers, as shown in Figure 2. For the thinnest sublayer n+1, its hybrid matrix is obtained directly by the thin-layer asymptotic approximation (34). Starting with this matrix, we implement self-recursions, cf. (35)-(38). This recursive algorithm proceeds until i = 1, at which point the layer hybrid matrix is found as H_f = H^(1). Throughout the procedure, there is no need to solve any eigenproblem, and the hybrid matrix can be computed stably and accurately even for very thick or very thin layers.
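The geometric-subdivision idea can be illustrated with the simpler transfer-matrix analogue. For a constant system matrix, the exact layer propagator is exp(A·h); seeding with a first-order thin-layer approximation I + A·d on the thinnest sublayer of thickness d = h/2^n and then squaring n times recovers it to high accuracy, with no eigen-solution at any point. The paper's actual recursion (34)-(38) operates on hybrid matrices with its own combination rule, so the code below is only a hedged sketch of the seed-and-recurse principle.

```python
import numpy as np

def recursive_asymptotic_propagator(A, h, n):
    """Approximate exp(A*h) by thin-layer seeding plus n self-recursions.
    Seed: first-order approximation on the thinnest sublayer d = h / 2**n.
    Recursion: combining two equal sublayers squares the propagator."""
    d = h / 2 ** n
    M = np.eye(A.shape[0]) + A * d     # thin-layer asymptotic approximation
    for _ in range(n):
        M = M @ M                      # self-recursion: doubles the thickness
    return M
```

Raising n shrinks the truncation error of the seed, mirroring the trade-off between computation effort and accuracy discussed for Figure 3.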
Discussion and Numerical Results

The previous sections have discussed several algorithms for the scattering and hybrid matrix methods. For concise comparison, Table 1 lists each algorithm and the pertinent equations involved in each major step, represented by an arrow. Each major step involves at least one (dense) matrix inversion, which often constitutes the most time-consuming operation. For the scattering matrix method, we list the algorithms in physics-based form as well as coherent form; the latter form helps to bring out the close resemblance with the algorithm of the hybrid matrix method. Also listed in Table 1 is the input required for each algorithm. Since the scattering matrix relates wave amplitude vectors across interfaces, its input must be the eigenwaves of two layers. For the hybrid matrix, which relates field variables, the input needs only the eigenwaves of an individual layer. With the recursive asymptotic method, the input does not invoke any eigenwaves at all.

To highlight the distinctions between the hybrid matrix method with eigenwaves and the recursive asymptotic method without eigenwaves, we further list below the key steps of their respective procedures. The procedure with eigenwaves is:
i) Solve the eigenvalue problem (5) for the wavenumbers and eigenvectors.
ii) Perform upward/downward-bounded eigenvector sorting or selection in (6)-(7), noting the boundedness/radiation condition and degeneracy treatment if needed.
iii) Derive the layer hybrid matrix using (26).
Step i) is often time-consuming, while step ii) requires careful attention and can be rather bothersome in practice. On the other hand, the procedure without eigenwaves, via the recursive asymptotic method, is:
i') Initialise the thin-layer asymptotic approximation (34) directly from A_f.
ii') Perform the self-recursions (35)-(38) until i = 1.
iii') The layer hybrid matrix is found as H_f = H^(1).
All steps here are straightforward and involve elementary matrix operations only.

To assess the accuracy of the recursive asymptotic hybrid matrix method, we investigate how the relative error changes with the number of geometric subdivisions n+1, or equivalently the recursion number n. We arbitrarily take a ZnO layer 1 μm thick at 1 GHz as an example, comparing the layer hybrid matrix obtained from the recursive asymptotic method with that from the eigenwaves solution. The error is calculated by averaging over a range of transverse wavenumbers. Notice that the error decreases initially, owing to the smaller truncation error for a smaller initial sublayer thickness d_n. After a certain minimum point, the error increases slightly and reaches a plateau without increasing further.

To illustrate the usefulness of the recursive asymptotic hybrid matrix method, let us consider a ZnO/diamond/Si structure at 2 GHz. The thicknesses of the ZnO and diamond layers are 1.2 and 10 μm respectively, while the Si substrate and vacuum are assumed semi-infinite. For analysis of surface acoustic waves (SAW) on such a structure, one can derive the generalised Green's function matrix G relating the fields to the charge density ρ_s on the surface. G can be formulated using the scattering matrix with eigenwaves in a robust manner, see [5]. Alternatively, one can determine G using the stack hybrid matrix H^[1:N] (for a stack from layer 1 to N), together with the permittivity ε_0 of vacuum (layer N+1) and the characteristic surface impedance Z_sub of the Si substrate (layer 0). The stack hybrid matrix H^[1:N] can be
obtained with or without the eigenwaves solution, as mentioned earlier. Figure 4 shows a Green's function element computed with and without eigenwaves. In the latter case, we apply the recursive asymptotic hybrid matrix method with n = 6. Although this recursion number is rather small, the results agree quite well and the plots are barely distinguishable. Referring to Figure 3, one can select a higher recursion number for better accuracy, although this may not be needed in many cases (e.g. when the material data are not that accurate). In general, the computation efficiency improves when lower accuracy is acceptable, and also for thinner layers requiring fewer geometric subdivisions. The method therefore provides a very convenient way to trade off computation efficiency against accuracy. Note that the efficiency improvement applies to every layer, so one gains substantial savings in total computation time when there are many layers in the stack, as when modelling inhomogeneous media. Moreover, the method is very simple, since it does not require eigenwaves for any layer (even a semi-infinite substrate). Thus, it may be applicable even when an eigensolver package is not readily accessible, such as on light-weight multi-thread processors (e.g. GPUs).
Conclusions

This paper has presented the recursive asymptotic hybrid matrix method for acoustic waves in multilayered piezoelectric media. The hybrid matrix method preserves numerical stability and accuracy across large and small thicknesses. For discussion and comparison, the scattering matrix method has also been presented in physics-based form and coherent form. The latter form resembles closely that of the hybrid matrix method and helps to highlight their relationship and distinction. For both the scattering and hybrid matrix methods, their formulations in terms of the eigenwaves solution have been provided concisely. Making use of the hybrid matrix, the recursive asymptotic method without eigenwaves solution has been described and discussed. The method bypasses the intricacies of the eigenvalue-eigenvector approach and requires only elementary matrix operations along with a thin-layer asymptotic approximation. It can be used to determine the Green's function matrix readily and facilitates the trade-off between computation efficiency and accuracy.

(The field vector f_f is formed by the generalised stress vector σ_f, comprising the normal stress τ_f and the normal electric displacement D_zf, and the generalised velocity vector υ_f, comprising the velocity v_f and the rate of change of the electric potential.)

Figure 3. Average relative error vs. recursion number n.
Figure 4. Green's function element computed with and without eigenwaves (via the recursive asymptotic hybrid matrix method with n = 6).
Neuroprotective Peptides and New Strategies for Ischemic Stroke Drug Discoveries Ischemic stroke continues to be one of the leading causes of death and disability in the adult population worldwide. The currently used pharmacological methods for the treatment of ischemic stroke are not effective enough and require the search for new tools and approaches to identify therapeutic targets and potential neuroprotectors. Today, in the development of neuroprotective drugs for the treatment of stroke, special attention is paid to peptides. Namely, peptide action is aimed at blocking the cascade of pathological processes caused by a decrease in blood flow to the brain tissues. Different groups of peptides have therapeutic potential in ischemia. Among them are small interfering peptides that block protein–protein interactions, cationic arginine-rich peptides with a combination of various neuroprotective properties, shuttle peptides that ensure the permeability of neuroprotectors through the blood–brain barrier, and synthetic peptides that mimic natural regulatory peptides and hormones. In this review, we consider the latest achievements and trends in the development of new biologically active peptides, as well as the role of transcriptomic analysis in identifying the molecular mechanisms of action of potential drugs aimed at the treatment of ischemic stroke. Introduction Ischemic stroke (IS) continues to be one of the leading causes of death and disability in the adult population worldwide. In the early stages after the onset of an ischemic attack, effective treatment for stroke still includes the use of thrombolytic agents or mechanical thrombectomy [1][2][3][4]. Timely successful reperfusion is the most effective treatment for patients with acute IS, but vascular recanalization can lead to ischemia-reperfusion (IR) injury, and thrombolytic drugs have no effect on protecting or reversing neuronal damage [2,5,6]. The treatment of IS is not limited to reperfusion therapy. 
Drug therapy includes antiplatelet agents, anticoagulants, antioxidants, antihypertensives, anti-excitotoxic calcium-stabilizing drugs, and other agents [7][8][9][10]. Despite the versatility and diversity of the drugs used, the currently existing pharmacological methods for the treatment of IS are not effective enough and require further study of the molecular basis of ischemic damage, the development of new therapeutic agents, and approaches to identify potential neuroprotectors. The use of neuroprotectors to protect nerve cells from damage and death and to improve the activity of the nervous system is one of the main directions in the pathogenetic treatment of many neuropathologies. Among them are encephalopathies of various origins, neurodegenerative diseases, the consequences of traumatic brain injuries, chronic cerebrovascular accidents in the elderly, and acute cerebral ischemia. Neuroprotectors have heterogeneous chemical structures and different mechanisms of action. Herbal drugs, antioxidants and vitamins, calcium channel blockers, and agents that improve cerebral metabolism and affect neurodegeneration have a neuroprotective effect [11][12][13][14][15]. Promising candidates for neuroprotective therapy include peptides.

Interfering Peptides

It is well known that protein–protein interactions (PPIs) are involved in normal physiological processes. It is estimated that, in the human interactome, PPI networks include about 650,000 contacts [38]. To date, more than half a million forms of PPI dysregulation have been associated with pathological events, and addressing such dysregulation is considered a new therapeutic approach for the treatment of many diseases [17]. It has been found that short peptides that interfere with PPIs may have therapeutic effects. These peptides are called interfering peptides (IPs). They are able to bind to the surfaces of proteins, thus blocking their interaction. Among the pathologies associated with PPI dysregulation is IS.
As is known, excessive release of glutamate from synaptic endings and the associated excitotoxicity is one of the main mechanisms causing neuronal death in stroke. A large number of studies have significantly expanded the understanding of the mechanisms underlying excitotoxicity in cerebral ischemia, identified many molecular targets for action, and thus opened up new possibilities for neuroprotective strategies in ischemia [39][40][41][42]. The development of IPs that prevent excitotoxic stress during cerebral ischemia has become one of the strategies aimed at preventing neuronal death in the penumbra region surrounding the damage zone [43][44][45]. IPs include the recently developed synthetic R1-Pep (SETQDTMKTGSSTNNNEEEKSR) and PP2A-Pep (FQFTQNQKKEDSKTSTSV) peptides (Table 1), which inhibit the interaction of γ-aminobutyric acid type B (GABA B ) receptors with enzymes involved in their phosphorylation [46,47]. Under physiological conditions, GABA B receptors control the excitability of neurons in the brain through prolonged inhibition and thereby counteract neuronal overexcitation and death. However, during cerebral ischemia, excitotoxic states rapidly downregulate GABA B receptors through phosphorylation/dephosphorylation processes mediated by Ca2+/calmodulin-dependent protein kinase II (CaMKII) and protein phosphatase 2A (PP2A) [46,48,49]. After phosphorylation, GABA B receptors undergo lysosomal degradation rather than being recycled back to the plasma membrane [48,50]. The R1-Pep peptide, penetrating into cells, inhibits the interaction of GABA B receptors with CaMKII, preventing their phosphorylation. Administration of this peptide to cultured cortical neurons exposed to excitotoxic conditions, as well as its addition to mouse brain slices after middle cerebral artery occlusion (MCAO), restores the function of GABA B receptors [46].
Another small interfering peptide, PP2A-Pep, according to in vitro and ex vivo studies, inhibits the interaction of GABA B receptors with PP2A and restores the expression of GABA B receptors on the cell surface of neurons under normal physiological conditions and after excitotoxic stress [47]. The model of regulation of GABA B receptor activity with the participation of the R1-Pep and PP2A-Pep peptides is shown in Figure 1. A current drug candidate for disrupting PPIs in IS is the cell-penetrating peptide nerinetide (NA-1, YGRKKRRQRRRKLSSIESDV) (Table 1). The peptide contains the nine C-terminal residues of the NR2B subunit of the N-methyl-D-aspartate (NMDA) receptor, which selectively binds glutamate and aspartate. It prevents postsynaptic density protein 95 (PSD-95) from binding to the NR2B subunit of the NMDA receptor and the PDZ domain of neuronal nitric oxide synthase (nNOS) [8,51]. Disruption of this PPI by NA-1 attenuates neurotoxic signaling cascades that lead to excessive calcium ion entry into neurons. NA-1 also prevents excitotoxic cell death and subsequent brain damage. Long-term evaluation of stroke outcomes in rodent and primate models of focal ischemia has also shown that NA-1 can reduce infarct size, along with an improvement in neurological deficit. However, clinical studies showed that treatment with a single dose of NA-1 along with endovascular thrombectomy did not improve long-term stroke outcome compared to patients treated with placebo [52]. At the same time, the cohort of patients treated with NA-1 without any thrombolytics had a lower risk of mortality, a significant reduction in infarct volume, and an improvement in functional outcome. This finding requires confirmation but suggests that neuroprotection in human stroke might be possible [52].
Thus, according to the data of in vitro and ex vivo studies, the developed interfering peptides exerted neuroprotective activity and inhibited excitotoxic neuronal death. IPs are believed to have great therapeutic potential for inhibiting progressive neuronal death in patients with acute stroke.

Arginine-Rich Peptides

A relatively new class of compounds with a combination of different properties that provide a neuroprotective effect in the treatment of IS is cationic arginine-rich peptides (CARPs). Polyarginine peptides show high levels of cellular internalization and have high therapeutic potential. Guanidine groups in arginines form bidentate hydrogen bonds with negatively charged carboxyl, sulfate, and phosphate groups of proteins, mucopolysaccharides, and cell membrane phospholipids. Such interactions lead to the internalization of peptides into the cell under physiological conditions [53]. Possessing high anti-excitotoxic and anti-inflammatory neuroprotective efficacy, CARPs are able to interfere with calcium influx into cells, stabilize mitochondria, inhibit proteolytic enzymes, induce survival signaling, and reduce oxidative stress [54][55][56][57]. Meloni et al. are actively studying the effect of peptides of this class on the in vitro activity of the thrombolytic agents alteplase (tPA) and tenecteplase (TNK).
In one of the latest studies by the Meloni team, it was shown that the arginine-rich peptides R18D (polyarginine-18, D-isomer) and R18 (polyarginine-18, L-isomer) (Table 1), when co-administered with thrombolytic agents, increase the maximum activity of the thrombolysis reaction while maintaining their neuroprotective properties [58]. Thus, the polyarginine peptides R18D and R18 represent new potential neuroprotective agents for the treatment of acute IS, whose administration can have a significant effect in clinical use during clot thrombolysis. Another polyarginine peptide, ST2-104 (ARSRLAELRGVPRGL) (Table 1), obtained by the fusion of nona-arginine (R9) with a short peptide aptamer, CBD3, from the collapsin response mediator protein 2 (CRMP2), protected SH-SY5Y neuroblastoma cells from death after exposure to glutamate, limited excess calcium influx, blocked apoptosis and autophagy, reduced infarct volumes, and improved neurological scores in MCAO-treated rats [59]. The neuroprotective effect of ST2-104 was due to its action on apoptosis and autophagy via the CaMKKβ/AMPK/mTOR signaling pathway.

Shuttle Peptides to Provide Neuroprotection

Another new therapeutic strategy for acute IS is the use of peptides as carriers (shuttles) that ensure the permeability of drugs through the blood–brain barrier (BBB) [60,61]. It is well known that the BBB is a structural barrier that ensures the supply of nutrients to the brain, protects it from harmful substances, and, at the same time, effectively blocks the entry of most neuroprotective agents. A useful property of peptides is their ability to overcome the BBB, which allows them to be used as carriers for drug delivery.
Brain-permeable peptide–drug conjugates, consisting of BBB shuttle peptides, linkers, and drug molecules, directly cross the BBB via an adsorption-mediated transcytosis pathway or through interaction with receptors or other proteins on the surfaces of BBB cells to initiate endogenous transcytosis or other means of transport [62][63][64]. It is well known that glycine has a neuroprotective effect in cerebral IR but has low permeability through the BBB. A tripeptide, H-Gly-Cys-Phe-OH (GCF) (Table 1), which can permeate the BBB, acts as a BBB shuttle and prodrug, delivering the amino acid glycine to the brain to provide neuroprotection [65]. A well-known cellular antioxidant enzyme is superoxide dismutase (SOD). However, exogenous SOD cannot be used to protect tissues from oxidative damage due to the low permeability of the cell membrane. The recombinant CPP-SOD fusion protein, which combines the SOD protein with short cell-penetrating peptides (CPPs) (Table 1), can cross the BBB and alleviate severe oxidative damage in various brain tissues by scavenging reactive oxygen species, reducing the expression of inflammatory factors, and inhibiting NF-κB/MAPK signaling pathways [61]. Thus, the clinical application of CPP-SOD may mitigate the damage associated with oxidative stress and open new therapeutic strategies. Based on their physical and chemical properties, CPPs are classified as cationic, amphipathic, and hydrophobic peptides. CPPs typically contain more than five positively charged amino acids. Most cationic CPPs are derived from the natural TAT (YGRKKRRQRRR) and penetratin (RQIKIWFQNRRMKWKK) peptides [53,66].

Peptides That Mimic Natural Regulatory Peptides and Hormones

Many regulatory peptides are effective neuroprotectors. The main features of regulatory peptides are polyfunctionality, formation by cleavage from a precursor polypeptide, and a cascade mechanism of action.
Neuropeptides include regulatory peptides that have a pleiotropic effect and are produced by neurons. The role of neuropeptides in the development of diseases and the treatment of neurological disorders was recently described in detail in a review by Yeo et al. [35]. New neuroprotective peptides with the potential to improve stroke outcomes include synthetic peptides that mimic natural regulatory peptides and hormones. They have greater metabolic stability, a longer elimination half-life in the blood, and a smaller size [35]. Synthetic adropin has therapeutic potential [67]. Adropin is a unique hormone encoded by the energy homeostasis-associated (Enho) gene [68]. This highly conserved peptide has many functions, including maintaining the integrity of the BBB and reducing the activity of matrix metalloproteinase-9 (MMP-9) [69]. Treatment of IS mice with synthetic adropin (adropin(34-76), ACHSRSADVDSLSESSPNSSPGPCPEKAPPPQKPSHEGSYLLQP) reduced infarct size by activating eNOS and reducing oxidative damage [67] (Table 1). A recent study showed that, in aged mice undergoing transient MCAO, post-ischemic therapy with synthetic adropin markedly reduced infarct volume, cerebral edema, and BBB damage; lowered MMP-9 levels; and significantly improved motor function, muscle strength, and long-term cognitive function [69]. A therapeutic effect in rats with MCAO was also provided by another synthetic peptide, dynorphin A(1-8) [70] (Table 1). This peptide is a fragment of dynorphin A with strong opioid activity. Dynorphin A(1-8) (YGGFLRRI) has been shown to inhibit oxidative stress and apoptosis in MCAO rats, affording neuroprotection through NMDA receptor and κ-opioid receptor channels [70].
In MCAO-treated rats, intranasal administration of dynorphin A(1-8) produced better behavioral improvement, higher Bcl-2 expression and SOD activity, and much lower infarct volume, brain water content, number of apoptotic cells, and Bax and Caspase-3 expression compared to the control [70]. The NX210 (WSGWSSCSRSCG) peptide is also a potential therapeutic agent against cerebral IR injury (Table 1). The peptide is derived from the thrombospondin type 1 repeat (TSR) sequence of SCO-spondin. NX210 prevents oxidative stress and neuronal apoptosis in cerebral IR through enhancement of the integrin-β1/PI3K/Akt signaling pathway [71]. At present, in the development of neuroprotective drugs, considerable attention is being paid to analogs of Glucagon-Like Peptide-1 (GLP-1, HAEGTFTSDVSSYLEGQAAKEFIAWLVKGRG) used to treat type 2 diabetes and obesity. Diabetes is an important risk factor for cerebral infarction. In diabetic patients, the frequency of cerebral infarction is 1.8-6.0 times higher than in non-diabetic patients [72]. One of the main biological targets for the pharmaceutical action of the pleiotropic GLP-1 hormone and its analogues is the glucagon-like peptide-1 receptor (GLP-1R). Binding to this receptor stimulates the secretion of insulin by pancreatic β-cells, thereby lowering glycemia. There are currently seven GLP-1R agonists (GL1-RAs) approved for the treatment of diabetes [73]. At the same time, the long-acting synthetic GLP-1 analogues Liraglutide [74] and Semaglutide [75] have demonstrated the most notable therapeutic success. Liraglutide (HAEGTFTSDVSSYLEGQAAK(E-C16 fatty acid)EFIAWLVRGRG) is a 32 amino acid peptide with a C-16 fatty acid fragment. It has 97% identity with the human GLP-1 sequence.
Numerous studies using rodent models of cerebral ischemia indicate that Liraglutide reduces the volume of the infarct zone, has neuroprotective and antioxidant effects, promotes angiogenesis and increases the expression of VEGF in the area of cerebral ischemia [76][77][78], improves metabolic and functional recovery after stroke [79], and reduces neurological deficits [80][81][82]. A study of the mechanism of action of Liraglutide in a rat model of cerebral IR with diabetes showed that the peptide inhibited endoplasmic reticulum stress and thereby reduced apoptosis [82]. A recent study by Yang et al. disclosed a new neuroprotective mechanism by which Liraglutide provides protection against damage caused by cerebral ischemia [83]. Using a mouse model of focal ischemia of the cerebral cortex and microglial cells, the authors showed that the neuroprotective effect of Liraglutide can be achieved by inhibiting pyroptosis, an inflammatory form of programmed cell death. In this case, the anti-pyroptotic mechanism of Liraglutide in vivo can be mediated by NOD-like receptor protein 3 (NLRP3) [83]. Semaglutide (HXEGTFTSDVSSYLEGQAAK(C18diacid-γE-OEG-OEG)EFIAWLVRGRG), another long-acting GL1-RA, has two amino acid substitutions compared to human GLP-1 (2-aminoisobutyric acid (Aib) 8, arginine (Arg) 34) and is derivatized at lysine 26 [84]. Subcutaneous Semaglutide, administered once a week, was first approved for chronic weight management in June 2021 in the United States. Recent studies indicate that Semaglutide is not only safe and effective in the treatment of obesity [85], but also reduces ischemic cerebrovascular events in type 2 diabetes [86]. Currently, the protective effect of Semaglutide against heart disease and stroke in overweight or obese patients is being actively studied [87,88].
Thus, according to numerous studies, GL1-RA drugs recommended for the treatment of diabetes and obesity exhibit a neuroprotective effect and provide protection against damage caused by cerebral ischemia [76][77][78][79][80][81][82][83][86][87][88]. It can be assumed that the effects of these peptides are much wider than currently recognized, and we believe that GL1-RAs are a potential therapeutic tool for protecting the brain during strokes. In recent years, drugs based on melanocortin peptides have been actively developed. Melanocortins are a large family of neuropeptides formed from a common precursor, the proopiomelanocortin molecule, which includes adrenocorticotropic hormone (ACTH) and a group (α-, β-, γ-) of melanocyte-stimulating hormones (MSHs). Melanocortins have a wide spectrum of physiological activity, which makes it possible to use their fragments for drug development. The nootropic peptide Semax (MEHFPGP) is successfully used for the treatment of IS (Table 1). The N-terminus of Semax contains an ACTH(4-7) fragment, and the C-terminus is stabilized by the addition of the tripeptide Pro-Gly-Pro (PGP). Semax has been used in neurological practice for many years in the treatment of acute and chronic disorders, including IS and its consequences [89,90]. Semax has pronounced nootropic, neuroprotective, and immunomodulatory effects [91,92]. Recent studies have shown that another melanocortin peptide, ACTH(6-9)PGP (HFRWPGP), has a wide spectrum of neuroprotective activity [93][94][95]. A study of the effect of the ACTH(6-9)PGP peptide on the survival of cultured cortical neurons under the excitotoxic effect of glutamate showed that, depending on the dose, the peptide protected neurons from cell death [94] (Table 1). The neuroprotective effect of ACTH(6-9)PGP was accompanied by a slowdown in the development of delayed calcium dysregulation and synchronous depolarization of mitochondria.
The peptide significantly increased the number of neurons that restored calcium ion homeostasis after glutamate withdrawal. A subsequent study of the proliferative and cytoprotective activity of the ACTH(6-9)PGP peptide on SH-SY5Y cells, in models of toxicity caused by hydrogen peroxide, tert-butyl hydroperoxide, or potassium cyanide (KCN), showed that the peptide dose-dependently protected cells from oxidative stress and exhibited proliferative activity. The mechanism of peptide action was the modulation of proliferation-related NF-κB genes and stimulation of the pro-survival NRF2-gene-related pathway, as well as a decrease in apoptosis [95]. Since reperfusion used to treat IS causes additional damage in brain cells, including the accumulation of excess oxygen radicals and the activation of apoptosis, the discovered therapeutic effects of ACTH(6-9)PGP allow us to highly appreciate the possibility of its clinical use after the administration of thrombolytics (e.g., tPA) or mechanical thrombectomy.

Table 1 summarizes these peptides:

Interfering peptides (IPs)
- R1-Pep: inhibits the interaction of the GABA B receptor with CaMKII, preventing receptor phosphorylation [46].
- PP2A-Pep: inhibits the interaction of the GABA B receptor with PP2A, preventing receptor dephosphorylation [47].
- NA-1: attenuates neurotoxic signaling cascades that lead to excessive calcium ion entry into neurons [8,51].

Cationic arginine-rich peptides (CARPs)
- R18D, R18: high anti-excitotoxic and anti-inflammatory efficiency; interfere with calcium influx into cells, stabilize mitochondria, inhibit proteolytic enzymes, induce survival signaling, and reduce oxidative stress; increase the maximum activity of the thrombolysis reaction when co-administered with thrombolytic agents [54][55][56][57][58].
- ST2-104: acts on apoptosis and autophagy via the CaMKKβ/AMPK/mTOR signaling pathway [59].

Shuttle peptides
- GCF: acts as a BBB shuttle and prodrug, delivering the amino acid glycine to the brain to provide neuroprotection [65].
- CPP-SOD: a recombinant fusion protein that can cross the BBB and alleviate severe oxidative damage in various brain tissues by scavenging reactive oxygen species, reducing the expression of inflammatory factors, and inhibiting NF-κB/MAPK signaling pathways [61].

Peptides that mimic natural regulatory peptides and hormones
- Adropin: reduces infarct size by activating eNOS and reducing oxidative damage; maintains the integrity of the BBB and reduces MMP-9 activity [67,69].
- Dynorphin A(1-8): affords neuroprotection through NMDA receptor and κ-opioid receptor channels [70].
- NX210: prevents oxidative stress and neuronal apoptosis in cerebral IR through enhancement of the integrin-β1/PI3K/Akt signaling pathway [71].
- Liraglutide: a long-acting GL1-RA that promotes angiogenesis, reduces neurological deficits and apoptosis, and inhibits pyroptosis, an inflammatory form of programmed cell death [74,76-83].
- Semaglutide: a long-acting GL1-RA that reduces ischemic cerebrovascular events in type 2 diabetes [75,85-88].
- Semax: pronounced nootropic, neuroprotective, and immunomodulatory effects; initiates a neurotransmitter and anti-inflammatory response [27,91,92,96,97].
- ACTH(6-9)PGP: protects neurons from cell death, protects cells from oxidative stress, and exhibits proliferative activity [94,95].

It should be noted that peptides often combine several units with different properties. In this review, we note a case where ACTH fragments were stabilized by the glyproline PGP unit; as a result, the Semax and ACTH(6-9)PGP peptides were created [89,90,93-95]. Furthermore, the N-termini of the R1-Pep and PP2A-Pep IPs were conjugated with a peptide sequence derived from the rabies virus glycoprotein (YTIWMPENPRPGTPCDIFTNSRGKRASNGGGG) to make the IPs penetrate cell membranes.
In addition, there are examples of R1-Pep penetration being increased via conjugation with CARPs or co-conjugation with CARPs and CPPs (e.g., YTIWMPENPRPGTPCDIFTNSRGKRASNGGGG-RRRRRRRRR-SETQDTMKTGSSTNNNEEEKSR) [46,47]. Furthermore, GL1-RAs, including liraglutide and semaglutide, contain additional lateral fatty acid chains rather than amino acid sequences, which improve pharmacokinetics and protect the peptide from both peptidase degradation and renal filtration [98]. Figure 2 shows multiple sequence alignments for IPs, CARPs, CPPs, and mimics of natural regulatory peptides described in this review. The results were obtained using the MAFFT v7 tool. The similarity between the structures of the GLP-1-related peptides (GLP-1, major branches of liraglutide and semaglutide) is visible. In addition, the arginine-containing peptides (TAT, R18, R18D) clustered, and the NA-1 peptide, which belongs to the IP group, fell into this cluster too. The PGP-containing peptides (PGP, Semax, ACTH(6-9)PGP, and adropin(34-76)) clustered independently. In contrast, the IPs did not cluster: R1-Pep, PP2A-Pep, and NA-1 did not share similar domains according to the MAFFT v7 results. Interestingly, the dynorphin A(1-8), NX210, ST2-104, and GCF peptides did not belong to any cluster. Perhaps peptides have more complex and nonlinear correlations between their structures and neuroprotective properties.
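The clustering described above comes from a MAFFT v7 multiple sequence alignment. As a rough, purely illustrative sketch of why arginine-rich peptides group together, a crude pairwise similarity (longest common subsequence normalized by the shorter sequence — a toy stand-in, not MAFFT's algorithm) already separates them:

```python
# Crude pairwise peptide similarity: longest common subsequence (LCS)
# normalized by the shorter sequence length. This is a toy stand-in for
# a real alignment tool such as MAFFT, used only to illustrate why
# arginine-rich peptides end up in the same cluster.

def lcs_len(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b (classic DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1]."""
    return lcs_len(a, b) / min(len(a), len(b))

# Sequences taken from this review (R18 is 18 L-arginines; TAT and Semax as listed).
R18 = "R" * 18
TAT = "YGRKKRRQRRR"
SEMAX = "MEHFPGP"

print(f"R18 vs TAT:   {similarity(R18, TAT):.2f}")   # arginine-rich pair scores high
print(f"R18 vs Semax: {similarity(R18, SEMAX):.2f}") # unrelated pair scores 0
```

Such pairwise scores capture only linear sequence overlap; they cannot reflect the nonlinear structure–activity relationships noted above, which is precisely why peptides like NX210 or GCF fall outside any cluster.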
Transcriptomic Analysis as a New Approach to Reveal the Molecular Mechanisms of Ischemic Damage and the Action of Potential Neuroprotectors

At present, transcriptomics has become an effective approach for studying the mechanisms of pathological processes in various diseases and searching for molecular targets for their drug treatment. High-throughput mRNA sequencing (RNA-Seq) reveals information about the expression of individual genes and makes it possible to identify signaling pathways involved in the development of many diseases. Transcriptomic analysis has made a significant contribution to the study of the molecular mechanisms of brain damage as a result of IR [99][100][101].
This approach has also made it possible to reveal the mechanisms of the therapeutic effects of many potential drugs at the genetic level, creating a theoretical basis for the treatment of IS [102][103][104][105][106]. RNA-Seq analysis has revealed the molecular mechanisms by which regulatory peptides and peptide-related drugs perform a neuroprotective role in cerebral ischemia. There are many examples of the use of transcriptome analysis to study the mechanisms of action of peptides, including Orexin-A [107], Semax [27], VR-10 [29], and semaglutide [101]. Recently, using RNA-Seq, we studied the protective properties of the Semax peptide at the transcriptome level under transient MCAO conditions.
Previously, we had studied gene expression in the rat brain under conditions of incomplete global ischemia and permanent MCAO using real-time reverse transcription polymerase chain reaction (RT-PCR) and showed that the peptide affects the expression of a limited number of genes encoding neurotrophic factors and their receptors [108,109]. Using RatRef-12 BeadChips, it was shown that Semax affects the expression of genes associated with the immune and vascular systems in the brain of rats after permanent MCAO [92,97]. Using RNA-Seq analysis, we identified several hundred differentially expressed genes (DEGs) (>1.5-fold change) in the brains of rats 24 h after transient MCAO treated with Semax compared with control animals treated with saline [27,110]. We found that Semax suppressed the expression of genes associated with inflammatory processes (e.g., Hspb1, Fos, iL1b, iL6, Ccl3, Socs3) and activated the expression of genes associated with neurotransmission (e.g., Cplx2, Chrm1, Gabra5, Gria3, Neurod6, Ptk2b), whereas IR, on the contrary, activated the expression of genes involved in inflammation, immune response, apoptosis, and stress response and suppressed the expression of genes associated with neurotransmission. Analysis of signaling pathways associated with Semax-induced DEGs in the transient MCAO rat model, using a web server for functional enrichment analysis (g:Profiler) and Gene Set Enrichment Analysis (GSEA) data, showed that DEG activation in Semax-treated rats 24 h after MCAO was associated with calcium signaling; dopaminergic, cholinergic, and glutamatergic synapses; and G-protein-coupled receptor (GPCR) signaling through chemical synapses, whereas DEG suppression was associated with the phagosome, interleukin 17 (IL-17), tumor necrosis factor (TNF), and p53 signaling pathways, the innate immune system, neutrophil degranulation, and cytokine signaling in the immune system [27].
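The DEG selection step described above (a >1.5-fold expression change between Semax-treated and control animals) can be sketched as follows; the expression values and the `classify_degs` helper below are invented for illustration and are not the authors' data or pipeline:

```python
# Toy differential-expression screen: flag genes whose expression changes
# more than 1.5-fold between treated and control samples, mirroring the
# cutoff used in the RNA-Seq analysis described in the text.
# All numbers are made up for illustration.

FOLD_CHANGE_CUTOFF = 1.5

# gene -> (mean control expression, mean treated expression), arbitrary units
expression = {
    "iL6":   (120.0, 60.0),   # suppressed by treatment in this mock data
    "Fos":   (200.0, 110.0),  # suppressed by treatment in this mock data
    "Cplx2": (40.0, 75.0),    # activated by treatment in this mock data
    "Actb":  (500.0, 520.0),  # housekeeping gene, essentially unchanged
}

def classify_degs(expr, cutoff=FOLD_CHANGE_CUTOFF):
    """Split genes into up- and down-regulated DEGs by fold change."""
    up, down = [], []
    for gene, (ctrl, treated) in expr.items():
        fold_change = treated / ctrl
        if fold_change >= cutoff:
            up.append(gene)
        elif fold_change <= 1.0 / cutoff:
            down.append(gene)
    return sorted(up), sorted(down)

up, down = classify_degs(expression)
print("up-regulated:", up)     # ['Cplx2']
print("down-regulated:", down) # ['Fos', 'iL6']
```

A real RNA-Seq workflow would of course add count normalization and statistical testing (e.g., with DESeq2 or edgeR) before applying such a fold-change threshold; the sketch only shows the thresholding logic itself.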
Thus, according to RNA-Seq data, 24 h after transient MCAO, Semax initiated a neurotransmitter and anti-inflammatory response that compensated for mRNA expression patterns disturbed under IR conditions. Perhaps Semax mediates its influence on gene expression indirectly through interaction with receptors on the cell membrane. Moreover, the ability of Semax to reduce the disturbances caused by ischemia can be explained by the joint, allosteric action of the peptide on receptors together with hormones and mediators of the ischemic response. The model of the influence of Semax on the transcriptome of brain cells, and the spectrum of Semax effects identified at the gene expression level 24 h after transient MCAO in rats, is shown in Figure 3. Semax, as noted above, contains an ACTH(4-7) fragment flanked at the C-terminus by the PGP tripeptide. There is evidence that PGP not only ensures the resistance of peptides to biodegradation but also exhibits a wide range of biological activity. Namely, PGP is involved in the formation of the immune response, and the PGP peptide has antiplatelet, anticoagulant, fibrinolytic, anxiolytic, antiapoptotic, and antistress activity [108,111-116]. Using real-time RT-PCR, we studied the effects of PGP and of another PGP-containing peptide, Pro-Gly-Pro-Leu (PGPL), on the expression of a number of inflammatory cluster (IC) and neurotransmitter cluster (NC) genes in the rat brain under transient MCAO, and then compared these effects with the action of Semax under IR conditions [96,117]. Both PGP and PGPL showed effects dissimilar to those of Semax at 24 h after transient MCAO: the administration of these peptides did not have a statistically significant effect on the expression of genes involved in inflammation. This result highlights the importance of the structure of the ACTH(4-7) fragment for the effects of Semax.
In addition, the IC (iL1b, iL6, and Socs3) rat genes for PGP, as well as the IC (iL6, Ccl3, Socs3, and Fos) and NC (Cplx2, Neurod6, and Ptk2b) rat genes for PGPL, deserve discussion: the expression levels of these genes changed significantly after administration of the corresponding peptide compared to Semax administration. Based on the results of the analysis of gene expression under these experimental conditions, and using bioinformatic approaches, a functional network was built that illustrates the spectra of common and unique effects of the PGP, PGPL, and Semax peptides [117]. Thus, transcriptomic analysis, in addition to revealing the molecular mechanisms of action of peptides, makes it possible to relate their chemical structure to possible effects at the genomic and functional levels.
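A comparison of common and unique peptide effects, like the functional network cited above, can be approximated with simple set operations. The sketch below uses the gene lists mentioned in the text; the groupings are illustrative and do not reproduce the published network:

```python
# Compare which genes responded differently (relative to Semax) for each
# peptide using set operations, as a toy stand-in for the functional-network
# analysis cited in the text. Gene lists are those mentioned in this review.

affected = {
    "PGP":  {"iL1b", "iL6", "Socs3"},
    "PGPL": {"iL6", "Ccl3", "Socs3", "Fos", "Cplx2", "Neurod6", "Ptk2b"},
}

# Genes whose expression differed from Semax for BOTH PGP and PGPL
common = affected["PGP"] & affected["PGPL"]

# Genes unique to each peptide's response
unique = {p: genes - (affected["PGP"] if p == "PGPL" else affected["PGPL"])
          for p, genes in affected.items()}

print("common:", sorted(common))                  # ['Socs3', 'iL6']
print("unique to PGP:", sorted(unique["PGP"]))    # ['iL1b']
print("unique to PGPL:", sorted(unique["PGPL"]))  # ['Ccl3', 'Cplx2', 'Fos', 'Neurod6', 'Ptk2b']
```

Even this crude overlap view shows the pattern discussed in the text: the peptides share a small inflammatory core (iL6, Socs3), while PGPL additionally touches neurotransmitter-cluster genes.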
Discussion The progress achieved in the recovery of patients after IS using pharmacological thrombolysis and mechanical thrombectomy has not solved the problem of limiting progressive neuronal death after stroke. There is still a need for a further search for therapeutic agents for the treatment of stroke. As can be seen from the available literature, a large number of studies using oxygen-glucose deprivation and reperfusion (OGD/R) cell cultures, as well as laboratory animal models of experimental brain ischemia mimicking conditions of ischemic injury, have demonstrated the potential benefit of neuroprotection. Neuroprotective peptides could be the answer to medical demands in the treatment of strokes and their consequences. To date, hundreds of peptides have undergone preclinical or clinical studies [8]. As noted, peptides are an extensive class of molecules with a variety of structures and functions. In our review, we have identified several groups of peptides that can exhibit valuable properties for neuroprotection. Thus, IPs affect the activity of receptor systems by blocking PPIs, and the effects of IPs may be important in overcoming the neurotoxic effects of ischemic injuries. Peptides that mimic natural regulatory peptides and hormones serve as a basis for creating drugs. 
As a result, a GLP-1 analog (semaglutide), as well as an ACTH analog (Semax), may potentially be used as medicines in different countries [89,118]. The mechanisms of action of CARPs are less studied, but they have multiple antitoxic and anti-inflammatory effects during ischemia, affecting the activity of numerous signaling systems. It should be noted that the search for targeted drug delivery methods that observe the principles of safety and efficacy in therapy is one of the most relevant current topics. In this regard, shuttle peptides, which carry biologically active molecules across the BBB, are of interest. Complexes of CPPs and proteins (for example, CPP-SOD) ensure the efficient penetration of the enzyme into the target area of the cell, realizing the therapeutic effect. Moreover, many peptides combine two valuable qualities at the same time. Examples of such convergent action are peptides formed by the fusion of several peptides with different properties. Thus, the Semax peptide has a significant neuroprotective effect in stroke, including after intraperitoneal administration. Semax contains a PGP residue that increases its metabolic resistance. It is important to note that peptides might have more complex and nonlinear correlations between their structures and neuroprotective properties. Indeed, many peptides belonging to the same group do not structurally repeat each other, as per the results we obtained using MAFFT v7. From our point of view, peptides have another fundamentally important property: peptides have multiple (pleiotropic) effects on receptor systems [119][120][121]. This property of peptides is key when treating multifactorial diseases with a complex cascade of events, including strokes. Acute ischemia leads to multiple membrane polarizations and the activation of brain-cell receptors due to various factors caused by ischemic damage. 
This, in turn, triggers various signaling pathways that modulate the transcription of many genes inside the cell nucleus [28,110,122-124]. Peptides can allosterically interact with receptors and modulate the multiple signals that cells receive during a stroke [125,126]. It is also assumed that the pleiotropic properties of peptides allow for the formation of polyfunctional neuroprotective responses to overcome multiple reactive damage processes. The involvement of peptides in numerous network interactions, including multiple allosteric modulations of receptor activities, requires specific approaches to elucidate their mechanisms of action. The current development of omics technology can be especially valuable in this regard. Omics technologies make it possible to obtain multiple genome-wide data arrays at a time and to carry out multiple comparisons of results within one experiment. Studies based on transcriptomic and proteomic approaches can reveal the expression profiles of RNAs (RNA-Seq, single-cell RNA-Seq), proteins, and peptides to assess how much a neuropathology has distorted biomolecule levels [122,124,127-131]. At the same time, the normalization (compensation) of the disturbed profiles of the analyzed molecules after exposure to a peptide is evidence of the potential neuroprotective properties of the drug (peptide) under neuropathological conditions. One stage of testing peptides for neuroprotective properties can be based on this fact. Further application of functional enrichment and data clustering methods makes it possible to elucidate the metabolic systems that the peptide is involved in modulating. Potentially, as large volumes of omics data accumulate, machine learning and artificial intelligence methods may reveal the relationship between peptide structures and their functions. Then, the prediction of the peptide structures required for drug properties can be realized. 
Thus, the frontiers of peptide drug development can be expanded by converging the natural, health, and computer sciences. Conclusions It can be assumed that the latest developments in the creation of neuroprotective agents, combined with new possibilities for their delivery to the brain, an understanding of the pathogenesis of stroke in general, and consideration of the previous shortcomings in preclinical and clinical studies, will provide a breakthrough in the treatment of IS. Here, we reviewed the most recent strategies in the search for neuroprotectors for the treatment of IS, advances in the development of biologically active neuroprotective peptides, and the role of genome-wide transcriptome studies in identifying the molecular mechanisms of action of potential drugs.
Confinement of picosecond timescale current pulses by tapered coplanar waveguides Tapered coplanar waveguides with integrated photoconductors were designed, fabricated, and measured, with pulsed transmission results comparing well with High Frequency Structure Simulator simulations, which predict increased confinement and electric field concentration in the tapered region. Devices made with titanium/gold metallisation were used to demonstrate transmission and confinement, while the magnetoresistive properties of devices with cobalt/copper multilayers were used to demonstrate the field concentration. In the latter case, a mathematical framework was developed to understand the relationship between tapering effects and the picosecond magnetoresistance response. © 2018 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/1.5027202 The meV photon energies exhibited by terahertz (THz) frequency radiation (0.1-10 THz) are comparable with the energy spacings in a wide range of sub-micron-scale condensed matter systems. The most common approach to accessing the THz properties of such systems is via free-space THz time domain spectroscopy. Although previously best suited to the measurement of ensembles of small systems, recent modifications of the technique to provide near-field microscopy with broadband spectral coverage have been demonstrated, in which the THz radiation is restricted and/or focused by means of either apertures 1 or metallic tips, 2 for example. An alternative technique we developed embeds the object under study in a coplanar waveguide (CPW) with integrated photoconductors for THz-bandwidth signal excitation and detection. We have previously used this to investigate the plasmonic spectra of two-dimensional electron gases, for example. 
3,4 Such geometries offer the advantage of precise lithographically defined control of electric field placement, 5 removing the alignment problems associated with a scanned probe. Here, we demonstrate that tapering of the CPW can be used to enhance the THz field confinement further. We use this enhanced concentration of electric field to increase interaction with a magnetic multilayered system which exhibits giant magnetoresistance. The field enhancement induces a factor of ~2.4 increase in the change in current with the magnetic field in the tapered waveguide relative to untapered THz CPWs. An Ansoft High Frequency Structure Simulator (HFSS) 6 was first used to investigate the effects of tapering on the THz electric field confinement within a metallic CPW device [Fig. 1(a)]. The centre track of the coplanar waveguide in this simulation had a width of 30 μm, with a centre-conductor spacing of 10 μm between the centre track and the neighbouring ground planes. The ratio of the gap width to the centre conductor width was then maintained as the CPW was tapered linearly to a centre conductor width of 1 μm with a gap of 0.33 μm over the course of 500 μm. Following simulation, cross-sections of the maximum instantaneous electric field strength (at 1 THz) were taken at intervals both going into and out of the tapered region [Fig. 1(a)]. From each cross-section, the field confinement and the average field strength in the confined region were extracted. For the purpose of discussion, we define the "confinement area" of the cross-section as an area containing instantaneous electric field strengths within one order of magnitude of the instantaneous maximum electric field strength. The confinement area and average field strength as a function of CPW width are shown in Fig. 1(b). As the CPW is tapered to smaller dimensions, the confinement area decreases, while the average instantaneous electric field strength increases, as expected from geometric considerations. 
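The "confinement area" defined above (the region where the field magnitude stays within one order of magnitude of the maximum) can be extracted from any sampled cross-section of field magnitudes. The following is a minimal sketch assuming the field map is available as a 2-D array on a uniform grid (not the actual HFSS export format); the toy field used in the usage example is purely illustrative.

```python
import numpy as np

def confinement_area(field_mag, cell_area, factor=10.0):
    """Area of the region where |E| >= max|E| / factor.

    field_mag: 2-D array of instantaneous field magnitudes over a cross-section
    cell_area: area represented by one grid cell
    factor: 10.0 reproduces the one-order-of-magnitude definition in the text
    """
    threshold = field_mag.max() / factor
    return np.count_nonzero(field_mag >= threshold) * cell_area

# Toy cross-section (illustrative only): field decaying away from the grid centre
y, x = np.mgrid[-50:50, -50:50]
field = 1.0 / (1.0 + 0.5 * np.hypot(x, y))

area = confinement_area(field, cell_area=1.0)
```

Tightening the taper would shrink this area while raising the average field inside it, which is the trend reported for Fig. 1(b).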
In order to excite THz pulses into and detect their passage through the tapered CPW, 350 nm of low-temperature-grown gallium arsenide (LT-GaAs) was epitaxially transferred onto low-permittivity quartz substrates, allowing photoconductive sampling. 7 Additional steps were then necessary to complete the two different types of devices [Ti/Au tapered CPWs and giant magnetoresistance (GMR) CPWs], as now discussed. For the Ti/Au waveguides, once the LT-GaAs was transferred, a 1 μm-wide tapered region was then defined over the LT-GaAs layer. As the dimensions of the waveguide are reduced, the device becomes more sensitive to fabrication error, and therefore the region of the narrowest taper was defined using electron-beam lithography (EBL), with the rest of the device then defined by subsequent optical lithography. Ti/Au was used for the metallisation, with thicknesses of 5 nm/70 nm for the EBL step and 5 nm/150 nm for the optical lithography step. The transition between the optically and e-beam defined metal layers was designed to be as smooth as possible by means of an overlap between the EBL and optical lithography stages, to avoid impedance discontinuities. A 100 fs pulsed laser at a centre wavelength of 800 nm was split into two paths (pump and probe). The probe path length was adjusted with a retroreflector on a motorised delay line. A symmetric bias was applied across one pair of photoconductive switches in order to excite a THz pulse propagating in the odd CPW mode. To measure the THz pulses, a lock-in amplifier was used to monitor the current generated in the ground plane when the THz pulse arrived simultaneously with the laser probe pulse. The lock-in amplifier was referenced to the frequency of an optical chopper inserted in the probe beam path. 
Since the ground plane was used for detection, measurements could be made anywhere along the gap between the ground plane and the centre conductor, with the exact position of measurement being determined by the location of the laser probe beam. By moving the position of the probe beam, it was thus possible to track the progress of the THz pulse as it passed through the tapered CPW [Fig. 2(a)]. The success of the smooth transition between the optical and e-beam stages was verified by a lack of reflections originating from this overlap region (the small reflection signal seen in these traces originates from the far switch). Small variations in the performance of the transferred LT-GaAs were found between devices (affecting both the pulse amplitude and the signal-to-noise ratio), and it was therefore impractical to compare device attenuation directly with the HFSS modelled data. We note that the pulse shape remained consistent across all devices. In order to demonstrate unambiguously that the THz pulse is confined to the waveguide within the highly tapered section (rather than propagating in free space or in the dielectric), a tapered waveguide with a U-bend in the small tapered section was fabricated [inset in Fig. 2(b)]. The bend increased the transmission length by 110 μm, with the radius of curvature being 5.5 μm at the centre of the waveguide. This device was mounted on a translation stage with additional control in the z-direction. Measurements were made as described previously, but the device was then moved in 100 μm increments using the micrometer control rather than adjusting the probe beam. The pump beam was realigned after each change in the z-position. In Fig. 2(b), the position of the THz pulse peak is plotted against the measured device movement relative to its starting point. 
A clear jump in the THz pulse position is noted at the middle of the device, corresponding to the pulse then having to travel the increased distance through the bend section, thus confirming unambiguously that the electric field is indeed confined to the CPW even in its narrowest region. Assuming a constant pulse velocity through the entire waveguide, the extra 110 μm of CPW was expected to produce a delay of approximately 700 fs in the THz pulse arrival time. Experimentally, a delay of one picosecond was measured. This larger delay is most likely due to the effective change in the waveguide properties as the dimensions of the waveguide are reduced to a similar size to the LT-GaAs thickness, in effect changing the substrate permittivity from that of quartz to the much higher permittivity of GaAs. This was confirmed by modelling two waveguides in HFSS, one with a centre track of width 30 μm and the other with a centre track width of 1 μm. The effective permittivity of the waveguide at 1 THz increased from 2.6 to 5.5 with this reduction in the CPW size, respectively. The ratio of the two pulse velocities would then be v_30/v_1 = sqrt(ε_eff,1/ε_eff,30) = sqrt(5.5/2.6) = 1.45, which is equal, within experimental error, to the discrepancy in the arrival times, which gives a ratio of 1.4 ± 0.1. As expected, an examination of pulse amplitudes directly before and after the U-bend reveals the extra attenuation expected from an increased propagation length. The increase in attenuation was found to be approximately −3 dB compared to a straight tapered waveguide, although we note that the variability between devices may be in part responsible for this figure. In order to quantitatively demonstrate the increase in confinement and field strength in tapered regions in practical devices, CPW devices were then made with cobalt/copper multilayers [Co(35 Å)/Cu(10 Å)]×20 instead of Ti/Au. 
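The velocity-ratio argument above can be checked with the two effective permittivities quoted from the HFSS model: for a quasi-TEM mode the phase velocity scales as 1/sqrt(ε_eff), so the ratio follows directly.

```python
import math

# Effective permittivities from the HFSS model at 1 THz (values quoted in the text)
eps_30um = 2.6  # 30 um centre track: field mostly in the quartz substrate
eps_1um = 5.5   # 1 um centre track: field pulled into the GaAs layer

# Phase velocity of a quasi-TEM mode scales as 1/sqrt(eps_eff), so the
# ratio of pulse velocities in the wide and narrow sections is:
ratio = math.sqrt(eps_1um / eps_30um)

print(f"v_30 / v_1 = {ratio:.2f}")  # ≈ 1.45, vs the measured 1.4 ± 0.1
```

The computed 1.45 matches the measured arrival-time discrepancy of 1.4 ± 0.1 within experimental error, as stated in the text.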
8 For these devices, most of the transferred LT-GaAs was etched away using sulphuric acid, leaving only 70 μm × 70 μm squares to act as photoconductive switches. Optical lithography was then used to define CPWs with six switches and two sections of the track, one section with no taper and the other with a tapered region [Fig. 3(a)]. For comparison, three waveguides were made in the same growth and with the same recipe but with tapered regions where the centre conductor was reduced to 20 μm, 10 μm, and 5 μm. The dimensions of the 5 μm waveguide were chosen as they were close to the smallest which could be achieved using optical lithography and sputtering of the multilayered system. After metal deposition, the devices were found to exhibit a giant magnetoresistance (GMR) of 16% for DC measurements at room temperature. It has previously been shown that such changes in magnetoresistance have a direct effect on propagating THz pulses. 9 As the waveguide dimensions reduce, our model shows that the average electric field strength in the vicinity of (loss-inducing) metal increases. Given a cross-sectional area of A and a length of L, the power dissipation in unit volume AL will be I²R/(AL), which can be rewritten as J²/σ using J = I/A and R = L/(Aσ), where J is the current density, R is the resistance, and σ is the conductivity. Using Ohm's law (J = σE), the power dissipation can then be written in terms of the electric field, P = σE². Therefore, it is expected that the tapered region in the waveguide will induce greater dissipation due to increased field confinement. This will be discussed further below. THz pulses were transmitted through a section of the track chosen by the position of the applied bias. Pulses were measured at different magnetic fields, and a change in the pulse amplitude (but not arrival time) was noted. The delay line was then fixed so that the lock-in amplifier measured the current on the THz pulse peak continuously. 
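The dissipation identity used in this argument, I²R/(AL) = J²/σ = σE², can be verified numerically. A minimal sketch follows; the conductivity, geometry, and current values are illustrative placeholders, not parameters from the devices.

```python
# Numerical sanity check of the per-unit-volume dissipation identity:
# I^2 R / (A L) == J^2 / sigma == sigma E^2,
# with J = I/A, R = L/(sigma A) and Ohm's law E = J/sigma.
sigma = 5.8e7  # conductivity, S/m (illustrative, roughly copper)
A = 1e-12      # cross-sectional area, m^2 (illustrative)
L = 1e-6       # conductor length, m (illustrative)
I = 1e-3       # current, A (illustrative)

R = L / (sigma * A)
J = I / A
E = J / sigma

p1 = I**2 * R / (A * L)  # Joule heating per unit volume, from circuit quantities
p2 = J**2 / sigma        # same, from current density
p3 = sigma * E**2        # same, from the electric field

print(p1, p2, p3)  # all three agree
```

Because the identity is exact algebra, the agreement holds for any choice of the placeholder values.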
The external magnetic field was then used to modify the resistance of the sample, and the change in the THz pulse amplitude with the field was measured. This was performed for both the tapered section and an adjacent straight section of CPW, for comparison. The change in THz-pulse-generated current as a function of external field is shown in Fig. 3(b). The GMR response in the tapered section of the waveguide is 2.4 ± 0.2 times larger than that in the straight section of the waveguide directly adjacent to it. Devices with 10 μm and 20 μm tapers produced experimental ratios of 1.3 ± 0.1 and 0.97 ± 0.08, respectively. "Input" pulses were also measured, where the excitation and detection switches were directly adjacent to each other, separated by only ~50 μm. In these cases, there was practically no observable change in the measured current when the magnetic field was swept. To understand the field concentration effect, we consider the electric field in the waveguide E(x) as a function of position (x), where both attenuation and concentration will have an influence on its magnitude. To simplify the problem, E(x) is divided into components. In its simplest form, the electric field magnitude can be represented by E(x) = E_0 e^(−αx), where E_0 is the starting electric field magnitude and α is the attenuation constant representing all loss terms in a normal straight waveguide, including ohmic, dielectric, and radiative. On application of an external magnetic field, the conductivity of the metal in the waveguide changes, and the attenuation constant will also change: α′ = α + Δα, where Δα represents the change in ohmic loss for a change in conductivity. For a tapered waveguide, the electric field will be concentrated in the tapered region. This will increase the electric field magnitude near the loss-inducing metal but not increase the total electric field over all space. 
The electric field in a tapered waveguide is then E(x) = E_0 e^(−αx) e^(−Δαx) e^(−γ(x)x) (1 + E_c(x)). We introduce γ(x) here as an additional ohmic loss term that arises from the change in waveguide geometry. It is dependent on the metal resistance and thickness as well as on the degree of electric field concentration induced by the waveguide tapering. E_c(x) is the increase in the electric field due to the field concentration and is defined to be zero at either end of the waveguide. In this way, the increase in the electric field magnitude is accounted for, while the only permanent aspect of the concentration is the extra attenuation. The term E_c(x) can be determined through numerical simulations (using HFSS) using a method similar to that previously described. To isolate concentration effects, however, the HFSS simulation was adjusted so that the metal was a perfect conductor and there was no substrate. Additionally, the confinement factor was reduced to two in order to examine only the strongest field in direct proximity to the metal in the waveguide. The data were similar to those shown in Fig. 1(b), and the relationship between the electric field magnitude and the waveguide width, along with the knowledge of the waveguide geometry, allowed the calculation of the field concentration for each waveguide. The percentage change in power dissipation (%P) with changing conductivity is then considered, where %P = (P_max − P_min)/P_min. Recalling that P = σE², we then obtain the dissipation in each conductivity state, where σ_1 > σ_2 and Δα represents the increase in α as the conductivity changes. When calculating %P, most of the terms cancel out, and %P becomes (σ_2 E_2² e^(−2Δαx) − σ_1 E_1²)/(σ_1 E_1²). From experimental measurements, it is known that the input pulse changes very little with conductivity, and therefore, from Ohm's law, J_0 = σ_1 E_1 = σ_2 E_2. Using this, it is found that %P = (σ_1 e^(−2Δαx) − σ_2)/σ_2. 
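The cancellation that reduces %P to (σ_1 e^(−2Δαx) − σ_2)/σ_2 under the constant-current-density constraint can be checked numerically. The conductivities, Δα, and x below are arbitrary placeholders chosen only to exercise the algebra, not device values.

```python
import math

# Illustrative values: sigma_1 > sigma_2 (conductivity drops under applied
# field for a GMR stack); Delta-alpha and x are arbitrary for this check.
sigma1, sigma2 = 1.0e7, 0.9e7  # S/m, hypothetical
dalpha, x = 50.0, 1e-3         # 1/m and m, hypothetical

# The constant-current-density constraint J_0 = sigma1*E1 = sigma2*E2
J0 = 1.0
E1, E2 = J0 / sigma1, J0 / sigma2

P_min = sigma1 * E1**2                              # dissipation, high-conductivity state
P_max = sigma2 * E2**2 * math.exp(-2 * dalpha * x)  # dissipation, low-conductivity state

pct_P_direct = (P_max - P_min) / P_min                               # definition of %P
pct_P_reduced = (sigma1 * math.exp(-2 * dalpha * x) - sigma2) / sigma2  # reduced form

print(pct_P_direct, pct_P_reduced)  # identical up to rounding
```

The two expressions agree for any placeholder values, confirming that only the conductivities and the extra ohmic loss term survive the cancellation.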
The change in power dissipation with conductivity in tapered and untapered waveguides can then be expressed as the ratio R_P = (P_T %P)/(P_S %P), where P_T and P_S represent the total power dissipation in the tapered and straight sections of the waveguide, respectively. Since %P cancels, this ratio depends only on the total power dissipation values. However, since it is current, and not power dissipation, that is being measured, the square root needs to be taken in order to compare with the experimental data. The ratio of the current change in tapered and untapered waveguides is then R_J, where only k is unknown. Numerically solving R_J for a range of k values reveals that for k = 0.01, the ratio is equal to 2.2, 1.4, and 1.06 for the 5 μm, 10 μm, and 20 μm tapers, respectively, corresponding well to the experimental results of 2.4 ± 0.2, 1.3 ± 0.1, and 0.97 ± 0.08. Our analysis showed that the 1 μm taper waveguides studied would show an enhancement ratio of 2.8, while tapering to smaller dimensions would result in larger enhancement factors, consistent with the increased field concentration shown in Fig. 1(b). In conclusion, we have measured the time domain response of tapered on-chip coplanar THz waveguides with integrated photoconductors. Confinement to the waveguide in the narrowest sections of the CPW was demonstrated by transmitting a THz pulse through a U-bend to delay the signal. Increased field confinement in tapered waveguides formed from materials exhibiting GMR was also demonstrated. Tapered waveguides provide significant potential for studying the picosecond response of individual micron length-scale objects and systems, with the attendant field concentration potentially allowing non-linear effects to be accessed.
Effects of Heat Treatment and Yb3+ Concentration on the Downconversion Emission of Er3+/Yb3+ Co-Doped Transparent Silicate Glass-Ceramics Transparent SiO2-Al2O3-BaF2-TiO2-CaF2 silicate glass-ceramics containing BaF2 nanocrystals were successfully prepared by a heat treatment process through a conventional melting method. The effects of the heat treatment process and of the Yb3+ concentration on the downconversion (DC) emission of the Er3+/Yb3+ co-doped transparent silicate glass-ceramics were investigated. With increasing heat treatment temperatures and times, the DC emission intensity of the Er3+/Yb3+ co-doped glass-ceramics was significantly enhanced. At the same time, with increasing Yb3+ concentration, the DC intensity of the Er3+/Yb3+ co-doped bands centered at 849, 883 and 1533 nm is maximized when the Yb3+ concentration reaches 2.5 mol.%. When the concentration exceeds 2.5 mol.%, the DC emission intensity of these bands decreases, owing to the self-quenching effect. Interestingly, the DC emission intensity of the Er3+/Yb3+ co-doped band centered at 978 nm did not quench when the Yb3+ concentration exceeded 2.5 mol.%. The DC mechanism and the energy transfer (ET) processes between Yb3+ and Er3+ ions are also discussed. Introduction In recent years, silicon solar cells (Si-SC) have been widely used to produce electric energy and are considered a green and inexhaustible source of energy. Therefore, many studies have been conducted to enhance the emission spectrum available for Si-SC energy conversion [1][2][3][4]. Usually, there are two processes that contribute to increasing the usable solar cell (SC) spectrum: downconversion (DC) and upconversion (UC) by rare earth (RE3+) ions. Among them, the DC emission of singly doped Er3+, and of Er3+ co-doped with other RE3+ ions, is a promising way to increase the spectral efficiency of SCs [5][6][7]. 
In reality, the solar spectrum spans the wavelength range of 300-2500 nm 8, whereas the band-gap of the Si-SC converts only a small band around 1000 nm at full efficiency into electricity. The spectrum below the band-gap is not absorbed at all, and the spectrum above the band-gap is fully absorbed but is converted into electricity with high thermal losses. This spectral mismatch causes a major loss of energy. Therefore, researchers have been interested in improving the DC luminescence intensity of co-doped RE3+ ions to deliver the highest spectral efficiency for SC energy conversion 2,[9][10][11]. Among the existing trivalent RE3+ ions, Yb3+ has a relatively simple electronic structure of two energy-level manifolds: the 2F7/2 ground state and the 2F5/2 excited state around 1000 nm in the near-infrared (NIR) region, which is located just above the band-gap of the Si-SC 1,12. Similar to Yb3+, Er3+ is also one of the most efficient ions for enhancing the SC spectrum because it has a favorable energy level structure, with the 4I15/2 → 4I11/2 transition corresponding to NIR emission at about 980 nm. Therefore, enhancement of the DC emission can be achieved by co-doping Er3+/Yb3+, through the energy transfer (ET) process between Er3+ and Yb3+ ions. Thereupon the energy is transferred to two Yb3+ ions via a resonant ET process. Finally, the Yb3+ ions will emit the two required photons with the band-gap energy of the Si-SC 13. In 2009, L. Aarts et al. 14 investigated the DC emission for SCs in NaYF4: Er3+, Yb3+. Their results indicated that the desired DC process from the 4F7/2 level has very low efficiency due to fast multi-phonon relaxation from the 4F7/2 to the 4S3/2 level via the intermediate 2H11/2 level. Recently, the paper of M.B. de la Mora et al. 15 discussed materials for DC in SCs: perspectives and challenges. 
The results of that paper affirmed that, among different options, downconversion is an appealing way to harvest efficiency in solar cells because it permits optimization of the solar spectrum usage 15. The purpose is to improve photoluminescence efficiency for solar cell applications. In previous studies, we investigated the enhancement of the upconversion emission of Er3+/Yb3+ co-doped transparent silicate glass-ceramics containing BaF2 nanocrystals through the effects of Mn2+ concentrations 16 and heat treatment processes 17. In this work, we continue to investigate the effects of the heat treatment processes and the Yb3+ concentration on the DC emission intensity of the Er3+/Yb3+ co-doped transparent silicate glass-ceramics containing BaF2 nanocrystals. At the same time, the mechanism of DC and the ET processes between Yb3+ and Er3+ ions are also proposed and discussed. Experimental Details The glasses were prepared according to a conventional melt-quenching method, with the compositions listed in Table 1. Mixtures with a sufficient weight of approximately 10 g, compacted into a platinum crucible, were set in an electric furnace. The electric furnace used in this study was manufactured by Nabertherm, Germany. After holding at 1500 ºC for 45 min under an air atmosphere in the electric furnace, the melts were quenched by pouring them onto a polished stainless steel plate. The glass transition temperature (Tg) was determined by differential thermal analysis using a differential scanning calorimeter (DTA-60AH SHIMADZU) with a heating rate of 10 ºC/min under a nitrogen atmosphere. The samples were cut to a size of 10×10×2 mm3 and polished for optical measurements. To identify the crystallization phase, XRD (X-ray diffraction) analysis was carried out with a powder diffractometer (BRUKER AXS GMBH) using CuKα radiation. The sizes, shape, structure and compositions of the as-prepared nanocrystals were characterized by transmission electron microscopy (TEM, JEM-2100) at 200 kV. 
The reflectance spectra in the wavelength range of 350-1800 nm were measured on a Hitachi U-4100 spectrophotometer. The DC spectra in the wavelength range of 800-1650 nm and the lifetime curves were measured on an Edinburgh Instruments FLS980 fluorescence spectrometer using a µF920 microsecond flash lamp as the excitation source and detected using a liquid-nitrogen-cooled PbS detector upon excitation at 410 nm. All spectral, DTA, XRD and TEM measurements were conducted at ambient temperature. Results and Discussion To characterize the thermal stability of the prepared SiO2-Al2O3-BaF2-TiO2-CaF2 glass system, a DTA curve of the SEY-1 glass sample was measured and is shown in Fig. 1. As can be seen in this figure, the glass transition temperature (Tg) is located around 554 ºC, the first crystallization onset temperature is Tx1 = 675 ºC, and two crystallization peak temperatures (Tp1, Tp2) are located around 685 ºC and 773 ºC, respectively. Therefore, the transparent silicate glass-ceramics can be prepared by heat-treating near the first crystallization peak, around 665 ºC, by controlling the appropriate crystallization temperature and process. Besides, between ~710 ºC and 773 ºC, an endothermic reaction occurs. This marks the second crystallization onset temperature (Tx2), determined to be around 753 ºC. The difference ΔT between the crystallization onset temperature Tx1 and the glass transition temperature Tg (ΔT = Tx1 − Tg) is used as a rough indicator of glass thermal stability, and ΔT = 675 ºC − 554 ºC = 121 ºC > 100 ºC, indicating the prepared glass is stable and suitable for applications such as fiber amplifiers and solar cells. Based on the analysis of the DTA curve, all the prepared glasses were heat-treated within the range of 665 ºC to 773 ºC. However, when the glass-ceramics samples were heat-treated above 695 ºC, they were no longer transparent. 
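The thermal-stability criterion above (ΔT = Tx1 − Tg, with ΔT > 100 ºC taken as stable) reduces to a one-line check using the DTA values quoted in the text:

```python
# Thermal-stability indicator from the DTA data quoted above:
# dT = T_x1 - T_g, with dT > 100 C taken as the rough stability criterion.
T_g = 554   # glass transition temperature, deg C
T_x1 = 675  # first crystallization onset temperature, deg C

dT = T_x1 - T_g
print(f"dT = {dT} C -> {'stable' if dT > 100 else 'marginal'} glass")
```

For the SEY-1 composition this yields ΔT = 121 ºC, comfortably above the 100 ºC threshold.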
The optical images of the glass-ceramics samples heat-treated at ~600, 685, 695 and 773 ºC are shown in the inset of Fig. 1. The transparent silicate glass-ceramics were prepared and the nanocrystal structures in the glass-ceramics were monitored by XRD. The XRD patterns of the glass-ceramics after heat treatment at different temperatures are shown in Fig. 2(a). The results of Fig. 2(a) show that, as the processing temperature increased from 600 to 685 ºC, the crystal size of the BaF2 nanocrystals increased from 10.7 to 17.9 nm. The relationship between the crystal size and the heat treatment temperatures is shown in Fig. 2(b). Also from the results of Fig. 2(a), the precursor glass sample presents a broad diffraction curve characteristic of the amorphous state, while in the patterns of the transparent silicate glass-ceramics, intense diffraction peaks are clearly observed, indicating that microcrystallites are successfully precipitated during thermal treatment. The diffraction pattern of the crystalline phase is typical of a face-centered cubic structure, and the diffraction peaks around 2θ = 26º, 30º, 43º, 50º and 53º can be assigned respectively to the (111), (200), (220), (311) and (222) planes of the BaF2 cubic phase. The XRD patterns of the glass-ceramics after heat treatment for different times are shown in Fig. 2(c). The results of Fig. 2(c) show that, as the processing time increased from 10 to 30 h, the crystal size of the BaF2 nanocrystals increased from 17.6 to 19.9 nm. The relationship between the crystal size and the heat treatment times is shown in Fig. 2(d). The crystallite size D for a given (hkl) plane was estimated from the XRD patterns following the Scherrer equation, D = Kλ/(β cos θ), as shown in Figs. 2(c & d). Clearly, the increase of the heat treatment temperatures and times led to an increase in crystal size, similar to the results of our previous works [17][18][19]. 
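The Scherrer estimate used above can be sketched as a small helper. The wavelength is the standard Cu Kα value (matching the diffractometer described in the Experimental Details), but the shape factor K and the FWHM in the usage example are hypothetical illustrative values, not the fitted peak widths from the paper.

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D (nm) from the Scherrer equation D = K*lambda/(beta*cos(theta)).

    two_theta_deg: peak position 2-theta in degrees
    fwhm_deg: peak full width at half maximum in degrees
    wavelength_nm: X-ray wavelength (Cu K-alpha by default)
    K: shape factor (~0.9 for roughly spherical crystallites; assumed here)
    """
    theta = math.radians(two_theta_deg / 2)  # Bragg angle theta = 2theta / 2
    beta = math.radians(fwhm_deg)            # FWHM must be converted to radians
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative call: a peak near 2-theta = 26 deg (the BaF2 (111) reflection)
# with a hypothetical FWHM of 0.45 deg gives a size on the order of ~18 nm,
# comparable to the 18-19 nm seen in the TEM images.
print(f"{scherrer_size(26.0, 0.45):.1f} nm")
```

Narrower peaks (smaller FWHM) yield larger crystallite sizes, which is the trend behind the growth from 10.7 to 19.9 nm with increasing heat-treatment temperature and time.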
The TEM image of the SEY-0.2E2.5Y-685 transparent silicate glass-ceramics sample is shown in Fig. 3. It demonstrates that the BaF2 nanocrystals are distributed homogeneously in the glass matrix and that the mean size of the nanocrystals is about 18-19 nm, similar to the values calculated with the Scherrer equation. The HRTEM image of the SEY-0.2E2.5Y-685 transparent silicate glass-ceramics sample is shown in the inset of Fig. 3; from this image, the (111) lattice spacing was estimated to be about 0.334 nm. The DC emission spectra of the SEY-0.2E2.5Y-10h, SEY-0.2E2.5Y-15h, SEY-0.2E2.5Y-20h, SEY-0.2E2.5Y-25h, and SEY-0.2E2.5Y-30h transparent glass-ceramics samples under 410 nm excitation are shown in Fig. 6. As in the case of varying heat treatment temperatures, the DC emission intensities of the Er3+/Yb3+ co-doped bands centered at 824, 849, 883, 918, 978, 1265, and 1533 nm strongly increased with the increase of heat treatment time from 10 to 30 h. These results confirm that the heat treatment process greatly affects the DC emission intensity of Er3+/Yb3+ co-doped transparent silicate glass-ceramics. Furthermore, the effect of the Yb3+ concentration on the DC emission intensity of Er3+/Yb3+ co-doped transparent silicate glass-ceramics is presented as follows. The DC emission spectra of the SEY-0.2E0Y, SEY-0.2E1.0Y, SEY-0.2E1.5Y, SEY-0.2E2.0Y, SEY-0.2E2.5Y, and SEY-0.2E3.0Y transparent glass-ceramics samples under 410 nm excitation are shown in Fig. 7. As shown in Fig. 7, the Yb3+ ions act as an efficient sensitizer in the DC process. With the Er3+ concentration fixed, the DC emission intensities of the Er3+/Yb3+ co-doped bands centered at 849, 883, and 1533 nm strongly increased with increasing Yb3+ concentration and reached their maximum when the content of Yb2O3 is 2.5 mol. %. When the concentration exceeds 2.5 mol.
%, the DC emission intensities of the Er3+/Yb3+ co-doped bands centered at 849, 883, and 1533 nm decreased. This result is mainly attributed to the self-quenching effect: clusters or ion pairs of Yb3+ ions are likely formed at high Yb3+ concentration 21 . Further, the increase of the Yb3+ concentration enhances the probability of interaction between the Yb3+ ions and impurities, such as OH− impurities introduced from atmospheric moisture during melting 22 . Therefore, the Yb3+ ions could not effectively absorb the pumping energy, leading to the quenching of the DC emission intensities. The enhancement itself can be explained as follows: Firstly, as the molarity of Er3+ ions increased, the greater number of luminescent centers led to a significant increase of the emission intensity of the bands centered at 824, 849, 883, 918, 1265, and 1533 nm. Secondly, the possible ET from Yb3+ to Er3+ ions contributed to improving the emission intensity of the bands centered at 824, 849, 883, 918, 1265, and 1533 nm, while the emission intensity of the band centered at 978 nm decreased. The mechanism of the ET from Yb3+ to Er3+ ions was proposed in the section above. Conclusions In this study, the effects of heat treatment and Yb3+ concentration on the DC emission of Er3+/Yb3+ co-doped transparent silicate glass-ceramics containing BaF2 nanocrystals were investigated. Compared with the precursor glass, the DC luminescence of the Er3+/Yb3+ co-doped transparent glass-ceramics is significantly enhanced after heat treatment at varied temperatures and times. With the increase of the Yb3+ concentration, the DC emission intensities of the Er3+/Yb3+ co-doped bands centered at 849, 883, and 1533 nm strongly increased and reached their maximum at 2.5 mol. % Yb3+ concentration. When the concentration exceeds 2.5 mol.
%, the DC emission intensities of the Er3+/Yb3+ co-doped bands centered at 849, 883, and 1533 nm decreased, owing to the self-quenching effect. In contrast, the DC emission intensity of the band centered at 978 nm, corresponding to the 4I11/2 → 4I15/2 transition of Er3+ and the 2F5/2 → 2F7/2 transition of Yb3+, did not quench when the Yb3+ concentration exceeded 2.5 mol. %. At the same time, we deem that there is possibly an energy transfer process from the 2F5/2 → 2F7/2 transition of Yb3+ to the 4I11/2 → 4I15/2 and 4F9/2 → 4I13/2 transitions of the Er3+ ions. In addition, the data presented in this study might provide useful information for the further development of DC in transparent silicate glass-ceramics associated with the ET between Yb3+ and Er3+ ions. These materials are promising for applications in enhancing the conversion efficiency of solar cells (SC).
The Motive of Discrepancy in Hryhorii Skovoroda’s Works The article explores the discrepancy of the form and the content as a philosophical, moral, and axiological problem in the works of Hryhorii Skovoroda. Using phenomenological reading and structural analysis, the author investigates the interaction between the form and the content in treatises, soliloquies, poems, and letters of Skovoroda. The intellectual and aesthetic background of the Baroque epoch to a large extent explains why this motive of discrepancy occupies a prominent place in the writings of the Ukrainian philosopher. The article analyzes the main plots in which the discrepancy is revealed: vocation against non-congenial work; a real friend and a flunkey; the truth and a false thing; and the heavenly and an earthly city. After considering all these aspects and other crucial issues, it is concluded what makes the problem of discrepancy an invariant motive in the works of Hryhorii Skovoroda. Introduction Baroque is an epoch of oppositions and irrationality, according to the theory of cultural waves by Dmytro Chyzhevskyi. 1 Baroque aesthetic and philosophical thought is characterized by a combination and struggle of contrasts, stylistic and emotional excesses. This explains why, in the search for new poetry, the Baroque authors paid careful attention to the formal side of creativity, along with the content. Moreover, the form sometimes prevailed over the content in the works of Baroque writers. The experiments with curious poems by Ivan Velychkovskyi are a bright example of that. 2 Baroque authors elaborated a considerable number of poetic treatises - lists of rules for writers about composition, versification, and stylistics. The Latin poetic treatise called The Poetical Garden (Hortus Poeticus), written by Mytrofan Dovhalevskyi, a professor of the Kyiv-Mohyla Academy, 3 is probably the most famous one.
It can be noticed that the Baroque is quite an autoreflexive epoch, in which pondering over the process of creative writing was an essential part of it. Even the delivery of sermons was given theoretical reflection in the treatise of the preacher Yoanykii Haliatovskyi called Nauka, Albo Sposib Zlozhennya Kazannya (The Instruction, or a Way of Composing a Sermon). 4 It is worth noting that the key metaphor of the Baroque epoch is a book as an embodiment of the world. Nevertheless, few of the Baroque thinkers promoted the unity of form and content in philosophical categories. Hryhorii Skovoroda is frequently called the last author of the Ukrainian Baroque epoch, who managed to rise above it and analytically examine the most distinctive feature of the Baroque - the feature that eventually became a weakness of this style, reaching its peak and giving birth to a generation of epigones imitating true masters (similar to the kotlyarevshchyna phenomenon some decades later). Many studies describing the ideas of cognition and education in the writings of Hryhorii Skovoroda have been published by now. 5 Numerous studies have attempted to explain the axiological dimension in Skovoroda's philosophy, 6 and the categories of time and space in the worldview of the Ukrainian writer. 7 A large and growing body of literature has investigated the comparative aspects of Skovoroda's heritage, especially the Antique borrowings and patristic allusions. 8 Several attempts were also made to systematize and generalize the considerable amount of literature published on Skovoroda. In 1920 Ushkalov conducted a fundamental index of research on the life and heritage of Skovoroda. 10 However, what we know about Skovoroda's attitude to the fundamental interaction of the categories of form and content is largely based upon occasional mentions.
For instance, Myroslav Popovych, opposing the theory of the cordocentric Ukrainian culture, profoundly states that Skovoroda's "Everyone is that whose heart is in it" does not claim sensuality and sensitivity as contradictions to reason and ratio, but the need to reconcile the human way of life and the essence of the person. 11 That is what the European scholastic tradition called the unity of the essence and the existence. Theoretical Framework Since the emergence and the openness of a structure replaced determinism in the structural analysis of the text, the text has been considered as characterized by nonsystematic values. In this article, the structure is understood as a synchronous section of any system. From this point of view, the text is based on three determinants: the sign, the code, and the discourse. The first expresses the meaning of a particular text; the second is used to decipher the purport; while the third is needed to contextualize the senses in the chronotope of other texts. Roman Ingarden 12 highlights two aspects of the text: on the one hand, it has hierarchy and linearity, which determine the teleological side of the text. On the other hand, the text is characterized by discreteness and segmentation, which stand for the aesthetical side of the text. The dominant itself forms the central focus of a study by Roman Jakobson. 13 The researcher finds a focusing element of the text that preserves the integrity of meaning within any transformation. Thus, this research is methodologically focused on a phenomenological reading and structural analysis of the system of oppositions and structures in Skovoroda's writings. The main ones are the following: the top - the bottom, the form - the content, the affinity - the futility, the truth - the deception, the sacred - the profane, and the earthly - the heavenly.
The research material includes the epistolary of Skovoroda, the collection of poems Sad Bozhestvennykh Pisen (The Garden of Divine Songs), his philosophical treatises, and soliloquies. At the time of exchanging correspondence with Mykhailo Kovalynskyi (1762-1765), Hryhorii Skovoroda's worldview was at the crossroads of classical and nonclassical rationality. On the one hand, the philosopher thinks in terms of the normative vision of things (in particular, in numerous passages against a flunkey personality). On the other hand, Skovoroda restores the rights of subjectivity, which leads him to opposing the classical rationality and focusing on the de-subjectification and deindividualization of the inner world. The Ukrainian Baroque philosopher shares the view that esprit is not a universal norm or a requirement of a certain standard, but rather the ability to go beyond the norm, to enter the realm of reflection. However, his thinking on the unity of the form and the content is typologically closer to Hegel's 14 than to Adorno's system. 15 This is explained by the requirement of the absolute maturity and concision of the main categories. To Be or to Pretend: Vocation and Subservience The doctrine of congenial work, which is central in Skovoroda's ethical system, expresses his faith in the possibility of self-fulfillment for everybody in this world. Therefore, the state of being happy is treated as pursuing one's vocation given by God, regardless of external rewards. Furthermore, since vocations are distributed in a particular way in order to ensure a social order, adopting an uncongenial task leads to social discord and unhappiness, while pursuing wealth, glory, or pleasure through uncongenial work is a short road to despair.
16 Reflecting on congenial work as the major good, Skovoroda emphasizes the unreasonableness of the desire to pretend to be someone else, to violate the hierarchy, to be out of the place determined by nature, and not to correspond to one's essence. Thus, the embodiments of inconsistency between the form and the content in Skovoroda's writing (for example, the monkey Pyshek from the treatise Vdiachnyi Erodii [Grateful Erodii]) can be compared with Baudrillard's simulacrum of the first order - a forgery. Skovoroda treats an unreasonable desire to pretend to be someone else as an unwise activity: "Just as merchants take precautions not to buy bad and spoiled goods under the guise of fresh goods, similarly we need to take the utmost care, so that, choosing friends, ...due to negligence not to come across something fake and imaginary, which is called a flunkey, and not to get, according to the proverb, instead of pure gold... a forgery of copper [Quemadmodum mercatores summo studio cavere solent, ne sub specie bonarum malas damnosasque emant merces, ita nobis videndum...]." The philosopher appeals to Aesop's fable The Jackdaw and Other Birds when referring to Plutarch: "Immortal God! How does he describe the friendship! How vividly does he depict a crow decorated with someone else's feathers, including the most cunning flunkey who pretends to be a friend [Deum immortalem! quam commendat amicitiam! Quam graphice depingit corniculam alienis plumis ornatam, id est, vaferrimum adulatorem, amici larva tectum]." 18 Skovoroda revisits Plutarch's reasoning "How to distinguish a friend from a flunkey" in a broader sense: how to avoid entangling veracity with deceit. A negatively connotated feature of deceit is its variability, or the ability to adapt to the original: "It is said that monkeys, trying to imitate humans, adopt their movements and reproduce their dances. A flunkey, imitating others, deceives them, seduces them, but not everybody in the same way.
He dances and sings with someone... If he deals with a young man keen on literary and scientific studies, then he reads books all the time, growing his beard... and omitting entertainment [Simias ajunt capi, dum homines imitari conantes, eorum motus et saltationes adsectantur. Adulator autem alios imitando decipit atque illicit, non eodem omnes modo. Cum aliis saltat atque cantat, aliis palaestrae se et exercitationum corporis socium adjungit. Quodsi adeptus est literis et disciplinis deditum adolescentem, totus jam in libris est, barbam... demittit, rerum delectum omittit]." 19 Deception, or deliberate inconsistency with one's essence, is probably the greatest flaw in a human being, according to Skovoroda. The philosopher states the benefits of choosing sincere, constant, and simple friends. By simple friends he means not non-intellectual people, but open-hearted, non-lying, non-deceitful, and non-empty ones. The Nightmares of Mind in Skovoroda's Philosophy The question of the world's structure was a hotly debated topic at the times of the Ukrainian Baroque. The canonical Christian ontology presents three worlds: the created (material) world, the heavenly world as its ideal essence, and the timeless and extraterrestrial world of God. At the same time, Joseph Turobojskyi, a professor of the Kyiv-Mohyla Academy in the early 18th century, identified three principles of being: the matter, the form, and the chaos. According to his theory, the chaos corresponds to a deprivation of the form, the order - to the form, and the matter - to neither of them. 20 Skovoroda's ontological notions include the microcosm, the macrocosm, and the world of the Bible. In this triad, the last element - the world of sacral symbols - is especially important for our consideration, as long as it is a kind of mediation between essence and existence. Lamenting about the prevailing delusion, Skovoroda cites a poem by Pier Angelo Manzolli: "The world is a barrier for fools and a mess of vices."
Therefore, the philosopher advises avoiding the grassroots defilement: "What is more blissful than the mind, purified from earthly thoughts, which sees God himself [Quid mente terrenis cogitationibus pura beatius, quae deum illum cernit]?" 21 It is noteworthy that Skovoroda contrasts living people prone to deception with the most constant and non-treacherous friends - paper books: "Therefore, I consider it most correct to make friends of the dead, that is, sacred books. Because among the living people there are such cunning, cheating, and dishonest rascals that barefacedly deceive the young man [Consultissimum igitur arbitror parare amicos mortuos, id est sanctos libros. Sunt ex vivis usque adeo callidi versutique et nequissimi veteratores, ut adolescentem videntem et viventem palam in os decipiant]." 22 The philosopher appeals to Plato's Republic, claiming that one cannot constantly profess virtue if one does not have strict convictions on what to strive for and what to avoid. In a poem on the Pentecost Day, when the Holy Spirit descended on the apostles, Skovoroda identifies the need to renew the discourse and come out of the Platonic cave: "The language of an oafish mob is highfaluting and full of greed. Do you think this is a new language? / No, it is an old language... / Let me come out of the cave, where the abject mob lives! / Let me live in heaven, / where the new earth shines! [Ambitiosa loquuntur ubique, loquuntur avara: / Haec nova verba putas? lingua vetusta quidem haec. / Regnat ubique scelus luxusque et spurca cupido: / Haec nova facta putas? Facta vetusta quidem haec... / Exime de specu vulgi miserabilis hujus: / Insere coelicolis, qua nova terra nitet]." 23 Instead, the Holy Spirit creates both a new language and new things. Furthermore, in one of his letters, Skovoroda points out the means to express the phenomenon which will later be called simulacrum - the truth that hides its absence.
Mykhailo Kovalynskyi asked his teacher how to convey in Latin the meaning of the Ukrainian proverb "Aby sia kurylo," which means that it is sometimes enough to have a barely noticeable formal sign for making an assumption (sometimes false) about the existence of a whole phenomenon. Skovoroda offers the following Latin translation: "mihi umbra sufficit, sive titulis, sive imago - shadows, names, or images are enough for me... Our proverb in this case states that it is enough to see smoke, even if there is no fire. Thus, here is the same case, when a shadow arises instead of a body, a sign - instead of a thing [mihi umbra sufficit, sive titulis, sive imago. Graeci παροιμιαστί dicunt: ‛ως τύπω. Ait ibi nostras adagium sibi sufficere fumum, licet ad flammam non est progressum. Sic et hic umbra pro corpore, titulus pro re]." 24 This thesis directly correlates with Skovoroda's reasoning about the perception of the category of time. The philosopher asserts the maxim that it is unwise to belittle the present, as far as it is the only object we have, because neither the past nor the future actually exists for humans. "While hoping for the future, we neglect the present: we strive for the defunct, and we neglect the current things, as if what is passing can go back or what we hope for ought to come true [Futurum speratur, praesens temnitur; captatur, quod non est, quod adest negligitur, tanquam aut praeterfluens redire, aut futurum certo posset obtingere]." 25 This idea is consistent with Skovoroda's broader concept of happiness in general: a happy person is not the one who still wants something better, but the one who is happy with what he or she already has. On the contrary, the average person tends to dream of the non-existent. 20 Popovych, "Hryhorii Skovoroda," 101. 21 Skovoroda, "Lysty do M. Kovalynskoho," 253. 22 Skovoroda, "Lysty do M. Kovalynskoho," 251. 23 Skovoroda, "Lysty do M. Kovalynskoho," 295.
It is noteworthy that a similar statement was made by Pascal: "We never keep to the present. We recall the past; we anticipate the future as if we found it too slow in coming and were trying to hurry it up, or we recall the past as if to stay its too rapid flight. We are so unwise that we wander about in times that do not belong to us, and do not think of the only one that does; so vain that we dream of times that are not and blindly flee the only one that exists." 26 An example of the antinomy of thinking is Skovoroda's reflection on moderation as a good. The student asked him: should we really restrain ourselves in virtue? "...If this is not the case, then care should not be taken in moderation either. Why does one outweigh the other in virtue [...In diligentia non esse modum necessarium. Sin, cur alius alium in virtute excellit]?" 27 Skovoroda, paying tribute to the depth of this thought, replies that there are two kinds of virtue. The first - virtus - can be compared with a palace, a stronghold of virtue which has no measure because of no saturation, and its Master is God. Faith, hope, and love are examples of such virtues. It is worth striving for them without restraining yourself in any way, even if a person is not able to reach them completely. To get closer to this palace, there are "the virtues of the second order" - means such as the knowledge of the Greek and Roman literature, obtained through night studying; the escape from the crowd and worldly affairs; the contempt for wealth; the fasting and moderation in general. In these activities, it makes sense to keep continence in order to achieve those virtues in which no measure is recognized. The intemperance in the virtues of this kind is a manifestation of foolishness.
"Otherwise, if you spoil your eyes or lungs during one night, how will you be able to read and talk to the saints after that [Alioquin si unica nocte per immodicas vigilias aut oculos aut pulmonem laedas, quomodo deinceps legere ac colloqui cum sanctis poteris]?" - asks the philosopher. 28 Thus, Skovoroda assumes that expedient behavior means being bold in aspirations and cautious in their implementation: "Is it reasonable for someone who, starting a long way, does not keep the measure in walking [An non stultescit, qui longum iter ingressus, modum in eundo non tenet]?" 29 Reflecting on affectivity as a cause of the eclipse of the esprit, Skovoroda quotes Boethius's The Consolation of Philosophy: Joy, hope and fear / Suffer not near, / Drive grief away: / Shackled and blind / And lost is the mind / Where these have sway. 30 However, the philosopher gives his own interpretation of the words of his Roman counterpart by "turning" the metaphor. Boethius has a negatively connoted esprit, which is "harnessed" in the captivity of passions. Skovoroda instead claims that it is the esprit which "harnesses and drives" human weaknesses: "The closest to them [saints] is the one who persistently fights with the affects and restrains them with the bridle of the esprit, like the wild horses [Proximo loco est ille, qui cum his strenue pugnat ac velut equos ferocissimos moderatur freno rationis]." 31 It is essential that the peace of the mind in Skovoroda's hierarchy sometimes even exceeds the mind itself: "Days and nights, leaving everything behind, we will strive to direct every thought [...] to that world which transcends all the reasons and which is 'a world being higher than all reasons' [...] The colder becomes the heat of the sinister passions, the closer we get to this divine stronghold [Dies noctesque omnibus relictis contendamus, mi carissime, ad illam pacem omnem mentem supereminentem, quae est 'myr, vsiak um preimushchyi' ...
Quo magis cupiditatum vulgarium aestus desidit, hoc proprius accedimus ad arcem illam dei]." 32 Skovoroda condemns the arrogance of a supersaturated esprit, claiming that the man who has read many sacred books has insatiable pride in his heart and grown ambition. 33 In gastronomic categories, Skovoroda investigates the human inability to assimilate the divine knowledge: "The Christians eat the body of the God-man, drink His blood, / but, being stupid, cannot consume it [Vulgus christicolum consumit membra θεάνδρου, / Sanguine potatur; sed male stulta coquit...]." 34 Reflecting on the futility of human efforts to understand the highest truth, he remarks, "Why do you take the seeds, when you do not bear fruit [Curve capis semen, si tibi fructus abest]?" 35 Instead, a true sage is aware of one's limitations and, therefore, becomes modest. The esprit, acquainted with its insularity, is able to foresee something alternative to itself and turn what is not yet conceivable into something existing and meaningful. After all, thinking arises from being surprised by the existence of the impossible, the unthinkable. For the wise, it is not shameful to descend into a sphere that is usually considered grassroots and unworthy of attention: "A sage has to find gold even in the manure. Is Christ for sinners the fall, not the resurrection [Attamen sapientis est e stercore aurum legere. An non Christus improbis padenie, non 'ανάστασις]?" 36 According to the Georgian philosopher Merab Mamardashvili, self-reflexivity is a feature of consciousness: we adequately know the external world, provided that we simultaneously embrace the cognitive operation by which we cognize. Skovoroda appeals to self-reflexivity in the process of thinking, in particular, demonstrating the thirst for constant action of mind.
The philosopher claims that our mind does not stop being active even for a moment; it always needs to do at least something, and when it does not have a good trouble, in an instant it will turn to bad things. Skovoroda warns his student of the boredom and temptations of mind that lurk in times of holidays: "If you do not arm yourself against it, be careful of this creature, which may push you not from the bridge, as a saying goes, but from virtue to evil [Ad quod nisi te armas, vide ne in mala te animal detrudat, non de ponte, ut ajunt, sed de virtute]." 37 The Heavenly and the Earthly City The philosophical basis of Hryhorii Skovoroda's view of the concept of a city is a synthesis of Plato's image of the cave, in which everything visible to a human is only a faded shadow of the eternal, and the Augustinian opposition of two cities - the earthly and the heavenly one. 38 In the soliloquium Bran Arkhystratyha Mykhayila so Satanoyu (The Archangel Michael's Struggle with Satan), Skovoroda notes that the world has two parts - the lower and the upper, the cursed and the blessed, and the devilish and the lordly ones. 39 In the collection of poems The Garden of Divine Songs, the earthly city is depicted with epithets: full of sadness (song 12), crowded (song 13), the stormy sea of the world (song 14). However, the object of Skovoroda's condemnation is not a particular city. The locus of the earthly city is endowed with a high degree of conventionality. The world is opposed to heaven almost the same way as hell is opposed to heaven. Skovoroda uses the biblical image of the whore-city, which entices one to deviate from the righteous way to salvation. The image of the earthly world is generalized because it is not about a specific city, but rather about the way of life, the spiritual path that a person chooses, caring about material goods. Instead, the Heavenly City is not an imaginary space, but a concrete one, localized in the human soul.
Everyone has a city in themselves, because heaven and hell are already embedded in the human soul, not somewhere outside it: "Where is that beautiful city? / You yourself are the city, having expelled the poison from your soul, / The temple and the city of the Holy Spirit [Gdie jest tot prekrasnyi hrad? / Sam ty hrad, z dush von vyhnav yad, / Sviatomu dukhu khram i hrad]." 40 The treatise The Archangel Michael's Struggle with Satan contains a song of hypocrites who pray to God, debunking their own unbelief. Other actions derided by the author include walking on pilgrimage, i.e. searching for grace outside one's own heart: "We roam the holy cities, / We pray both at home and there. / Though we do not pay attention to the psalter, / We know it by heart. / And you have forgotten all of us [Stranstvuiem po sviatym hradam, / Molimsia i doma i tam. / Khot psaltyri nie vnimaiem, / No naizust yeii znaiem. / I zabyl ty vsikh nas]." 41 The idea that happiness is attainable everywhere, and that there is no more or less auspicious place for a righteous life, is expressed in the treatise The Entrance Door to the Christian Virtue. The philosopher asks: "What would happen then if happiness was confined by God in America, or in the Canary Islands, or in the Asian Jerusalem, or in the royal palaces.." These places are difficult to reach - and therefore it is no good to do it, so "thanks be to blessed God that made the difficult unnecessary." Conclusions The social conditionality of the consciousness and the need to free the mind from the outside factors which distort it have been highlighted. Skovoroda advised his readers to avoid dogmatic thinking. After all, internal authenticities ("truths") are only the basic formations of consciousness that ensure its stability, the ability to resist external manipulation, and the socially organized coercion to the illusion. According to Skovoroda, it is unreasonable to seek solitude if there is nothing to fill it with.
Aristotle defines a lonely individual as either a wild beast or a god. It means that loneliness is death for ordinary people, but a pleasure for those who are either completely stupid or outstandingly wise. For the former, the desert is pleasant due to its silence and immobility - so it is unwise to seek a space that cannot be filled with thoughts. Meanwhile, the others are at a restless feast, creating, without disturbing their peace, the whole world. The desire for solitude can be connoted positively when it comes to the sage. To the question of what philosophy is, Skovoroda answers: to be alone with yourself and to be able to have a conversation with yourself. So only the void that can be filled with meaning is valuable. Instead, those who seek complete solitude for its own sake, not being able to fill it with thoughts, act unwisely, because the excess causes oversaturation, the oversaturation leads to boredom, the boredom - to mental sorrow, and those who suffer from it cannot be called healthy. In one of his letters, Skovoroda refers Kovalynskyi to the words of Jerome of Stridon: "Does the endless emptiness of loneliness frighten you? Walk with your thoughts in the Garden of Eden." An unambiguous relationship exists between the issues considered in the article: the essence and its realization in the congenial work; a flunkey and the real friend; the truth in contrast to a fallacy; and finally the heavenly city and its earthly equivalent. All of them can be defined as invariant motives - the elements (events, situations, modes of attitude to the reality, or characteristics of people and objects) that underlie the deployment of any text. The mentioned invariant motives in Skovoroda's writings are involved in the development of one plot - the mismatch of the form and the content as a factor in the emergence of unreasonableness.
This invariant motive of discrepancy is embodied in the social (friendship, vocation), axiological (truth and falsehood), and religious (the heavenly and the earthly city) planes. Finally, appealing to Northrop Frye's 43 concept of monomyth - the key motif of the text, consisting in the character's dream of a golden age, a return to
A Combined Numerical and Experimental Investigation of Cycle-to-Cycle Variations in an Optically Accessible Spark-Ignition Engine A combined numerical and experimental investigation is carried out to analyze the cycle-to-cycle variations (CCV) in an optically accessible spark-ignition engine with port fuel injection. A stable and an unstable operating condition are considered. Well-established turbulence, combustion, and ignition models are employed in the large-eddy simulations (LES). High-speed measurements of the velocity field via particle image velocimetry and flame imaging in the tumble plane are conducted in the experiments. A detailed comparison between LES and experiments is carried out, including the in-cylinder pressure, the flow fields, the spatial flame distribution, and the fields conditioned on fast and slow cycles. Good agreement is achieved for the variables considering all cycles; yet, some discrepancies are observed for the conditionally averaged quantities. A systematic quantitative correlation analysis between the selected influencing variables and the CCV is presented, in which the influencing variables are extracted at different length scales (r = 3 mm, 12 mm, and 43 mm) and the CCV are distinguished between the early flame kernel development and the later flame propagation. Even though the most relevant influencing parameters are different for the two operating conditions, the location of the coherent vortex structure is found to be important for the CCV of both cases. Introduction Environmental pollution and global warming have prompted the need for high-efficiency and low-emission propulsion technologies. Besides electric powertrains in vehicles, internal combustion engines (ICE) will still be used in the near future due to the advanced state of ICE technologies, including their reliability, durability, and the existing mature supply chain, manufacturing infrastructure, and recycling facilities (Leach et al. 2020).
Indeed, ICE operated with non-fossil fuels (e.g. e-fuels, biofuels, or hydrogen) will play an important role in the future transport sector, especially in combination with electrification for heavy-duty vehicles (Onorati et al. 2022). To reduce greenhouse gas emissions, high thermal efficiencies are essential for ICE. Concepts such as exhaust gas recirculation (EGR) (Fontana and Galloni 2010; Kargul et al. 2019) and ultra lean-burn (Luszcz et al. 2018; Ye et al. 2021) are promising to improve the efficiency of spark-ignition (SI) engines. However, cycle-to-cycle variations (CCV), which are characterized by the non-repeatability of combustion events for consecutive engine cycles under the same nominal operating conditions (Heywood 2018), represent a limit to the achievable dilution level of stable operation, and thus limit the improvements of engine efficiency and emissions. The predictive description of CCV and a fundamental understanding of its causes are still limited but are essential for the development of high-efficiency ICE. The significant enhancement of engine diagnostics enables detailed experimental investigations of the in-cylinder processes and CCV (Buschbeck et al. 2012; Zeng et al. 2015; Bode et al. 2017; Zeng et al. 2019). Important sources of CCV were identified, which include the kinetic energy of the flow field at stoichiometric operating conditions (Buschbeck et al. 2012), the flow-spray interaction for direct-injection spark-ignition (DISI) engines (Zeng et al. 2015), and flow features in specific regions (Bode et al. 2017; Zeng et al. 2019). Experimental studies typically focus on global quantities, such as the in-cylinder pressure or heat release rate, and two-dimensional quantities, such as the mixture composition, velocity, and flame location in selected planes, due to the limited access to the three-dimensional fields.
However, the availability of in-cylinder optical measurements allows a detailed comparison with numerical studies, where the three-dimensional fields are available. Such combined numerical and experimental investigations are promising for gaining further understanding of CCV. The advance of large-eddy simulation (LES) as an inherently unsteady simulation technique, reflecting the stochastic nature of turbulent flows, enables numerical studies on CCV, such as qualitative analyses considering selected fast and slow cycles and systematic quantitative correlation analyses. Granet et al. (2012) carried out LES of stable and unstable engine operating conditions and demonstrated that LES can be used to distinguish a stable operating condition from an unstable one. Zembi et al. (2019) performed LES of three engine operating conditions with different values of the relative air-fuel ratio. It was shown that LES has the capability to predict the transition from stable to unstable lean operating conditions. Enaux et al. (2011) reproduced the range of the variations of the in-cylinder pressure observed in experiments using LES. By comparing selected typical fast and slow cycles, it was found that the velocity fluctuations at the spark plug play a significant role for the early flame kernel development and the overall combustion. Using LES of a stable engine operating condition, Zembi et al. (2022) also showed that valuable indicators of the combustion rate can be derived based on the local flow fields around the spark plug. Under the investigated conditions, greater velocities lead to faster combustion processes. Zhao et al. (2018) conducted correlation studies using LES and showed that the velocity field close to the spark plug determines differences in flame propagation from cycle to cycle. Truffin et al.
(2015) performed correlation studies to systematically investigate the influence of the global (referring to the whole cylinder) and local (referring to only the region close to the spark plug) variables at spark timing, such as the averaged local and global temperature, pressure, and flow features, on the indicated mean effective pressure (IMEP). Significant correlations were identified and it was found that the most relevant influencing variables are notably different for varying operating conditions. Using LES of two distinctly different engines, d'Adamo et al. (2019) demonstrated significant correlations between the global as well as the local variables and the duration of the first 1% of mass fraction burned (MFB1). They also found that the most influencing factors for determining CCV are case specific. In the literature, reactive LES results for CCV investigations are commonly validated against experiments using global variables, such as the in-cylinder pressure, or mean velocity fields as well as flame images. However, detailed comparisons between LES and experiments, especially including both the velocity fields and flame images conditioned on the fast and slow cycles, are rare. The correlation analyses of CCV usually focus on a single combustion parameter, such as the IMEP, MFB1, or the maximal in-cylinder pressure. In addition, the large-scale flow features are typically represented by global quantities, such as the tumble and swirl ratio based on the center of in-cylinder mass considering the gas motion inside the whole cylinder, which are not suitable for some engine geometries or operating conditions. A systematic analysis of multiple aspects is expected to further increase the understanding of CCV, such as the separate evaluation of combustion phases in combination with their main different governing effects and a proper description of the coherent vortex structure.
The present study aims to systematically assess the influence of variables extracted from different length scales on the CCV of different combustion phases. For this purpose, a combined numerical and experimental investigation of an optically accessible SI engine with a detailed comparison between LES and experiments has been carried out. The comparison includes the flow fields, the spatial flame distribution, and especially the fields conditioned on the fast and slow cycles. The method presented by Buhl et al. (2017) has been employed to detect the three-dimensional vortex structure in the complex in-cylinder flows. Stable and unstable operating conditions have been considered. The remainder of the paper is organized in the following manner. First, the experimental methods and operating conditions are given in Sect. 2, followed by the numerical methods in Sect. 3. Then, a detailed comparison between LES and experiments, including the in-cylinder pressure, the velocity fields, and the flame propagation, is presented in Sect. 4. Afterwards, CCV of the early flame kernel development and the flame propagation are analyzed along with their sources in Sect. 5. Finally, the paper finishes with a summary and conclusions. Experimental Methods and Operating Conditions The engine considered in this study is the well-characterized single-cylinder four-valve optically accessible engine at TU Darmstadt (Baum et al. 2014). The research engine employs a Bowditch piston extension, a flat quartz glass piston surface, and a quartz glass cylinder liner to allow optical access from the bottom and the sides of the cylinder. The engine's spray-guided cylinder head configuration with a pent-roof and a bore and stroke each of 86 mm results in a compression ratio of 8.7:1. Using two large plenums upstream of the intake pipe, vibrations and acoustic oscillations are controlled to provide precise boundary conditions.
Two part-load operating conditions with an intake pressure of 0.4 bar and an engine speed of 1500 rpm were selected to represent a stable (stab) and an unstable (unst) condition for fired engine experiments with stoichiometric iso-octane fuel introduced via port fuel injection. The stab case is characterized by the continuous firing of a stoichiometric fuel-air mixture. For the unst case, nitrogen and carbon dioxide are mixed with the intake air to simulate 12.9% external EGR. To ensure a constant amount of total EGR, the unst case employed skip-firing in which only every 7th cycle was ignited in order to flush out residual exhaust gases due to internal EGR before the next ignition cycle. Furthermore, the unst case has an earlier ignition timing. The ignition timing of both conditions was optimized such that the average cycle's 50% mass fraction burned (MFB50) occurred at 8 crank-angle degrees (CAD). In this study, 0 CAD stands for compression top dead center. The resulting coefficient of variation (COV) of the maximum in-cylinder pressure for the unst case is 12.8% compared to 3.51% for the stab case. Although the stab case has a higher amount of residual gases due to the internally induced EGR by the valve overlap, the condition is nevertheless more stable than the carefully controlled external EGR case due to the different spark timings. The later spark timing of the stab case results in a stable operation and an average exhaust temperature more than 250 °C greater than for the unst case. Engine specifications and operating conditions are summarized in Table 1. High-speed planar Mie scattering of oil droplets was used to measure the velocity field via particle image velocimetry (PIV). The burned gas areas where oil droplets were evaporated represent the position of the flame. The velocity and burned gas measurements of the stab case were conducted using the setup (and same vector processing) as described in detail by Hanuschkin et al. (2021) and Dreher et al.
(2021), who measured image pairs from −360 to 0 CAD in increments of 5 CAD (1.8 kHz). For the unst case, oil droplets were illuminated by two laser sheets from pulsed Nd:YVO4 cavities and were captured by a Phantom v2640 ultra high-speed camera equipped with a Sigma lens (APO Macro 180 mm F2.8 EX DG OS HSM, set to f/5.6), a 532 nm bandpass filter, and a correction lens of focal length f = +2000 (to counteract the astigmatism caused by the curved cylinder glass) at a sampling rate of 4.5 kHz (−180 to −4 CAD, increments of 2 CAD). Burned gas regions were captured using the second frame of each image pair and indicate the position of the flame; therefore, the rest of this work will refer to the extracted burned gas regions as the flame. The different laser and optical systems used, coupled with the improved painting of the back of the cylinder glass, resulted in fewer reflections in the PIV images of the unst case. Therefore, the resulting flame images of the unst case require less masking before analysis. Vector processing of the unst case was conducted using the software DaVis 10.1.2 and employed a multi-pass cross-correlation technique (first 2 passes: 64 × 64, 50% overlap; last 2 passes: 32 × 32, 75% overlap; after each pass: peak ratio criterion of 1.3, universal outlier detection of 7 × 7, and a vector group removal criterion of size 5). Further processing of the velocity fields, such as obtaining instantaneous line profiles or the mean vector field, was conducted using MATLAB. Likewise, extraction of the flame for both cases from burned gas regions was conducted using an in-house MATLAB script whose procedure is as follows: After masking and dewarping (3rd order polynomial), raw images were grouped by their respective cycle, then the intensity counts of each image were normalized by the maximum intensity achieved in the respective cycle.
Afterwards, a sliding entropy filter (5 px × 5 px) was applied, followed by the division of each image by the first image without a flame. Next, the pixels whose intensities were less than 1 were set to 0, an erosion (disk radius of 4) removed small structures around all edges, a minimum area criterion of 200 px was applied, all holes were filled within each flame, and finally a dilation of the same size as the erosion was applied to restore the flame size. Numerical Methods In this study, LES of the optical engine was performed using a commercial 3D-CFD code, CONVERGE (version 3.0) (2009), which is based on a finite-volume approach. The computational domain is discretized with a Cartesian mesh with local mesh refinement as shown in Fig. 1. The largest cell size in the cylinder is 0.5 mm. Local refinement is applied in the region close to the cylinder walls, near the spark plug, and in the crevice, with a cell size of 0.25 mm. Similar resolutions were employed in previous engine LES studies (d'Adamo et al. 2019; Truffin et al. 2015; Zembi et al. 2019) and proposed by Zembi et al. (2019) as the best compromise to resolve more than 80% of the turbulent kinetic energy without excessive computational cost. A coarser mesh is used in the intake and exhaust ports with a largest cell size of 4 mm. The computational domain is composed of about 2.5 million cells at top dead center (TDC) and about 7.5 million cells at bottom dead center (BDC). Simulation of a full engine cycle of 720 CAD took about 60 h on 480 cores. The intake and exhaust boundaries of the computational domain are specified at the positions where experimental boundary conditions are available. In particular, measured instantaneous static pressures and average temperatures are imposed at the boundaries. Since the variations of those boundary conditions over different cycles are negligible (< 0.1 bar for pressure and < 1 K for the intake temperature), they are kept constant for multiple LES cycles.
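The morphological extraction steps described above can be sketched as follows. This is a simplified illustration using NumPy/SciPy rather than the authors' MATLAB script: the sliding entropy filter is omitted, and the array names, the synthetic reference image, and the intensity convention are stand-ins for the actual processing.

```python
import numpy as np
from scipy import ndimage


def disk(radius):
    """Circular structuring element for erosion/dilation."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= radius**2


def extract_flame(img, ref_no_flame, radius=4, min_area=200):
    """Sketch of the burned-gas extraction: divide by a flame-free
    reference, threshold at 1, then apply the erosion / area criterion /
    hole filling / dilation sequence described in the text."""
    # Pixels below 1 after division (loss of droplet signal) are
    # treated as burned gas here.
    ratio = img / np.maximum(ref_no_flame, 1e-6)
    mask = ratio < 1.0

    # Erosion (disk radius 4) removes small structures around all edges.
    selem = disk(radius)
    mask = ndimage.binary_erosion(mask, structure=selem)

    # Minimum-area criterion: drop connected regions below `min_area` px.
    labels, n = ndimage.label(mask)
    if n:
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.flatnonzero(sizes >= min_area) + 1
        mask = np.isin(labels, keep)

    # Fill holes within each flame, then dilate to restore the size.
    mask = ndimage.binary_fill_holes(mask)
    return ndimage.binary_dilation(mask, structure=selem)
```

On a synthetic frame, a large dark (burned) region survives the pipeline while isolated dark specks are removed by the erosion and area criterion.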
Blow-by is expected to be small in the investigated engine (Baum et al. 2014) and is not considered in the LES. The temperatures on the walls of the combustion chamber and on the walls of the manifolds are specified as 60 °C and 36 °C in the LES, respectively. Dirichlet boundary conditions are applied for the crevice and cylinder walls, while the velocity wall model by Werner and Wengle (1993) and the heat transfer model by Amsden et al. (1989) are used for the walls of the manifolds where the mesh resolution is coarser. The compressible Favre-filtered governing equations of mass, momentum, energy, and species are solved using the finite-volume method and discretized by a blending second-order scheme in space. The Pressure Implicit with Splitting of Operators (PISO) method (Issa 1986) is used for the pressure-velocity coupling. Fig. 1 Computational domain and mesh of LES. A second-order Crank-Nicolson (Crank and Nicolson 1947) time integration is employed. Details about the discretization schemes and the iterative algorithms can be found in the CONVERGE manual (2020). The one-equation eddy viscosity model (Yoshizawa and Horiuti 1985; Menon and Calhoon Jr 1996) is used for the LES sub-grid scale (SGS) closure, where a transport equation for the SGS kinetic energy, k_sgs, is solved. The G-equation model (Pitsch 2002) is applied to describe the turbulent flame propagation, where the flame front is tracked using a level-set approach by solving a transport equation for the variable G (Peters 2009). The flame front is defined at G = 0; G < 0 and G > 0 indicate the regions of unburned and burned mixtures, respectively. Chemical equilibrium (Pope 2003, 2004) is assumed for the burned mixtures. The transport equation for the filtered G reads

∂G̃/∂t + ũ · ∇G̃ = (s_t − D_t κ̃) |∇G̃|,    (1)

where s_t is the turbulent flame speed. s_t − D_t κ̃ can be understood as the propagation speed of the filtered flame front. The filtered flame front curvature κ̃ is given by κ̃ = ∇ · (∇G̃/|∇G̃|), and the turbulent diffusivity D_t follows from the one-equation SGS model, where ε_sgs is the SGS turbulent dissipation modeled as C_ε (k_sgs)^(3/2)/Δ. Δ is the filter width. The SGS Schmidt number Sc and the modeling constants C_μ and C_ε are set as 0.78, 0.0845, and 0.05, respectively. The turbulent flame speed s_t is given by the closure of Pitsch (2002), in which s_l is the laminar flame speed, μ is the dynamic viscosity, μ_t is the turbulent viscosity, u′ is the SGS velocity, and b_1 and b_3 are modeling constants. Eq. (1) is solved using the level-set approach. Outside the flame surface, the scalar is required to satisfy |∇G| = 1. The G-field is reinitialized as unburned in every cycle before ignition. A look-up table for the laminar flame speed s_l has been generated with FlameMaster (Pitsch 1998) using a detailed iso-octane mechanism (Cai et al. 2019). The modeling constants b_1 and b_3 in the turbulent flame speed model are calibrated to match the mean in-cylinder pressure in the experiments. The necessity of such calibration might be attributed to the poor prediction of the SGS velocity and the under-resolved flame wrinkling in the performed LES. Due to the different thermodynamic conditions of the stab and unst cases, b_1 and b_3 have different values for the two cases. The ignition is realized by adding an energy source of 100 mJ within a sphere with a radius of 0.5 mm. Positive G is initialized in the region with a temperature higher than a threshold of 4000 K. The same ignition is used for both cases. Because of this simplified ignition model, which resulted in faster early kernel development, the ignition timing is calibrated to match the experimental mean pressure trace, especially for the first 10-20 CAD after ignition, as proposed by Esposito et al. (2020). An increase of b_1 or b_3 or an earlier ignition timing leads to higher in-cylinder pressure. The calibrated model parameters are summarized in Table 2 for both cases.
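The Pitsch (2002) model builds on Peters' algebraic closure for the turbulent burning velocity, which can be sketched as below. The exact printed form used in the paper and its case-specific calibrated b_1, b_3 values are not reproducible here, so the default constants (b_1 = 2.0, b_3 = 1.0, a_4 = 0.78, the textbook values of Peters 2000) and the Damköhler-number argument are assumptions of this sketch.

```python
import numpy as np


def turbulent_flame_speed(s_l, u_prime, Da, b1=2.0, b3=1.0, a4=0.78):
    """Peters-type algebraic closure for the turbulent flame speed:
    s_t = s_l + u' * (-A + sqrt(A**2 + a4*b3**2*Da)), A = a4*b3**2/(2*b1)*Da,
    with Da a Damkoehler number built from s_l, u', and the length scales."""
    A = a4 * b3**2 / (2.0 * b1) * Da
    return s_l + u_prime * (-A + np.sqrt(A**2 + a4 * b3**2 * Da))
```

The closure has the expected limits: for Da → 0 it returns the laminar speed s_l, and for large Da the increment s_t − s_l approaches b_1 · u′, which is why an increase of b_1 (like the increase of b_3 noted above) raises the burning rate and hence the in-cylinder pressure.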
For the stab case, consecutive cycles are simulated. For the unst case, the skip-firing mode is realized in the LES by first simulating consecutive non-reactive cycles without ignition and then using these results as initial conditions for the reactive simulations to reduce the computational cost. Comparison of LES and Experiments This section presents a detailed comparison of the LES and the experiments. More than thirty cycles were simulated for both cases. The first two simulation cycles are not considered in the analysis, since they are influenced by the assumed initial condition (Truffin et al. 2015). In the experiments, about 200 and 1000 cycles were measured for cases stab and unst, respectively. In-Cylinder Pressure One of the most important aspects for the validation of engine simulations is the agreement with experiments in terms of in-cylinder pressure, which is reported for both the stab and unst cases in Fig. 2. As displayed in the figure, the two cases have different spark timings. Note that the model parameters b_1 and b_3 and the ignition timing are calibrated to match the mean pressure trace in the experiments. For case stab, the standard deviation of the in-cylinder pressure is well reproduced, even though a slight deviation in the first half of the combustion stroke in terms of pressure slope is observable. This might be attributed to the limitation of the combustion model, which does not accurately account for the initial flame kernel development and the transition from laminar to turbulent flame propagation. However, further development of the model to account for this effect was not within the scope of this study. For case unst, larger differences between the LES and the experiments exist, especially for the variations of the maximal in-cylinder pressure, p_max, which might be attributed to the different proportions of the misfire cycles.
Only 1% of the cycles in the experiment recorded misfire, which is defined as a cycle whose gross IMEP (IMEP_g) was negative. In the simulations, one cycle out of thirty-two, about 3%, had misfire. Such a difference might be attributed to the limited number of simulated cycles. Since the pressure of the misfire cycle is significantly lower than that of the other cycles, the mean and the standard deviation of the pressure can be strongly influenced by the proportion of the misfire cycles. If the single misfire cycle in the LES is assigned a weight of 1/3 so that the proportion of misfire cycles in the LES equals that of the experiments, a better agreement is observed in the cylinder pressure, as shown in the supplementary material (Fig. S-1). However, the main trend does not change. Larger pressure variations are obtained in the LES compared to the experiments. In addition, some differences are observed in the expansion stroke, which might be due to the poor prediction of incomplete combustion and heat loss. However, this will not affect the analysis of the CCV in this study, as the important range for the analysis of the CCV is the time before p_max is reached. Therefore, it can be confirmed that the simulations are able to distinguish the stable and unstable engine operating conditions, which was also reported in Granet et al. (2012), Truffin et al. (2015), and d'Adamo et al. (2019). A quantitative comparison of the mean, the standard deviation, and the timing, CAD_pmax, of p_max is provided in Table 3, where LES* stands for statistics evaluated by assigning a weight of 1/3 to the single misfire cycle in the LES. The variability is assessed with the coefficient of variation (COV), which is defined as the ratio of the standard deviation to the mean value. As expected from Fig. 2, while p_max is well predicted for both cases, CAD_pmax is retarded for case unst. Figure 3 compares the distribution of p_max for both cases. It is seen that the distribution is not symmetric.
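The 1/3 re-weighting of the misfire cycle and the COV described above amount to computing weighted statistics over the simulated cycles. A small sketch with hypothetical per-cycle pressures (the actual LES values are not given in the text):

```python
import numpy as np


def weighted_mean_std(x, w):
    """Weighted mean and standard deviation, as used for the LES* statistics."""
    mean = np.average(x, weights=w)
    std = np.sqrt(np.average((x - mean) ** 2, weights=w))
    return mean, std


# Hypothetical peak pressures (bar): 31 regular cycles plus one misfire cycle.
p_max = np.concatenate([np.full(31, 20.0), [5.0]])

w_equal = np.ones_like(p_max)   # plain LES statistics
w_star = w_equal.copy()
w_star[-1] = 1.0 / 3.0          # LES*: down-weight the single misfire cycle

mean_e, std_e = weighted_mean_std(p_max, w_equal)
mean_s, std_s = weighted_mean_std(p_max, w_star)

cov_e = std_e / mean_e          # coefficient of variation (COV)
cov_s = std_s / mean_s
```

Down-weighting the single low-pressure outlier raises the mean slightly and reduces the COV, mirroring the improved agreement reported for Fig. S-1.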
In the experimental data, a long tail on the low pressure side of the distribution is observed for both cases, especially for case unst. The same trend is also captured in the simulation. However, the extreme low and high pressure cycles are not predicted in the simulation, which might be attributed to the limitations of the combustion model for the early flame kernel development and the limited number of simulated cycles. Flow Fields In this section, the velocity fields during the intake and compression strokes are evaluated. Figure 4 compares the 2D velocities projected onto the tumble plane passing through the cylinder axis and their standard deviations at −270 CAD in the intake stroke for case stab. Good agreement can be observed for the overall flow structure and magnitude of the velocity fields. The high intake velocity and the stagnation regions are well predicted as can be observed in Fig. 4a. For further comparison, the two velocity components along the white dashed horizontal lines at different vertical locations are compared in Fig. 4b and c. Very good agreement is achieved for the averaged velocities (designated by the thick lines in Fig. 4b). In Fig. 4b, the range of the vertical axis is chosen to show the velocity fluctuations of individual cycles. Comparisons of the averaged velocities with a smaller range of the vertical axis are shown in the supplementary material (Figs. S-3 and S-4). The fluctuations of the velocity components in the simulation also agree fairly well with the experiments in terms of absolute values and trends, even though stronger fluctuations are observed for the LES. Furthermore, the standard deviation of the velocity components for the experiment is relatively smooth along the x-axis, while it fluctuates for the LES. These differences are probably due to the large difference in the sample sizes between the LES and the experiment. Similar results are obtained for case unst, which are shown in the supplementary material (Fig. S-2).
Figure 5 compares the velocity in the compression stroke close to spark timing for both cases. For case stab, the averaged velocities (Fig. 5a and b) and the standard deviations (Fig. 5c) are fairly well predicted, albeit the magnitudes of the mean values are slightly higher in the LES. In addition, a vortex is observed in the simulation at the top right corner near the exhaust valves as shown in Fig. 5a. However, in the experiment, this vortex is outside the field of view. Such a difference in the location of the vortex explains the discrepancies in w for x > 20 mm in Fig. 5b. Similar results are observed for the velocity fields of the case unst. The global flow structure is captured by the LES. However, also in this case, the magnitudes of the averaged velocities are slightly higher for the LES and the location of the vortex center slightly differs (Fig. 5d), namely it is closer to the spark plug, and thus a large difference in w for x > 10 mm is observed (Fig. 5e). Spatial Flame Distribution After the comparison of the flow fields, this section analyzes the flame propagation for both cases. A quantitative comparison of the flame propagation can be conducted based on the flame probability, P_f, which describes the local probability of the occurrence of burned gas at a given CAD, defined as

P_f(x) = (1/N) Σ_{i=1}^{N} I_b,i(x),    (5)

where I_b,i(x) equals 1 if location x is burned in cycle i and 0 otherwise, and N is the number of cycles. Figure 6 compares the flame probability of the LES and the experiments for both cases considering all cycles of each case. Since the two cases have different spark timings, combustion durations, and thus frequencies of the PIV measurement, flame images are shown for different CADs for the two cases. It should be noted that the experimental flame images for the stab case had to first be heavily masked due to strong laser reflections on the spark plug; for the unst case, the experiment was conducted with a different laser and camera imaging setup, as described in Sect. 2, and reflections were better mitigated.
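Computed from binary burned-gas masks, the flame probability P_f introduced above reduces to a per-pixel mean over cycles. A minimal sketch (mask shapes and values are illustrative only):

```python
import numpy as np


def flame_probability(masks):
    """Flame probability P_f: fraction of cycles in which each pixel is
    burned at a given CAD. `masks` has shape (n_cycles, ny, nx) and
    contains boolean burned-gas masks."""
    return np.mean(masks.astype(float), axis=0)


# Three toy 4x4 cycles: pixel (0, 0) burns in every cycle, (1, 1) in one.
masks = np.zeros((3, 4, 4), dtype=bool)
masks[:, 0, 0] = True
masks[0, 1, 1] = True
P_f = flame_probability(masks)
```

Pixels burned in every cycle get P_f = 1, never-burned pixels get P_f = 0, matching the peaked and distributed P_f maps discussed for the stab and unst cases.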
Good agreement between numerical and experimental flame propagation is observed for both the stab and unst cases. For case stab, the spatial flame distribution is concentrated with high peak P_f values and the flames tend to always first propagate to the left of the spark plug, which is observed in both the LES and the experiments. Flames in the LES tend to occur further to the top left compared to the experiments, which is consistent with the slightly overpredicted velocities from the LES before ignition as shown in Fig. 5. For case unst, the flame is more distributed throughout the field of view in this plane with lower overall peak P_f values. Compared to the stab case, the flame on the left side of the spark plug is more concentrated and the flame on the right of the spark plug is more distributed due to the flame interactions with the flow, which has a different state near the spark plug at ignition (Fig. 5). Furthermore, similar spatial flame distributions are seen in both the LES and the experiments. The typical flame propagation in a 3D projection obtained in the LES for a single cycle of both cases is illustrated in Figs. 7 and 8. For both cases, the flames show asymmetry with respect to the tumble plane until −10 CAD. This is visible especially for the unst case, where the flame is strongly stretched by the vortex on the exhaust side resulting in multiple pockets of burned gases. Fig. 6 Flame probability P_f in the tumble plane (y = 0) considering all cycles. The empty region in the experiments is due to reflection. Conditionally Averaged Flow Fields and Spatial Flame Distributions In addition to the comparison of statistics from all cycles, it is interesting to compare statistics of the flow fields and the spatial flame distributions conditioned on the highest and lowest pressure cycles. Figure 9 compares the velocity fields before ignition conditioned on the cycles with the 20% highest and lowest p_max for both cases.
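Conditioning fields on the 20% highest and lowest p_max cycles can be sketched as a simple sort-and-slice over the per-cycle data; the cycle count and the random fields below are purely illustrative stand-ins for the LES output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-cycle data: peak pressure and one velocity field per cycle.
n_cycles = 30
p_max = rng.normal(20.0, 2.0, n_cycles)
velocity = rng.normal(0.0, 1.0, (n_cycles, 16, 16))  # e.g. the w-component

# Select the 20% highest and lowest p_max cycles and average their fields.
n_sel = max(1, int(0.2 * n_cycles))
order = np.argsort(p_max)
low_idx, high_idx = order[:n_sel], order[-n_sel:]
v_low = velocity[low_idx].mean(axis=0)    # conditional mean, slow cycles
v_high = velocity[high_idx].mean(axis=0)  # conditional mean, fast cycles
```

The same selection applied to the burned-gas masks yields the conditioned flame probabilities of Fig. 10.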
For case stab, the same trend as in the experiments is predicted in the LES: larger velocities in the tumble plane correlate with high p_max. For case unst, the correlation of p_max and the location of the vortex, which is characterized by the low velocity magnitude at the top right corner, is different between the LES and the experiment. While the vortex is larger, and its center thus closer to the spark plug, for high pressure cycles in the experiment, the vortex center is closer to the spark plug for low pressure cycles in the LES. In addition, for the unst case, the experimental flows in the horizontal direction across the spark plug gap are greater, which accounts for the downward shift of the vortex center when comparing high pressure cycles to low pressure cycles. This difference in the conditionally averaged flows between the experiments and the LES is mainly caused by discrepancies in the combustion process, which influences the selection of the high and low pressure cycles. It can be attributed to the simplified ignition model in the simulation, which is an energy source at the spark plug. While the spark channel in reality is convected by the flow and thus the flame kernels are initiated at different locations, the process of flame initiation occurs at the same spot in the LES and possible convection of the flame by the flow can happen only at a later stage. A more sophisticated ignition model (Dahms et al. 2009) will be employed in future studies. Figure 10 shows the spatial flame distribution conditioned on the high and low pressure cycles. For case stab, similar results are observed between the simulation and the experiment. High pressure cycles exhibit larger regions of burned gas and the flame is mainly on the left side of the spark plug for both high and low pressure cycles. Fig. 9 Phase-averaged 2D velocity fields at −30 CAD for case stab (a) and −50 CAD for case unst (b) conditioned on the 20% highest and lowest p_max cycles.
The empty region in the experiments is due to reflection and the evaporation of the PIV particles. However, for the unst case, different behavior is observed in the simulations compared to the experiments. In the experiments, flames for high pressure cycles are more likely to be located on the right side of the spark plug until −18 CAD, which is consistent with the greater horizontal flow across the spark plug and the closer location of the vortex center to the spark plug as shown in Fig. 9. The flames for low pressure cycles tend to spread to the left side of the spark plug. In the simulations, flames tend to spread to the left of the spark plug for both high and low pressure cycles. This might be attributed to the simplified ignition and flame-wall-interaction models and the discrepancy of the predicted flow fields in the vicinity of the spark plug electrodes in the simulation. Fig. 10 Flame probability P_f in the tumble plane (y = 0) for case stab (a) and unst (b) conditioned on the 20% highest and lowest p_max cycles. The empty region in the experiments is due to reflection. Analysis of CCV After the detailed comparison of LES and experiments shown in the previous section, the sources of the CCV in the simulations are analyzed here. Despite the differences between the LES and the experiments, the analysis of the CCV in the LES may shed new light on the causal chain of engine CCV with the available three-dimensional data. Influencing Variables The influencing variables, which are extracted at spark timing on different scales, are introduced in this section. They include the quantities evaluated in the spherical region (r = 3 mm) centered between the spark plug electrodes, the parameters of the three-dimensional coherent vortex structure evaluated locally considering a spherical neighborhood (r = 12 mm), and the large-scale parameters considering the whole cylinder (r = 43 mm).
Small Region Close to Spark Plug Ignition and the very early combustion phase are crucial for CCV (Schiffmann et al. 2018). Therefore, variations of the thermodynamic and fluid-dynamic conditions in the region close to the spark plug are expected to be one of the major sources of CCV. In this study, the local fields close to the spark plug in a spherical region with a radius of 3 mm centered between the spark plug electrodes (Fig. 11) are considered. The influence of the following variables will be assessed in Sect. 5.2: the spatially averaged temperature T_sp, SGS kinetic energy k_sgs,sp, pressure p_sp, resolved velocity components u_sp, v_sp, and w_sp, resolved velocity magnitude |u|_sp, and fuel mass fraction Y_f,sp in the region close to the spark plug. Coherent Vortex Structure Another group of influencing variables can be defined based on the coherent vortex structure. In this study, the vortex structure is detected by the function Γ_3p, defined as (Buhl et al. 2017)

Γ_3p(x) = (1/V) ∫_V [(r_p × û_p) · e_ω] / (|r_p| |û_p|) dV,    (6)

where the relative velocity is given by û(x_o) = u(x_o) − ⟨u⟩_V(x), and ⟨u⟩_V(x) refers to the average velocity in the volume V around point x. In this study, V is defined as a sphere with a radius of r = 12 mm centered at x. r_p and û_p are the position and velocity vectors projected onto the plane normal to the axis of rotation e_ω. The position vector is defined as r = x_o − x. The direction of the axis of rotation is obtained from the function Γ_3(x) (Gohlke et al. 2008). |Γ_3p| = 0 means there is no rotation, while the center of an axisymmetric and uncurved vortex structure has unity |Γ_3p|. Examples of the vortex structure of a single cycle close to spark timing are shown in Fig. 12 for both cases. Different vortex structures can be identified. For the unst case, since the ignition time is earlier, the tumble motion can be identified by the coherent vortex core. Conversely, since the ignition for the stab case occurs later, the organized tumble motion breaks into a vortex pair rotating in the x-direction.
To quantify such vortex structures, the locations of the vortex centers on the planes across the spark plug are analyzed. For the stab case, the two vortex centers on the plane x = x sp are considered since the vortex pair is rotating in the x-direction. The vortex center on the plane y = y sp is analyzed for the unst case because the rotational motion is roughly around the y-direction. The vortex centers are determined by the maximal |Γ3p| on these planes. The distances between the spark plug and the vortex centers, d y<0 , d y>0 , and d, illustrated in Fig. 12, are considered. The local tumble ratio, LR, which describes the strength of the local rotational flow around the vortex center x c , is also evaluated; it is normalized by ω ref , the angular speed of the engine.

Fig. 11 Illustration of the spherical region close to the spark plug with a radius of 3 mm

Large-Scale Variables

The large-scale variables include the averaged thermodynamic parameters of the in-cylinder mixture and the large-scale flow motion considering the whole cylinder (r = 43 mm). For both cases, variations of the in-cylinder pressure and the averaged temperature at spark timing are negligible (about 0.001 MPa and 2 K), and thus not analyzed in this study. Since the amount of EGR is well controlled and homogeneous for the unst case using skip-fire operation, the influence of the mass fraction of EGR, Y EGR , is only considered for the stab case. To quantify the large-scale flow motion, the tumble ratio (TR), swirl ratio (SR), and cross-tumble ratio (CR) are considered. Their defining integrals are evaluated over the in-cylinder volume V cyl. ; the position vectors refer to the center of mass of the in-cylinder gases projected onto the xz, xy, and yz planes, respectively, and the velocity vectors are projected onto the same planes.
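The defining integrals for TR, SR, and CR are not reproduced in this excerpt. As an illustrative sketch only, a common discretized convention — angular momentum about the tumble axis divided by the corresponding moment of inertia and the engine angular speed — can be written as follows; this convention and all variable names are assumptions, not taken from the paper:

```python
import numpy as np

def tumble_ratio(pos, vel, mass, omega_ref):
    """Discretized tumble ratio about the y-axis.

    pos  : (N, 3) cell positions relative to the in-cylinder center of mass [m]
    vel  : (N, 3) cell velocities [m/s]
    mass : (N,) cell masses [kg]
    omega_ref : engine angular speed [rad/s]

    Assumes TR = L_y / (I_y * omega_ref), with L_y the angular momentum
    about the y-axis through the center of mass and I_y the corresponding
    moment of inertia -- a common convention, not necessarily the paper's.
    """
    x, z = pos[:, 0], pos[:, 2]
    ux, uz = vel[:, 0], vel[:, 2]
    L_y = np.sum(mass * (z * ux - x * uz))   # angular momentum about y
    I_y = np.sum(mass * (x**2 + z**2))       # moment of inertia about y
    return L_y / (I_y * omega_ref)

# Sanity check: solid-body rotation about y at 2*omega_ref gives TR = 2.
rng = np.random.default_rng(0)
pos = rng.uniform(-0.04, 0.04, size=(1000, 3))   # points in an 8 cm box
pos -= pos.mean(axis=0)                          # center of mass at origin
omega_ref = 2 * np.pi * 1000 / 60                # 1000 rpm engine speed
vel = np.cross(np.array([0.0, 2 * omega_ref, 0.0]), pos)   # u = omega x r
mass = np.full(len(pos), 1e-6)
print(tumble_ratio(pos, vel, mass, omega_ref))   # -> 2.0 (up to rounding)
```

SR and CR follow by cycling the projection planes (xy and yz, respectively).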
The positive and negative values of these large-scale flow motion parameters may cancel each other when the flow field exhibits symmetry. As shown in Fig. 12, the flow fields are roughly symmetric about the plane y = 0. Therefore, the positive and negative contributions are considered separately. Note that these values not only quantify the strength of the large-scale rotational motion but are also influenced by the location of the coherent vortex center relative to the center of mass.

Fig. 12 Vortex core and streamlines around the vortex core of a single cycle at the spark timing for both cases

CCV Correlations

To systematically analyze the CCV correlations in LES, CCV are distinguished between the early flame kernel development and the flame propagation. The former is quantified by the burned fuel mass at 3 CAD after spark timing, m fb,ST+3CAD . This short time after ignition is chosen to investigate the influence of the ignition and the very early flame kernel development. Even though the flame kernels are quite small at such a short time after ignition, their CCV can already be identified, which significantly influences the maximal in-cylinder pressure, p max , as will be shown in the following. Figure 13 shows the correlation between m fb,ST+3CAD and p max . The burned fuel mass is normalized by the averaged trapped fuel mass before combustion, ⟨ m f,tot ⟩ . For the unst case, the cycles with low m fb,ST+3CAD are highlighted in red and the misfire cycle is displayed with the rectangle, which will be discussed in Sect. 5.2.2. To quantify the correlation, the Bravais-Pearson correlation coefficients are displayed in Figs. 13, 14, 15 and 16, which are defined as R = σ xy /(σ x σ y ), where σ x and σ y are the standard deviations of variables x and y, respectively, and σ xy is the covariance of x and y. |R| = 1 or |R| = 0 means that there is a perfect linear correlation or no correlation between the variables, respectively. The significance level is also illustrated in Figs.
13, 14, 15 and 16 by the p value for the H 0 -hypothesis test with a t-distribution of the test statistic (Fisher 1992). A correlation is considered significant if p < 0.05. A strong correlation is observed for both cases, indicating that the early flame kernel development plays a significant role in the CCV of p max . The relatively large scatter in both plots of Fig. 13 reveals that the combustion process varies also after the early flame kernel development. It is worth noting that the misfire cycle is not an extreme case in terms of the early flame kernel development (not the lowest m fb,ST+3CAD in Fig. 13b), which agrees with the experimental study by Peterson et al. (2011). They showed that misfiring is not a consequence of a failed ignition. CCV of the flame propagation will be analyzed based on the variations of p max excluding the influence of m fb,ST+3CAD , which is quantified by Δp max = p max − p fit max (m fb,ST+3CAD ), where p fit max (m fb,ST+3CAD ) is the fitting line as shown in Fig. 13. In other words, Δp max is the difference between the observed pressure and the fitted line in Fig. 13.

Fig. 13 Correlation between the burned fuel mass at 3 CAD after spark timing and the maximal in-cylinder pressure for case stab (a) and unst (b). For the unst case, the cycles with low m fb,ST+3CAD are highlighted in red and the misfire cycle is displayed with the rectangle

In the following, the correlations of these two quantities with the influencing variables introduced in Sect. 5.1 will be analyzed.

Early Flame Kernel Development

In this section, correlations of m fb,ST+3CAD and the influencing variables are analyzed. Figure 14 shows the correlation coefficients and the p values of the respective variables. As can be observed in Fig. 14, the importance of the influencing variables is different for both cases.
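The two quantities used throughout the correlation analysis — the Bravais-Pearson coefficient R with its t-based significance test, and the residual Δp max about a linear fit — can be sketched in a few lines. The variable names and the synthetic data below are purely illustrative, not values from the paper:

```python
import numpy as np

def pearson_r(x, y):
    """Bravais-Pearson coefficient R = sigma_xy / (sigma_x * sigma_y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / (x.std() * y.std())

def residual_about_fit(x, y):
    """Residual of y about the least-squares line y_fit(x),
    i.e. the analogue of dp_max = p_max - p_fit_max(m_fb)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Synthetic example: p_max partly explained by the early burned mass.
rng = np.random.default_rng(1)
m_fb = rng.uniform(0.001, 0.01, 50)                    # normalized burned mass
p_max = 2.0 + 300.0 * m_fb + rng.normal(0, 0.2, 50)    # MPa, with scatter

R = pearson_r(m_fb, p_max)
dp_max = residual_about_fit(m_fb, p_max)

# t statistic for the H0 significance test; the p value follows from a
# t-distribution with n - 2 degrees of freedom (cf. Fisher 1992).
n = len(m_fb)
t = R * np.sqrt((n - 2) / (1 - R**2))
print(f"R = {R:.2f}, t = {t:.1f}")
```

By construction the residuals Δp max have zero mean, so correlating them with an influencing variable isolates the part of the p max variation not explained by the early flame kernel.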
For the stab case, a significant negative correlation (p < 0.05) is found for the coherent vortex structure (12 mm), namely the minimum distance between the vortex center and the spark plug, d min = min(d y<0 , d y>0 ). This indicates that the early flame kernel at 3 CAD after the spark timing already starts to interact with the coherent vortex structure illustrated in Fig. 12. Consequently, a smaller distance leads to faster early flame kernel growth. However, the strength of the local rotational motion (LR) does not show a significant correlation. Additionally, no significant correlations can be identified for the variables extracted in the small region close to the spark plug (r = 3 mm) either. For the unst case, a weak correlation is found for the average velocity in the y-direction close to the spark plug, v sp , indicated by R ≈ 0.3 and p ≈ 0.08. A larger v sp means that the ignited kernel will be pushed towards the spark plug (Fig. 11), which leads to higher heat loss and thus slower combustion. For this case, the flame kernels are smaller compared to the stab case and have not started to interact with the vortex at 3 CAD after spark timing. Thus, no significant correlations are observed for the coherent vortex structure (r = 12 mm). For both cases, no significant influence of the large-scale variables considering the whole cylinder (r = 43 mm) on the early flame kernel development is found.

Flame Propagation

This section investigates sources for the CCV of the flame propagation, namely, Δp max . The correlation coefficients and the p values between Δp max and the influencing variables are shown in Fig. 15. For the stab case, the flame propagation is mainly influenced by the large-scale variables, for which correlations are higher and p values are lower. Significant correlations are found for the internal EGR concentration Y EGR and the positive swirl ratio SR+ .
For the unst case, since the flame volume at 3 CAD after ignition is small, Δp max still correlates to the variables extracted from the region close to the spark plug and is strongly influenced by |u| sp with |R| > 0.4 and p < 0.01. Significant correlations between Δp max and k sgs sp or d are also observed, which is mainly because of their correlation with |u| sp as shown in Fig. 16. The cycles with small early flame kernels (low m fb,ST+3CAD ) are highlighted in red and the misfire cycle is displayed with a rectangle. Since the flame speed is small for the unst case, the flow close to the spark plug, especially the rotational motion, can stretch the flame and reduce the propagation speed. Therefore, a larger |u| sp , a smaller d, or a stronger LR leads to slower flame growth and ultimately lower Δp max . It can be observed that the misfire cycle has the highest |u| sp , the smallest d, and the strongest LR among the cycles with small early flame kernels (red points in Fig. 16), indicating that misfire is related to the strong stretch of the flame kernel by the flow close to the spark plug and the coherent vortex structure. It is worth noting that the applied level-set model always assumes a positive turbulent flame speed (Eq. 4) and thus cannot predict local flame extinction. The misfire cycle is obtained in the simulation because the flame kernel is stretched by the flow, generating flame segments with large curvatures, which reduces the propagation speed of the level-set front to even negative values via the first term on the right-hand side of Eq. 1. Furthermore, for the large-scale variables, a significant correlation between Δp max and the negative cross-tumble ratio, CR− , is seen. CR− is also influenced by the strength and location of the coherent vortex and thus correlates to |u| sp as shown in Fig. 16d.

Conclusions

This paper presents a combined numerical and experimental investigation of an optically accessible SI engine with port fuel injection.
Two operating conditions are considered: a stable operating condition with continuous firing of a stoichiometric fuel-air mixture and an unstable operating condition in a skip-firing mode with homogeneous external EGR. The LES is carried out with well-established turbulence, combustion, and ignition models. A detailed comparison between LES and experiments is performed. Even with some difference in the expansion stroke of the unst case, a good agreement of the in-cylinder pressure, especially the maximal in-cylinder pressure and its cycle-to-cycle variation, is obtained. It is also confirmed that LES is able to distinguish the stable and unstable operating conditions in terms of the flow field and flame propagation. Good agreement for the general flow structure, the phase-averaged velocities, and their fluctuations during the intake and compression strokes is achieved. However, small discrepancies are observed for the location of the vortex center in the compression stroke. The spatial flame distribution is also well reproduced in the LES when considering all cycles of each case. For the stab case, the same trends as in the experiments are also found in the LES for the flow fields and the spatial flame distribution conditioned on the high and low pressure cycles. Conversely, discrepancies are observed for the conditionally averaged quantities of the unst case, an observation that is likely attributed to the lack of a spark plasma and subsequent flame kernel growth in the ignition model. A systematic quantitative correlation analysis between the influencing variables extracted from different scales of the 3D LES and the CCV of the early flame kernel development and the consequential flame propagation is conducted.
The influencing variables include the spatially averaged mean temperature and pressure and the flow features in the spherical region (r = 3 mm) centered between the spark plug electrodes, parameters for the three-dimensional coherent vortex structure (r = 12 mm), and the large-scale parameters that consider the whole cylinder (r = 43 mm). The most relevant influencing variables are different for different operating conditions and combustion phases. However, for both cases, the location of the coherent vortex center plays a significant role in the combustion process, which is also confirmed by the conditionally averaged flow fields in the measurements of the unst case. For the unst case, the flow direction in the region close to the spark plug is found to influence the early flame kernel development. A slower growth of the early flame kernel is observed if it is pushed towards the spark plug. For the stab case, the internal EGR is found to significantly correlate to the flame propagation. The large-scale flow features are also found to influence the flame propagation for both cases. Such main findings in terms of sensitivities regarding CCV are in close agreement with the experimental study by Welch et al. (2022). Finally, the flame kernel interactions with the flow close to the spark plug and the coherent vortex structure are found to be the cause of the misfire cycle in the LES of the unst case, which indicates that proper control of the vortex location might be promising to avoid misfire.

Acknowledgements

The authors gratefully acknowledge the funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Research Unit FOR2687. The authors gratefully acknowledge the computing time granted by the NHR4CES Resource Allocation Board and provided on the supercomputer CLAIX at RWTH Aachen University as part of the NHR4CES infrastructure. The simulations for the unst case were conducted with computing resources under the project p0020108.
Simulations of the stab case were performed with computing resources granted by RWTH Aachen University under project rwth0501. Convergent Science provided CONVERGE licenses and technical support for this work.

Author Contributions

All authors contributed to the study conception and design. BB and AD contributed to the design and the conception of the experiments. Experiments were performed by CW and BB. Simulations of the stable operation condition were conducted by HC and HE. Simulations of the unstable operation condition were conducted by HC and SC. MD and HP contributed to the conception of the data analysis. Analysis was conducted by HC, HE, SC, and MD. The first draft of the manuscript was written by HC and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Funding

Open Access funding enabled and organized by Projekt DEAL.

Declarations

Competing interests: A. Dreizler is on the editorial board of Flow, Turbulence and Combustion and B. Böhm is guest editor of the special issue: Cyclical Variations in Internal Combustion Engines; however, neither of them handled the review nor the editorial process of this paper.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
EPR study of NO radicals encased in modified open C60 fullerenes

Abstract

Using pulsed electron paramagnetic resonance (EPR) techniques, the low-temperature magnetic properties of the NO radical confined in two different modified open C60-derived cages are determined. It is found that the smallest principal g value, g3, which is assigned to the axis of the radical, deviates strongly from the free-electron value. This behaviour results from partial compensation of the spin and orbital contributions to the g3 value. The measured g3 values in the range of 0.7 yield information about the deviation of the locking potential for the encaged NO from axial symmetry. The estimated 17 meV asymmetry is quite small compared to the situation found for the same radical in polycrystalline or amorphous matrices, where values range from 300 to 500 meV. The analysis of the temperature dependence of the spin relaxation times resulted in an activation temperature of about 3 K, assigned to temperature-activated motion of the NO within the modified open C60-derived cages with coupled rotational and translational degrees of freedom in a complicated three-dimensional locking potential.
Introduction

In a series of recent publications, the Kyoto Group has shown that it is possible to encapsulate small and even reactive molecules in a modified C 60 cage with tailored entrance and exit holes (Hasegawa et al., 2018a; Futagoishi et al., 2017; Hashikawa et al., 2018). Using such designer-type open cages instead of closed structures creates a new route for the preparation of interesting compounds. The family of endohedral fullerenes with closed carbon cages like N@C 60 (Murphy et al., 1996), He@C 60 (Saunders et al., 1994), H 2 @C 60 (Komatsu et al., 2005), H 2 O@C 60 (Kurotobi and Murata, 2011), HF@C 60 (Krachmalnicoff et al., 2016), and CH 4 @C 60 (Bloodworth et al., 2019), as well as C 82 (Stevenson et al., 1999) based metallo-endohedrals, can thus be expanded significantly. It has been shown that these new compounds can be stable under ambient conditions, allowing easy handling. If encapsulated molecules are paramagnetic, as in the case of ³O₂ or ²NO, electron paramagnetic resonance (EPR) is the method of choice for elucidating their properties. This not only allows determination of the stationary spin Hamiltonian parameters, but furthermore allows detection of dynamic properties arising from internal dynamics or motion of the compound as a whole. In the case of La@C 82 for instance, it was possible to conclude from an analysis of two-dimensional EXCSY spectra that the metal ion is rigidly locked to the inside surface of the carbon cage (Rübsam et al., 1996). In the present case of an encapsulated NO radical it was concluded from the broad variance of its principal g matrix values (Hasegawa et al., 2018a) that even at low temperatures the radical is not fixed to a particular site. It was remarkable that the very small value quoted for the axial component (Hasegawa et al., 2018a) of 0.225 deviates significantly from the value determined for NO radicals trapped in a single crystal host (Ryzhkov and Toscano, 2005) or NO radicals adsorbed in zeolites (Poeppl et
al., 2000). This very small value of g 3 = 0.225, deduced by an analysis of a continuous wave (cw) measurement, necessitated confirmation by pulse EPR experiments, which are better suited for the study of very broad spectra. Although a T 2 variation as a function of the external field can distort the shape of a pulse-derived spectrum to some extent, difficulties in detecting extremely broad spectra with virtually absent changes within the typically achievable B 0 modulation amplitudes in cw EPR can lead to misinterpretations, in particular if the supposed spectrum extends a factor of 2 beyond the possible acquisition range. So far, neither relaxation nor nitrogen hyperfine data were reported, which might be important for a full characterization of the compound. It was the aim of the present study to obtain by multi-frequency EPR and ENDOR techniques a complete spin Hamiltonian parameter set for the encapsulated radical. In addition, the anticipated effects of a nonspherical cage potential on the radical are explored, and effects due to the structural modification of the cage are studied.

Sample preparation

NO radicals trapped in two slightly different modified C 60 cages, C 82 H 28 N 3 O 5 S and C 82 H 32 N 3 O 5 S, were studied, in the following abbreviated as NO@C60-OH1 and NO@C60-OH3, respectively (see Fig. 1 for NO@C60-OH1 and Fig. A1 for NO@C60-OH3). The notation indicates the two different orifices with one and three OH groups, respectively. NO@C60-OH1 was prepared as described in Hasegawa et al. (2018a) and NO@C60-OH3 by combining the procedures described in Hashikawa et al. (2018) and Hasegawa et al. (2018a). NO@C60-OH1 and NO@C60-OH3 were dissolved in CS 2 in 2.5 and 10 mM concentrations and sealed in quartz tubes for EPR spectroscopy.
EPR spectroscopy

For pulsed EPR and ENDOR measurements at S- and X-band mw frequencies (3.4 and 9.8 GHz), various setups were employed. Echo-detected 9.8 GHz EPR measurements at low temperatures were conducted on Bruker ElexSys E580 and E680 instruments equipped with Oxford CF935 helium cryostats using Bruker MD4 Flexline ENDOR probe heads. Field-swept echo-detected EPR spectra (FSE) at 9.8 GHz were recorded using a two-pulse "Hahn-echo" sequence (20-300-40 ns) at temperatures of 3.6 to 12 K, yielding absorption-type spectra. Transient nutation measurements at 9.8 GHz were conducted by applying a PEANUT (Stoll et al., 1998) pulse sequence with a π/2 pulse length of 8 ns, a delay time τ of 130 ns and a high turning angle (HTAx) pulse of 4096 ns. The phase inversion time within the HTAx pulse was incremented by 2 ns, starting with an initial inversion after 16 ns. ENDOR spectra were recorded by applying either a Mims pulse sequence with π/2 pulses of 20 ns, a delay time τ of 200 ns and an rf π pulse length of 15 µs, or a Davies pulse sequence with pulse settings 40-30 000-20-200 ns and an RF pulse length of 25 µs. FSE data at a microwave frequency of 3.4 GHz (S-band) were obtained again using a Bruker ElexSys E680 system with an additional S-band accessory including a Bruker Flexline probe head with a split-ring resonator, employing a pulse timing of 32-500-64 ns. FSE and ENDOR spectra were fitted by the EasySpin (Stoll and Schweiger, 2006) "esfit" routine using the "pepper" and "salt" simulation routines.

Quantum chemical calculations

Optimization of the structure of the compounds NO@C60-OH1 and NO@C60-OH3, with replacement of the 6-t-Butylpyridin-2-yl groups with 2-pyridyl groups, has been performed using Gaussian (Frisch et al., 2016) at the HPC center of FU Berlin. DFT calculations were performed using the 6-311++ basis set with UB3LYP exchange. Structures derived for nitrogen in the "up" orientation (with respect to the orifice) are depicted in Figs.
1 and A1. The difference in total energies for the "up" and "down" orientations of the trapped radical was 22.6 meV for NO@C60-OH1, somewhat larger than the value (8 meV) published earlier (Hasegawa et al., 2018a), which might be caused by the use of a different basis set. For NO@C60-OH3 we calculated 40.2 meV.

Multi-frequency EPR data

EPR data published previously by Hasegawa et al. (2018a) for NO@C60-OH1 were obtained in cw mode at a microwave frequency of 9.56 GHz. Spectra measured at 3.45 and 9.76 GHz using the FSE technique are depicted in Fig. 2. The published g matrix parameter set (see Table 1) obtained by spectral simulation of the cw spectrum is characterized by an extreme g anisotropy. The values determined by fitting the FSE spectra confirm the two larger g matrix parameters; however, they deviate significantly with respect to the pseudo-axial g 3 parameter. We quote no error margins, because a large g strain value is obtained for the g 3 value using the "esfit" routine (EasySpin; Stoll and Schweiger, 2006). The pseudo-axial principal parameters g 3 = 0.646 and 0.679, respectively, are still found to be very small compared to g 3 = 1.7175 for the same compound trapped in a crystal (Ryzhkov and Toscano, 2005) or g 3 = 1.888 when incorporated into a zeolite (Poeppl et al., 2000), but render the g matrix substantially less anisotropic compared to the data in Hasegawa et al. (2018a). For further confirmation of the g matrix parameter set determined by fitting the FSE spectra, we also performed a PEANUT experiment (Stoll et al., 1998), probing the Rabi nutation frequency as a function of B 0 (see below). Parameters determined for the NO@C60-OH3 compound are also listed in Table 1. Spectra are shown in Fig. A2 in Appendix A.
Also for this compound with a slightly modified cage a similar set is observed, with the fit parameters changing slightly towards larger values compared to those found for the OH1 compound. Even the slight difference in cage structure apparently influences the g matrix values. However, no prominent features of the anticipated magnetic interaction between encapsulated NO radicals within the intermolecular hydrogen-bonded dimeric triply hydroxylated C 60 -derived cages were observed. Because of the rather large deviation of the g i parameters from the free-electron value and the large anisotropy of g, a significant variation of the nutation frequency was expected as a function of orientation. If by orientation selection a particular g principal position is chosen, the two remaining g parameters determine the nutation frequency. As shown in Fig. 3, all Rabi frequencies are smaller than the reference value determined by a standard coal sample and increase towards the high-field spectral range. In Fig. 3, the expected nutation frequency distributions (Stoll et al., 1998) are indicated by dashed vertical lines at the g principal values, using the values for NO@C60-OH3 at X-band in Table 1. The agreement is quite convincing, and a very small g 3 parameter as deduced earlier can be excluded, since it would lead to much smaller nutation frequencies down to ≈ 3.7 MHz in the perpendicular orientations (g 1 and g 2 region, 500 mT region) of the radical. Thus, the small value of g 3 = 0.225 (Hasegawa et al., 2018a) is probably caused by overestimating the flat high-field part of the cw spectrum in the simulation.
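The crystal-field splittings quoted in the text (about 17 meV for g3 ≈ 0.7 and 20 meV for g3 ≈ 0.8) can be reproduced numerically by inverting a Lunsford-type partial-quenching relation, g3 = g_e − 2Lλ/√(λ² + Δ²). This form is an assumption here, adopted only because it is consistent with the quoted numbers:

```python
import math

G_E = 2.0023                      # free-electron g value
LAMBDA_MEV = 123.16 * 0.1239842   # NO spin-orbit coupling, cm^-1 -> meV

def delta_from_g3(g3, l=1.0):
    """Invert g3 = g_e - 2*l*lambda / sqrt(lambda^2 + delta^2)
    for the crystal-field splitting delta (in meV).

    The partial-quenching form used here is an assumption consistent
    with the numbers quoted in the text, not the paper's displayed Eq. (1).
    """
    E = 2.0 * l * LAMBDA_MEV / (G_E - g3)   # sqrt(lambda^2 + delta^2)
    return math.sqrt(E**2 - LAMBDA_MEV**2)

print(round(delta_from_g3(0.7), 1))   # NO@C60-OH1: ~17-18 meV
print(round(delta_from_g3(0.8), 1))   # NO@C60-OH3: ~20 meV
```

The strong nonlinearity of this inversion near g3 → g_e illustrates why the quoted g strain translates into a sizeable uncertainty in Δ.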
Following the idea that partial quenching of the orbital angular momentum is caused by lifting of the degeneracy between the antibonding 2π x and 2π y orbitals, the energy splitting between these orbitals can be estimated from the pseudo-axial value of the NO g matrix (Ryzhkov and Toscano, 2005; Lunsford, 1968): g 3 = g e − 2Lλ/(λ 2 + Δ 2 ) 1/2 . (1) Here, λ is the spin-orbit coupling constant (123.16 cm −1 for NO; James and Thibault, 1964), Δ defines the crystal-field splitting of the 2π x and 2π y orbitals, and L is a correction to the angular momentum along z caused by the crystal field. L is equal to 1 for a free molecule. A change in L represents a modification of the molecular wave function by the crystal field. It should be noted, however, that in previous studies (Zeller and Känzig, 1967; Shuey and Zeller, 1967) no significant deviations from 1 were observed. The highly nonlinear dependence of g 3 on Δ is depicted in Fig. A3 (Appendix A). Using Eq. (1), a level splitting of approximately 17 meV (200 K) is determined from g 3 ≈ 0.7 for NO@C60-OH1 and 20 meV from g 3 ≈ 0.8 for NO@C60-OH3 (see Table 1). The lifting of the 2π x / 2π y degeneracy is not unexpected considering the observation of a finite zero-field splitting (ZFS) for ³O₂ in a cage with C 1 symmetry (Futagoishi et al., 2017). In this study the potential barrier for librational motions of ³O₂ was estimated as 398 cm −1 (49 meV) by measuring the shift of its principal ZFS component with respect to the value of the free molecule. The size of this potential barrier is of the same order of magnitude as the one calculated here for NO. The lifting of degeneracy leads to a deviation of the orbitals from two fully circularly symmetric angular momentum eigenstates with opposite momentum to two orthogonal elliptic orbitals that are not angular momentum eigenstates, but have non-vanishing angular momentum expectation values. With a 200 K level splitting only one of the orbitals is occupied at 5 K, and rotation of the molecule corresponds to transitions from
one to the other eigenstate, which should be impossible due to the large level splitting. Nevertheless, the remaining angular momentum expectation value gives rise to the very small g 3 value. The splitting is much less than the values found for ²NO and ²O₂ − trapped in crystals, on surfaces, or in zeolites, which range from 300 to 500 meV. The 2π x and 2π y level splitting is of the same order of magnitude as the energy difference for the "up" and "down" orientation of the NO radical with respect to the cage opening calculated earlier (Hasegawa et al., 2018a) and also found in this study. For "up"/"down" axis reorientation a factor of 10 larger barrier was found. Considering the additional degree of freedom of hindered rotation about the axis of the radical with an unknown transition barrier, this gives rise to a complicated three-dimensional orthorhombic potential energy surface. It is not surprising that under these conditions the EPR signal can be detected only at very low temperatures.

Fig. 4 (a) The signal intensity is scaled to the intensity of a field-separated g ≈ g e signal from an unidentified S = 1/2 species following the Curie law. An exponential temperature dependence is assumed for the fit (dashed line) with a decay constant of 3 K. (b) Temperature dependence of the spin echo decay constant T 2 of NO@C60-OH3 (480 mT, 9.7 GHz, 2.5 mM/CS 2 ). The faster decay constant with larger weight is shown in cases where the time traces required a bi-exponential fit. Again an exponential temperature dependence is assumed for the dashed line, with a decay constant of 3.1 K.

The temperature dependence of the NO FSE signal (X-band) was measured relative to an unidentified stable S = 1/2, g ≈ g e species in the sample and is shown in Fig.
4 (left). The NO signal decreases much faster upon temperature increase than according to the Curie law, since a dramatic signal loss relative to the reference signal is observed. This strong additional signal decay of the NO radical beyond the Curie law can be described by an activation temperature of about 3 K. The dramatic loss of signal intensity by a factor of 50 in the narrow temperature range of 5 to 12 K is indicative of a decrease in T 2 . This was confirmed by measuring the two-pulse echo decay constant T 2 * at the peak signal position. Its temperature dependence could be fitted assuming an exponential temperature dependence with an activation temperature of 3.1 K, as shown in Fig. 4 (right). Measuring the field dependence of T 2 * at different temperatures supports the simple model of a restricted rotation. As shown in Fig. 5 (left), at 5 K the T 2 * values increase from 600 to 1500 ns, with the probed radicals changing from perpendicular to parallel orientation. This can be taken as evidence that small-angle librations around the long axis are activated at this temperature, whereas long-axis reorientations are still prevented. In contrast, at 10 K this restriction is no longer valid, shortening the echo decay accordingly over the full field range; i.e. librations about all molecular axes occur. This hypothesis is also supported by the observation that T 1 , determined by inversion recovery, also increases significantly at T = 3.6 K when moving from perpendicular to parallel orientation (see Fig. 5, right). This field dependence of T 1 leads even at 3.6 K to a noticeable change in the FSE pattern if the pulse repetition time is not sufficiently long (see Fig. A4, Appendix A). While T 1 and T 2 show a significant temperature dependence, the spectral shape, and thus the g parameters, are virtually unaffected within the temperature range of 3.6 to 12.5 K (see Table B1, Appendix B).
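The dashed-line fits in Fig. 4 assume a simple exponential temperature dependence, y(T) ∝ exp(−T/T0). A hedged sketch of such a fit via log-linearization follows; the data points are synthetic placeholders, since the measured values are not tabulated in this excerpt:

```python
import math

def fit_exponential_decay(T, y):
    """Least-squares fit of y(T) = A * exp(-T / T0) via log-linearization.
    Returns (A, T0). Assumes all y > 0."""
    n = len(T)
    lny = [math.log(v) for v in y]
    mT = sum(T) / n
    mL = sum(lny) / n
    slope = sum((t - mT) * (l - mL) for t, l in zip(T, lny)) / \
            sum((t - mT) ** 2 for t in T)
    T0 = -1.0 / slope                 # decay constant in K
    A = math.exp(mL + mT / T0)        # prefactor
    return A, T0

# Synthetic echo-decay constants mimicking Fig. 4b (decay constant 3.1 K);
# the 1500 ns prefactor is illustrative only.
T = [3.6, 5.0, 7.0, 9.0, 12.0]
y = [1500.0 * math.exp(-t / 3.1) for t in T]
A, T0 = fit_exponential_decay(T, y)
print(round(T0, 2))   # recovers the assumed 3.1 K decay constant
```

For noisy data, fitting in log space implicitly down-weights the large-y points; a nonlinear least-squares fit would weight them differently.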
Magn. Reson., 1, 197-207, 2020 (https://doi.org/10.5194/mr-1-197-2020)

Loss of the cw EPR signal intensity at temperatures above 80 K was also reported in Hasegawa et al. (2018b). Since the cw signal intensity is not affected by T 2 * , the NO signal could be detected in cw mode up to 40 K (Hasegawa et al., 2018b), with a much smaller decrease from 5 to 20 K than observed in our pulsed EPR study probing the echo signal with a two-pulse sequence. The low activation temperature of 3 K (∼ 0.3 meV) has to be compared to the much larger values found in the case of N@C 60 and P@C 60 , in which a well-defined potential of spherical or axial symmetry leads to degenerate vibrational levels of the translational degree of freedom of the encapsulated atoms in the range of 8 to 16 meV (Pietzak et al., 2002). The partially opened cage resembles more the situation in the C 70 cage by providing a nearly axial potential. Assuming that vibration along this preferred axis is lowest in energy and taking into account the larger mass of the radical, a vibrational eigenfrequency of about 5 meV for the center of mass (CM) of the radical would be expected, which is still more than 1 order of magnitude larger than the experimental value. In contrast to encapsulated atoms, we also have to consider for the NO case a librational mode of the radical with respect to the cage axis. In a study of H 2 encapsulated in C 60 or C 70 , the eigenstates of H 2 were determined numerically by invoking the appropriate five-dimensional potential surface, describing translational and rotational degrees of freedom (Xu et al., 2009; Mamone et al., 2013). Lacking numerical values for the potential surface in our more complicated case, it is only possible to estimate typical values for the librational mode by approximating the interconversion between "up"/"down" (its z axis) of the radical axis in a potential well of 80 meV (645 cm −1 ) (Hasegawa et al., 2018a) as a torsional oscillator. Converting the 80 meV
rotational barrier into a torsion spring constant for librations of κ = 40 meV and using ω = √(κ/θ), with θ the moment of inertia of NO, we arrive at a characteristic mode energy for the libration of about 4 meV (about 40 K), which is substantially larger than the experimentally observed activation temperature. However, when including transverse degrees of freedom for axis reorientation, it is not unlikely that the characteristic mode energies might further be reduced towards the experimental value.

Table 3. Hyperfine parameters calculated for NO@C60-OH1 and NO@C60-OH3 in their "up" configuration using Gaussian G16/A03 (G16/A03, B3LYP, 6-311++). The calculated values for the "down" orientation differ by less than 3 %.

ENDOR data

Orientation-selective ENDOR spectra of NO@C60-OH1 were measured at 9.7 GHz. As depicted in Fig. 6 (left), the center of the lines shifts towards higher frequency when changing the observation field position from the lowest to the highest edge of the absorption pattern. The frequency position at the low-field side of the spectrum and the magnitude of the shift are inconsistent with proton hfi but are indicative of a dominant dipolar 14N hfi, allowing simple determination of A_i for the extreme field positions. For a determination of the dipolar and quadrupolar hfi parameters, observation field values at the low and high ends of the FSE spectrum were chosen, anticipating that the g matrix and hfi tensor axes are collinear. Best ENDOR resolution is obtained at the low-field edge, allowing determination of some hfi parameters by fitting, as shown in Fig. 6 (right). At this field position a consistent fit is obtained by only fixing the nuclear Larmor frequency to its field-determined value. At the high-field edge no line quartet is observed for this compound. The broad pattern, however, is consistent with the result of a spectral simulation, shown in Fig.
7, using a parameter set completed with the nqi parameter of NO@C60-OH3, which is better resolved at the high-field edge of the ENDOR pattern. It should be noted that no simple pattern is expected for the intermediate field range because of significant g strain. For this reason, fit values are only quoted for the g1 and g3 axis directions. No information about the signs of the hfi parameters can be deduced from the experimental spectra. The assignments given in Table 2 are tentatively made by invoking the calculated hfi constants (see Table 3). Although not in very good quantitative agreement with the experiment, the calculated small isotropic hfi (+15 MHz) necessitates assignment of a negative sign to A1. Although spectral resolution is lacking when probing at the high-field edge due to the large g3 strain, the center of gravity still gives a reliable value for the large dipolar hfi for both compounds. The absent spectral resolution, even when observing at the van Hove singularities of the FSE spectrum, could result from the simultaneous presence of "up"/"down" configurations, as observed in X-ray crystallography, with slightly different hfi parameters.

Figure 7. Simulated ENDOR spectra of NO@C60-OH1, using parameters listed in Table 2.

Conclusions

Using various EPR techniques, the spin Hamiltonian parameters for the encapsulated NO radical are determined. The radical, being confined in C60-derived cages, exemplifies the transition between a free molecule in an isotropic potential and a radical fixed by a rigid confinement. The NO radical is particularly suited for such an investigation, since the g factor
of the free molecule in its 2Π1/2 rotational ground state changes from zero (Mendt and Pöppl, 2015) to a g matrix in which all parameters are close to the free-electron value for the rigidly localized radical (Chiesa et al., 2010). In case the axial molecular symmetry is maintained by an environment allowing free rotation about the axis, the g parameter g3, assigned to the NO bond axis, is predicted to vanish. The measured value g3 = 0.77(5) is indicative of an intermediate situation of the radical and yields information about the locking potential's deviation from axial symmetry. The 17 meV asymmetry found here is quite small compared to the situation in polycrystalline or amorphous matrices, where it ranges from 300 to 500 meV. The analysis of the spin relaxation times resulted in an activation temperature of about 3 K, assigned to temperature-activated motion of the radical with coupled rotational and translational degrees of freedom in the complicated three-dimensional potential provided by the cage.

https://doi.org/10.5194/mr-1-197-2020 Magn. Reson., 1, 197-207, 2020

Performing ENDOR, the 14N hyperfine coupling parameters were determined. The experimental values are in fair agreement with predictions from a DFT calculation. The spectral resolution was not sufficient to discriminate between the parameter sets expected for the X-ray-crystallography-confirmed "up"/"down" configurations of the radical with respect to the orifice of the cage.
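The torsional-oscillator estimate quoted in the relaxation analysis above (a mode energy of roughly ħ√(κ/θ) ≈ 4 meV for κ = 40 meV) can be checked numerically. The sketch below is not from the paper; the NO bond length (1.15 Å) and the isotopic masses are standard values assumed here for the moment of inertia:

```python
# Back-of-the-envelope check (not the paper's code) of the libration-mode
# energy E = hbar * sqrt(kappa/theta).  The bond length and masses are
# standard NO values assumed for this illustration; kappa is taken from
# the 40 meV torsion spring constant quoted in the text.
import math

HBAR = 1.054571817e-34      # J s
EV = 1.602176634e-19        # J per eV
AMU = 1.66053906660e-27     # kg

# Moment of inertia of NO about an axis through its centre of mass:
# reduced mass times bond length squared.
m_N, m_O = 14.003 * AMU, 15.995 * AMU
mu = m_N * m_O / (m_N + m_O)
r = 1.15e-10                # assumed NO bond length in metres
theta = mu * r * r          # ~1.6e-46 kg m^2

kappa = 40e-3 * EV          # torsion spring constant, J per rad^2
omega = math.sqrt(kappa / theta)
E_meV = HBAR * omega / EV * 1e3   # libration mode energy in meV
```

With these inputs the mode energy comes out close to the 4 meV (about 40 K) quoted in the text, an order of magnitude above the 3 K activation temperature extracted from the relaxation data.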
The g matrix parameters did not show any temperature dependence in the range of 3.6 to 12 K, in which a dramatic orientation-dependent decrease in T2* is observed. This indicates that the radical is localized, not allowing for excitation of rotational modes about its axis, which would modify the g3 value. Apparently only low-energy modes with small amplitude around the equilibrium orientation are excited at these temperatures. It should be noted, however, that the accuracy of the data analysis is high enough to detect a small difference in g parameters using cages with slightly modified orifices. It will be interesting to see in the future whether advanced computational methods will be able to reproduce g matrix and hfi tensor data for this radical in such a complicated potential.

Appendix B: Additional table

Table B1. Best fit parameters for the NO@C60-OH3 spectra measured at different temperatures (data not shown).

Figure 1. DFT-optimized structure of NO@C60-OH1 with N "up". (a) Ball-and-stick representation of the modified C60 cage and van der Waals spheres of the caged NO with carbon (grey), hydrogen (white), nitrogen (blue), oxygen (red), and sulfur (yellow). (b) Top view on the orifice with stick representation of the cage except van der Waals spheres for the oxygen and hydrogen atoms of the orifice and the caged NO.

Figure 3. PEANUT spectrum of NO@C60-OH3 measured at 5 K. The red line indicates the reference frequency measured for a coal sample with isotropic g = 2. The black vertical lines indicate the expected nutation frequency distributions (Stoll et al., 1998) at the three principal g values for NO@C60-OH3 in Table 1 (X-band).

Figure 4.
(a) Echo-detected signal of NO@C60-OH3 (480 mT, 9.7 GHz, 2.5 mM/CS2, 200 ns pulse separation) as a function of temperature. The signal intensity is scaled to the intensity of a field-separated g ≈ ge signal from an unidentified S = 1/2 species following the Curie law. An exponential temperature dependence is assumed for the fit (dashed line), with a decay constant of 3 K. (b) Temperature dependence of the spin echo decay constant T2 of NO@C60-OH3 (480 mT, 9.7 GHz, 2.5 mM/CS2). The faster decay constant with larger weight is shown in cases where the time traces required a bi-exponential fit. Again an exponential temperature dependence is assumed for the dashed line, with a decay constant of 3.1 K.

Figure 5. (a) Field and temperature dependence of the two-pulse echo decay of NO@C60-OH3. The 5 K data set could be satisfactorily fitted assuming a single exponential; the 10 K data required a bi-exponential fit. Both components were of similar magnitude, with the shorter T2 larger in amplitude. (b) Field dependence of T1 of NO@C60-OH3, measured using an inversion recovery pulse sequence at 3.6 K.

Figure 6. (a) Davies ENDOR spectra of NO@C60-OH1 (T = 5 K, 10 mM/CS2) measured as a function of B0. Spectra are corrected with respect to different accumulation times for better comparison of the spectral patterns. The Davies ENDOR pulse sequence (40-30 000-20-200-40 ns, 25 µs rf pulse) was identical for all spectra. (b) ENDOR spectrum of NO@C60-OH1 measured at 440 mT (see panel a) together with simulation.

Figure A1. Top view of the DFT-optimized structure of NO@C60-OH3 with N "up". The modified C60 cage is represented by sticks except van der Waals spheres for the oxygen and hydrogen atoms of the orifice and the caged NO - C (grey), H (white), N (blue), O (red), and S (yellow).

Figure A3. Dependence of the pseudo-axial g3 matrix element of the NO radical as a function of the 2π*x and 2π*y level splitting.
Figure A4. 9.7 GHz FSE spectra of NO@C60-OH3 measured at 3.6, 4.1, and 5 K. Using a rather short pulse repetition time (1 ms), the high-field part of the spectrum is partially saturated.

Table 1. Data of both compounds (g-strain fit parameters are listed in brackets). Previously published values (Hasegawa et al., 2018a) are given for comparison. Level splittings deduced from the deviation of the pseudo-axial g3 parameter from ge are also shown.

Table 2. Hyperfine parameters determined by fitting Davies ENDOR spectra measured under orientation selection conditions providing the best resolution. For an assignment of signs, see text. n.d.: not determined.
Bootstrap Order Determination for ARMA Models: A Comparison between Different Model Selection Criteria

The present paper deals with order selection for models of the autoregressive moving average class. A novel method, previously designed to enhance the selection capabilities of the Akaike Information Criterion and successfully tested, is now extended to three other popular selectors commonly used by both theoretical statisticians and practitioners: the final prediction error, the Bayesian information criterion, and the Hannan-Quinn information criterion, which are employed in conjunction with a semiparametric bootstrap scheme of the sieve type.

Introduction

Autoregressive moving average (ARMA) models [1] are a popular choice for the analysis of stochastic processes in many fields of applied and theoretical research. They are mathematical tools employed to model the persistence, over time and space, of a given time series. They can be used for a variety of purposes, for example, to generate predictions of future values, to remove the autocorrelation structure from a time series (prewhitening), or to achieve a better understanding of a physical system. As is well known, the performance of an ARMA model is critically affected by the determination of its order: once properly built and tested, such models can be successfully employed to describe reality, for example, trend patterns of economic variables and temperature oscillations in a given area, or to build future scenarios through simulation exercises. Model order choice plays a key role not only for the validity of the inference procedures but also, from a more general point of view, for the fulfillment of the fundamental principle of parsimony [2,3]. Ideally, the observation of this principle leads to choosing models showing simple structures on the one hand but able to provide an effective description of the data set under investigation on the other. Less parsimonious models tend to
extract idiosyncratic information and are therefore prone to introduce high variability in the estimated parameters. Such variability leaves the model with a lack of generalization capability (e.g., when new data become available), even though, by adding more and more parameters, an excellent fit of the data is usually obtained [4]. Overfitting is more likely to occur when the system under investigation is affected by different sources of noise, for example, related to changes in survey methodologies, time-evolving processes, and missing observations. These phenomena, very common and in many cases simply unavoidable in "real life" data, might have a significant impact on the quality of the data set at hand. Under noisy conditions, a too complex model is likely to fit the noise components embedded in the time series and not just the signal, and therefore it is bound to yield poor predictions of future values. On the other hand, bias in the estimation process arises when underfitted models are selected, so that only a suboptimal reconstruction of the underlying Data Generating Process (DGP) can be provided. As will be seen, bias also arises as a result of the uncertainty conveyed by the model selection process itself. ARMA model order selection is a difficult step in time series analysis. This issue has attracted a lot of attention, so that, according to different philosophies and theoretical and practical assumptions, several methods, both parametric and nonparametric, have been proposed over the years. Among them, bootstrap strategies [5-9] are gaining more and more acceptance among researchers and practitioners.
In particular, in [6] a bootstrap-based procedure applied to the Akaike Information Criterion (AIC) [10,11] in the case of ARMA models, called b-MAICE (bootstrap-Minimum AIC Estimate), has proven to enhance the small-sample performance of this selector. The aim of this work is to extend such a procedure to different selectors, that is, the final prediction error (FPE) [12] and two information-based criteria, namely the Bayesian information criterion (BIC) [13,14] and the Hannan-Quinn criterion (HQC) [15,16]. In particular, the present paper is aimed at giving empirical evidence of the quality of the bootstrap approach in model selection by comparing it with the standard procedure, which, as is well known, is based on the minimization of a selection criterion. The empirical study (presented in Section 4) has been designed to contrast the performances of each of the considered selectors in both the nonbootstrap and the bootstrap world. The validity of the proposed method is assessed not only in the case of pure ARMA processes but also when real-life phenomena are simulated and embedded in the artificial data. In practice, the problem of order determination is considered also when the observed series is contaminated with outliers and additive Gaussian noise. The latter type of contamination has been employed, for example, in [17] for testing a model selection approach driven by information criteria in the autoregressive fractionally integrated moving average (ARFIMA) and ARMA cases. Such a source of disturbance has been employed here in order to test the degree of robustness of the proposed method against overfitting. As will be seen, computer simulations show that the addition of white noise generates a number of incorrect specifications comparable to those resulting from the contamination of the process with outliers of the innovation type. Outliers are a common phenomenon in time series, considering the fact that real-life time series from many fields, for example,
economics, sociology, and climatology, can be subject to, and severely influenced by, interruptive events, such as strikes, outbreaks of war, unexpected heat or cold waves, and natural disasters [18,19]. The issue is absolutely nontrivial, given that outliers can impact virtually all the stages of the analysis of a given time series. In particular, model identification can be heavily affected by additive outliers, as they can induce the selection of underfitted models as a result of the bias elements introduced into the inference procedures. In the simulation study (Section 4), outliers of the additive type (i.e., added to some observations) and of the innovative type (i.e., embedded in the innovation sequence driving the process) [19] will be considered.

The remainder of the paper is organized as follows: in Section 2, after introducing the problem of order identification for time series, the considered selectors are illustrated along with the related ARMA identification procedure. In Section 3 the employed bootstrap selection method is illustrated and the bootstrap scheme briefly recalled. Finally, the small-sample performance of the proposed method is assessed via Monte Carlo simulations in Section 4.
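The two contamination mechanisms just described can be made concrete with a small simulation sketch. This is illustrative code, not the paper's simulation setup; the ARMA(1,1) coefficients and the outlier size are arbitrary choices. An additive outlier perturbs a single observation only, while an innovation outlier enters the innovation sequence and propagates through the dynamics:

```python
# Illustrative sketch (not the paper's simulation code) of the two outlier
# types for an ARMA(1,1) process: an additive outlier is added to one
# observed value, while an innovation outlier is injected into the
# innovation sequence and so propagates through the recursion.
import random

def arma11(n, phi, theta, eps):
    """Generate y_t = phi*y_{t-1} + eps_t + theta*eps_{t-1}."""
    y, prev_y, prev_e = [], 0.0, 0.0
    for e in eps:
        cur = phi * prev_y + e + theta * prev_e
        y.append(cur)
        prev_y, prev_e = cur, e
    return y

rng = random.Random(0)
n = 200
eps = [rng.gauss(0.0, 1.0) for _ in range(n)]

clean = arma11(n, 0.6, 0.3, eps)

# Additive outlier: perturb the *observation* at t = 100 only.
additive = list(clean)
additive[100] += 8.0

# Innovation outlier: perturb the *innovation* at t = 100 and regenerate,
# so the shock is filtered through the ARMA dynamics.
eps_io = list(eps)
eps_io[100] += 8.0
innovative = arma11(n, 0.6, 0.3, eps_io)
```

Note how the additive outlier leaves every other observation untouched, whereas the innovation outlier also shifts the subsequent values through the AR and MA terms (at t = 101 the shift is (phi + theta) times the shock).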
Order Selection for Time Series Models

A key concept underlying the present paper is that, in general, "reality" generates complex structures, possibly ∞-dimensional, so that a model can at best capture only the main features of the system under investigation in order to reconstruct a simplified version of a given phenomenon. Models are just approximations of a given (nontrivial) phenomenon, and the related identification procedures can never lead to the determination of the "true" model. In general, there is no true model in a finite world. What we can do is to find the one giving the best representation of the underlying DGP, according to a predefined rule. In this section, after highlighting the role played by model selection procedures in generating uncertainty, we briefly introduce the models belonging to the ARMA class along with the order selectors considered. Finally, the standard information-criterion-based selection procedure is illustrated.

Uncertainty in Model Selection. Uncertainty is an unfortunate, pervasive, and inescapable feature of real-life data which has to be faced continually by both researchers and practitioners. The framework dealt with here is clearly no exception: if the true model structure is an unattainable goal, approximation strategies have to be employed. Such strategies are generally designed on an iterative basis and provide an estimate of the model structure which embodies, by definition, a certain amount of uncertainty. Common sources of uncertainty are those induced by the lack of discriminating power of the employed selector and by the so-called model selection bias [20,21], which arises when a model is specified and fitted on the same data set. Unfortunately, not only are these two types of uncertainty not mutually exclusive, but statistical theory also provides little guidance to quantify their effect in terms of the bias introduced in the model as a result [22]. Particularly dangerous is this last form of uncertainty, as it is
based upon the strong and unrealistic assumption of making correct inference as if the model were known to be true, while its determination has been made on the same set of data. On the other hand, the first source of uncertainty is somewhat less serious, given its direct relationship with the size of the competition set, which is usually included in the design of the experiment. In practice, it is related to the fact that very close minima of the selection criterion can be found in the model selection process, so that even small variations in the data set can cause the identification of different model structures. In general, trying to explain only in part the complexity conveyed by the observed process by means of structures that are as simple as possible is a way to minimize uncertainty in model selection, as it is likely to lead to the definition of a smaller set of candidate models. This approach can be seen as an extension of the principle of parsimony to the competition set. In the sequel, it will be emphasized how the proposed procedure, being aimed at replicating both the original process and the related selection procedure, has a positive effect in reducing both of the considered sources of uncertainty [23].

The Employed Identification Criteria. Perhaps the most well-known model order selection criteria (SC), among those considered, are the AIC and the FPE, whose asymptotic equivalence has been proved in [24]. AIC has been designed on an information-theoretic basis as an asymptotically unbiased estimate of the Kullback-Leibler divergence [25] of the fitted model relative to the true model.
Assuming x = (x1, ..., xn), n being the sample size, to be randomly drawn from an unknown distribution G(x) with density h(x), the estimation of h is done by means of a parametric family of distributions with densities {f(x | θ); θ ∈ Θ}, θ being the vector of unknown parameters. Denoting by f(x | θ) the predictive density function, with h the true density and f the approximating one, the Kullback-Leibler discrepancy can be expressed as follows:

KL(h; f) = ∫ h(x) log h(x) dx − ∫ h(x) log f(x | θ) dx. (1)

As the first term on the right-hand side of (1) does not depend on the model, it can be neglected, so that we can rewrite the distance in terms of the expected log likelihood, ℓ(θ; G); that is,

ℓ(θ; G) = ∫ log f(x | θ) dG(x). (2)

This quantity can be estimated by replacing G with its empirical distribution Ĝ, so that we have ℓ(θ̂; Ĝ) = (1/n) Σ_{i=1}^{n} log f(x_i | θ̂). This overestimates the expected log likelihood, given that the same data are used both to estimate θ̂ and to evaluate the criterion. The related bias can be written as follows:

b(G) = E_G[ℓ(θ̂; Ĝ) − ℓ(θ̂; G)], (3)

and therefore an information criterion can be derived from the bias-corrected log likelihood. Denoting by k and n the number of estimated parameters and the sample size, respectively, Akaike proved that b(G) is asymptotically equal to k/n, so that the information-based criterion takes the form ℓ(θ̂; Ĝ) − k/n. By multiplying this quantity by −2n, finally AIC is defined as −2 log L(θ̂) + 2k. In such a theoretical framework, AIC can be seen as a way to solve the Akaike Prediction Problem [6], that is, to find a model M0 producing an estimate of the density f minimizing the Kullback-Leibler discrepancy (1). Originally conceived for AR processes and extended to the ARMA case by Soderstrom and Stoica [24], FPE was designed as the minimizer of the one-step-ahead mean square forecast error, after taking into account the inflating effect of the estimated parameters. The FPE statistic is defined as FPE(k) = [(1 + k/n)/(1 − k/n)] σ̂²(k), where σ̂² is the estimated variance of the residuals and k is the model's size. A different perspective has led to the construction of BIC-type criteria, which are grounded on the maximization of the model posterior
probability [14]. In more detail, they envision the specification of prior distributions on the parameter values and on the models, respectively denoted by π(θ | M) and P(M), and their introduction into the analysis through the joint probability function P(x, M) = P(M) π(x | M). Posterior probabilities are then obtained through Bayes' theorem, so that the model M maximizing (4), that is,

P(M | x) ∝ P(M) ∫_{θ∈Θ} L(θ; x, M) π(θ | M) dθ, (4)

is found. With L(θ; x, M) being the likelihood function associated with both the data and the model M, the selected order will be k̂ = arg max_M P(M | x). By assuming all the models equally probable, that is, P(M) = 1/(K_max + 1), the BIC criterion is hence defined by −2 log L(θ̂) + k log(n). The last criterion considered, constructed from the law of the iterated logarithm, is the HQC, in which the penalty function grows at a very slow rate as the sample size increases. It is defined as follows: HQC = −2 log L(θ̂) + 2k log(log(n)).

All these selectors can be divided into two groups: one achieving asymptotic optimality [26] and one achieving selection consistency. AIC and FPE fall in the first group, in the sense that the selected model asymptotically tends to reach the smallest average squared error [27,28] if the true DGP is not included in the competition set. On the other hand, BIC and HQC are dimension consistent [29], in that the probability of selecting the "true" model approaches 1 as the sample size goes to infinity. However, it should be pointed out that such an asymptotic property holds only if the true density is in the set of candidate models. In this regard, AIC and FPE, as well as the other Shibata-efficient criteria (e.g., Mallows [30]), fail to select the "true" model asymptotically. As pointed out earlier, the ∞-dimensionality of the "truth" implies that all models are "wrong" to some extent, except in trivial cases, so that no set of competing models will ever encompass the true DGP. As long as this view is held, asymptotically efficient criteria might be preferred. In
this case, one may argue a lack of significance in comparing any finite list of candidate models once the existence of a true one is ruled out. Such an approach is justified in that, even if no model can ever represent the truth, we can still achieve the goal of finding the one that is approximately correct. Conversely, if one does believe that the true density belongs to the model space, then dimension-consistent selection criteria may be preferred.

ARMA Model Selection through Minimization of Selection Criteria. In what follows, it is assumed that the observed time series {y_t}, t ∈ Z+, is a realization of a real-valued, zero-mean, second-order stationary process admitting an autoregressive moving average representation of orders p and q, that is, y_t ∼ ARMA(p, q), with (p, q) ∈ Z+. Its mathematical expression is as follows:

φ(B) y_t = ϑ(B) ε_t, (5)

with φ(B) = 1 − φ1 B − ⋯ − φp B^p, φ_j ∈ R, the AR polynomial and ϑ(B) = 1 + ϑ1 B + ⋯ + ϑq B^q, ϑ_j ∈ R, the MA polynomial. By B the backward shift operator is denoted, such that B^k y_t = y_{t−k}, whereas {ε_t} is assumed to be a sequence of centered, uncorrelated variables with common variance σ². The parameter vector is denoted by Γ. The standard assumptions of stationarity and invertibility of the AR and MA polynomials, respectively, that is,

φ(z) ≠ 0 for all |z| ≤ 1, (6)
ϑ(z) ≠ 0 for all |z| ≤ 1, (7)

are supposed to be satisfied. Finally, the ARMA orders of the true underlying DGP (5) are denoted by (p°, q°) (i.e., {y_t} ∼ ARMA(p°, q°)) and the related model by M0(Γ).
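For low orders, the stationarity and invertibility requirements on the AR and MA polynomials can be checked in closed form. The sketch below is not from the paper; it uses the classical AR(2) "stationarity triangle" (equivalent to the AR polynomial having no roots inside or on the unit circle) and the |ϑ1| < 1 condition for MA(1) invertibility, with arbitrary example coefficients:

```python
# Closed-form admissibility checks for an ARMA(2,1) parameterization
# (a sketch with standard textbook conditions; coefficients are arbitrary
# examples, not DGPs from the paper's experiment).

def ar2_stationary(phi1, phi2):
    """AR(2) stationarity triangle: 1 - phi1*z - phi2*z^2 has no
    roots on or inside the unit circle."""
    return abs(phi2) < 1 and phi1 + phi2 < 1 and phi2 - phi1 < 1

def ma1_invertible(theta1):
    """MA(1) invertibility reduces to |theta1| < 1."""
    return abs(theta1) < 1

ok = ar2_stationary(0.75, -0.5) and ma1_invertible(0.4)   # admissible pair
bad = ar2_stationary(1.2, 0.3)                            # explosive AR part
```

For general orders one would instead compute the polynomial roots numerically and verify that they all lie outside the unit circle.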
The identification of the best approximating model for M0 is carried out over an a priori specified set Λ of plausible candidate models M_j; that is, the chosen model, M̂0(Γ̂) ≡ (p̂0, q̂0), is selected from Λ as an approximation of M0(Γ). In the ARMA case, each model M_j ∈ Λ represents a specific combination (p, q) of autoregressive and moving average orders. The set Λ is upper-bounded by the two integers P and Q for the AR and MA part, respectively; that is,

Λ = {(p, q) : 0 ≤ p ≤ P, 0 ≤ q ≤ Q}. (9)

This assumption is a necessary condition for the above-mentioned Shibata efficiency and dimension-consistency properties to hold, other than for the practical implementation of the procedure (the model space needs to be bounded). From an operational point of view, the four SC considered in this work, when applied to models of the ARMA class, take the following form:

AIC(p, q) = ln σ̂²_{p,q} + 2(p + q + 1)/n, (10)
FPE(p, q) = σ̂²_{p,q} (n + p + q)/(n − p − q), (11)
BIC(p, q) = ln σ̂²_{p,q} + (p + q + 1) ln(n)/n, (12)
HQC(p, q) = ln σ̂²_{p,q} + 2(p + q + 1) ln(ln(n))/n, (13)

where σ̂²_{p,q} is an estimate of the Gaussian pseudo-maximum-likelihood residual variance obtained when fitting an ARMA(p, q) model; that is,

σ̂²_{p,q} = (1/n) Σ_{t=1}^{n} (y_t − ŷ_t)². (14)

Equations (10)-(13) can be synthetically expressed as follows:

SC(p, q) = ln σ̂²_{p,q} + C(p, q, n), (15)

where C(p, q, n) is the penalty term as a function of model complexity.

The standard identification procedure, here called for convenience Minimum Selection Criterion Estimation (MSCE), is based on the minimization of the SC. In practice, the model M̂0 minimizing a given SC is the winner; that is,

M̂0 : (p̂0, q̂0) = arg min_{p ≤ P, q ≤ Q} SC(p, q).
(16)

The Bootstrap Method

As already pointed out, in [6] a bootstrap selection method has been proposed to perform AIC-based ARMA structure identification. A comparative Monte Carlo experiment with its nonbootstrap counterpart, commonly referred to as the MAICE (Minimum Akaike Information Criterion Expectation) procedure, gave empirical evidence in favor of the b-MAICE procedure. Such results motivated us to extend this approach to the other selectors (see (11), (12), and (13)). For convenience, the proposed generalized version of the b-MAICE procedure has been called the bMSE (Bootstrap Minimum Selector Expectation) procedure. Finally, in order to keep the paper as self-contained as possible, and to reduce uncertainty in the experimental outcomes, AIC has also been included in the experiment.

The Bootstrapped Selection Criteria. The proposed bMSE method relies on the bootstrapped version of a given SC, obtained by bootstrapping both the residual variance term σ̂² and the penalty term, so that (15) becomes

SC*(p, q) = ln σ̂*²_{p,q} + C(p, q, n). (17)

The particularization of (17) to the criteria object of this study is straightforward and yields their bootstrapped versions, with σ̂*²_{p,q} being the variance of the residuals from the fitting of the bootstrapped series y* with its ARMA estimate ŷ*; in symbols,

σ̂*²_{p,q} = (1/n) Σ_{t=1}^{n} (y*_t − ŷ*_t)².

In essence, the bMSE method works as follows: the MSCE procedure is applied iteratively to each of the B bootstrap replications of the observed series. A winner model is selected at each iteration on the basis of a given SC, which in turn exploits the bootstrap-estimated variances of the residuals. The final model is chosen on the basis of its relative frequency over the B bootstrap replications.
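The per-series MSCE step, minimizing a selection criterion over the order grid, can be sketched in code. The paper's grid runs over full ARMA(p, q) fits; as a self-contained simplification, the sketch below restricts the grid to pure AR(p) models fitted by the Levinson-Durbin recursion, so that the residual variance is available in closed form. The /n-scaled penalty forms mirror those given earlier; the function names and the simulated AR(2) coefficients are our own choices:

```python
# AR-only sketch of the MSCE grid search (the paper minimizes over full
# ARMA(p, q) fits; here the grid is restricted to AR(p) so that the
# residual variance sigma2[p] comes from the Levinson-Durbin recursion).
import math, random

def autocov(y, maxlag):
    n, ybar = len(y), sum(y) / len(y)
    d = [v - ybar for v in y]
    return [sum(d[t] * d[t - k] for t in range(k, n)) / n for k in range(maxlag + 1)]

def levinson(r, pmax):
    """Return the innovation-variance sequence sigma2[0..pmax] of AR(p) fits."""
    sigma2, phi = [r[0]], []
    for k in range(1, pmax + 1):
        acc = r[k] - sum(phi[j] * r[k - 1 - j] for j in range(k - 1))
        kappa = acc / sigma2[-1]
        phi = [phi[j] - kappa * phi[k - 2 - j] for j in range(k - 1)] + [kappa]
        sigma2.append(sigma2[-1] * (1.0 - kappa * kappa))
    return sigma2

def msce(y, pmax):
    """Minimize each selection criterion over the AR order grid 0..pmax."""
    n = len(y)
    s2 = levinson(autocov(y, pmax), pmax)
    crit = {
        "AIC": lambda p: math.log(s2[p]) + 2 * (p + 1) / n,
        "FPE": lambda p: s2[p] * (n + p) / (n - p),
        "BIC": lambda p: math.log(s2[p]) + (p + 1) * math.log(n) / n,
        "HQC": lambda p: math.log(s2[p]) + 2 * (p + 1) * math.log(math.log(n)) / n,
    }
    return {name: min(range(pmax + 1), key=f) for name, f in crit.items()}, s2

# Simulate a strong AR(2) and run the grid search.
rng = random.Random(1)
y, y1, y2 = [], 0.0, 0.0
for _ in range(2000):
    cur = 0.75 * y1 - 0.5 * y2 + rng.gauss(0.0, 1.0)
    y.append(cur)
    y1, y2 = cur, y1

selected, s2 = msce(y, pmax=6)
```

The innovation variances are nonincreasing in p by construction, and for a strong AR(2) all four criteria will select an order of at least 2; the more heavily penalized BIC and HQC are the ones expected to avoid overfitting most often.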
3.2. The Applied Bootstrap Scheme. The sieve bootstrap [31,32,33] is the scheme employed here. It is an effective and conceptually simple tool to borrow randomness from the white-noise residuals generated by fitting a "long" autoregression to the observed time series. This autoregression, here supposed to be zero-mean, is of the type y_t = Σ_{j=1}^{p} a_j y_{t−j} + ε_t, t ∈ Z, under the stationarity conditions as in (6). Its use is here motivated by the AR(∞) representation of processes of type (5); that is,

y_t = Σ_{j=1}^{∞} a_j y_{t−j} + ε_t,

with (ε_t), t ∈ Z, being a sequence of iid variables with E[ε_t] = 0 and Σ_{j=1}^{∞} a_j² < ∞. In essence, the sieve bootstrap approximates a given process by a finite autoregressive process whose order p̂ = p̂(n) increases with the sample size, such that p̂(n) → ∞, p̂(n) = o(n), as n → ∞. In this regard, in the empirical study the estimation of the p̂-vector of coefficients (â1, ..., âp̂) has been carried out through the Yule-Walker equations. The residuals ε̂_t = y_t − Σ_{j=1}^{p̂} â_j y_{t−j}, t = p̂ + 1, ..., n, obtained from the fitting of this autoregression to the original data, are then employed to build up the centered empirical distribution function, which is defined as

F̂_ε(x) = (n − p̂)^{−1} Σ_{t=p̂+1}^{n} 1(ε̃_t ≤ x),

where ε̃_t = ε̂_t − ε̄, with ε̄ being the mean value of the available residuals ε̂_t, t = p̂ + 1, ..., n. From F̂_ε, bootstrap samples y* = (y*_{1−p̂}, ..., y*_n) are generated by the recursion

y*_t = Σ_{j=1}^{p̂} â_j y*_{t−j} + ε*_t, ε*_t iid ∼ F̂_ε,

with starting values y*_t = 0 and ε*_t = 0 for t ≤ −max(p, q).

The Proposed bMSE Procedure. Let {y_t} be the observed time series, a realization of the ARMA(p, q) DGP (5), from which B bootstrap replications {y*_{t,b}; b = 1, 2, ..., B} are generated via the sieve method (Section 3.2). The bMSE procedure is based on the minimization, over all the combinations of ARMA structures, of a given SC, obtained by applying the MSCE procedure to each bootstrap replication y*_b of the original time series. In what follows the proposed procedure is summarized in a step-by-step fashion.

(3) A bootstrap replication y*_b of the original time series is generated via the sieve method.
(4) The competition set Λ is iteratively fitted to y*_b, so that K values (one for each of the K models in Λ) of the SC* are computed and stored in the K-dimensional vector V_b.

(5) The minimum SC* value is extracted from V_b, so that a winner model M*_{0,b} is selected; that is,

M*_{0,b} : (p̂*_b, q̂*_b) = arg min_{p ≤ P, q ≤ Q} SC*(p, q).

(6) By repeating steps (3) to (5) B times, the final model M*_0 is chosen according to a mode-based criterion, that is, on the basis of its most frequent occurrence over the set of the B bootstrap replications. In practice, the selected model is chosen according to the following rule:

(p̂*_0, q̂*_0) = arg max_{(p,q) ∈ Λ} #{b : (p̂*_b, q̂*_b) = (p, q)}, (24)

with the symbol # being used as a counter of the number of cases satisfying the condition expressed in (24).

The order p̂_0 of the sieve autoregression is chosen by iteratively computing the Ljung-Box statistic [34] on the residuals resulting from the fitting of tentative autoregressions to the original time series with sample size n_0. Further orders p̂_i, for increasing sample sizes n_i, i = 1, 2, ..., are selected according to the relation p̂_i = c n_i^{1/3}, where c = p̂_0/n_0^{1/3}. (In [6], p̂_0 is chosen by iteratively computing the spectral density of the residuals resulting from the fitting of tentative autoregressions to the original time series; the order p̂ for which the spectral density is approximately constant is then selected.)
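Steps (3)-(6) above can be sketched end to end. As a self-contained simplification, the per-replicate MSCE step below is an AR-only BIC minimizer (the paper uses full ARMA fits with several criteria), so this is an illustration of the sieve resampling and the mode-based final choice, not a reimplementation of bMSE:

```python
# Sketch of the sieve bootstrap (Section 3.2) plus the mode-based final
# selection of step (6).  The per-replicate selector is an AR-only BIC
# stand-in for the full ARMA MSCE step; all tuning values are arbitrary.
import math, random
from collections import Counter

def yule_walker(y, p):
    """AR(p) coefficients and innovation variance via Levinson-Durbin."""
    n, ybar = len(y), sum(y) / len(y)
    d = [v - ybar for v in y]
    r = [sum(d[t] * d[t - k] for t in range(k, n)) / n for k in range(p + 1)]
    phi, s2 = [], r[0]
    for k in range(1, p + 1):
        acc = r[k] - sum(phi[j] * r[k - 1 - j] for j in range(k - 1))
        kappa = acc / s2
        phi = [phi[j] - kappa * phi[k - 2 - j] for j in range(k - 1)] + [kappa]
        s2 *= 1.0 - kappa * kappa
    return phi, s2

def sieve_bootstrap(y, p_sieve, B, rng):
    """Generate B sieve-bootstrap replicates of y."""
    phi, _ = yule_walker(y, p_sieve)
    # Residuals from the long autoregression, then centred.
    res = [y[t] - sum(phi[j] * y[t - 1 - j] for j in range(p_sieve))
           for t in range(p_sieve, len(y))]
    rbar = sum(res) / len(res)
    res = [e - rbar for e in res]
    reps = []
    for _ in range(B):
        ystar, burn = [0.0] * p_sieve, 50   # zero starting values + burn-in
        for _ in range(len(y) + burn):
            e = rng.choice(res)             # iid draw from the residual EDF
            ystar.append(sum(phi[j] * ystar[-1 - j] for j in range(p_sieve)) + e)
        reps.append(ystar[-len(y):])
    return reps

def select_order_bic(y, pmax):
    """AR-only stand-in for the per-replicate MSCE step (BIC minimizer)."""
    n = len(y)
    return min(range(pmax + 1),
               key=lambda p: math.log(yule_walker(y, p)[1]) + (p + 1) * math.log(n) / n)

rng = random.Random(7)
y, y1, y2 = [], 0.0, 0.0
for _ in range(400):
    cur = 0.75 * y1 - 0.5 * y2 + rng.gauss(0.0, 1.0)
    y.append(cur)
    y1, y2 = cur, y1

votes = Counter(select_order_bic(ystar, 4)
                for ystar in sieve_bootstrap(y, p_sieve=10, B=25, rng=rng))
final_order = votes.most_common(1)[0][0]
```

The `Counter` plays the role of the # operator in (24): the order selected most often across the B replicates becomes the final choice.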
The presented method is exhaustive and therefore highly computer-intensive, as the values of the given SC* must be computed, for each of the B bootstrap replications, for all the (P + 1)·(Q + 1) possible (p, q) pairs. (In the attempt to reduce such a burden, the set of ARMA orders under investigation is sometimes (see, e.g., [35]) restricted to Λ = {(p, p − 1) : 0 ≤ p ≤ Ψ}; i.e., the competition set is made up of ARMA(p, p − 1) models only. However, the fact that such an approach entails the obvious drawback of not being able to identify common processes, such as ARMA(2, 0), has appeared to be a too strong limitation; therefore, in spite of its ability to drastically reduce the computational time, such an approach has not been followed here.)

Empirical Study

In this section, the outcomes of a simulation study will be reported. It has been designed with the twofold purpose of (i) evaluating the bMSE procedure's small-sample performance and (ii) giving some evidence of its behavior for increasing sample sizes. As a measure of performance, the percentage frequency of selection of the true order (p°, q°), in the sequel denoted by f and f* for the MSCE and bMSE procedures, respectively, has been adopted; that is,

f = 100 · #{correctly identified time series}/S,

with S denoting the number of artificial time series employed in the experiment and # the quantifier symbol, expressing the number of times the statement "time series correctly identified" is true. Its extension to the bootstrap case, f*, is straightforward. Aspect (i) consists of a series of Monte Carlo experiments carried out on three different sets of time series, 10 for each set, detailed in Table 1, which (1) are realizations of three prespecified ARMA orders, that is, (1, 1), (2, 1), and (1, 2) (one order for each set), and (2) differ from each other, within the same set, only in the coefficients' values, not in the order (p, q). Two sample sizes will be considered, that is, n = 100, 200. Formally, these sets are respectively denoted by {J_1, J_2, J_3} and supposed to belong to the order
subspace I, with each set J_j (j = 1, 2, 3) contained in I. For each DGP in I, 10 different coefficient vectors are specified. The validity of the presented method is assessed on a comparative basis, using the standard MSCE procedure as benchmark. For the sake of concision, the frequencies of selection will be computed by averaging over all the DGPs belonging to either the same set J_j or the whole subspace I. In practice, two indicators, that is, the Percentage Average Discrepancy (PAD) and the Overall Percentage Average Discrepancy (OPAD), depending on whether only one set J_j or the whole order subspace I is considered, will be employed. They are formalized as in (26) and (27), where the symbol | · | denotes the cardinality of a set. In other words, the average percentage difference in the frequency of selection of the true model is used as a measure of the gain/loss generated by the bMSE procedure with regard to a single set J_j (26) or by averaging over the sets in I (27). As already outlined, in analyzing aspect (ii) the attention is focused on the behavior of the proposed method for increasing sample sizes, that is, 100, 200, 500, and 1000. In Table 4, the results obtained for 4 DGPs, detailed in the same table, are given. In both (i) and (ii), for each DGP in (J_1, J_2, J_3), a set of 500 time series has been generated. Each time series has been artificially replicated B = 125 times using the bootstrap scheme outlined in Section 3.2 (the simulations have been implemented using the software R (version 8.1) and performed using the hardware resources of the University of California, San Diego; in particular, the computer server EULER (maintained by the Mathematical Department) and the supercomputer IBM-TERAGRID have been employed). The number of bootstrap replications employed has been chosen on an empirical basis, as the best compromise between the performance yielded by the method and the computational time.
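The PAD and OPAD indicators reduce to simple averages of percentage-point gains. The following sketch uses made-up frequencies purely for illustration; rows correspond to the sets J_j and columns to the DGPs within a set.

```python
import numpy as np

# Hypothetical frequencies (in %) of selection of the true order for the
# standard (MSCE) and bootstrap (bMSE) procedures; rows = sets J_j,
# columns = DGPs within a set. The numbers are invented for illustration.
freq_std = np.array([[48.0, 50.0],
                     [52.0, 49.0],
                     [55.0, 51.0]])
freq_boot = np.array([[58.0, 59.0],
                      [60.0, 62.0],
                      [63.0, 60.0]])

# PAD (26): average gain over the DGPs of one set J_j.
pad = (freq_boot - freq_std).mean(axis=1)

# OPAD (27): the same average taken over the whole order subspace I.
opad = (freq_boot - freq_std).mean()
print(pad, opad)
```

A positive PAD for a set J_j means the bootstrap procedure selects the true order more often than the standard one on that set.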
The parameter space of all the DGPs considered always satisfies the invertibility and stationarity conditions (see (6), (7)), whereas the maximum AR and MA orders investigated have been kept fixed and low throughout the whole experiment (both equal to 3), mainly to keep the overall computational time reasonably low. However, such an arbitrary choice seems able to reflect time series usually encountered in practice in a number of fields, such as economics, ecology, or hydrology. It should nevertheless be emphasized that in many other contexts (e.g., signal processing) higher orders must be considered.

The Experiments. Other than on the pure ARMA signal, aspect (i) has been investigated in terms of the robustness shown against outliers and noisy conditions. In practice, the simulated DGPs are assumed to be contaminated as follows. The first set of simulations (experiment a) is designed to give empirical evidence for the case of a noise-free, uncontaminated ARMA process of type (5). Experiment b is aimed at mimicking a situation where a given dynamic system is perturbed by shocks resulting in aberrant data, commonly referred to as outliers. As already pointed out, such abnormal observations might be generated by unpredictable phenomena (e.g., sudden events related to strikes, wars, and exceptional meteorological conditions) or noise components, which can lead to an inappropriate model identification, as well as to biased inference, low-quality forecasting performance, and, if seasonality is present in the data, poor decomposition. Without any doubt, outliers represent a serious issue in time series analysis; therefore, testing the degree of robustness of any procedure against such a potentially disruptive source of distortion is an important task. This topic has attracted much attention from both theoretical statisticians and practitioners. Detection of time series outliers was first studied by Fox [19], whose results have been extended to ARIMA models by Chang et al.
[36]. Other references include [37-39]. In addition, outlier detection algorithms are more and more often provided in the form of stand-alone efficient routines, for example the library TSO of the software R, based on the procedure of Chen and Liu (1993) [37], or included in the automatic model identification procedures provided by many software packages, as in the case of the statistical program TRAMO (Time series Regression with ARIMA noise, Missing observations, and Outliers [40]) or SCA (Scientific Computing Associates [41]). Following [19], two common types of outliers, that is, additive (AO) and innovational (IO), will be considered. As will be illustrated, the proposed identification procedure unfortunately shows sensitivity to outliers, as they are liable, even though to different extents, to cause a noticeable deterioration of the selection performance.

In more detail, the observed time series is considered as being affected by a certain number of deterministic shocks at different times t = t_1, ..., t_k; that is, z_t = x_t + sum_j h_j xi(B) I_t(t_j), where x_t is the uncontaminated series of type (5), h_j measures the impact of the outlier at time t = t_j, and I_t(t_j) is an indicator variable taking the value 1 for t = t_j and 0 otherwise. Outlier-induced dynamics are described by the function xi(B), which takes the form xi(B) = 1 for an AO and xi(B) = theta(B)/phi(B) for an IO. As the onset of an external cause, outliers of the type IO affect the level of the series from the time they occur up to some lag, whose localization depends on the memory mechanism encoded in the ARMA model. Their effect can even be temporally unbounded, for example, under ARIMA DGPs with a nonzero stationarity-inducing integration constant. Conversely, AOs affect only the level of the observation at the time of their occurrence (typical examples in this regard are errors related to the recording process or to the measurement device employed). They are liable to corrupt the spectral representation of a process, which tends towards that of white noise, and in general the autocorrelations are pulled towards zero (their effect on the Autocorrelation Function (ACF) and the spectral density level has been discussed in the literature; see, e.g., [42] and the references therein), so that meaningful conclusions based on these functions, depending on the outliers' location, magnitude, and probability of occurrence, might be severely compromised. On the other hand, the effects produced by IOs are usually less dramatic, as the ACF tends to maintain the pattern of the uncontaminated process and the spectral density roughly shows a shape consistent with the one computed on the pure series. The outcomes of the simulations conducted are consistent with the above.
In the present study, IOs have been randomized and introduced according to a Bernoulli distribution with parameter 0.04. In order to better assess the sensitivity of the proposed procedure to outlying observations, experiment b has been conducted considering two different levels of the outlier standard error, namely 3 (experiment b_1) and 4 (experiment b_2), recalling the notation of (5). In b_3, AOs have been placed according to a fixed deterministic scheme. The last experiment, c, has been designed to mimic a situation characterized by low-quality data, induced, for example, by phenomena like changes in survey methodologies (e.g., sampling design or data collection procedures) or in the imputation techniques. Practically, a Gaussian-type noise nu_t is added to the output signal, so that z_t = x_t + nu_t, with x_t being the pure ARMA process. Using (5), we have z_t = [theta(B)/phi(B)] e_t + nu_t, where e_t ~ nid(0, sigma_e^2) and nu_t ~ nid(0, sigma_nu^2) is additive noise, independent of e_t. The variance of nu_t has been chosen according to the relation sigma_nu^2 = (1/10) var(x_t).

Results. The empirical results pertaining to aspect (i) are summarized in Tables 2 and 3 for the sample sizes 100 and 200, respectively. By inspecting these tables it is possible to notice that, with the exception of experiment b_3, in all the other cases the bMSE procedure yields non-negligible gains. Regarding the gains over the standard procedure, BIC and HQC show PAD values above 10 (with a spike of 12.7 for HQC in the case of J_2), whereas the performance of the AIC (PAD above 9) is still good. A less satisfactory job is done by the FPE (PAD = 8.2). Finally, it is worth mentioning that the greatest gains pertain to the HQC, with PAD(J_1) = 15.5 for a sample size of 100 and PAD(J_2) = 12.7 for a sample size of 200.
Even though to different extents, both procedures are affected by the presence of outliers, especially in the case of the smaller sample size. However, as long as IOs are involved (experiments b_1 and b_2), bMSE seems to do a good job in counteracting their adverse effects. In fact, for a sample size of 200, this procedure, applied to dimension consistent criteria, selects the right model more than 50% of the times in experiment b_2 and approximately 55% of the times in experiment b_1. For this type of criteria, the average gain over the standard procedure is noticeable, especially in the case of experiment b_1 (OPAD = 6.7 for BIC and 7.9 for HQC). On the other hand, Shibata efficient criteria achieve less remarkable results, with PAD values ranging from 4.9 for the FPE (PAD(J_3)) to 5.7 for the AIC (PAD(J_2)). As expected, for a sample size of 100 the impact of the IOs is stronger: applied to Shibata efficient criteria, the bMSE procedure selects the right model on average approximately 43.4% of the times, with a minimum of 34.6% recorded for FPE in the case of J_2, whereas dimension consistent criteria show a bootstrap frequency on average equal to 55.7%. The selection performance granted by the proposed method, even though still acceptable, tended to deteriorate to a greater extent in experiment b_2, especially with a sample size of 100: here the frequency of selection of the true model for Shibata efficient SC is around 40.1% versus 35% for the standard procedure, for a recorded OPAD amounting to 5.5 for the AIC and 4.7 for the FPE. Slightly better results are recorded for a sample size of 200, where the correct model has been identified by dimension consistent criteria 55.2% of the times (OPAD = 5.8%) versus 49.4% for the standard procedure. Experiment b_3 is where the proposed procedure breaks down, offering little or no improvement over the standard one. The most seriously affected selector is the FPE, which shows an ability to select the correct model on average only 18.9% and 22.4% of the times, versus 21% and 25.2% recorded for the
nonbootstrap counterpart, respectively, for sample sizes of 100 and 200. Finally, the effect of the injection of Gaussian noise into the output signal (experiment c) is commented on. Here, the performance of the method appears adequate: averaging over I, the value recorded for the bootstrap frequency is 61.8% (versus 54.6% for the standard one) for dimension consistent criteria at a sample size of 200, with particularly interesting improvements over the standard procedure yielded by HQC, which shows OPAD values amounting to 10% and 7.5% for sample sizes of 100 and 200, respectively. The bootstrapped version of HQC performs consistently better than the other criteria: in fact, it chooses the correct model on average 63.2% and 56.6% of the times for sample sizes of 100 and 200, respectively. On the other hand, FPE detects the true model with the smallest probability, reaching an average frequency of selection of the true model of 39.8 (sample size 100) and 45.1 (sample size 200). Shibata efficient criteria also show the smallest gains over the standard procedure; for example, for a sample size of 100 the maximum PAD is equal to 7.1 and 6.4 for AIC and FPE, respectively (both values recorded for J_2), whereas dimension consistent criteria, for the same sample size, show a maximum PAD of 8.9 and 11.1, in the case of BIC (J_1) and HQC (J_2), respectively.
In the analysis of aspect (ii), the performances yielded by the two procedures, in terms of frequency of selection of the correct model, are considered for increasing sample sizes (100, 200, 500, 1000). The results for four different ARMA(2, 1) models, along with their details, are presented in Table 4. As can be seen by inspecting this table, all the SC under test exhibit roughly similar patterns: for the small sample sizes, remarkable discrepancies in selection performance between the two methods are noticeable, whereas such discrepancies become less pronounced for a sample size of 500 and very small for 1000. For example, considering all 4 DGPs, BIC shows a PAD ranging from 12.6 (series D) to 14.7 (series C) with a sample size of 100, whereas for 1000, the PAD is in the range 1.9-2.9, for series B and A, respectively. For this sample size, the smallest PAD has been recorded. A useful diagnostic is the relative frequency of selection of the different tentative ARMA models: in practice, the winning models generated at each and every bootstrap replication are ranked according to their relative frequency of selection. In this way, our confidence in the bootstrap selection procedure is linked to the difference between the relative frequency of selection of the winner model (the one with the highest selection rate) and the frequencies achieved by its closest competitors. Ideal situations are characterized by a high rate of choices of the winner model, which drops sharply over the rest of the competition set. In such a case, we can reasonably be sure that the selected model is closer to the true order than the one found by using the standard MSCE procedure (clearly, if different models are selected). On the other hand, slight discrepancies (say 3-4%) between the winning model and the others should be regarded with suspicion and carefully evaluated on a case-by-case basis.
Final Remarks and Future Directions

In this paper, two pairs of selectors, differing in their derivation and properties, have been brought into a bootstrap framework with the purpose of enhancing their selection capabilities. A set of Monte Carlo-type experiments has been employed in order to assess the magnitude of the improvements achieved. The encouraging results obtained can be explained in terms of the reduction of uncertainty induced by the bootstrap approach. Identification procedures of the type MSCE, in fact, base the choice of the final model on the minimum value attained by a given SC, no matter how small the differences in the values shown by other competing models might be. When they are actually small, standard MSCE procedures are likely to introduce a significant amount of uncertainty into the selection procedure; that is, different order choices can be determined by small variations in the data set. The proposed procedure accounts for such a source of uncertainty by reestimating the competing models and recomputing the related SC values once per bootstrap replication. In doing so, the identification procedure is based on different data replications, each of them embodying random variations. The improvements achieved by the proposed method in the case of IOs can also be explained in the light of this reduction of uncertainty. Basically, what the procedure does is to reallocate these outliers across the replications, so that the related selection procedure can control for such anomalous observations. On the other hand, the bMSE procedure breaks down in the case of AOs, probably because the employed maximum likelihood estimation procedure is carried out on the residuals, which are severely affected by these types of outliers. Consistently with other Monte Carlo experiments, in the proposed simulations the best results are achieved by dimension consistent criteria, especially by BIC. However, two drawbacks affect this criterion: a tendency in the selection of
underfitted models, and consistency achieved only in the case of very large samples [4], under the condition that the true model is included in the competition set. The last assumption implies the existence of a model able to provide a full explanation of reality and the existence of an analyst able to include it in the competition set. Unfortunately, even assuming finite dimensionality of real-life problems, reality is still very complex, so that a large number of models are likely to be included in the competition set. As a result, selection uncertainty will rise. The superiority of BIC should also be reconsidered in the light of a different empirical framework, as a Monte Carlo experiment cannot capture the aforementioned problems: it is in fact characterized by the presence of the true model in the portfolio of candidate models. This appears unfair if we consider that criteria of the types AIC and FPE are designed to relax such a strong, in practice unverifiable, assumption and that they enjoy the nice Shibata efficiency property. In addition, in order to keep the computational time acceptable, in Monte Carlo experiments the true DGP is generally of low order, so that the BIC underestimation tendency is likely to be masked or, at least, to appear less serious. For these reasons, from a more operational point of view, it can be advisable to consider the indications provided by both AIC* and BIC*, which are the best selectors in their respective categories according to the simulation experiment. This is particularly true when the sample size is small and the information criteria, either considered in their standard or bootstrap form, tend to yield values close to each other for close models. As a result, a significant amount of uncertainty can be introduced into the selection process. Finally, as a future direction, it might be worth emphasizing the purpose for which a given model is built, and thus identified, which can be usefully considered to assess the selector's
performances. For instance, in many cases computational time is a critical factor, so that one might be willing to accept less accurate model outcomes by reducing the number of bootstrap replications. In fact, global fitting is not necessarily the only interesting feature one wants to look at, as a model might also be evaluated on the basis of its potential ability to solve the specific problems it has been built for. In this regard, selection procedures optimized on a case-by-case basis and implemented in the bootstrap world might result in a more efficient tool for a better understanding of reality.

a: pure process (no contamination); b: contaminated with outliers of the type IO (experiments b_1, b_2) and AO (experiment b_3); c: contaminated with Gaussian additive noise.

Table 2: Frequency of selection of the true model in the nonbootstrap and bootstrap world for a sample size of 100.

Table 3: Frequency of selection of the true model in the nonbootstrap and bootstrap world for a sample size of 200.

(...with a PAD between 8.4 for J_2 and 9.6 for J_3). As expected, for a sample size of 200 both methods show an increasing average frequency of selection of the correct model for all the SC: averaging over I and all the SC, the values of 55.4% and 65.6% have been recorded for the standard and bootstrap procedures, respectively.

Table 4: Frequency of selection of the true model in the nonbootstrap and bootstrap world, for different sample sizes.
Application of Self-Organizing Maps on Time Series Data for Identifying Interpretable Driving Manoeuvres

Understanding the usage of a product is essential for any manufacturer, in particular for further development. The driving style of the driver is a significant factor in the usage of a city bus. This work proposes a new method to observe various driving manoeuvres in regular operation and to identify the patterns in these manoeuvres. The significant advance of this method over other engineering approaches is the use of uncompressed data instead of transformations into certain performance indicators. Here, the time series inputs were preserved and prepared as 10-second frames using a sliding-window technique and fed into Kohonen's Self-Organizing Map (SOM) algorithm. This produced a high accuracy in the identification and classification of manoeuvres and, at the same time, a highly interpretable solution that can readily be used for suggesting improvements. The proposed method is applied to comparing the driving styles of two drivers driving in a similar environment; the differences are illustrated using frequency distributions of identified manoeuvres and then interpreted for the amelioration of fuel consumption.

Introduction

Driving manoeuvres provide essential insights for automotive manufacturers to understand their vehicle usage and to improve their designs. They are also useful for individual drivers as well as fleet owners to understand their vehicle usage and to improve their operation and service.

Approach

This work introduces a new method to identify and represent driving manoeuvres through data in conjunction with state-of-the-art machine learning. There are several other data-driven approaches to predict or classify driving manoeuvres, as stated in [15]. However, these methods are supervised and require labelled data to predict certain specific manoeuvres, such as exiting a roundabout or stopping at a traffic light.
The proposed method draws its input data from the FMS CAN interface. FMS CAN is explicitly defined for customers to access data, and it neither interferes with the vehicle functions nor affects the warranty of the vehicle. Thus, this input data selection improves the practicability of the proposed method. The data considered in this work were collected over two years from city buses owned by a public transportation company in a particular city in Germany. The advantage of working with city buses is that they are operated systematically for long durations and distances. They also travel on almost all types of roads, such as urban, suburban and, at times, even highways. They are operated at all times of the day, ranging from dense traffic during the regular working hours of the city to no traffic around midnight. Hence, by nature, the collected data contains a wide variety of possible driving manoeuvres for the vehicle. Since this work focuses specifically on city buses, many of the driving manoeuvres identified are quite specific to them, for example, entering and leaving bus stops, which are manoeuvres frequently performed only by city buses. Hence, the results can only be reproduced with city buses operated similarly. The proposed method can also be used for other vehicle types, but this is expected to produce a different set of driving manoeuvres.

Application

In the latter part, the driving styles of different drivers are compared using the manoeuvres identified. The driving style of a driver consists of the various driving manoeuvres performed with a given vehicle; formally, it is the frequency distribution of all possible manoeuvres. Here, two drivers having different fuel consumption while driving the same vehicle on the same track are compared. A pair of similar manoeuvres performed by these drivers is presented as an example to explain the difference. This application also showcases the ease of use of the proposed method.
Methods

The workflow of the proposed method can be divided into two phases, namely a training phase and a characterization phase. In the training phase, the time series data of several random trips driven by different drivers are taken as input and then processed to identify a unique set of driving manoeuvres. Additionally, the manoeuvres are clustered for better interpretation. This process is described in detail in the following subsections, and Fig. 1 provides an overview. In the characterization phase, the time series data of the trips of an individual driver are taken as input and then mapped onto the model trained in the previous phase, to classify the manoeuvres performed. The frequency distribution of the manoeuvres performed defines the driving style of the driver. An overview of this process is shown in Fig. 2. In this work, individual trips of two different drivers on the same track are mapped to compare their driving styles.

Data Preparation

The data were collected using standard industrial data-logging devices installed in vehicles with the consent of the respective customers. The data were collected at several time frequencies, but for this work, 1 Hz was sufficient. The variables considered are listed in Table 1.

Sliding Window Preparation

The features (manoeuvres) had to be extracted from the time series data for modelling. The extracted time-frames are required to represent complete manoeuvres and have a fixed length for modelling. The inflection points (as described in [14]) on the velocity curve of the vehicle were considered as reference points to differentiate manoeuvres, as they distinguish constant-velocity and varying-velocity segments of the curve. Due to the varying time durations between consecutive inflection points, as seen in Fig. 3, the segments themselves could not be utilized for modelling. Hence, the median of the time durations between consecutive inflection points was used as the standard duration of the time-frame.
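The resulting frame extraction can be sketched as follows. This is an illustrative sketch only: the function name and the dummy data are made up, the window width corresponds to the standard frame duration (10 samples at 1 Hz), and the step parameter corresponds to the moving rate of the sliding window.

```python
import numpy as np

def sliding_windows(signal, width=10, step=1):
    """Cut a 1 Hz signal into fixed-length frames advanced by `step` samples."""
    n = (len(signal) - width) // step + 1
    return np.stack([signal[i * step : i * step + width] for i in range(n)])

velocity = np.arange(25, dtype=float)             # 25 s of dummy 1 Hz data
train_frames = sliding_windows(velocity, step=1)  # overlapping frames (training)
char_frames = sliding_windows(velocity, step=10)  # non-overlapping (characterization)
print(train_frames.shape, char_frames.shape)
```

With a step of 1 every possible 10-second frame is generated, whereas a step equal to the width yields disjoint frames.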
This value was determined to be 10 seconds (rounded off). When cutting the velocity curve into 10-second frames, it is possible to lose information. For instance, when a manoeuvre is shorter than 10 seconds, the beginning of the consecutive manoeuvre would become part of the current frame. To avoid such information loss, the time series were processed by a sliding window moving at a rate of 1 second. As the training phase requires considering all possible manoeuvres, the minimum possible moving rate seemed appropriate. In the characterization phase, higher moving rates are used. All input variables were processed into 10-second frames, as explained above; however, distance and fuel consumption were aggregated for each frame. The data considered for modelling spanned 21 hours of operation, collected from the same vehicle through several random trips during weekdays in summer. The size of the data was limited to keep the computation manageable on a regular personal computer. The timespan was processed into sliding windows, and a randomly sampled 80% (60970 observations) was used as the training set. The remaining 20% were stored as the test set for validation.

Modelling

Kohonen's Self-Organizing Maps. Kohonen's Self-Organizing Map (SOM) algorithm was used to model this dataset and identify all the distinct manoeuvres observed. SOM is an artificial neural network algorithm based on competitive learning. Initially, a rectangular or hexagonal grid containing a fixed number of neurons or nodes is defined. The nodes are initialized with random vectors having the same number of dimensions as the input data. The vectors corresponding to the nodes are collectively known as the codebook. During the training phase, input data points are randomly presented to the nodes, individually. All nodes compute their distance (usually the Euclidean distance) to the given input and compete. The closest node is the winner or Best Match Unit (BMU).
The nodes in the neighbourhood surrounding the BMU, within a certain radius, are considered as secondary winners. The BMU and its neighbourhood change their codebook vectors, adapting to the input based on a learning function. The BMU tends to learn the most, while the neighbourhood learns comparatively less. In this way, the nodes spread throughout the data space. This is repeated for several iterations through the entire training set. Over the iterations, the learning rate and the neighbourhood radius decay, causing the nodes to stabilize. The final codebook thus contains centroids throughout the data space, similar to k-means clustering. After the training, the nodes can be visualized back in the original grid structure, known as a map. This enables the visualization of a multidimensional data space as a low-dimensional, topology-preserving map. A detailed description of the algorithm can be found in [4, 6].

Super-organized Maps. In this work, the variant of SOM known as the Super-organized Map (supersom) was used. This variant allows the input variables to be grouped as layers, and the user can specify different weights for each layer. Here, the distance value is computed separately for each layer, and the learning is biased based on the weights. The implementation details of supersom can be found in [13]. The "kohonen" package [12] for the R language was used to implement this algorithm.

Training the map. In order to use supersom, the variables velocity, accelerator pedal position and brake pedal position (frames of 10 values each) were each considered as separate layers. Fuel consumption and distance were grouped into a fourth layer. The four layers were given an equal weight of 0.25. This assigns an importance of 12.5% to each scalar variable and 2.5% to each individual value in the vector layers. The map was initialized with 400 nodes (a 20x20 grid) with a hexagonal grid structure.
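A minimal numpy sketch of one competitive-learning step as described above. The paper itself uses the R "kohonen" package; the learning rate, radius, Gaussian neighbourhood function, and rectangular grid below are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(1)
# A 20x20 map (rectangular here for simplicity; the paper uses a hexagonal grid).
grid = np.array([(r, c) for r in range(20) for c in range(20)])
codebook = rng.normal(size=(400, 10))   # one 10-dim vector per node

def train_step(x, codebook, grid, lr=0.05, radius=3.0):
    # Competition: the node closest to the input wins (BMU).
    bmu = int(np.linalg.norm(codebook - x, axis=1).argmin())
    # Cooperation: nodes near the BMU on the grid adapt too, less strongly.
    grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)
    h = np.exp(-grid_dist**2 / (2 * radius**2))   # neighbourhood function
    # Adaptation: move codebook vectors towards the input, weighted by h.
    codebook += lr * h[:, None] * (x - codebook)
    return bmu

x = rng.normal(size=10)
d_before = np.linalg.norm(codebook - x, axis=1).min()
bmu = train_step(x, codebook, grid)
d_after = np.linalg.norm(codebook[bmu] - x)
print(bmu, d_before > d_after)   # the BMU has moved closer to the input
```

Decaying `lr` and `radius` over the iterations, as the text describes, is what makes the nodes stabilize.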
The number of training iterations was set to 700, because the mean distance between the inputs and their closest nodes did not change considerably when adding further iterations. The training progress was observed to stabilize beyond 650 iterations, and the mean distance to the closest nodes was 72.7.

Clustering. Individually interpreting all of the nodes present on the map is possible, however painful. Clustering the nodes and interpreting the clusters is a more manageable approach. To determine the number of meaningful clusters available on the map, the within-cluster sum of squared distances (withinSS) was computed iteratively for 2 to 20 clusters. The "elbow" in the withinSS curve usually indicates the optimum number of clusters; in this case, it occurs at 5, as shown in Fig. 4a. However, the curve does not monotonically decrease and flatten after 5. Hence, to make sure that 5 clusters are optimal, the Gap statistic, as described in [11], was used additionally. The Gap statistic is the difference between log(withinSS) and its expectation under a null reference distribution of the data. In Fig. 4b, it can be observed that the Gap curve reaches its global maximum at 5. Therefore, the map was clustered into five groups for further interpretation.

Characterization Phase

In order to produce the driving style of a driver, complete trips are considered as input. A trip for a city bus is defined as operating the vehicle from an initial stop to an end stop and returning to the same initial stop. In terms of data collection, this is the period between an Engine-ON and the following Engine-OFF trigger. Within a trip, drivers do not change, as explained by the city bus operators.

Mapping trips to the pre-trained map. The mapping function works quite similarly to the training function. Here, the input point is introduced to the pre-trained map, and the competition takes place. The BMU is straight away considered as the mapped node for this input.
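The mapping step differs from training only in that no learning takes place. A numpy sketch of this BMU assignment and the resulting node densities (the paper uses the "kohonen" package's map function; the names and random data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
codebook = rng.normal(size=(400, 10))    # stand-in for a pre-trained 20x20 map
trip_frames = rng.normal(size=(50, 10))  # stand-in for the frames of one trip

def map_to_som(frames, codebook):
    # BMU index for every frame; the codebook is left untouched (no learning).
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

bmus = map_to_som(trip_frames, codebook)
# Node densities: the frequency distribution over nodes that defines the
# driving style of the trip's driver.
density = np.bincount(bmus, minlength=400)
print(density.sum())   # one count per frame
```

The density vector is what the later trip comparison operates on.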
The codebook vectors do not change in this case. The test sets were mapped to the model with the help of the default "map" function provided by the same "kohonen" package. All manoeuvres of a driver's trip are mapped onto the pre-trained model to obtain his or her driving style in the form of a frequency distribution over all nodes of the SOM. The trips were preprocessed in the same way as the training set, using the sliding-window approach explained earlier. However, the moving rate of the window was set to 10 instead of 1. Overlapping input frames are not required for characterization, because completeness, i.e. really capturing all possible driving actions in a trip, is not necessary here. Only the training set needs to contain the overlapping frames in order not to miss any relevant physical manoeuvres, the absence of which would decrease the robustness and applicability range of the model. The model should be able to handle any possible manoeuvre performed with the vehicle in regular operation.

Trip Comparison. The node densities for a given trip obtained from the mapping are used for comparing driving styles across trips. In this work, two different trips driven with the same vehicle were considered. Since the driver identities are anonymous, the trips before and after a scheduled driver change were selected to make sure the drivers were different. The trips were individually mapped onto the pre-trained map. For simple interpretation, the difference between the trips was expressed in terms of nodes common to both trips and nodes exclusive to either. Common nodes are manoeuvres that were observed in both trips, and they are considered as unavoidable manoeuvres due to the nature of the operational conditions; for example, waiting at the bus stop is unavoidable. Exclusive nodes are manoeuvres that were observed only in one of the trips. They depict the driving behaviour specific to a driver if observed multiple times in one trip and not observed in the other.
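The common/exclusive split reduces to plain set operations on the visited node numbers. The node numbers below are made up purely for illustration:

```python
# Hypothetical sets of BMU node numbers visited in two trips.
trip_a_nodes = {3, 17, 59, 60, 121}   # trip A (driver 1)
trip_b_nodes = {17, 42, 60, 200}      # trip B (driver 2)

common = trip_a_nodes & trip_b_nodes        # unavoidable manoeuvres
exclusive_a = trip_a_nodes - trip_b_nodes   # behaviour specific to driver 1
exclusive_b = trip_b_nodes - trip_a_nodes   # behaviour specific to driver 2
print(sorted(common), sorted(exclusive_a), sorted(exclusive_b))
```

In the paper's setting, each node number stands for an identified manoeuvre, so the exclusive sets directly name the manoeuvres that distinguish the two drivers.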
In the current work, only the exclusive nodes are used to differentiate trips. The normalized node densities could also be considered, in order to include the common nodes when distinguishing trips; however, this is not done within the current scope.

Interpreting Map Visualizations

The "kohonen" package offers functions to visualize any trained SOM model as property heatmaps or code-maps. Regardless of the content or type of visualization, the ordering of the nodes remains the same. There are no X or Y axes for these maps; the nodes are numbered from 1 to 400. The 1st node is the bottom-left corner node, and the numbers increase from left to right. After the right edge, the numbering proceeds to the next row above. The bottom-right node is therefore the 20th node, and the 21st node is the upper-left neighbour of the 1st node.

- For the time-frames, a "Line" representation is used (Figs. 6, 7, 8). Here, the values are visualized as a 2D plot, with the corresponding variable and time as the Y and X axes, respectively. The values are scaled to fit into the node, and all nodes share the same scale for comparability. The axes are not marked, given the size of the nodes, but they can still be plotted separately from the codebook vectors for better interpretation (Figs. 15, 16).
- For the variables in the scalar layer, a "Segments" representation is used (Fig. 9). Here, each node contains a circle split into equal sectors or segments. Each sector represents a variable, and the colouring convention is shown in the legend. The angle of the sectors is constant for all nodes (here, 180 degrees), while the radii of corresponding sectors vary with the variable. The background colour of the nodes in these maps represents their respective cluster.

Counts Plot

This heatmap shows the number of input observations belonging to each node. The nodes had an average of 144.8 inputs mapped to each.
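The node-numbering convention above (1-based, starting at the bottom-left corner, rows of 20 proceeding upward) can be turned into grid coordinates with a small helper; the grid width of 20 matches the 400-node map.

```python
# Map the 1-based kohonen node number to (row, col), both counted from 1
# starting at the bottom-left corner of the map.

def node_to_grid(node, width=20):
    row = (node - 1) // width + 1   # 1 = bottom row
    col = (node - 1) % width + 1    # 1 = leftmost column
    return row, col

print(node_to_grid(1))    # bottom-left corner
print(node_to_grid(20))   # bottom-right corner
print(node_to_grid(60))   # 3rd row from bottom, right edge
```

This reproduces the locations quoted in the text, e.g. node 60 lands at row 3 (from the bottom), column 20 (right edge).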
Node 60 (3rd row from bottom, right edge), however, had 14947 observations mapped to it, i.e., 24.5% of the training set. Upon verifying the codebook vector of the node, it was identified that the node represents the idle time of the vehicle, with all values set to 0. This is normal for city buses, since they spend much time at bus stops and in traffic; hence these observations were not excluded as outliers. For visualization, the mapping count was log-transformed with base 10 before plotting in Fig. 5. The test set preserved earlier was also mapped onto the model for validation and was observed to have a similar distribution on the counts plot (not shown here). The driving styles obtained in the characterization phase are also presented as counts plots (Figs. 10, 11).

Interpreting the model and clusters

To understand the nodes on the map, the layers of the map and their clusters are interpreted below. In the end, the cluster interpretations across the layers are combined to provide the final interpretation for each cluster. These clusters are then used to provide the context for the manoeuvres.

Velocity Map

The code-map for the velocity layer is displayed in Fig. 6. The interpretation of each cluster is as follows:

Blue: The velocities in these nodes increase gradually. Furthermore, they are always above the middle of the nodes, indicating above-average to high velocities.
Grey: The velocities decrease gradually in these nodes. However, they remain close to the middle or higher, indicating high velocities.
Yellow: The velocities are either decreasing rapidly towards zero or constantly zero.
Green: The velocities are mostly at 0. Sometimes they decrease from a low velocity to zero or increase from 0 to a low velocity.
White: These nodes contain a mixed set of velocity curves that are always higher than 0, although not as high as in the Blue or Yellow clusters. The vehicle is not stopping in these nodes, but a few decelerations are observed.
Accelerator Pedal Position

The interpretation of the accelerator pedal position layer is as follows (Fig. 7).

Blue: The throttling behaviour appears aggressive in these nodes. The throttle is mostly high and sometimes released rapidly.
Grey: The driver is mostly releasing the throttle, and sometimes it is simply at 0%.
Yellow: In most cases, no throttling was observed. A few nodes show a rapid throttle press from 0% or a release to 0%.
Green: These nodes are very similar to the Grey cluster.
White: These nodes cover all other behaviours observed. Constant-0% nodes are also present, though fewer than in the Green or Yellow clusters. Additionally, a few taps on the pedal were observed.

Brake Pedal Position

The observations on the brake pedal position variations (Fig. 8) are as follows.

Blue: There was no braking in these nodes.
Grey: There was no braking in half of the nodes; in the rest, some gradual braking was observed.
Yellow: High brake usage was observed in these nodes, most of it rapid.
Green: The brake pedal was constantly at 0% along the bottom of the cluster. The remaining nodes show a brake release to 0, and the topmost nodes show rapid braking, although still lower than in the Grey cluster.
White: Most of these nodes show no braking at all; a few show rapid presses or releases.

Distance Covered and Fuel Consumed

Distance and fuel consumption are represented as sectors (Fig. 9). The observations are as follows.

Blue: In comparison to the other clusters, these nodes have the highest fuel consumption and distances covered.
Grey: These nodes cover slightly less distance than the Blue cluster, but have very low or even zero fuel consumption.
Yellow: These nodes also have very low or zero fuel consumption. The distance covered is also low.
Green: There is almost no distance covered in these nodes, but small fuel consumptions are still observed.
White: Low to average fuel consumption is observed.
Distance covered is also low to average; however, the two do not always correlate at higher values of either.

Final interpretation of clusters

Based on the preceding interpretations of the individual layers of the map, the final interpretations of the clusters are as follows.

Blue: The driver had no intention to stop the vehicle in the near vicinity, and the velocity is very high. Hence this cluster is termed the High-Speed Zone.
Grey: The driver is slowing the vehicle, but not very rapidly. He is aware of a nearby stop or obstruction and is planning to stop gradually. Since the vehicle is running with little influence of throttle and brake, this cluster is termed the Coasting Zone. These nodes are good to have, since they are very fuel-efficient.
Yellow: These nodes also exhibit decelerations, but with high braking. This implies that the driver wants to stop the vehicle rapidly because of some circumstance, and the manoeuvre is not fuel-efficient. This cluster is termed the Rapid Deceleration Zone.
Green: This cluster shows gradual deceleration and acceleration close to 0. Node 60 and the other nodes where the vehicle was standing most of the time are also present here. Hence this cluster is termed the Bus Stop Zone.
White: The velocity in this cluster is average, and the braking and accelerations are random. The vehicle is also not stopping. Hence this cluster is termed the Stop and Go Zone.

Manoeuvres and Fuel Consumption

Two particular trips, labelled Trip 49 and Trip 53, were considered for the characterization phase. Trips 49 and 53 had fuel consumptions of 16.1 litres and 7.8 litres, respectively. Despite the trips having travelled a similar distance of approximately 40 km (40.13 km and 40.73 km, respectively), Trip 49 had more than twice the consumption of Trip 53. The mapping densities are plotted in Figs. 10 and 11; the density is log-transformed as in Fig. 5 due to the domination by node 60.
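The fuel figures quoted above can be normalised per 100 km to make the comparison explicit, using the distances and litres given in the text (16.1 L over 40.13 km for Trip 49, 7.8 L over 40.73 km for Trip 53).

```python
# Quick arithmetic for the trip comparison: litres per 100 km, using the
# figures quoted in the text.

def l_per_100km(litres, km):
    return 100.0 * litres / km

trip49 = l_per_100km(16.1, 40.13)
trip53 = l_per_100km(7.8, 40.73)
print(round(trip49, 1), round(trip53, 1))
```

This works out to roughly 40.1 vs 19.2 L/100 km, confirming the "more than twice the consumption" statement.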
Trip 49 - High Consumption trip

Out of the 1335 manoeuvres, 119 were not mapped to any of the map nodes. This might be because the training set covered only a limited duration, and when a completely new driver is observed, the manoeuvres can be completely different. As seen in Fig. 10, all types of manoeuvres (clusters) were observed in this trip. Node 60 had the highest density (number of mappings), with 501 observations mapped to it, similar to the training set. Being a Bus Stop Zone manoeuvre, this is an expected action for a city bus.

Trip 53 - Low Consumption trip

Since the trips had the same duration, trip 53 also had 1335 manoeuvres, of which 120 were not mapped to any node. Most of the mappings of trip 53 were quite similar to those of trip 49, as seen in Fig. 11, and node 60 was again the highest-density node of the trip. In comparison to trip 49, trip 53 had fewer of the Yellow (rapid deceleration) manoeuvres found in the top-right of the map.

Trip differences

To investigate the driving manoeuvres behind the observed fuel consumption difference, a pair of similar manoeuvres is examined: the codebook vectors of nodes 78 and 79 are shown in Figs. 15 and 16, respectively. In node 79, the velocity was initially less than 15 m/s and, due to the braking, reduced to approximately 2 m/s. The brake was released at time 2 s, and the velocity was constant until time 7 s; to avoid halting, the accelerator pedal was slightly pressed. In node 78, the velocity was initially higher, at about 24 m/s. Due to the braking, it decelerated to approximately 2 m/s at time 3 s and dropped further until time 5 s. Again, the accelerator pedal was pressed here, slightly more than in node 79. Thus manoeuvre 78 was more aggressive than 79. It can be concluded that the driver of Trip 49 was more aggressive in low-velocity manoeuvres than the driver of Trip 53, which explains the fuel consumption difference to some extent.
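The common/exclusive node decomposition used for the trip differences above is a pair of set operations on the lists of mapped BMUs. The node lists below are hypothetical, chosen only to exercise the function.

```python
# Sketch of the trip comparison: nodes hit by both trips ("common", treated
# as unavoidable manoeuvres) versus nodes hit by only one trip ("exclusive",
# treated as driver-specific). Node lists are illustrative, not from the data.

def compare_trips(nodes_a, nodes_b):
    a, b = set(nodes_a), set(nodes_b)
    return {"common": a & b,
            "exclusive_a": a - b,
            "exclusive_b": b - a}

trip49_nodes = [60, 60, 78, 301, 355]   # hypothetical mapped BMUs per frame
trip53_nodes = [60, 79, 301]

diff = compare_trips(trip49_nodes, trip53_nodes)
print(sorted(diff["common"]), sorted(diff["exclusive_a"]), sorted(diff["exclusive_b"]))
```

Note that converting to sets discards the mapping frequencies; the normalized densities mentioned earlier would be the natural extension if the common nodes are to be compared as well.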
Discussion

Time Series with SOM

The main contribution of this work is the approach of using a SOM to represent driving manoeuvres. [1] describes the advantages of using SOMs for the time series prediction problem, and emphasizes the local nature and topology-preserving properties of SOM-based models. The current work extends the same idea to a multivariate scenario. The topology-preserving property enables the identification of comparable driving manoeuvres. Due to the data preparation, the codebook vectors of the model provide the properties of the map as time series, which makes the results interpretable. Hence, the proposed method is easier to implement and interpret than the state-of-the-art approaches summarized in [15].

Driving Manoeuvres vs Fuel Consumption

When it comes to optimizing fuel consumption with respect to driving styles, a common approach is to build a speed profile, similar to [8], or to use metrics such as Vehicle Specific Power, similar to [2,3]. Although these approaches are quite robust and effective, it is difficult to communicate their insights to the drivers, and the context of the driving behaviour is lost. The proposed driving manoeuvre definition based on velocity, throttle, and brake pedal positions provides an easily interpretable method for reducing fuel consumption.

Driving Style

It is also quite common to distinguish driving styles as aggressive, normal, gentle, and so on, as performed in [10]. In the case of city buses, there are more restrictions due to fixed schedules and traffic. Aggressiveness and defensiveness cannot easily be computed, and a driver can exhibit both depending on the context; for instance, a driver can be aggressive while entering a bus stop and defensive while leaving it. [9] shows the impact of driving styles on fuel consumption at specific points of a trip, such as bus stops and roundabouts.
The proposed method does not classify driving style as aggressive or defensive, but rather differentiates driving manoeuvres. This helps drivers learn where they are inefficient and improve precisely those manoeuvres. Furthermore, a driver could also learn from his own manoeuvres performed at a different time. Due to the regularity of the trips and the localized nature of the SOM, the identified clusters can specify the context of a manoeuvre, such as a bus stop, even without a controlled measurement environment as used in [9].

Conclusion

A method for processing time-series data with a SOM to define driving manoeuvres has been introduced. The SOM nodes represent a possible set of driving manoeuvres based on the observed fleet. The clusters identified from the SOM nodes provide the context for the driving manoeuvres and help in understanding the usage of the vehicle. With the help of this model, the fuel consumption of two different trips on the same track was compared, and the Bus-Stop manoeuvres of the trips were used as an example of comparing similar manoeuvres. This can be built into an application that gives drivers feedback for fuel optimization. The method uses velocity and pedal positions to define the manoeuvres so that the results and interpretations are understandable to end users such as bus drivers. The method can be extended further by using different input and target data, and the resulting mappings of a trip on the SOM can also serve as inputs to further use cases, which were not discussed in this article but are currently being developed.

Author's contribution

SL wrote the entire article; all the necessary steps of the study, from data preparation to the interpretation of the results, were performed by the same. The author(s) read and approved the final manuscript.

Funding

EvoBus GmbH has funded this work entirely. There has been no influence by EvoBus GmbH or any other organization on the analysis or the results presented here.
Availability of data and material

The data that support the findings of this study were obtained from vehicles manufactured by EvoBus GmbH and operated by public transport companies in Germany. Restrictions apply to the availability of these data, and they are not publicly available. Data are, however, available from the author upon reasonable request and with the permission of EvoBus GmbH.
Improved Hydrogenation Kinetics of TiMn1.52 Alloy Coated with Palladium through Electroless Deposition

The deterioration of hydrogen charging performance resulting from the surface chemical action of electrophilic gases such as CO2 is one of the prevailing drawbacks of TiMn1.52 materials. In this study, we report the effect of autocatalytic Pd deposition on the morphology, structure, and hydrogenation kinetics of TiMn1.52 alloy. Both the uncoated and Pd-coated materials were characterized using scanning electron microscopy/energy-dispersive spectroscopy (SEM/EDS) and X-ray diffraction (XRD). XRD analyses indicated that TiMn1.52 alloy contains a C14-type Laves phase without any second phase, while the SEM images, together with a particle size distribution histogram, showed a smooth, non-porous surface with irregular-shaped particles ranging in size from 1 to 8 µm. The XRD pattern of the Pd-coated alloy revealed that the C14-type Laves phase was still maintained upon Pd deposition. This was further supported by the calculated crystallite size of 29 nm for both materials. Furthermore, a Sieverts-type apparatus was used to study the kinetics of the alloys after pre-exposure to air and upon vacuum heating at 300 °C. The Pd-coated AB2 alloy exhibited good coating quality, as confirmed by EDS, with enhanced hydrogen absorption kinetics even without activation. This is attributed to improved surface tolerance and a hydrogen spillover mechanism facilitated by the Pd nanoparticles. Vacuum heating at 300 °C resulted in the removal of surface barriers and improved hydrogen absorption performance for both the coated and uncoated alloys.

Introduction

AB2-type Laves phase alloys are an attractive class of metal hydrides due to their good reversible absorption and desorption of hydrogen, good activation properties, and low cost [1,2]. The most studied and promising AB2-type alloy materials are the Ti-Mn binary alloys [3][4][5].
Because of their light weight, Ti-Mn binary alloys possess a large hydrogen absorption capacity of more than 1.0 hydrogen-to-metal ratio (H/M) and a moderate equilibrium plateau pressure (reported to be 0.7 MPa) under near-ambient temperatures, as compared to other AB2 alloys [6]. Regardless of these superior properties, the deterioration of hydrogen charging performance resulting from the surface chemical action of poisonous electrophilic gases is still a concern, and therefore activation prior to hydrogen absorption is required [1,7]. Some well-known attempts to improve the hydrogenation behaviour of these binary alloys include element substitution, structural change, and multicomponent strategies [6,8]. An example of element substitution is the study by Liu et al. [1], where Ti and Zr comprised the A site, while Mn, Cr, V, Ni, Fe, and Cu metals occupied the B site to produce a (Ti0.85Zr0.15)1.05Mn1.2Cr0.6V0.1M0.1 alloy (where M = Ni, Fe, Cu). This material showed a great improvement in cyclability but poor hydrogenation kinetics due to its poor poisoning resistance. Other improvements of the hydrogenation behaviour have previously been reported through surface-protecting techniques such as microencapsulation [9,10], coating with metal oxides [11], and fluorination treatment [12]. All these techniques provide a surface with high affinity for hydrogen and low affinity for poisonous gases, but each has unavoidable limitations. For instance, the microencapsulation technique, which involves coating the bulk alloy with 10 wt.% of Ni or Cu, uses large amounts of a coating metal that does not itself store hydrogen; it is also not economically favourable and produces heavy alloys [10].
On the other hand, surface modification through the deposition of platinum group metals (PGMs), particularly palladium (Pd), which has a strong affinity for hydrogen, has been reported to yield relatively favourable and efficient improvements in the hydrogenation properties of alloy materials [13,14]. The effect of Pd on the hydrogenation properties of alloys has been studied intensively over the years. For example, a study by Zaluska et al. [15] showed that, to some extent, Pd coating on AB-, A2B-, and AB5-type alloy materials promoted fast hydrogen absorption, with a small or no incubation period. Similar observations were shared by Uchida et al. [16] when Pd nanoparticles were deposited on the surface of titanium films. In this investigation, autocatalytic palladium deposition was selected for surface modification of TiMn1.52 alloy; its effect on hydrogen sorption kinetics after exposure to air was then studied using a Sieverts-type apparatus. To the best of our knowledge, such studies on the hydrogenation kinetics of Pd-coated TiMn1.52 alloy do not appear to have been reported yet.

Materials

The AB2-type (TiMn1.52) hydride-forming alloy was prepared from Ti (99.9%) and Mn (99.9%) purchased from Sigma Aldrich (St. Louis, MO, USA) by arc-melting in a water-cooled copper crucible under a protective argon atmosphere. All prepared ingots were melted three times to ensure homogeneity. Subsequently, the metal ingots were pulverised by ball-milling in argon for 10 min. The material was allowed constant exposure to air throughout the experimental studies.

Surface Modification of Alloy

Surface modification of the TiMn1.52 alloy was conducted through autocatalytic deposition of Pd in a hypophosphite-based autocatalytic plating bath, following the procedure described in [17].
Prior to palladium deposition, the materials were first sensitized and activated in a palladium-tin (Pd-Sn) colloidal solution [17], resulting in an increased density of Pd deposition and surface Pd loading on the intermetallide. The activated intermetallide was subsequently suspended in the palladium plating bath, and an equivalent volume of NaH2PO2 solution (10 g/L) was added separately. The plating time and stirring rate were fixed at 30 min and 300 rpm, respectively. The main purpose of surface coating with Pd was to improve the poisoning tolerance of the material as well as to form a material with excellent hydrogen sorption properties. The autocatalytic deposition of Pd was applied to a ~5 g batch of TiMn1.52 alloy.

Characterisation Techniques

X-ray diffraction (XRD) studies of the alloys were performed using a Bruker Advance powder diffractometer (Madison, WI, USA; 40 mA, 40 keV) at the Materials Research Group, iThemba Labs, in Cape Town, South Africa, for phase identification. The XRD analysis was done with an X-ray source of Cu Kα radiation (λ = 1.5406 Å). Scanning electron microscopy/energy-dispersive spectroscopy (SEM/EDS; Edax Genesis, Tilburg, The Netherlands, 100 live seconds) studies were carried out using a Leo 1450 microscope (Carl Zeiss, Jena, Germany; 20 kV, secondary electrons) at the Physics Department, University of the Western Cape (UWC), to evaluate the morphology of the AB2-type alloy (particle size/shape), the Pd particle dispersion on the surface of the alloy particles, and the Pd particle size/shape. The effect of autocatalytic palladium deposition on the hydrogenation kinetics of TiMn1.52 alloy was evaluated by comparing hydrogen absorption after pre-exposure to air with hydrogen absorption after vacuum heating at 300 °C. Vacuum heating facilitates the removal of any existing oxide layers on the surface of the alloy.
Hydrogen absorption was conducted using a commercial Sieverts-type volumetric installation (PCTPro-2000, Hy-Energy LLC, CA, USA) at the South African Institute for Advanced Material Chemistry (SAIAMC), UWC. The measurements were carried out at T = 20 °C and P0 ≈ 30 bar H2, for 2 h. The experimental results were processed by formal kinetic analysis using the Avrami-Erofeev equation,

(H/AB2) / (H/AB2)max = 1 - exp(-(kt)^n),    (1)

where (H/AB2) is the actual hydrogen concentration in the alloy, (H/AB2)max is the maximum hydrogen concentration in the alloy, t is time, k is the rate constant, and the index of power, n, is interpreted as a value indirectly connected to the reaction mechanism.

Figure 1 shows the XRD patterns of the Pd-coated and uncoated TiMn1.52 alloys. The XRD analyses indicate that TiMn1.52 alloy exhibits a disordered structure and a C14-type Laves phase without any second phase. The C14-type Laves phase of the same alloy was previously reported by Dekhtyarenko et al. [19] and Hu et al. [8]. The most interesting feature of C14-type Laves hydrogen storage materials is their favourable hydrogen absorption/desorption kinetics, exhibiting easy penetration of hydrogen atoms [20]. For the Pd-coated alloy, two sharp diffraction peaks of much higher intensities than those of TiMn1.52 alloy appear at 2θ = 30.56° and 2θ = 31.83°. The peaks are attributed to the (021) and (040) reflections of a phosphorus structure, respectively [21]; the phosphorus was impregnated into the Pd layer during plating from the NaH2PO2-based bath. In addition, another two peaks appeared at 2θ = 62.49° and 2θ = 64.82°. Crystallite sizes of the two alloys were calculated using Scherrer's equation (Equation (2)) [22], where the peak at 2θ = 40.05° was used as a representative peak.
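The Avrami-Erofeev expression, in its common form alpha(t) = 1 - exp(-(k t)^n) with alpha the uptake normalised by its maximum, can be evaluated numerically. The k and n values below are those quoted in the text for the non-activated uncoated alloy (k = 0.438 min^-1, n = 1.37); the time grid is illustrative, and the model here ignores the incubation period discussed later.

```python
import math

def avrami(t, k, n):
    """Avrami-Erofeev fraction of maximum hydrogen uptake at time t.

    t is in the same time units as 1/k (minutes here, since k is in min^-1).
    """
    return 1.0 - math.exp(-((k * t) ** n))

# sample the curve for the non-activated uncoated alloy parameters
for t in (1, 5, 10, 30):            # minutes
    print(t, round(avrami(t, k=0.438, n=1.37), 3))
```

With k in min^-1 the (kt)^n form keeps the exponent dimensionless regardless of n, which is consistent with the units of the rate constants quoted in the text.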
τ = Kλ / (β cos θ)    (2)

Both the uncoated and Pd-coated alloys were found to have the same crystallite size of 29 nm, suggesting that there was no admixing/incorporation between the palladium nanoparticles and the bulk TiMn1.52 alloy. Figure 2 presents SEM images of TiMn1.52 before and after surface coating by autocatalytic palladium deposition. The uncoated alloy exhibited a relatively smooth surface occupied by irregular-shaped particles varying in size from 1 to 8 µm; a particle size distribution histogram of the material indicated that the majority of the particles had a size of 1 µm. The alloy may be classified as a nonporous material. For the sample coated with Pd, a discontinuous layer of near-spherical Pd particles was observed; moreover, the layer appeared very dense and uniform. A particle size distribution histogram (Figure 2f) estimated that this alloy had particles ranging in size from 50 to 475 nm, with the majority at 200 nm.

Morphological and Elemental Characterisations

EDS analyses (Figure 3) were employed in parallel to the SEM studies in order to determine the elemental compositions of the alloys. Table 1 shows that the EDS data correspond very well with the targeted composition of TiMn1.52 alloy, indicating a successful admixing of the Ti and Mn metals (ratio of 1:1.52) through the arc-melting process. The EDS of the Pd-coated alloy (Figure 3b) reveals phosphorus, carbon, and tin impurities at levels of 0.70, 2.31, and 4.54 wt.%, respectively. These impurities may have resulted from the palladium-tin (Pd-Sn) colloidal solution used during sensitisation and activation of the AB2 alloy, as well as from the NaH2PO2-based plating bath during autocatalytic deposition of palladium. When comparing the EDS graphs (Figure 3a,b) of the two alloys, we observe that the net counts of Ti and Mn decreased from 10.25 and 9.7 to 0.9 and 1.5, respectively.
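Scherrer's equation, tau = K·lambda / (beta·cos theta), can be sketched as a small function evaluated at the representative peak 2θ = 40.05° with Cu Kα radiation (λ = 1.5406 Å). The shape factor K = 0.9 and the peak FWHM β below are assumptions for illustration only; the paper reports the resulting crystallite size (29 nm) but not the measured peak width.

```python
import math

def scherrer_nm(beta_deg, two_theta_deg, wavelength_A=1.5406, K=0.9):
    """Crystallite size in nm from a peak's FWHM beta (degrees 2-theta)."""
    beta = math.radians(beta_deg)              # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle theta
    tau_A = K * wavelength_A / (beta * math.cos(theta))
    return tau_A / 10.0                        # angstrom -> nm

# hypothetical FWHM of 0.29 degrees at the 40.05-degree peak
print(round(scherrer_nm(beta_deg=0.29, two_theta_deg=40.05), 1))
```

A FWHM of roughly 0.29° at this peak would reproduce a crystallite size near the reported 29 nm; a broader peak gives a proportionally smaller size.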
This observation may be attributed to a successful palladium loading covering most of the alloy surface, with a net count of 4.8, as witnessed in the EDS data.

Hydrogen Absorption Kinetics

Studies of the hydrogenation performance of both uncoated and Pd-coated TiMn1.52 alloy were conducted after pre-exposure to air and after preactivation by vacuum heating. The hydrogenation kinetic curves are presented in Figure 4. In addition, Table 2 presents the results obtained by fitting the experimental data to the Avrami-Erofeev model, which is described in [17]. Without vacuum activation, the uncoated TiMn1.52 alloy exhibits slow hydrogen absorption, accompanied by a long incubation period of ~5 min. This is attributed to the presence of a poisonous oxide film on the surface, which hinders the transport of atomic hydrogen into the bulk alloy. An index of power of 1.37 (Table 2) signifies that hydrogen absorption for the unmodified TiMn1.52 alloy without activation is controlled by nucleation and growth mechanisms. Upon activation by vacuum heating at 300 °C for 2 h, the hydrogenation kinetics of the TiMn1.52 alloy improved significantly, supported by the sharp increase of the rate constant from 0.438 to 13.2 min⁻¹ (Table 2). This is because vacuum heating removes any oxide layers, producing a fast hydrogen-absorbing surface. However, the maximum hydrogen absorption capacity of the activated uncoated alloy was found to be lower than that of the non-activated uncoated alloy, as shown in Table 2. The maximum hydrogen absorption capacity dropped further after electroless Pd coating; a similar trend was observed by Davids et al. [23] when loading Pd onto a TiFe alloy surface. The loss in hydrogen absorption capacity after Pd coating may be attributed to a large metal (Pd) loading on the surface of the TiMn1.52 alloy. Li et al.
[24] recommended that, in order to avoid losses in hydrogenation behaviour, the total amount of PGMs used during surface modification of a metal hydride-forming alloy should be kept in trace amounts (≤0.1 wt.%). In our case, EDS analysis revealed more than 0.1 wt.% of Pd particles on the surface of the TiMn1.52 alloy, and a decrease in the maximum hydrogen absorption capacity was indeed observed upon Pd deposition. In addition, the presence of impurities such as Sn and C on the surface of the Pd-coated alloy might have hindered complete hydrogen absorption by the material. Although a decrease in hydrogen capacity was observed, Pd coating improved the hydrogenation kinetics measured without activation by vacuum heating. The enhancement can be attributed to the partial removal of surface oxide films during autocatalytic deposition of palladium, as well as to the catalytic activity of the Pd(P) nanoparticles, which facilitate the splitting of hydrogen molecules into hydrogen atoms. The incubation period is also shorter for the Pd-coated alloy. After vacuum heating, the coated alloy exhibited faster hydrogen absorption, without an incubation period, compared to the coated alloy without vacuum heating; however, its hydrogenation kinetics are slower than those of the activated unmodified TiMn1.52 alloy. This is supported by the higher rate constant of the activated unmodified alloy (k = 13.2 min⁻¹) compared to that of the activated modified alloy (k = 1.05 min⁻¹). The activated uncoated alloy, together with the non-activated and activated Pd-coated alloys, all exhibited an index of power between 0.5 and 1; their interaction with hydrogen is therefore controlled by a nucleation mechanism.

Conclusions

This study presents the effect of autocatalytic deposition of palladium on the structure, morphology, and hydrogenation kinetics of TiMn1.52 alloy.
The study demonstrates that surface modification of TiMn1.52 alloy through Pd deposition results in the formation of a discontinuous layer of Pd nanoparticles on the surface of the alloy, leading to relatively improved activation performance and hydrogen absorption kinetics even after exposure to air. The effect was attributed to improved H2 dissociation on the Pd nanoparticles. The maximum hydrogen absorption capacity of the material decreased upon Pd deposition, which was associated with the large metal loading on the surface.
Inherent enumerability of strong jump-traceability

We show that every strongly jump-traceable set obeys every benign cost function. Moreover, we show that every strongly jump-traceable set is computable from a computably enumerable strongly jump-traceable set. This allows us to generalise properties of c.e. strongly jump-traceable sets to all such sets. For example: the strongly jump-traceable sets induce an ideal in the Turing degrees; the strongly jump-traceable sets are precisely those that are computable from all superlow Martin-Löf random sets; the strongly jump-traceable sets are precisely those that are a base for Demuth_BLR-randomness; and strong jump-traceability is equivalent to strong superlowness.

Introduction

An insight arising from the study of algorithmic randomness is that anti-randomness is a notion of computational weakness. While the major question driving the development of effective randomness was "what does it mean for an infinite binary sequence to be random?", fairly early on Solovay [26] defined the notion of K-trivial sets, which are the opposite of Martin-Löf random sequences in that the prefix-free Kolmogorov complexity of their initial segments is as low as possible. While Chaitin [4,3] showed that each K-trivial set must be Δ⁰₂, a proper understanding of these sets has come only recently through the work of Nies and his collaborators (see for example [8,20,21,14]). This work has revealed that K-triviality is equivalent to a variety of other notions, such as lowness for Martin-Löf randomness, lowness for K, and being a base for 1-randomness. These other notions express computational weakness, either as the target of a computation or as an oracle: they say either that a set is very easy to compute, or that it is a weak oracle and cannot compute much. The computational weakness of K-trivial sets is reflected in more traditional measures of weakness studied in pure computability theory.
For example, every K-trivial set has a low Turing degree. Recent developments in both pure computability and in its application to the study of randomness have devised other notions of computational weakness, and even hierarchies of weakness, and attempted to calibrate K-triviality with these notions. One such attempt uses the hierarchy of jump-traceability. While originating in set theory (see [25]), the study of traceability in computability was initiated by Terwijn and Zambella [27,28].

Definition 1.1. A trace for a partial function ψ: ω → ω is a sequence T = ⟨T(z)⟩_{z∈ω} of finite sets such that for all z ∈ dom ψ, ψ(z) ∈ T(z).

Thus, a trace for a partial function ψ indirectly specifies the values of ψ by providing finitely many possibilities for each value; it provides a way of "guessing" the values of the function ψ. Such a trace is useful if it is easier to compute than the function ψ itself. In some sense the notion of a trace is quite old in computability theory. W. Miller and Martin [18] characterised the hyperimmune-free degrees as those Turing degrees a such that every (total) function h ∈ a has a computable trace (the more familiar, but equivalent, formulation is in terms of domination). In the same spirit, Terwijn and Zambella used a uniform version of hyperimmunity to characterise lowness for Schnorr randomness, thereby giving a "combinatorial" characterisation of this lowness notion. In this paper we are concerned not with how hard it is to compute a trace, but rather with how hard it is to enumerate it.

Definition 1.2. A trace T = ⟨T(z)⟩ is computably enumerable if the set of pairs {(x, z) : x ∈ T(z)} is c.e. In other words, uniformly in z, we can enumerate the elements of T(z).

(All authors were supported by the Marsden Fund of New Zealand, the first and the third as postdoctoral fellows. The second author was also supported by a Rutherford Discovery Fellowship.)
It is guaranteed that each set $T(z)$ is finite, and yet if $T$ is merely c.e., we do not expect to know when the enumeration of $T(z)$ ends. Thus, rather than using the exact size of each element of the trace, we use effective bounds on this size to indicate how strong a trace is: the fewer options for the value of a function, the closer we are to knowing what that value is. The bounds are known as order functions; they calibrate rates of growth of computable functions.

Definition 1.3. An order function is a nondecreasing, computable and unbounded function $h$ such that $h(0) > 0$. If $h$ is an order function and $T = \langle T(z)\rangle$ is a trace, then we say that $T$ is an $h$-trace (or that $T$ is bounded by $h$) if for all $z$, $|T(z)| \le h(z)$.

In addition to measuring the sizes of c.e. traces, order functions are used to define uniform versions of traceability notions. For example, computable traceability, the uniform version of hyperimmune-freeness used by Terwijn and Zambella, is defined by requiring that the traces for functions computable in a degree $\mathbf{a}$ are all bounded by a single order function. Zambella (see Terwijn [27]) observed that if $A$ is low for Martin-Löf randomness then there is an order function $h$ such that every function computable from $A$ has a c.e. $h$-trace. This was improved by Nies [20], who showed that one can replace total by partial functions. In some sense it is natural to expect a connection between uniform traceability and K-triviality; if every function computable (or partial computable) from $A$ has a c.e. $h$-trace, for some slow-growing order function $h$, then the value $\psi(n)$ of any such function can be described by roughly $\log n + \log h(n)$ many bits. Following this, it was a natural goal to characterise K-triviality by tracing, probably with respect to a family of order functions. While partial results have been obtained [1,15], this problem still remains open.
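The paper's central notion is used below without its formal definition appearing in this excerpt; for reference, the definition standard in the literature (due to Figueira, Nies and Stephan) can be stated as follows:

```latex
\begin{definition}
  A set $A$ is \emph{strongly jump-traceable} if for \emph{every} order
  function $h$, the jump $J^A$ (equivalently, every function partial
  computable in $A$) has a c.e.\ $h$-trace.
\end{definition}
% By contrast, $A$ is merely jump-traceable if this holds for
% \emph{some} order function $h$.
```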
The point is that while K-triviality has been found to have multiple equivalent definitions, all of these definitions use analytic notions such as Lebesgue measure or prefix-free Kolmogorov complexity in a fundamental way, and the aim is to find a purely combinatorial characterisation for this class. An attempt toward a solution of this problem led to the introduction of what seems now a fairly fundamental concept, which is not only interesting in its own right, but has now been shown to have deep connections with randomness.

Figueira, Nies and Stephan introduced a notion seemingly stronger than strong jump-traceability, called strong superlowness, which can be characterised using plain Kolmogorov complexity.

Corollary 1.7. A set is strongly jump-traceable if and only if it is strongly superlow.

Proof. Figueira, Nies, and Stephan [10] showed that every strongly superlow set is strongly jump-traceable, and that the notions are equivalent on c.e. sets.

Strong superlowness is also closed downward in the Turing degrees. Unlike K-triviality, strong jump-traceability has both combinatorial and analytic characterisations.

Corollary 1.8. A set is strongly jump-traceable if and only if it is computable from all superlow Martin-Löf random sets.

Proof. In [11] it is shown that every set computable from all superlow 1-random sets is strongly jump-traceable, and that every c.e., strongly jump-traceable set is computable from all superlow 1-random sets.

We remark that the results of [11] imply that every strongly jump-traceable set is computable from all superhigh random sets, but we do not yet know if all sets computable from all superhigh random sets are strongly jump-traceable. Another connection between strong jump-traceability and randomness passes through a notion of randomness stronger than Martin-Löf's, introduced by Demuth. As mentioned above, the Demuth analogue of the covering problem for incomplete Martin-Löf randoms was solved by Greenberg and Turetsky, giving yet another characterisation of strong jump-traceability for c.e. sets.
This characterisation cannot, of course, be extended to all sets, since every Demuth random set is computable from a Demuth random set, namely itself. The analogue of the covering problem for all sets is the notion of a base for randomness: a set $A$ is a base for a relativisable notion of randomness $R$ if $A$ is computable from some $R^A$-random set. Hirschfeldt, Nies and Stephan [14] showed that a set is a base for Martin-Löf randomness if and only if it is K-trivial. On the other hand, while every base for Demuth randomness is strongly jump-traceable (Nies [23]), these two notions do not coincide (Greenberg and Turetsky [13]). However, this relies on the full relativisation of Demuth randomness. Recent work of Bienvenu, Downey, Greenberg, Nies and Turetsky [2] discovered a partial relativisation of Demuth randomness, denoted $\text{Demuth}_{\text{BLR}}$, which is better behaved than its fully-relativised counterpart.

Corollary 1.9. A set is strongly jump-traceable if and only if it is a base for $\text{Demuth}_{\text{BLR}}$-randomness.

Proof. Nies [23] showed that every set which is a base for Demuth randomness is strongly jump-traceable. An examination of his proof, though, shows that for the Demuth test he builds to use the hypothesis of being a base for Demuth randomness, the bounds he obtains are computable. In other words, his proof shows that every set which is a base for $\text{Demuth}_{\text{BLR}}$-randomness is strongly jump-traceable. In the other direction, by [13], every c.e., strongly jump-traceable set $A$ is computable from a Demuth random set, and by [2], each such set is also low for $\text{Demuth}_{\text{BLR}}$-randomness, and so in fact computable from a $(\text{Demuth}_{\text{BLR}})^A$-random set; in other words, it is a base for $\text{Demuth}_{\text{BLR}}$-randomness.

Again this notion is downwards closed in the Turing degrees. Our proof of Theorem 1.5, which states that every strongly jump-traceable set is computable from a c.e. strongly jump-traceable set, utilises a concept of independent interest, that of a cost function.
Formalised by Nies (see [22]), cost function constructions generalise the familiar construction of a K-trivial set (see [9]) or the construction of a set low for K (Mučnik, see [7]). Indeed, the key to the coincidence of K-triviality with lowness for K is the fact that K-triviality can be characterised by obedience to a canonical cost function.

In this paper, we define a cost function to be a $\Delta^0_2$, non-increasing function from $\omega$ to the non-negative real numbers $\mathbb{R}^{\ge 0}$. A cost function $c$ satisfies the limit condition if its limit $\lim_x c(x)$ is 0. A monotone approximation for a cost function $c$ is a uniformly computable sequence $\langle c_s\rangle$ of functions from $\omega$ to the non-negative rational numbers $\mathbb{Q}^{\ge 0}$ such that: each function $c_s$ is non-increasing; and for each $x \in \omega$, the sequence $\langle c_s(x)\rangle_{s\in\omega}$ is non-decreasing and converges to $c(x)$. Here we use the standard topology on $\mathbb{R}$ to define convergence, rather than the discrete topology which is usually used to define convergence of computable approximations of $\Delta^0_2$ sets and functions. A cost function is called monotone if it has a monotone approximation. In this paper, we are only interested in monotone cost functions which satisfy the limit condition, and so when we write "cost function", unless otherwise mentioned, we mean "monotone cost function satisfying the limit condition".

If $\langle A_s\rangle$ is a computable approximation of a $\Delta^0_2$ set $A$, then for each $s \in \omega$, we let $x_s$ be the least number $x$ such that $A_{s-1}(x) \ne A_s(x)$. If $\langle c_s\rangle$ is a monotone approximation for a cost function $c$, then we write $c_s(A_s)$ for $c_s(x_s)$; the approximation $\langle A_s\rangle$ witnesses that $A$ obeys $c$ if the sum $\sum_s c_s(A_s)$ is finite. It is understood that if $A_s = A_{s-1}$, then no cost is added at stage $s$ to the sum $\sum_s c_s(A_s)$. Nies [24] showed that obedience does not depend on the monotone approximation for $c$; that is, if $A$ obeys $c$, then for any monotone approximation $\langle c_s\rangle$ for $c$, there is a computable approximation $\langle A_s\rangle$ of $A$ for which the sum above is finite. See Proposition 3.2 below.
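In this notation, obedience amounts to finiteness of the total cost of changes; the canonical cost function for K-triviality, mentioned above, is the standard example (recalled here from the literature, not from this excerpt):

```latex
% A obeys c if some computable approximation <A_s> of A has finite total cost:
\[
  \sum_{\substack{s > 0 \\ A_s \neq A_{s-1}}} c_s(x_s) \;<\; \infty,
  \qquad\text{where } x_s = \min\{x : A_{s-1}(x) \neq A_s(x)\}.
\]
% The canonical cost function for K-triviality has the monotone approximation
\[
  c_{\mathcal{K},s}(x) \;=\; \sum_{x < w \le s} 2^{-K_s(w)},
\]
% and a set is K-trivial if and only if it obeys c_K (Nies).
```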
However, different approximations for $A$ may cause the sum to be infinite. Unlike K-triviality, strong jump-traceability cannot be characterised by a single cost function; one way to see this is by considering the complexity of the index-set of strong jump-traceability, which is $\Pi^0_4$-complete (Ng [19]). Greenberg and Nies [12] isolated a class of cost functions which together characterise strong jump-traceability on the c.e. sets. Benignity is an effective witness for the limit condition. It is a generalisation of the additive property of the canonical cost function for K-triviality.

Let $\langle c_s\rangle$ be a monotone approximation for a cost function $c$, and let $\epsilon > 0$ be rational. We define an auxiliary sequence of markers $m_1(\epsilon), m_2(\epsilon), \dots$ by letting $m_1(\epsilon) = 0$, and given $m_k(\epsilon)$, letting $m_{k+1}(\epsilon)$ be the least $s > m_k(\epsilon)$ such that $c_s(m_k(\epsilon)) \ge \epsilon$, if there is such a stage $s$; otherwise, $m_{k+1}(\epsilon)$ is undefined. The fact that $\lim_s c_s = c$ and that $\lim_x c(x) = 0$ shows that the sequence $\langle m_k(\epsilon)\rangle$ must be finite, and so we can let $k(\epsilon) = k_{\langle c_s\rangle}(\epsilon)$ be the last $k$ such that $m_k(\epsilon)$ is defined.

Definition 1.11. A cost function $c$ is benign if it has a monotone approximation $\langle c_s\rangle$ for which the function $\epsilon \mapsto k_{\langle c_s\rangle}(\epsilon)$ is bounded by a computable function.

Note that if $\langle c_s\rangle$ witnesses that $c$ is benign, then the last value $m(\epsilon) = m_{k(\epsilon)}(\epsilon)$ need not be bounded by a computable function; it is $\omega$-computably approximable ($\omega$-c.e.). Greenberg and Nies showed that a c.e. set is strongly jump-traceable if and only if it obeys all benign cost functions. Much like obeying the canonical cost function captures the dynamics of the decanter and golden run methods which are used for working with K-trivial oracles, this result shows that benign cost functions capture the dynamics of the box-promotion method when applied to c.e., strongly jump-traceable oracles.
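To make the marker construction concrete, here is a toy illustration (ours, not from the paper): we take the simple monotone approximation $c_s(x) = 2^{-x}$ for $x \le s$ (and $0$ otherwise) and compute the markers by the rule above; for $\epsilon = 2^{-n}$ one gets $n + 2$ markers, so $k(\epsilon)$ is computably bounded and this toy cost function is benign. The function names are illustrative.

```python
from fractions import Fraction

def c(s, x):
    # Toy monotone approximation: c_s(x) = 2^-x once the stage s reaches x,
    # else 0.  It is non-increasing in x, non-decreasing in s, and its limit
    # c(x) = 2^-x satisfies the limit condition lim_x c(x) = 0.
    return Fraction(1, 2 ** x) if x <= s else Fraction(0)

def markers(eps, max_stage=1000):
    """m_1(eps) = 0; m_{k+1}(eps) is the least s > m_k(eps) with
    c_s(m_k(eps)) >= eps, if any; the list ends when no such stage exists."""
    ms = [0]
    while True:
        prev = ms[-1]
        nxt = next((s for s in range(prev + 1, max_stage)
                    if c(s, prev) >= eps), None)
        if nxt is None:
            return ms
        ms.append(nxt)

# For eps = 1/4: c_s(2) = 1/4 >= eps first at s = 3, but c_s(3) = 1/8 < eps
# forever, so the markers are 0, 1, 2, 3 and k(1/4) = 4.
print(markers(Fraction(1, 4)))  # [0, 1, 2, 3]
```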
Greenberg, Hirschfeldt and Nies [11] showed that every set, not necessarily c.e., which obeys all benign cost functions, must be strongly jump-traceable. In this paper we show that obeying benign cost functions in fact characterises strong jump-traceability on all sets. The fact that every K-trivial set is computable from a c.e. one is also deduced using obedience to the canonical cost function. It is easy to see that if a computable approximation $\langle A_s\rangle$ witnesses that $A$ obeys a cost function $c$, then the associated change-set, which records the changes in this approximation for $A$, is a c.e. set which computes $A$ and also obeys the cost function $c$. Hence Theorem 1.12 almost gives us Theorem 1.5; the connection between benign cost functions and strong jump-traceability established in [12] shows now that if $A$ is a strongly jump-traceable set, and $h$ is an order function, then there is an $h$-jump-traceable c.e. set which computes $A$. (We note that this result implies all the corollaries above.) We get Theorem 1.5 by showing:

Theorem 1.13. There is a benign cost function $c$ such that for any $\Delta^0_2$ set $A$ obeying $c$, there is a c.e. set $W$ computing $A$, which obeys all cost functions that $A$ obeys.

Theorem 1.5 is an immediate consequence of the conjunction of Theorems 1.12 and 1.13. We prove Theorem 1.12 in Section 2 and Theorem 1.13 in Section 3.

Strongly jump-traceable sets obey benign cost functions

In this section we prove Theorem 1.12. As we mentioned above, one direction of the theorem is proved in [11]. For the other direction, we are given a strongly jump-traceable set $A$ and a benign cost function $c$, and show that $A$ obeys $c$.

2.1. Discussion. Our departure point is a simplified version of the original argument showing that every strongly jump-traceable set is $\Delta^0_2$. Suppose that we are given a strongly jump-traceable set $A$, and we wish to find a computable approximation $\langle A_s\rangle$ for $A$. The idea is to test binary strings, potential initial segments of $A$.
For example, to determine $A(0)$, we try to test both strings $\langle 0\rangle$ and $\langle 1\rangle$, and hopefully get an indication of which one is an initial segment of $A$. Our belief about which one may change from time to time, but we need to make sure that it changes only finitely many times, and eventually settles on the correct value. While we fluctuate between $\langle 0\rangle$ and $\langle 1\rangle$, we also test strings of length 2, and match up our guess for which string of length 2 is an initial segment of $A$ with the current guess about which string of length 1 is an initial segment of $A$. Again, our belief about strings of length 2 may change several times, indeed many more times than the changes between $\langle 0\rangle$ and $\langle 1\rangle$, but eventually it should settle to the correct value.

How do we test strings of a given length? We define a functional $\Psi$, fix an order function $h$, which will be designed to grow sufficiently slowly as to enable the combinatorics of the construction, and by the recursion theorem (or by using a universal trace), we have a c.e. trace $\langle T(z)\rangle$ for the partial function $\Psi^A$, bounded by $h$. To test, for example, all strings of a length $\ell$ on some input $z$, we define $\Psi^\sigma(z) = \sigma$ for every string $\sigma$ of length $\ell$. We then only believe strings which show up in the trace $T(z)$. If $h(z) = 1$ then we are done, since only one string may show up in $T(z)$, and the correct string $A\upharpoonright\ell$ must appear in $T(z)$. However, $h$ must be unbounded, and once we have tested a string $\sigma$ on some input $z$, we cannot test any extensions of $\sigma$ on the same input; for the functional $\Psi$ must be kept consistent. What do we do, then, if $h(z) > 1$, and more than one string of length $\ell$ shows up in $T(z)$?

This is where box promotion comes into place. Suppose that initially, we use inputs $z$ such that $h(z) = \ell$ to test strings of length $\ell$ (such inputs are sometimes called $\ell$-boxes). So when we test strings of length 2, of the four possibilities, we believe at most two. At first, we believe the first string of length 2 which shows up in the relevant trace component, say $\langle 00\rangle$.
If another string shows up, say $\langle 01\rangle$, we move to test the length 2 on 1-boxes which we have reserved for this occasion. The reason we can do this is that some 2-boxes have been promoted: if $\langle 00\rangle$ is correct, then boxes $z$ for which $\langle 01\rangle \in T(z)$ have spent one of their slots on an incorrect string. If, for example, later, we believe both $\langle 000\rangle$ and $\langle 001\rangle$, since both have appeared in (the trace for) 3-boxes, then we can use the promoted 2-boxes to decide between the two strings of length 3. After all, neither of these strings extends $\langle 01\rangle$, as $\langle 01\rangle$ has been discovered to be incorrect, and so we can test these strings in the promoted boxes without violating the consistency of $\Psi$. In general, the promotion mechanism ensures that we have an approximation for $A$ for which there are at most $\ell$ changes in our belief about $A\upharpoonright\ell$.

Let $\langle c_s\rangle$ be a monotone approximation for $c$ which witnesses that $c$ is benign; let $m_k(\epsilon)$ be the associated markers. To construct a computable approximation $\langle A_s\rangle$ for $A$ for which the sum $\sum_s c_s(A_s)$ is finite, we need, roughly, to give a procedure for guessing initial segments of $A$ such that for all $n$, for all $k \le k(2^{-n})$, the number of changes in our belief about $A\upharpoonright m_k(2^{-n})$ is (say) at most $n$. The computable bound on $k(2^{-n})$, the number of lengths we need to test "at level $n$", allows us to apportion, in advance, sufficiently many $n$-boxes to deal with all of these lengths, even though which lengths are being tested at level $n$ is not known in advance. The fact that the lengths themselves are not known in advance necessitates a first step of "winnowing" the strings of new lengths $m_k(2^{-n})$, so that instead of dealing with $2^{m_k(2^{-n})}$ many strings, we are left with at most $n$ such strings. This is done by testing all strings of the given length on an $n$-box reserved for this length, as described above. As is the case with all box-promotion constructions, the heart of the proof is in the precise combinatorics which tell us which strings are tested on which boxes.
One main point is that while we need to prepare $n$-boxes for the possibility that lengths tested at higher levels are promoted all the way down to level $n$, the number of such promotions must be computably bounded in $n$, and cannot rely on the computable bounds on $k(2^{-(n+1)})$, $k(2^{-(n+2)})$, and so on. That is, the number of promotions must be tied to the size (or level) of the boxes, and not to the number of lengths that may be tested at that level.

Consider, for example, the following situation: at some level $n$, we are testing two lengths, $\ell_1$ and $\ell_2$, and tests have returned positively for strings $\sigma_0$ and $\sigma_1$ of length $\ell_1$, and strings $\tau_0$ and $\tau_1$ of length $\ell_2$. If, to take an extreme situation for an example, the strings $\sigma_0, \sigma_1, \tau_0, \tau_1$ are pairwise incomparable, we could test them all on a single input $z$ before we believe them; when we discover which one of them is correct, the other values are certified to be wrong, and give the box $z$ a promotion by three levels. If, on the other hand, $\tau_0$ extends $\sigma_0$ and $\tau_1$ extends $\sigma_1$, then we cannot test $\tau_0$ on boxes on which we already tested $\sigma_0$, and the same holds for $\tau_1$ and $\sigma_1$. We do not want, though, to let both lengths be promoted (moved to be tested on $(n-1)$-boxes) while $n$-boxes are only promoted by one level (containing only one incorrect value). In this case our action depends on timing: If $\sigma_0$ and $\sigma_1$ appear before $\tau_0$ and $\tau_1$ appear, we promote the length $\ell_1$. We do not promote $\ell_2$, unless another string of length $\ell_2$ appears. If no such new string appears, then our belief about which of $\sigma_0$ or $\sigma_1$ is an initial segment of $A$ will dictate which of $\tau_0$ or $\tau_1$ we believe too. If $\tau_0$ and $\tau_1$ appear before we see both $\sigma_0$ and $\sigma_1$, then we promote the length $\ell_2$. In this case, certainly our belief about which of $\tau_0$ or $\tau_1$ is an initial segment of $A$ would tell us whether to believe $\sigma_0$ or $\sigma_1$.
In the first case, an important observation is that if another string $\rho$ of length $\ell_2$ appears, then $\rho$ cannot extend both $\sigma_0$ and $\sigma_1$. If $\rho$ does not extend $\sigma_0$, say, then we can test $\sigma_0$, $\rho$ and $\tau_1$ all on one box, and so this box will eventually be promoted by two levels, justifying the promotion of both lengths $\ell_1$ and $\ell_2$ to be tested on $(n-1)$-boxes. Of course, during the construction, we need to test strings on a large number of boxes, to allow for all possible future combinations of sets of strings involving the ones being tested, including strings of future lengths not yet observed.

2.2. Construction. As mentioned above, let $\langle c_s\rangle$ be a monotone approximation for $c$ which witnesses that $c$ is benign; let $m_k(\epsilon)$ be the associated markers. We force these markers to cohere in the following way: for $n \in \omega$ and $s \in \omega$ we define values $l_s(n)$ from the markers. We summarise the properties of the functions $l_s$ in the following lemma.

Lemma 2.1.
(1) Each function $l_s$ is non-decreasing, with $n \le l_s(n) \le \max\{n, s\}$.
(2) For each $n$, the sequence $\langle l_s(n)\rangle_{s\in\omega}$ is non-decreasing, and takes finitely many values. Indeed, the function $n \mapsto \#\{l_s(n) : s \in \omega\}$ is computably bounded.
(3) For all $n$ and $s$, $c_s(l_s(n)) \le 2^{-n}$.

We fix a computable function $g$ bounding the function $n \mapsto \#\{l_s(n) : s \in \omega\}$. Let $\alpha(n) = 1 + n + \binom{n}{2}$ be the number of subsets of $\{1, 2, \dots, n\}$ of size at most 2. We partition $\omega$ into intervals $M_1, I_1, M_2, I_2, \dots$; the interval $M_n$ has size $\alpha(n)^{n g(n)}$ and the interval $I_n$ has size $n g(n)$. We define an order function $h$ so that $h(x) = n$ for every $x \in M_n \cup I_n$. As mentioned, we enumerate a functional $\Psi$. Either by using the recursion theorem (as was done in [5]) or by using a universal trace (as in [12]), we obtain a number $o \in \omega$ and a c.e. trace $T = \langle T(z)\rangle$ for $\Psi$ which is bounded by $\max\{h, o\}$. Each level $n \ge o$ will list an increasing sequence of lengths $\ell^n_1, \ell^n_2, \dots$ which will be tested at level $n$. The list is dynamic: we may extend it during the construction.
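A quick numerical sketch of the sizes just introduced, under the reading adopted above ($\alpha(n)$ counts the subsets of $\{1,\dots,n\}$ of size at most 2, $|I_n| = n\,g(n)$, and $|M_n| = \alpha(n)^{n g(n)}$, one input $z_\nu$ per function $\nu\colon D(n)\to P(n)$); the helper names are illustrative, and $g$ is supplied as a parameter:

```python
from math import comb

def alpha(n):
    # Number of subsets of {1, ..., n} of size at most 2:
    # the empty set, n singletons, and C(n, 2) pairs.
    return 1 + n + comb(n, 2)

def box_counts(n, g_n):
    # |I_n| = n * g(n) reserved inputs for initial testing;
    # |M_n| = alpha(n)^(n * g(n)): one input z_nu for each function
    # nu: D(n) -> P(n), where |D(n)| = n * g(n) and |P(n)| = alpha(n).
    return n * g_n, alpha(n) ** (n * g_n)

# alpha(4) = 1 + 4 + 6 = 11; with g(4) = 1, the hypercube M_4 is
# 4-dimensional with sides of length 11, so it has 11^4 = 14641 inputs.
print(alpha(4), box_counts(4, 1))
```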
However, we will need to ensure that the length of the list is bounded by $n g(n)$. The testing of lengths at level $n$ will be in two parts.

A. Initial testing of all strings of length $\ell^n_k$ will be performed on a reserved input from the interval $I_n$. We thus enumerate the elements of $I_n$ as $\{z^n_1, z^n_2, \dots, z^n_{n g(n)}\}$; the input $z^n_k$ is reserved for initial testing of all strings of length $\ell^n_k$. We note here that as the list of lengths $\ell^n_1, \ell^n_2, \dots$ may not necessarily reach its maximal length $n g(n)$, it is possible that some inputs $z^n_k$ will never be used. This is one reason for the fact that $\Psi^A$ will be a partial function. In this way we use the full hypothesis of strong jump-traceability of $A$; we cannot hope to make $\Psi^A$ total, and so the proof would not work for merely c.e. traceable oracles.

B. The main bulk of the testing of strings of length $\ell^n_k$ will be performed on inputs from $M_n$. To maximise the interaction between the various lengths (to obtain maximal promotion, we need to test large antichains of strings on inputs from $M_n$), we think of $M_n$ as an $(n g(n))$-dimensional hypercube, the sides of which each have length $\alpha(n)$. We let $D(n) = \{1, 2, \dots, n g(n)\}$ be the set of dimensions; the inputs in $M_n$ are indexed as $z_\nu = z^n_\nu$ for functions $\nu\colon D(n) \to P(n)$.

Let $n \in [o, s]$. The action at level $n$ at stage $s$ is as follows:

1. If $n < s$ and some lengths have just been promoted from level $n+1$, we append them to the list of lengths $\ell^n_k$; we also append $l_s(n)$ if it is larger than all lengths currently on the list.

2. Suppose that $\sigma^n_k(i)$, the $i$-th string to appear in $T(z^n_k)$, has appeared in $T(z^n_k)$. Recall that $P(n)$ is the collection of subsets of $\{1, 2, \dots, n\}$ of size at most 2. For every $\nu\colon D(n) \to P(n)$ such that $i \in \nu(k)$, we test $\sigma^n_k(i)$ on $z_\nu = z^n_\nu$. Fix such $\nu$. We need to ensure that $\Psi$ remains consistent; the point is that there may be strings comparable with $\sigma = \sigma^n_k(i)$ which are already tested on $z_\nu$. To test $\sigma$ on $z_\nu$ while keeping $\Psi$ consistent, we define $\Psi^\tau(z_\nu) = \tau$ for every extension $\tau$ of $\sigma$ of length $s$ which does not extend any string already tested on $z_\nu$.
Using other notation, we let $Z_{\nu,s}$ be the collection of strings $\rho$ for which we defined $\Psi^\rho(z_\nu) = \rho$ by the end of stage $s$, and let $\mathcal{Z}_{\nu,s} = [Z_{\nu,s}]$ be the clopen subset of Cantor space $2^\omega$ determined by the set of strings $Z_{\nu,s}$ (the collection of all infinite extensions of strings in $Z_{\nu,s}$). Testing a string $\sigma$ on $z_\nu$ at stage $s$ means adding strings of length $s$ to $Z_{\nu,s-1}$ so as to keep $Z_{\nu,s}$ an antichain, but ensuring that $[\sigma] \subseteq \mathcal{Z}_{\nu,s}$.

Let $k \le n g(n)$ be such that $\ell^n_k$ is defined by stage $s$, and let $i \le n$ be such that $\sigma^n_k(i)$ is defined by stage $s$, that is, $T(z^n_k)$ already contains at least $i$ many elements by stage $s$. The test of $\sigma^n_k(i)$ is successful if for all $\nu$ such that $i \in \nu(k)$, that is, for all $\nu$ such that $\sigma$ was tested on $z_\nu$, we have $[\sigma] \cap [T_s(z_\nu)] \ne \emptyset$; in other words, if some string which is comparable with $\sigma$ appears in $T(z_\nu)$ by stage $s$.

For the purpose of the following definition, let $\ell^n_0 = 0$. We say there is a conflict at length $\ell^n_k$ (and level $n$) if there are two strings $\sigma_0 = \sigma^n_k(i)$ and $\sigma_1 = \sigma^n_k(j)$ of length $\ell^n_k$, both of whose tests are successful by stage $s$, such that $\sigma_0\upharpoonright\ell^n_{k-1} = \sigma_1\upharpoonright\ell^n_{k-1}$. We note, for future reference, that if there is a conflict at length $\ell^n_k$ at stage $s$, then this conflict persists at every later stage. At stage $s$, if $n > o$, then we promote to level $n-1$ all lengths $\ell^n_k$ for which there is a conflict at stage $s$, and which are longer than any length already tested at level $n-1$. These instructions determine our action for level $n$ at stage $s$, and so completely describe the construction.

2.3. Justification. Before we show how the construction gives us the desired approximation for $A$, we first need to show that we can actually implement the construction. We need to prove that we have allocated sufficiently many $n$-boxes to each level $n$; that is, we must show that the list of lengths $\langle \ell^n_k\rangle$ tested at level $n$ has length at most $n g(n)$.
There are two streams contributing lengths to test at level $n$: lengths promoted from level $n+1$, and lengths of the form $l_s(n)$. Of the latter, there are at most $g(n)$ many. Hence, it remains to show that at most $n$ many lengths are promoted from level $n+1$. Shifting indices, we show that level $n$ promotes at most $n-1$ many lengths. Indeed, we show the following:

Lemma 2.2. For all $n \ge o$ and all stages $s$, there are at most $n-1$ many lengths at which there is a conflict at level $n$ by the end of stage $s$.

To prove Lemma 2.2, fix $n \ge o$ and $s \ge o$. Let $N$ be the number of lengths at which there is a conflict (at level $n$) at the end of stage $s$. We show that there is some $\nu\colon D(n) \to P(n)$ such that $|T_s(z_\nu)| - 1 \ge N$. Using the fact that $n \ge o$ and $z_\nu \in M_n$ we see that $|T_s(z_\nu)| \le n$, which establishes the desired bound.

In order to define $\nu$, we define an increasing sequence of antichains of strings, indexed in reverse, $C_{k_s(n)+1} \subseteq C_{k_s(n)} \subseteq C_{k_s(n)-1} \subseteq \cdots \subseteq C_1$, starting with $C_{k_s(n)+1} = \emptyset$. Each set $C_k$ consists of strings of lengths $\ell^n_{k'}$ for $k' \ge k$. Let $k \in \{1, \dots, k_s(n)\}$; we assume that $C_{k+1}$ has been defined, and we show how to define $C_k$. The definition is split into two cases. First, suppose that there is no conflict at stage $s$ in length $\ell^n_k$. We then let $C_k = C_{k+1}$ and $\nu(k) = \emptyset$. We assume then that there is a conflict in length $\ell^n_k$ at stage $s$. Let $\sigma_0 = \sigma^n_k(i)$ and $\sigma_1 = \sigma^n_k(j)$ be a pair witnessing this conflict. We let $C_k$ be a maximal antichain from $C_{k+1} \cup \{\sigma_0, \sigma_1\}$ containing $C_{k+1}$. In other words, if neither $\sigma_0$ nor $\sigma_1$ is comparable with any string in $C_{k+1}$, then we let $C_k = C_{k+1} \cup \{\sigma_0, \sigma_1\}$; otherwise, if either $\sigma_0$ or $\sigma_1$ is incomparable with all the strings in $C_{k+1}$, then we let $C_k$ be one of $C_{k+1} \cup \{\sigma_0\}$ or $C_{k+1} \cup \{\sigma_1\}$, making sure that we choose so that $C_k$ is an antichain; and finally, if both $\sigma_0$ and $\sigma_1$ are comparable with strings in $C_{k+1}$, then we let $C_k = C_{k+1}$.

Now given the sequence of sets $C_k$, we can define the index function $\nu$: For $k \in \{1, 2, \dots, k_s(n)\}$, we let $\nu(k) = \{i \le n : \sigma^n_k(i) \in C_k\}$. For $k \in \{k_s(n)+1, \dots, n g(n)\}$, we let $\nu(k) = \emptyset$.
Since the strings in $C_k$ of length $\ell^n_k$ are precisely the strings in $C_k \setminus C_{k+1}$, we see that for all $k$, $\nu(k)$ is indeed a set of size at most 2, so $\nu$ is a function from $D(n)$ to $P(n)$. The point of this definition is that the strings tested on $z_\nu$ are precisely the strings in $C_1$. For each $k$, let $D_k$ be the set of restrictions $\sigma\upharpoonright\ell^n_{k-1}$ of strings $\sigma \in C_k$, and let $p_k = |C_k| - |D_k|$.

Claim 2.3. For all $k$, $p_k \ge p_{k+1}$.

Proof. For every string $\tau$ in $D_k$ which has no extension in $D_{k+1}$, there is an extension ...

Claim 2.4. If $\ell^n_k$ has a conflict at stage $s$, then $p_k \ge p_{k+1} + 1$.

Proof. Let $\sigma_0$ and $\sigma_1$ be the strings that were chosen at step $k$ to witness that $\ell^n_k$ has a conflict at stage $s$. By definition of having a conflict, $\sigma_0\upharpoonright\ell^n_{k-1} = \sigma_1\upharpoonright\ell^n_{k-1}$; we let $\tau$ denote this string. There are three cases. In all three cases, we note that every string in $D_k$ other than possibly $\tau$ has an extension in $D_{k+1}$. If $C_k = C_{k+1} \cup \{\sigma_0, \sigma_1\}$ then we need to show that $|D_k| \le |D_{k+1}| + 1$, which follows from the fact we just mentioned, that every string in $D_k$ other than $\tau$ has an extension in $D_{k+1}$. In the second case, we assume that $C_k$ is obtained from $C_{k+1}$ by adding one string, say $\sigma_0$; we need to show that $|D_k| \le |D_{k+1}|$. But $\sigma_1$ is comparable with some string in $C_{k+1}$, and in fact must be extended by some string in $C_{k+1}$. Hence $\sigma_1 \in D_{k+1}$, i.e.\ $\tau$ is extended by some string in $D_{k+1}$, and therefore every string in $D_k$ is extended by some string in $D_{k+1}$. Finally, suppose that $C_{k+1} = C_k$; we need to show that $|D_k| \le |D_{k+1}| - 1$. Since both $\sigma_0$ and $\sigma_1$ are comparable with elements of $C_k$, both are elements of $D_{k+1}$, and so $\tau$ has two extensions in $D_{k+1}$, while every other string in $D_k$ has an extension in $D_{k+1}$.

Hence $p_1 \ge N$. If $C_1$ is empty, then $N = 0$, so we may assume that $C_1$ is nonempty, and so $|C_1| = p_1 + 1$ is at least one more than $N$. Then Lemma 2.2, and with it our justification for the construction, is completed once we establish the following claim.

Claim 2.5. $|T_s(z_\nu)| \ge |C_1|$.

Proof. We show that $T_s(z_\nu)$ contains only strings which are extensions of strings in $C_1$, and that each string in $C_1$ has an extension in $T_s(z_\nu)$.
Recall that we let $Z_{\nu,s}$ be the collection of strings that were actually tested on $z_\nu$ by stage $s$, that is, the collection of strings $\rho$ for which we defined $\Psi^\rho(z_\nu) = \rho$ by the end of stage $s$. Our instructions (and the definition of $\nu$) say that the strings tested on $z_\nu$ are precisely the strings in $C_1$. Since $C_1$ is an antichain, this means that before some string $\sigma$ is tested on $z_\nu$, we have $[\sigma] \cap \mathcal{Z}_{\nu,t} = \emptyset$, and so when testing $\sigma$, we only add extensions of $\sigma$ to $Z_{\nu,s}$. Since we assumed that $T_s(z_\nu) \subseteq Z_{\nu,s}$, we see that all strings in $T_s(z_\nu)$ are extensions of strings in $C_1$.

Let $\sigma \in C_1$. Then $\sigma = \sigma^n_k(i)$ for some $k$ and $i$, and is part of a pair of strings witnessing that there is a conflict at length $\ell^n_k$ (and level $n$) by stage $s$. So the test of $\sigma$ on $M_n$ is successful by the end of stage $s$. Since $\sigma$ is tested on $z_\nu$, we have $[\sigma] \cap [T_s(z_\nu)] \ne \emptyset$. Since no proper initial segment of $\sigma$ is tested on $z_\nu$, this means that some extension of $\sigma$ is an element of $T_s(z_\nu)$.

2.4. The approximation of A. We now show how to find a computable approximation for $A$ witnessing that $A$ obeys $c$. For $n \ge o$, let $k(n) = \lim_s k_s(n)$ be the number of lengths ever tested at level $n$.

Lemma 2.6. For all $n \ge o$ and all $k \le k(n)$, the string $A\upharpoonright\ell^n_k$ is eventually successfully tested at level $n$.

Proof. Let $s_0$ be the stage at which the length $\ell^n_k$ is first tested at level $n$. Let $\rho = A\upharpoonright\ell^n_k$. At stage $s_0$, we define $\Psi^\rho(z^n_k) = \rho$, and so $\Psi^A(z^n_k) = \rho$. Since $T$ traces $\Psi^A$, we have $\rho \in T(z^n_k)$; this is discovered by some stage $s_1 > s_0$. At stage $s_1$ we test $\rho$ on elements $z_\nu$ of $M_n$. Fix such an input $z_\nu$. We need to show that $[\rho] \cap [T(z_\nu)]$ is nonempty. At stage $s_1$, we enumerate strings into $Z_\nu$ to ensure that $[\rho] \subseteq \mathcal{Z}_\nu$. Hence $A \in \mathcal{Z}_\nu$, in other words, $z_\nu \in \operatorname{dom}\Psi^A$. Since $T$ traces $\Psi^A$, we have $\Psi^A(z_\nu) \in T(z_\nu)$. All axioms of $\Psi$ are of the form $\Psi^\tau(z) = \tau$ for binary strings $\tau$, so $\tau = \Psi^A(z_\nu)$ is an initial segment of $A$, and so is comparable with $\rho$. Then $\tau \in T(z_\nu)$ implies that $[\rho] \cap [T(z_\nu)] \ne \emptyset$.
For $n \in [o, s]$, let $\ell^n[s] = \ell^n_{k_s(n)}$ be the longest length tested at level $n$ at the end of stage $s$. Then for all $s \ge o$, $\ell^o[s] \le \ell^{o+1}[s] \le \cdots \le \ell^s[s] = s$, because if we let $\ell^n[s] = l_s(n)$ at stage $s$, then (Lemma 2.1) $l_s(n+1) \ge l_s(n)$, and so we define $\ell^{n+1}[s] = l_s(n+1)$ if this length is longer than previous lengths tested at level $n+1$. Also, since at stage $s$ we test $s = l_s(s)$ at level $s$, we see that for all $s \ge n$, $\ell^n[s] \ge n$.

We note that other than specifying $\rho^*$, the construction is uniform (in the computable index for $\langle c_s\rangle$). The reason for the nonuniform aspect of the construction is the overhead $o$ charged by the recursion theorem; if we had access to 1-boxes, the construction would be completely uniform.

Let $s \ge s_o$ and $n \ge o$. A string $\sigma$ of length $\ell^n[s]$ is $n$-believable at stage $s$ if: $\sigma$ extends $\rho^*$; and for all $m \in [o, n]$, and for all $k \le k_s(m)$, the string $\sigma\upharpoonright\ell^m_k$ is successfully tested at level $m$ by the end of stage $s$. Lemma 2.6 shows that for all $n$, the string $A\upharpoonright\ell^n[s]$ is $n$-believable at almost every stage $s$.

Let $n > o$, and suppose that there is at most one string which is $(n-1)$-believable at stage $s$. Suppose, for contradiction, that there are two strings $\tau_0$ and $\tau_1$ which are both $n$-believable at stage $s$. Then both $\tau_0\upharpoonright\ell^{n-1}[s]$ and $\tau_1\upharpoonright\ell^{n-1}[s]$ are $(n-1)$-believable at stage $s$, and so are equal. Let $k$ be the least index such that $\tau_0\upharpoonright\ell^n_k \ne \tau_1\upharpoonright\ell^n_k$. Of course $k$ exists, since $\tau_0 \ne \tau_1$ are both of length $\ell^n_{k_s(n)}$, and $\ell^n_k > \ell^{n-1}[s]$. In other words, $\ell^n_k$ is longer than any length tested at level $n-1$ at stage $s$. But then the strings $\tau_0\upharpoonright\ell^n_k$ and $\tau_1\upharpoonright\ell^n_k$ witness that there is a conflict at length $\ell^n_k$ at stage $s$, and so we would promote $\ell^n_k$ to be tested at level $n-1$ by the end of stage $s$, contradicting the assumption that $\ell^n_k$ is not tested at level $n-1$ at stage $s$.

We can now define the computable approximation for $A$. We define a computable sequence of stages: the stage $s_o$ has been defined above; we may assume that $s_o \ge o + 1$.
For $t > o$, given $s_{t-1}$, we define $s_t$ to be the least stage $s > s_{t-1}$ at which there is a $t$-believable string $\sigma_t$. So $s_{t-1} \ge t$. We let $A_t = \sigma_t{}^\frown 0^\omega$. The fact that $A\upharpoonright\ell^n[s]$ is $n$-believable at almost every stage (and that $\ell^n[s] \ge n$) implies that $\lim_t A_t = A$.

For $t > o$, let $x_t$ be the least number $x$ such that $A_t(x) \ne A_{t-1}(x)$. It remains to show that $\sum_{t > o} c_t(x_t)$ is finite. For all $n > 0$, let $S_n = \{t > o : c_t(x_t) > 2^{-n}\}$. Then the finiteness of $\sum_t c_t(x_t)$ will follow from any polynomial bound on $|S_n|$. Let $n \ge o$, and let $t \in S_n$. Let $s = s_{t-1}$, and let $s' = s_t$. Since $t \le s$ and $\langle c_s\rangle$ is monotone, we have $c_s(x_t) \ge c_t(x_t) > 2^{-n}$. Since $c_s(l_s(n)) \le 2^{-n}$ (Lemma 2.1), and the function $c_s$ is non-increasing, we have $x_t < l_s(n)$. So $A_t\upharpoonright l_s(n) \ne A_{t-1}\upharpoonright l_s(n)$.

Suppose that $t > n$. Then the strings $\sigma_t$ and $\sigma_{t-1}$ are at least as long as $\ell^{t-1}[s]$, which is not smaller than $\ell^n[s]$, which in turn is not smaller than $l_s(n)$, by the instruction for testing $l_s(n)$ at level $n$ at stage $s$ if it is a new large length. So we actually have $\sigma_t\upharpoonright\ell^n[s] \ne \sigma_{t-1}\upharpoonright\ell^n[s]$. Let $m \le n$ be the least such that $\sigma_t\upharpoonright\ell^m[s] \ne \sigma_{t-1}\upharpoonright\ell^m[s]$; since both $\sigma_t$ and $\sigma_{t-1}$ extend $\rho^*$ we have $m > o$. Let $k \le k_s(m)$ be the least such that $\sigma_t\upharpoonright\ell^m_k \ne \sigma_{t-1}\upharpoonright\ell^m_k$; the minimality of $m$ implies that $\ell^m_k > \ell^{m-1}[s]$. Let $\tau_0 = \sigma_{t-1}\upharpoonright\ell^m_k$ and $\tau_1 = \sigma_t\upharpoonright\ell^m_k$. So $\tau_0$ and $\tau_1$ are distinct. Since $\sigma_{t-1}$ is $(t-1)$-believable at stage $s$, and $m \le n \le t-1$, the string $\tau_0$ is successfully tested at level $m$ by stage $s$, and similarly, $\tau_1$ is successfully tested at level $m$ by stage $s'$. Thus there is a conflict at length $\ell^m_k$ at stage $s'$, which implies that $\ell^{m-1}[s'] \ge \ell^m_k$. We observed that $\ell^m_k > \ell^{m-1}[s]$, and so there is no conflict at length $\ell^m_k$ at stage $s$. This means that if $t$ and $u$ are two stages in $S_n$, and $u > t > n$, then there is some $m \le n$ and some length $\ell = \ell^m_k$ at which there is no conflict at stage $s_{t-1}$ but there is a conflict at stage $s_t \le s_{u-1}$.
Lemma 2.2 states this can happen, for each m, at most m − 1 times, and so overall, there are at most (n choose 2) many stages greater than n in S_n; that is, |S_n| ≤ n + (n choose 2). This gives a polynomial bound on |S_n| and completes the proof.

3. A c.e. set computing a given set

In this section we give a proof of Theorem 1.13: we construct a benign cost function c such that for any Δ⁰₂ set A obeying c, there is a c.e. set W computing A which obeys all cost functions that A obeys.

3.1. A simplification. Even though the cost function c works for any Δ⁰₂ set A, we may assume that we are given a particular computable approximation ⟨A_s⟩ to a Δ⁰₂ set A which obeys c, and define c using the approximation. To see why this seemingly circular construction is in fact legal, we enumerate as ⟨⟨A^k_s⟩_{s∈ω}⟩_{k∈ω} all partial sequences of uniformly computable functions; we think of ⟨A^k_s⟩_{s∈ω} as the k-th potential computable approximation for a Δ⁰₂ set A^k. For each k ∈ ω, we define a benign cost function c^k, together with a monotone approximation ⟨c^k_s⟩ for c^k and a computable function g^k which together witness that c^k is benign; all of these, uniformly in k. The important dictum is: even if ⟨A^k_s⟩ is not total, we must make ⟨c^k_s⟩ and g^k total. We ensure that c^k(x) ≤ 1 for all x and k. Once these are constructed, we let c = Σ_{k∈ω} 2^{−k} c^k.

Lemma 3.1. c is a benign cost function.

Proof. For s ∈ ω let c_s(x) = Σ_{k≤s} 2^{−k} c^k_s(x). Then ⟨c_s⟩ is a monotone approximation of c. For benignity, the point is that since c^k ≤ 1, only finitely many c^k can contribute more than ε to c. We note that for all k, if I is a set of disjoint intervals [x, s) of ω such that for all [x, s) ∈ I we have c^k_s(x) ≥ ε, then |I| ≤ g^k(ε). Fix ε > 0, and let m_1(ε), m_2(ε), · · ·, m_{k(ε)}(ε) be the sequence of markers associated with ⟨c_s⟩. It follows that for some k ≤ K we have c^k_s(x) ≥ ε/4. Hence k(ε) is bounded by Σ_{k≤K} g^k(ε/4), which is a computable bound on k(ε).

Suppose that a Δ⁰₂ set A obeys c.
By Nies's result from [24], which is repeated as …

In the sequel, we omit the index k; we assume that we are given a partial sequence ⟨A_s⟩ and construct total ⟨c_s⟩ and g with the desired property. Although we have to make ⟨c_s⟩ and g total and c_s bounded by 1, regardless of the partiality of ⟨A_s⟩, we note that unless ⟨A_s⟩ is total and is a computable approximation for a set A with total cost Σ_s c_s(A_s) ≤ 2^k, the construction of W need not be total.

3.2. More on cost functions. Given the approximation ⟨A_s⟩ for A, we need to test whether A obeys a given cost function d, with a given approximation ⟨d_s⟩. But of course it is possible that Σ_s d_s(A_s) is infinite, while some other approximation for A witnesses that A obeys d. Any other approximation can be compared with the given approximation ⟨A_s⟩, and so it suffices to examine a speed-up of the given approximation. Further, it suffices to test cost functions bounded by 1. This is all ensured by the following proposition. A version of this proposition appears in [24], but as we give it in slightly different form, we give a full proof here for completeness.

Let ⟨e_s⟩ be a monotone approximation of d and ⟨B̂_s⟩ be a computable approximation of B such that Σ_s e_s(B̂_s) is finite. We define increasing sequences ⟨t_s⟩ of stages and ⟨x_s⟩ of numbers (lengths) as follows. We let t_{−1} = x_{−1} = 1. For s ≥ 0, given t_{s−1} and x_{s−1}, we search for a pair (t, x) such that t > t_{s−1}, x > x_{s−1}, and: …; for all y ≥ x_{s−1}, 2e_t(y) ≥ d_t(y). Such a pair (t, x) exists because lim e_s = lim d_s, lim B̂_s = lim B_s, and lim_x d(x) = 0 (the limit condition for d). We let (t_s, x_s) be the least such pair that we find. We note that for all s ≥ 0, t_{s−1} ≥ s. We now let h(s) = t_s + 1 for all s ≥ 0. We claim that Σ_s d_s(B_{h(s)}) ≤ 2 + 2·Σ_s e_s(B̂_s), which is finite. For let s > 0, and let y_s be the least number y such that B_{h(s)}(y) ≠ B_{h(s−1)}(y); so d_s(B_{h(s)}) = d_s(y_s). There are two cases.
If y_s < x_{s−1}, then d_{t_{s−1}}(y_s) ≤ 2^{−s}, and by monotonicity, since t_{s−1} ≥ s, we have d_s(y_s) ≤ 2^{−s}. In the second case, we have y_s < x_s, x_{s+1}, and so B_{h(s−1)}(y_s) = B_{t_s}(y_s) = B̂_{t_s}(y_s) and B_{h(s)}(y_s) = B̂_{t_{s+1}}(y_s). So B̂ changes at or below y_s between stages t_s and t_{s+1}: there are z ≤ y_s and a stage t with t_s ≤ t < t_{s+1} such that B̂_{t+1}(z) ≠ B̂_t(z). Since s ≤ t_s, we have d_s(y_s) ≤ d_{t_s}(y_s). Since y_s ≥ x_{s−1}, we have d_{t_s}(y_s) ≤ 2e_{t_s}(y_s). Again by monotonicity, we have e_{t_s}(y_s) ≤ e_t(y_s). And since z ≤ y_s, we have e_t(y_s) ≤ e_t(z). Overall, we get d_s(y_s) ≤ 2e_t(z), and e_t(z) is a summand in Σ_s e_s(B̂_s), which is counted only against s, since the intervals [t_s, t_{s+1}) for distinct s are pairwise disjoint.

To ensure that the set W obeys every cost function that A obeys, we need to monitor all possible cost functions. So we need to list them: we need to show that they are uniformly Δ⁰₂, indeed with uniformly computable monotone approximations. This cannot be done effectively, because the limit condition cannot be determined in a Δ⁰₂ fashion. However, we will not need the limit condition during the construction, only during the verification, and so we list monotone cost functions which possibly fail the limit condition.

Proof. The idea is delaying. In this proof we do not assume that cost functions satisfy the limit condition, but we do assume that they are total. We need to show that given a partial uniformly computable sequence ⟨d_s⟩ we can produce, uniformly, a total monotone approximation ⟨d̂_s⟩ of a cost function d̂ such that if ⟨d_s⟩ is a monotone approximation of a cost function d bounded by 1, then d̂ = d. To do this, while keeping monotonicity, for every s ∈ ω we let t(s) ≤ s be the greatest t ≤ s such that after calculating for s steps, we see d_u(x) converge for all pairs (u, x) such that u ≤ t and x ≤ t, each value d_u(x) is bounded by 1, and the array ⟨d_u(x)⟩_{u,x≤t} is monotone (non-increasing in x and non-decreasing in u). We let d̂_s(x) = d_{t(s)}(x) for all x ≤ t(s), and d̂_s(x) = 0 for all x > t(s).

3.3. Discussion.
Returning to our construction, recall that we are given a partial approximation ⟨A_s⟩ and a constant k, and need to produce a (total) monotone approximation ⟨c_s⟩ of a cost function c and a computable function g witnessing that c is benign; and we need to ensure that if ⟨A_s⟩ is a total approximation of a Δ⁰₂ set A and Σ_s c_s(A_s) ≤ 2^k, then there is a c.e. set W computing A which obeys every cost function that A obeys.

The main tool we use is that of a change set. For any computable approximation ⟨B_s⟩ of a Δ⁰₂ set B, the associated change set W(⟨B_s⟩) consists of the pairs (x, n) such that there are at least n many stages s such that B_{s+1}(x) ≠ B_s(x). The obvious enumeration ⟨W_s⟩ of W enumerates a pair (x, n) into W_s if there are at least n many stages t < s such that B_{t+1}(x) ≠ B_t(x). It is immediate that the change set is c.e. and computes B. It is also not hard to show that for any monotone approximation ⟨d_s⟩ for a cost function we have

Σ_s d_s(W_s) ≤ Σ_s d_s(B_s),

and so if ⟨B_s⟩ witnesses that B obeys d = lim d_s, then ⟨W_s⟩ witnesses that W obeys d as well. Nies used this argument to show that every K-trivial set is computable from a c.e. K-trivial set. Thus if A = lim A_s (if it exists) obeys some cost function d, we immediately get a c.e. set computing A which also obeys d.

The difficulty arises when we consider more than one cost function. The point is that different cost functions obeyed by B would require faster enumerations of B, and the associated change sets may have distinct Turing degrees. In general, it is not the case that the change set for a given enumeration of a Δ⁰₂ set B would obey all cost functions obeyed by B. For an extreme example, it is not difficult to devise a computable approximation of the empty set for which the associated change set is Turing complete.
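The change-set construction just described can be sketched over a finite prefix of an approximation (an illustrative toy, not the paper's infinitary definition; the variable names are ours):

```python
def change_set(approx):
    """Given finitely many stages B_0, ..., B_m of a computable
    approximation (each a list of bits), return the pairs (x, n) such
    that B_{s+1}(x) != B_s(x) for at least n stages s seen so far."""
    changes = {}  # x -> number of changes of B(x) observed so far
    W = set()
    for prev, cur in zip(approx, approx[1:]):
        for x, (old, new) in enumerate(zip(prev, cur)):
            if old != new:
                changes[x] = changes.get(x, 0) + 1
                W.add((x, changes[x]))
    return W

# An approximation of the empty set that changes position 0 twice:
# the change set records both the change and its undoing.
stages = [[0, 0], [1, 0], [0, 0]]
print(sorted(change_set(stages)))  # [(0, 1), (0, 2)]
```

The final value of B(x) is B_0(x) flipped once per recorded change, which is why the change set computes B; and because every change is recorded, a change of B(x) that is later undone is still paid for, which is exactly the inefficiency that the speed-up f below is introduced to control.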
The point is that a faster approximation of a Δ⁰₂ set may undo changes to some input B(x), whereas the change set for the original approximation must record the change to B(x) (and also its undoing), and must pay costs associated with such recordings. The idea of our construction is to let W be the change set of some speed-up of the approximation ⟨A_s⟩. We define an increasing partial computable function f. If ⟨A_s⟩ is total, approximates A, and Σ_s c_s(A_s) ≤ 2^k, then f will be total, and we will let W be the change set of the approximation ⟨A_{f(s)}⟩. Roughly, the role of f is to ensure that not too many undone changes in some A(x) are recorded by W and the associated costs paid.

To be more precise, we discuss our requirements in detail. Let ⟨d^i⟩_{i∈ω} be a list of cost functions (possibly failing the limit condition) bounded by 1, as given by Lemma 3.3 (with associated approximations ⟨d^i_s⟩), and let ⟨h^j⟩_{j∈ω} be an effective list of all partial computable functions whose domain is an initial segment of ω and which are strictly increasing on their domain. To save indices (we are into the whole brevity thing), we renumber the list of pairs …

It is important to note that our action at stage u+1 to assuage requirement S_e does not require us to wait until we see v = f(h_e(t+1)); it allows us to keep defining c (and f) even if h_e is partial. The catch is that we used the values of A_u and A_{u+1} in order to define c_{u+1}. Our commitment to make ⟨c_s⟩ total even if ⟨A_s⟩ is not means that our definition of ⟨c_s⟩ must be quicker than the unfolding of the values of ⟨A_s⟩. For s ∈ ω, let s̄ be the greatest number below s such that A_u(x) has converged by stage s for all u, x ≤ s̄. Usually, s̄ will be much smaller than s. At stage s we need to define c_s, but can read the values of ⟨A_u⟩ only for u ≤ s̄. This is where the function f comes into play.
The speed-up of the approximation of A that it allows us to define can be used to prevent unwanted elements from entering W if A changes back. We return to the situation above, this time with f growing quickly, but still with r_e = h_e. Suppose that n = h_e(t) and u = f(n), and s is a stage with s̄ ≥ u + 1. We see that A_{u+1}(z) ≠ A_u(z), and so at stage s we see that we would have liked to define c_{u+1}(z) ≥ d^e_t(z). But s is much greater than u+1; at stage u+1, we were not aware of this situation, and so kept c_{u+1}(z) small. At stage s we would like to rectify the situation by defining f(n+1) = s and c_s(z) ≥ d^e_t(z). Let v = f(h_e(t+1)), which is presumably greater than s. We now have two possibilities:

If A_s(z) = A_u(z), that is, A(z) changed back from its value at stage u+1, then the change in A(z) between stages u and u+1 need not be recorded in W. In this case, W pays no cost related to z, and so we do not need to charge anything to anyone.

Otherwise, the change in A(z) from u to u+1 persists at stage s, and is recorded in W, which pays roughly d^e_t(z) toward the sum Σ_t d^e_t(A_{(f∘h_e)(t)}). If A_v(z) = A_u(z), then A(z) must have changed at some stage after stage s, and so the cost can be charged to the sum Σ_s c_s(A_s).

All is well, except that we did not consider yet another commitment of ours, which is to make c benign (and in fact, to make the bound g uniformly computable from the index k for the partial approximation ⟨A_s⟩). The idea is to again charge increases in c_s(z) to either the sum Σ_s c_s(A_s) or the sums Σ_t d^e_t(A_{(f∘h_e)(t)}). That is, in the scenario above, before defining c_s(z) ≥ d^e_t(z), we would like to have evidence that A_s(z) ≠ A_u(z), so the cost would actually be paid by one of the sums. To avoid this seeming circularity, we "drip feed" cost in tiny yet increasing steps.
In the scenario above, at stage s_0 ≥ s, we would increase c_{s_0}(z) by a little bit (not all the way up to d^e_t(z)) and wait for a stage s_1 > s_0 at which we see what A_{s_0}(z) is (that is, for a stage s_1 such that s̄_1 ≥ s_0). If A_{s_0}(z) ≠ A_u(z) then we can let f(n+1) = s_0. We increased c_{s_0}(z) by something comparable to c_{u+1}(z), and the change in A(z) between stage u+1 and stage s shows that this amount was added to Σ_s c_s(A_s). If A_{s_0}(z) = A_u(z), then we increase c_{s_1}(z) again (we can double it), but again not necessarily all the way up to d^e_t(z), and repeat, while delaying the definition of f(n+1). Also, since there are infinitely many requirements S_e, we have to scale our target, so that only finitely many such requirements affect the ε-increases in ⟨c_s⟩; that is, instead of a target of d^e_t(z), we look for c(z) to reach 2^{−(e+1)} d^e_t(z).

The last ingredient in the proof is the function r_e; we have not yet explained why we need r_e to provide an even faster speed-up of ⟨A_s⟩, compared with f∘h_e. Now the point is that as slow as the definition of f is, the function h_e shows its values even more slowly. After all, even if ⟨A_s⟩ and f are total, many functions h_e are not. In the scenario above, there may be several stages added to the range of f before we see that h_e(t) = n. This means that in trying to define f(n+1), as above, we may suddenly see more requirements S_e worrying about more inputs z, as more stages enter the range of f∘h_e. The argument regarding the scenario above breaks down if the stage v = f(h_e(t+1)) is not greater than the stage s. We use the function r_e to mitigate this problem. To keep our accounting straight, we need to make the range of r_e contained in the range of h_e (otherwise we might introduce more changes which we will not be able to charge to the sum Σ_t d^e_t(A_{(f∘h_e)(t)})). In our scenario above, we now assume that n = r_e(t), so that n is in the range of r_e.
The key now is that by delaying the definition of r_e(t), we may assume that A(z) does not change between stage u = f(n) and the last stage currently in the range of f; we use here the assumption that ⟨A_s⟩ indeed converges to A. And so the strategy above can work, because even though we declared new values of f beyond u, at the time we declare that n ∈ range r_e, we see that these new values would not spoil the application of our basic strategy.

3.4. Construction. Let ⟨A_s⟩ be a uniformly computable sequence of partial functions, and let k be a constant. As mentioned above, for all s ∈ ω, we let s̄ be the greatest number below s such that for all x and u bounded by s̄, A_u(x) converges at stage s. We define a uniformly computable sequence ⟨c_s⟩. We start with c_0(z) = 2^{−z} for all z ∈ ω. At every stage s, we measure our approximation for Σ_u c_u(A_u); this, of course, would be the sum of the costs c_u(x_u), where u, x_u ≤ s̄ and x_u is the least x such that A_u(x) ≠ A_{u−1}(x). If at stage s our current approximation for this sum exceeds 2^k, we halt the construction, and let c = c_s. Otherwise, we let ŝ be the greatest stage before stage s such that c_ŝ ≠ c_{ŝ−1}. So c_s = c_ŝ. Stage ŝ − 1 is the last stage before s at which we took some action toward assuaging the fears of various requirements S_e, which is a step toward defining a new value of f.

By the beginning of stage s > 0, the function f is defined (and increasing) on inputs 0, 1, . . . , m_s; we start with f(0) = 0. We will ensure that f(m_s) < s. For all e ∈ ω, we define a function r_e. To begin with, r_e is defined nowhere. Once we see that h_e(0)↓, say at stage s_e, we define r_e(0) = h_e(0).

(1) either A_{u_i}(z_i) ≠ A_{f(r_{e_i}(t_i))}(z_i), whereas we had A_{s_i}(z_i) = A_{f(r_{e_i}(t_i))}(z_i); or

(2) case (1) fails, and c_{u_i}(z_i) ≥ δ_{e_i} d^{e_i}_{t_i}(z_i), whereas we had c_{s_i}(z_i) < δ_{e_i} d^{e_i}_{t_i}(z_i).

Let I be the set of stages s_i (where i ∈ {2, . . .
, k(ε) − 1}) for which case (1) holds … as calculated at stage max J(e). Since the requirement S_e is still active at that stage, this sum is bounded by 1. We have d^e_{t_i}(z_i) ≥ ε/(2δ_e) ≥ ε (Equation (3.1), using c_{s_i}(z_i) ≥ δ_e d^e_{t_i}(z_i)). So the number of such t is bounded by 2N. In the second case, there is a stage w ∈ (u_i, v] such that A_w(z_i) ≠ A_{w−1}(z_i). Because u_i ≥ s_i, we have c_v(z_i) ≥ c_{s_i+1}(z_i) ≥ ε, so an amount of at least ε is added to the sum Σ_s c_s(A_s) as calculated at stage s_{k(ε)}, which as described above, is bounded by 2^k. The contribution of stages s_i in distinct J(e, t) is counted disjointly, because if s_j ∈ J(e, t′) for t′ > t then, as t′ ≤ t^e_{s_j}, we have s_j > v, so w ∈ (s_i, s_j). So the number of such t is bounded by 2^k N. Overall, the number of t such that J(e, t) is nonempty is bounded by 1 + 2N + 2^k N, as required.

We now assume that the sequence ⟨A_s⟩ is total and converges to a limit A, and that Σ_s c_s(A_s) ≤ 2^k. The construction is never halted.

Lemma 3.6. The function f is total.

Proof. Suppose, for contradiction, that f is not total; at some stage s* we define the last value m* = m_{s*+1} on which f is defined, and for all s > s* we have m_s = m*. No function r_e is extended after stage s*, so for all e ∈ ω, the value t^e_s for all s > s* is fixed. Because ⟨A_s⟩ is total, the function s ↦ s̄ is unbounded. So there are infinitely many stages s > s* for which s̄ ≥ ŝ; let T be the collection of these stages. By assumption, at each stage s ∈ T there is some number z ≤ m* about which some requirement S_e (for e ≤ m*) worries at that stage. For e ≤ m* and z ≤ m*, let T(e, z) be the collection of stages s ∈ T at which S_e worries about z. There are some e ≤ m* and z ≤ m* such that T(e, z) is infinite. Let t = t^e_s for s ∈ T. At each stage s ∈ T(e, z) we have c_s(z) < δ_e d^e_t(z). At stage s we define c_{s+1}(z) ≥ 2c_s(y) for some y ≤ z, and c_s(y) ≥ c_s(z). We note that c_s(z) > 0 because c_0(z) = 2^{−z}. This quickly (i.e.
in z steps) leads to a contradiction. We let W be the change set of A f ÔnÕ . Then W computes A. It remains to show that every requirement S e is met. Fix e ω. Suppose that h e is total, and that d e t ÔA Ôf¥h e ÕÔtÕ Õ 1. Proof. Suppose, for contradiction, that r e is not total; a final value t ¦ is added to dom r e at some stage s 0 , so t e s t ¦ for all s s 0 . We note that S e is active at every stage. Let s 1 s 0 be a stage such that for all s s 1 , A s t ¦ 1 A t ¦ 1. Let k ω such that f Ôh e ÔkÕÕ s 1 . By Lemma 3.6, there is some stage s s 1 such that m s 1 m s and m s h e ÔkÕ. Then at stage s we are instructed to define r e Ôt ¦ 1Õ, a contradiction. The following lemma concludes the proof of Theorem 1.13. Lemma 3.8. The sum d e t ÔW Ôf¥r e ÕÔtÕ Õ is finite. Proof. Let s ¦ be the stage at which t e s is first defined, that is, the stage at which we define r e Ô0Õ. For t s ¦ , let y t be the least number such that W f Ôr e ÔtÕÕ Ôy t Õ W f Ôr e Ôt¡1ÕÕ Ôy t Õ. We need to show that t s ¦ d e t Ôy t Õ is finite. Let t s ¦ . We may assume that y t t, for otherwise d e t Ôy t Õ 0. Let y t Ôz t , kÕ for some k ω. So z t y t (using the standard pairing function), and A f ÔnÕ Ôz t Õ A f Ôn¡1Õ Ôz t Õ for some n È Ôr e Ôt ¡ 1Õ, r e ÔtÕ×. Taking the least such n, we have A f Ôn¡1Õ Ôz t Õ A f Ôr e Ôt¡1ÕÕ Ôz t Õ and so A f ÔnÕ Ôz t Õ A f Ôr e Ôt¡1ÕÕ Ôz t Õ. Now there are two possibilities: either A f ÔnÕ Ôz t Õ A f Ôr e ÔtÕÕ Ôz t Õ, or not. In the first case, we have A f Ôr e ÔtÕÕ Ôz t Õ A f Ôr e Ôt¡1ÕÕ Ôz t Õ. Since both r e Ôt ¡ 1Õ and r e ÔtÕ belong to the range of h e , and r e ÔxÕ h e ÔxÕ for all x, we see that there is some x t such that A f Ôh e ÔxÕÕ Ôz t Õ A f Ôh e Ôx¡1ÕÕ Ôz t Õ. This means that an amount of at least d e x Ôz t Õ d e t Ôz t Õ is added to the sum d e t ÔA Ôf¥h e ÕÔtÕ Õ, at a stage x such that h e ÔxÕ È Ôr e Ôt ¡ 1Õ, r e ÔtÕ×. Thus the charges for distinct such stages t
Ethical responsibility and computational design: bespoke surgical tools as an instructive case study Computational design uses artificial intelligence (AI) to optimise designs towards user-determined goals. When combined with 3D printing, it is possible to develop and construct physical products in a wide range of geometries and materials and encapsulating a range of functionality, with minimal input from human designers. One potential application is the development of bespoke surgical tools, whereby computational design optimises a tool's morphology for a specific patient's anatomy and the requirements of the surgical procedure to improve surgical outcomes. This emerging application of AI and 3D printing provides an opportunity to examine whether new technologies affect the ethical responsibilities of those operating in high-consequence domains such as healthcare. This research draws on stakeholder interviews to examine how a range of different professions involved in the design, production, and adoption of computationally designed surgical tools identify and attribute responsibility within the different stages of a computationally designed tool's development and deployment. Those interviewed included surgeons and radiologists, fabricators experienced with 3D printing, computational designers, healthcare regulators, bioethicists, and patient advocates. Based on our findings, we identify additional responsibilities that surround the process of creating and using these tools. Additionally, the responsibilities of most professional stakeholders are not limited to individual stages of the tool design and deployment process, and the close collaboration between stakeholders at various stages of the process suggests that collective ethical responsibility may be appropriate in these cases.
The role responsibilities of the stakeholders involved in developing the process to create computationally designed tools also change as the technology moves from research and development (R&D) to approved use.

Computational design uses artificial intelligence (AI) to algorithmically design a variety of artefacts, ranging from software to physical products. The combination of additive manufacturing (for example, 3D printing) and computational design is already being explored as a means of developing patient-specific surgical tools (Desai et al., 2019). 3D printing is used for a variety of medical applications, including creating organ models to plan surgery, and fabricating permanent implants (Ahangar et al., 2019; Tuomi et al., 2014; Yan et al., 2018). Computational design allows surgical tools to be optimised for the patient's anatomy (Geng & Bidanda, 2021). Such tools are already being developed to assist clinicians in performing knee arthroscopies (Razjigaev et al., 2019) and laparoscopies (Brecht et al., 2020). For example, flexible laparoscopic shafts and instruments may be designed using computational design to optimise the size of the accessible workspace within a specific patient for the instruments, minimise their geometric dimensions and maximise the manipulability of the tools in the accessible workspace (Brecht et al., 2020). A snake-like manipulator attachment for a surgical robot may be computationally designed to optimise its dexterity in particular orientations for operating within a specific patient (Razjigaev et al., 2019). These tools may be called patient-specific medical instruments or bespoke surgical tools.
Incorporating AI into the design process raises questions about the responsibilities of those involved in designing, producing, and using these tools. AI and machine learning (ML) systems that develop their own models to process input into output create the possibility of a 'responsibility gap', as the system's designer did not create the model and may be unable to predict the outputs from the model the AI develops (Matthias, 2004). This is especially true of AI and ML systems that incorporate evolutionary algorithms, which are a key method of computational design and a powerful means of optimising industrial designs (Eiben & Smith, 2015; Matthias, 2004). This capability for unanticipated designs means there is a possibility, however small, that the system may produce a design with harmful characteristics that the system's designers could not predict. This possibility creates uncertainty about responsibility for the design of the tools themselves.

Here we consider how computational design may affect the responsibilities of stakeholders who are involved in the broader process of design, production, adoption and implementation of computationally designed products. We extend our focus on the nature of responsibility beyond the computational system designer to examine how the design of such products and their production, regulation and use by other stakeholders are perceived and may also be affected. In this research, we focus on the emerging use of computational design to produce bespoke surgical tools as a case study for exploring this question. Bespoke surgical tools are a useful example as they are computationally designed, 3D printed products with clear, high-stakes consequences for failure.
The structure of this paper is as follows. After briefly describing the relevant features of computational design, 3D printing, stakeholders and responsibility, we describe our interviews with representatives of relevant stakeholder groups on how they attribute responsibility within the process of creating and using bespoke surgical tools. We present our findings and discuss the major themes identified in the participants' responses. Our findings indicate that most stakeholders have responsibilities across multiple stages of the process of creating and using bespoke surgical tools, and that the process should also include deciding whether to use a bespoke surgical tool and evaluating the tool's effectiveness by tracking patient outcomes after surgical use. We also find that collective ethical responsibility may be appropriate for parts of the process that require close collaboration between stakeholders, even if the process should emphasise the individual responsibilities of the stakeholders involved to prevent uncertainty about responsibility if the stakeholders' role responsibilities are contested.

Computational design and 3D printing

Computational design employs AI to automate some or all of a design process that human designers would otherwise perform. Forms of computational design include parametric design and generative design. Parametric design uses changeable parameters and rules that define the limits and relationships between parameters to define a design space where designs may be created by changing the parameters (Caetano et al., 2020). Generative design uses algorithms that process input data to produce a design iteratively until the design produced meets the user-defined selection criteria (Caetano et al., 2020). Generative design systems (particularly those incorporating evolutionary algorithms) have the potential to produce surprising and unexpected results (Caetano et al., 2020; Lehmann et al., 2020).
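A minimal sketch of that generative loop, under our own assumptions (a toy parameter vector and a made-up objective; this is not any real computational-design system, and all names here are ours):

```python
import random

def generative_design(score, initial, target, max_iters=10_000, seed=0):
    """Minimal (1+1) evolutionary loop: randomly mutate the current
    candidate design, keep the mutant when it scores at least as well,
    and stop once the user-defined selection criterion is met."""
    rng = random.Random(seed)
    design, best = list(initial), score(initial)
    for _ in range(max_iters):
        if best >= target:  # user-defined selection criterion
            break
        mutant = [p + rng.gauss(0, 0.1) for p in design]
        if score(mutant) >= best:
            design, best = mutant, score(mutant)
    return design, best

# Hypothetical objective: two tool parameters should approach (1.0, 2.0).
objective = lambda d: -((d[0] - 1.0) ** 2 + (d[1] - 2.0) ** 2)
design, best = generative_design(objective, [0.0, 0.0], target=-0.01)
```

In a real system the score would encode surgical requirements such as workspace accessibility and tool manipulability, and the candidate would be a tool geometry rather than two numbers; the randomness in the mutation step is what gives generative systems their capacity for surprising, unanticipated designs.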
The basic process for creating and using 3D printing for clinical applications can be described as four stages: pre-printing, printing, post-printing, and application (Geng & Bidanda, 2021).¹ The pre-printing stage includes the diagnosis that motivates creating a 3D printed object for clinical use, the medical imaging and patient scans that affect the characteristics of the object, the design and customisation of a CAD (computer-aided design) model of the object, the choice of materials and 3D printing method, the simulation of the necessary characteristics of the object to ensure it is fit for purpose, and the export of the CAD model into the necessary format for the 3D printer (Geng & Bidanda, 2021). The printing stage covers the actual creation of the physical object, while the post-printing stage covers the post-processing necessary to clean and remove excess material from the object so that it is fit for purpose. For clinical applications, this also includes sterilising the object (Geng & Bidanda, 2021; Salmi, 2021). Finally, the application stage covers the clinical use of the object.

To highlight the important features of the process of creating and using bespoke surgical tools, the pre-print stage may be divided into 'scan' and 'design' stages, and the print and post-print stages combined into a 'fabrication' stage. The scan stage involves scanning the patient and creating medical images. The design stage covers the conversion of the medical images into a 3D model that is used as input for the computational design system, and the operation of the computational design system itself. The computational design system runs multiple times until the tool design is optimised for the requirements of the surgery and the features of the patient's scan. The print and post-print stages are combined into a fabrication stage, which covers the 3D printing process and the post-processing of the 3D printed tool to make it suitable for clinical use. The application or use stage is the use of the bespoke surgical tool in surgery. Figure 1 illustrates this process.

¹ Salmi (2021) presents a five-stage process for creating 3D printed medical devices that divides the pre-print stage into medical imaging/3D scanning, scan segmentation, and 3D modelling stages. The remaining two stages of Salmi's model correspond to the print and post-print (or post-processing) stages of Geng and Bidanda's model, as Salmi's model only covers creating the device rather than creation and use.

Bespoke surgical tools are just one of a variety of medical applications for 3D printing, such as creating anatomical models for planning surgery and creating implants and prostheses (Ahangar et al., 2019; Geng & Bidanda, 2021). Surgical tools have specific requirements that may not be shared by other medical applications. Anatomical models, while also based on patient scans, do not come into contact with the patient's body. Implants may also be customised for specific patients, but unlike surgical tools, implants are integrated into the patient's body and need to withstand the stresses of the patient's movements without harming them. Bespoke surgical tools are single-use, will only be in contact with the patient's body during the surgical procedure, and must be durable enough to perform their surgical function.

Stakeholders and responsibility

Stakeholders are persons or groups connected with an action or process. They may participate in the action or process, be able to influence it through defining or enforcing policies, or be affected by the actions or process. Most of the stakeholders involved in the process of creating and using computationally designed products will have role responsibilities within it. Role responsibilities in this process are duties that accompany having a particular professional role (Hart, 2008). Other stakeholders in this process, such as patients, are affected by the actions of those with professional role responsibilities.
The responsibilities that accompany a role may be ethical, legal, or merely descriptive (Hart, 2008). Role responsibility itself is a descriptive conception of responsibility, as is causal responsibility for an event or action taking place (van de Poel, 2011). Normative conceptions of responsibility can be further classified as backward-looking if they relate to actions in the past (such as accountability, blameworthiness, and liability), or forward-looking if they refer to the potential to be held responsible in the future (such as obligation and the virtue of being responsible) (van de Poel & Fahlquist, 2013).

The ethical responsibilities of stakeholders that we consider in this research are accountability, blameworthiness, and obligation. Accountability is the duty to give an account of one's actions or inactions, blameworthiness is the duty to accept ethical condemnation for wrongdoing, and obligation is the duty to perform actions and accept responsibility for them in the future. For example, surgeons have ethical responsibilities towards the patients they treat, which are to improve the patient's condition through planned surgical interventions that are in the patient's best interests (Cook, 1980). They are accountable for their treatment of their patients, blameworthy for any wrongdoing in their work, and are obliged to ensure that their treatments are effective and that they maintain their skills and professional knowledge to ensure that they can make well-informed decisions for their patients (Cook, 1980). We will focus on the ethical responsibilities that accompany the roles of stakeholders in the process of creating and using 3D printed surgical tools. Ethical responsibility does not necessarily correspond with legal responsibility or liability (Hart, 2008). Someone may be ethically blameworthy in how they performed their role without necessarily being legally liable for it. We will not be considering legal responsibility (such as liability) in this
paper.

Introducing AI into the computational design process raises questions of how this may affect the ethical responsibilities of those involved, and whether existing attributions of responsibility need to be modified to reflect the use of AI in designing the surgical tool. Traditionally, designers and engineers understand how the products they create operate and can predict how they will operate when in use through testing (Porter et al., 2018). For ML systems (such as the forms of computational design that would be used to design bespoke surgical tools), how the system interprets input data and produces output data may change (Porter et al., 2018). As the evolutionary algorithms used in generative design depend on processes that incorporate randomness, there is the possibility that the system will produce unpredictable outputs. Identifying the stakeholders and their role responsibilities is one approach for determining how ethical responsibility should be assigned for errors when AI is used in health care (Schiff & Borenstein, 2019). Such stakeholders may include designers, medical device companies, clinicians, and hospitals (Schiff & Borenstein, 2019).

Methods

This research uses the example of 3D printed, computationally designed bespoke surgical tools as a case study to explore the nature of responsibility for computationally designed products. The case study approach is appropriate for developing in-depth knowledge and understanding of particular research contexts with the goal of gaining insights that may be generalisable to (or instructive for) other similar contexts (Hadorn, 2017; Yin, 2018). This case study is an exemplar of how 3D printing and computational design may be used in high consequence clinical situations. It was conducted via qualitative interviews with 21 representatives of the stakeholder groups who were identified as most likely to have a role in a process for creating and using bespoke surgical tools.
Participant selection

We identified key stakeholder groups for this research through initial discussions with a research team developing 3D printed bespoke surgical tools using computational design. The identified stakeholder groups include surgeons, radiologists, computational system designers, fabricators involved in 3D printing, patients, and regulators. Medical insurers were also identified, but no representatives of this group responded to requests to participate in this research.

This research used a purposive sampling approach, where coverage is determined by the range of stakeholders represented by the sample group (Patton, 2015). Potential participants from the relevant stakeholder groups were identified through an online search for organisations operating in the medical 3D printing field, surgeons with experience of 3D printing, patient advocacy groups, professional organisations representing stakeholders, relevant regulatory agencies, and researchers working in computational design. The search for participants was limited to Australia to maintain a common legal and regulatory background to their responses. Potential participants were invited by email to take part. A distinction worth noting is that while patients are direct stakeholders in the process with responsibilities pertaining to their provision of informed consent, they do not have professional role responsibilities in the process. The responses from the patient representative are integrated into the analysis of the professional responsibilities within the system.
Snowball sampling was used to identify additional participants for the case study and to ensure independent and diverse views were represented (Singleton & Straits, 2005). Bioethicists were identified through this method as an additional stakeholder group for inclusion in the research. While bioethicists are not stakeholders in the process, their perspective is useful for understanding the ethical impact of new clinical technologies. A total of 21 semi-structured interviews were conducted between August and November 2020. Table 1 presents the distribution of research participants by stakeholder group.

Data collection

We used qualitative interviews in this research to draw on the experience and expertise of these stakeholders in the contexts where bespoke surgical tools may be applied, and to gain a range of perspectives on responsibility (Hoepfl, 1997). The interviews were semi-structured with an interview guide (Sankar & Jones, 2008). Semi-structured interviews allow for follow-up questions inspired by participant responses and for rearranging the order of questions if the participant raises the topic of a planned question at an earlier point in the interview. These aspects of semi-structured interviews allow the interviewer to gain richer information about the participant's perspective by giving the interviewer greater leeway in engaging with the participant's responses to questions. A simple diagram illustrating the various stages in the creation and use of bespoke surgical tools was developed for use in the interviews (see Fig. 2). The diagram was based on the initial discussions with the research team developing a computational design system for creating bespoke surgical tools using 3D printing. The diagram identifies two specific stakeholders (radiologists and surgeons) within specific stages of the process (scan and use, respectively). The patient is situated as the beginning and end point of the process.
Figure 2 was shared with participants before the interview. The interviewer introduced the diagram to participants during the interview along with a verbal description of the process read from the prepared interview guide to ensure consistency across interviews. If it became apparent during the interview that the participant had misinterpreted this description (such as understanding the 3D printed item to be a surgical implant rather than a tool), the interviewer redirected them towards considering surgical tools. This was necessary because surgical implants, while another application of medical 3D printing, have a different set of requirements (such as biocompatibility and durability) compared to single-use surgical tools (Ahangar et al., 2019). The interviewer's description emphasised that the diagram was a simple representation and that the interviewer was seeking the participant's observations on its accuracy and completeness. The participant was asked if there were any omissions in the process, where they would situate themselves within the system, and about the responsibilities different stakeholders would have in this process.
Alongside these responses, the interview questions broadly covered: (i) the participant's current role and level of familiarity with the use of robotics and bespoke surgical tools; (ii) their comments on the key steps and processes involved in designing, manufacturing, and using bespoke surgical tools; (iii) their assessment of the roles and responsibilities of themselves and others in this context; and (iv) any potential risks in the process and how such risks would be appropriately mitigated. In this study, all interviews were conducted via telephone or video call. Each interview took approximately 29 min and, with permission, was audio recorded and transcribed for analysis. Informed consent was sought and obtained for each interview. The transcripts were thematically analysed by the lead author and presented for discussion with co-researchers (Braun & Clarke, 2006). Themes and subthemes were recorded in a spreadsheet (Microsoft Excel 365), with the participant number, interview question, transcript page and line number, and the relevant quote from the transcript recorded. Duplicate and overlapping themes and subthemes were merged during analysis by the lead author in consultation with co-researchers.

Results

We identified four major themes in the data: (1) stakeholders have responsibilities across multiple process stages; (2) stakeholder responsibilities change as a computationally designed tool moves from R&D to adoption; (3) some responsibilities are shared between stakeholders; and (4) some stakeholders have additional responsibilities beyond the creation and use stages. The observations of the stakeholders interviewed for this research are identified in parentheses in the presentation of the results.
Stakeholder responsibilities exist across multiple process stages

The description given to participants explicitly identified only two specific stakeholders (other than the patient), radiologists and surgeons, and these stakeholders were presented alongside specific process stages (scan and use, respectively). Nonetheless, participants identified a range of additional stakeholders, including both individual professional roles and organisations, as having responsibilities that extended across multiple process stages. The responsibilities are backward-looking if the stakeholder is actively involved in the process at that stage, and forward-looking if their responsibilities affect stages that follow their active involvement. In terms of ethical responsibility, stakeholders are accountable for their actions in the stages where they participate in the process, and blameworthy for any negligence in performing their role responsibilities. Responsibilities that follow the stakeholder's actions in the process are obligations.

The additional stakeholders with roles in the creation and use of bespoke surgical tools that we identified in the interviews are listed in Table 2, along with the process stages where participants attributed responsibilities to them. The specific responsibilities of each stakeholder group are described in further detail below, with the professional roles of the participants who expressed these views identified in brackets.
Designers

Designers are the creators and maintainers of the computational design system that interprets the data collected from patient scans and uses it to create a surgical tool design optimised for the patient and the intended surgical procedure. As the computational design system itself is only directly involved within the design stage of the process, the responsibilities attributed to these stakeholders fall largely within this stage. Designers are responsible for the computational design system itself (Designers 1 and 4) and for its configuration (Designers 2 and 3). Participants also attributed to designers responsibilities to understand the scan inputs used by the system (Designer 2) and to mitigate any risks arising from using computational design to design the surgical tool (Bioethicist 1). Designer 2 also attributed responsibility for design failure and for the tool's suitability for its intended purpose to the designer. Designers would therefore be accountable for these responsibilities and blameworthy for negligence in fulfilling them.

However, the responsibilities of designers are not limited to the design stage. One participant noted that designers are responsible for supplying a clear definition of the data required during the scan stage (Designer 3). Like their responsibilities in the design stage, designers are also accountable for their definition of the data required and blameworthy for any negligence in completing this definition. Some responsibilities for the tool itself during the use stage were also attributed to the designer. Several participants (Designers 1 and 4, Regulator 1, and Surgeon 2) attributed responsibility to the designer for mechanical failure of the tool during its use in surgery, while Designer 2 also attributed responsibility for material failure of the tool during surgery to the designer. These responsibilities are obligations for the designer.
Fabricators

Most of the responsibilities attributed to fabricators (those operating 3D printers) are unsurprisingly related to the fabrication stage. Fabricators are responsible for fabricating the tool (Surgeon 2) and for the post-processing necessary to complete it (Fabricator 1), for identifying designs able to be 3D printed (Designer 3), and for the materials used for 3D printing (Surgeon 4 and Designer 1). They are also responsible for tool quality (Patient 1) and any mechanical failure of the tool (Radiologist 1) during the fabrication stage. Fabricators are also expected to be aware of the relevant regulatory requirements for surgical tools and clinical uses of 3D printing (Regulator 2), and to be responsible for tool strength and specification (Regulator 1). Fabricators are accountable for these responsibilities and blameworthy for failing to adequately fulfil them.

In addition to their fabrication stage responsibilities, fabricators were also identified as responsible for any material (Designer 2) and mechanical (Radiologist 1, Regulator 1, and Surgeon 2) failures of the 3D printed tool during the use stage. Like the designer's responsibilities in the use stage, these are obligations for the fabricator.

Hospitals/Medical institutions

Hospitals and medical institutions were identified as being responsible for maintaining the equipment used during the scan and use stages (Bioethicist 1). If the 3D printing of the tools occurs on-site, the hospital would also be responsible for creating the tool (Fabricator 4) and for oversight of the 3D printing process (Bioethicist 2). The hospital or medical institution employing clinical staff who use tools created using this process is also responsible for training them in the use of bespoke surgical tools (Bioethicist 3). These responsibilities are obligations for the hospital or medical institution.
Radiologists

Unlike the other stakeholders identified in this process, the identified responsibilities of radiologists are confined to a single stage (the scan stage). Their responsibilities are performing the necessary patient scans (Designer 4 and Fabricator 2), interpreting these scans (Bioethicist 1 and Surgeon 2), verifying that the scans are suitable for use (Fabricator 2), and mitigating the inherent risks of performing medical scans on patients (Bioethicist 1). As these responsibilities occur during the stage where the radiologist is an active participant in the process, they are accountable for them and blameworthy if they are negligent in performing them.

Regulators

Regulators were identified as having responsibilities within every process stage. Approval of the process by the relevant regulatory body for medical devices (such as the Therapeutic Goods Administration (TGA) in Australia) would be necessary (Fabricator 5, Regulator 2, and Surgeon 1). Regulatory approval would also be necessary at each stage of the process (Surgeon 1). This includes the scanning method used in the scan stage (Fabricator 5 and Surgeon 1), the computational design system (Regulator 1 and Surgeon 1) used in the design stage, and the materials used for 3D printing the tool (Bioethicists 2 and 3). Regulators were also identified as having a role in the use stage through the regulation of medical devices (Fabricator 3). Like hospitals, regulators are not active participants in the process, so these responsibilities are obligations.
Surgeons

Surgeons were attributed a range of responsibilities across various stages of the process. The surgeon is responsible for interpreting (Fabricator 2) and approving (Fabricator 5 and Surgeon 1) the patient scan acquired by the radiologist during the scan stage. The surgeon is also responsible for approving the tool after fabrication (Designer 4, Fabricator 5, and Surgeon 3), and for design faults in the tool (Designer 2). The surgeon is also responsible for the patient during surgery (Bioethicist 2 and Surgeon 1), for the outcome of the surgical procedure (Designer 1 and Surgeon 2), for any unintended damage to the patient that occurs during surgery (Surgeon 4), and for mitigating any risks of surgery (Bioethicist 1). As the surgeon is an active participant in the process, the surgeon is accountable for how they perform these responsibilities and blameworthy if they are negligent in performing them.

Surgical colleges

Surgical colleges are professional organisations that accredit and represent surgeons. They are involved in surgical education, in developing best practice guidelines, and in defining codes of conduct for their members (Whyte, 2019). These organisations were identified as having a role in the process by influencing and defining the standards expected of their members, and by representing their members' interests to governments, regulators, and other groups (Bioethicist 1 and Surgeon 2). Surgical colleges would play a role by auditing the creation and use process as being suitable for use by their members (Bioethicist 1). They would also contribute to establishing processes and responsibilities for surgeons within the use stage (Surgeon 2). Like hospitals and regulators, surgical colleges are not active participants in the process, so these responsibilities are obligations.
Changes in role responsibilities from R&D through to adoption

A second theme we identified in the data is that the role responsibilities of stakeholders (and the stakeholders themselves) will change as the process for creating and using bespoke surgical tools moves from R&D through initial trials and into wider adoption. The initial R&D is likely to be performed by a closely knit team (Bioethicist 3), which would consist of designers, surgeons, and fabricators.

During the initial R&D of the computational design system, the system's designers are responsible for providing a clear definition of the input data required (Designer 3), configuring the system (Designers 2 and 3), mitigating the risks associated with using the system (Bioethicist 1), and for the system's designs being fit for purpose (Designer 2). The surgeons involved will guide the team in developing the process to ensure that it produces tools that will be practical for clinical use (Surgeon 1). The fabricators involved in developing the process are responsible for the materials used to produce the tools (Designer 1 and Surgeon 4), identifying viable tool designs (Designer 3), the fabrication (Surgeon 2) and post-processing of the tool (Fabricator 1), and the tool's quality (Patient 1). Fabricators are also responsible for mechanical failures of the tool that result from its manufacture (Radiologist 1) and the strength and specification of the tool (Regulator 1), and need to be aware of the relevant regulatory requirements (Regulator 2).
Hospitals, patients, and radiologists will become involved once human trials of the process begin. Hospitals will be responsible for approving the trials performed by surgeons at their sites (Bioethicist 3) and for confirming that the trials are performed in accordance with their policies on new technologies and procedures (Regulator 2). If the tools are fabricated on the hospital site, the hospital will also be responsible for oversight of 3D printing (Bioethicist 2) and for on-site 3D printing itself (Fabricator 4). The patients involved in these trials will need to give informed consent (Bioethicist 2, Patient 1, Regulator 1, and Surgeon 2). Surgeons will also need to note the effects of the new tool on existing surgical procedures (Bioethicist 3) and any differences in use compared to existing tools (Regulator 1), and the surgical team will need to be aware that the tool is experimental (Bioethicist 2).

As trials continue, regulators will be responsible for approving the process itself (Fabricator 5, Regulator 2, and Surgeon 1), as well as its individual aspects. These include the scanning process performed in the scan stage (Surgeon 1), the computational design system (Regulator 1 and Surgeon 1), the 3D printing material used to create the tool (Bioethicist 3), and approving the 3D printed tools as suitable for clinical use (Fabricator 3). Regulators will also be responsible for collecting evidence of the tool's effectiveness (Bioethicist 1). Surgical colleges will also be responsible for establishing processes and responsibilities relating to the clinical use of the process (Surgeon 2).
Stakeholder collaboration and shared responsibilities

Multiple participants emphasised the importance of collaboration between stakeholders for the overall success of the process. The most frequently mentioned collaboration was between the radiologist and surgeon (Designer 2, Radiologist 1, and Surgeons 1 and 2). This reflects existing collaborative practices between radiologists and surgeons about the scans necessary for the surgeon's requirements (Radiologist 1). The need for collaboration between the radiologist, fabricator, and surgeon was identified by Designer 3 and Surgeon 2. Other collaborations identified by participants were between the designer, fabricator, and surgeon (Fabricator 2), the designer and radiologist (Designer 2), the designer, radiologist, and surgeon (Designer 2), and between the radiologist and the fabricator (Designer 2).

However, participants also attributed some responsibilities to multiple stakeholders. Responsibility for interpreting the patient scan was attributed to both radiologists and surgeons (Bioethicist 1 and Surgeon 2). Failures in the design of the tool produced by the computational design system were identified as the responsibility of the designer and the surgeon by Designer 2. Similarly, Designer 2 also assigned responsibility for failures due to the material used for 3D printing the tool to both the designer and the fabricator. Five participants (Designers 1 and 4, Radiologist 1, Regulator 1, and Surgeon 2) attributed responsibility for mechanical failures in the tool to the designer of the computational design system and to the fabricator. How responsibility is shared between stakeholders was interpreted either as a collective (or group) responsibility (Bioethicist 1) or as individual responsibilities within a group (Designer 4 and Surgeon 2).
Responsibilities beyond the creation and use stages

Finally, some of the responsibilities participants identified did not easily fit within the described stages of the creation and use process. Several of these responsibilities relate to the decision to adopt the process itself. Hospitals and medical institutions have new technology or process policies to assist them in adopting new technologies such as bespoke surgical tools (Regulator 2). Regulators would also play a role in decisions to use these tools by assessing the evidence of the tool's effectiveness (Bioethicist 1) and the effects of using bespoke surgical tools compared to existing ones (Fabricator 4).

For surgeons, the additional responsibilities include identifying that a patient has a condition suitable for treatment using a bespoke surgical tool (Designer 1, Patient 1, Radiologist 1, and Surgeons 2, 4 and 5), communicating the risks of surgery and of using a bespoke surgical tool to the patient (Bioethicist 2 and Regulator 1), and describing surgical alternatives and costs to the patient (Bioethicist 2). The surgeon is also responsible for seeking and obtaining informed consent from the patient for using a bespoke surgical tool (Bioethicist 2, Patient 1, Regulator 1, and Surgeon 2).
Other responsibilities occur following the tool's use in surgery. Surgeons are responsible for diagnosing the cause of any faults in the tool that emerged during use (Bioethicist 1, Fabricator 2, and Surgeon 2) and for providing feedback about improvements to the creation and use process (Surgeon 2). Collecting data about design faults, mechanical failures, and surgeon feedback is also important for identifying whether faults are occurring regularly (Bioethicist 3). The potential use of data collected during the process and patient monitoring by surgeons and regulators would also occur after the tool's use (Bioethicist 2, Fabricator 1, and Regulator 1). Information about patient outcomes would be used by the regulator to collect ongoing evidence of the tool's effectiveness (Bioethicist 2). Surgical colleges may also consult with regulators about the effectiveness of the process and the tools it creates (Bioethicist 2).

Implications and discussion

Based on the themes identified in the data, we identify three implications concerning responsibility for computationally designed products: (1) the importance of extending the creation and use process to cover responsibilities that precede and follow creating and using the tool; (2) the expansion of stakeholders' role responsibilities and their collaborations in the process; and (3) how role responsibilities change as the creation and use process moves from R&D to wider adoption.
Extensions to the creation and use process

The participants' responses identified that two additional stages are required to accurately represent the range of responsibilities in the process: a consultation stage situated before the scan stage, and a post-operative stage that follows the use stage. The consultation stage is where the decision to use a bespoke surgical tool is made and the patient provides informed consent to be treated with such a tool, and the post-operative stage covers the evaluation of the tool's safety and effectiveness following its clinical use. Figure 3 presents a revised process diagram incorporating these two additional stages.

The professional stakeholders involved in the additional consultation stage are surgeons, hospitals, and regulators. The surgeon's initial identification of a patient having a condition suitable for treatment with a bespoke surgical tool occurs in this stage. Surgeons are also responsible for communicating the risks of performing surgery and of using a bespoke surgical tool to the patient, explaining any alternatives to surgery, and for seeking and obtaining the patient's informed consent. Surgeons are accountable for fulfilling these duties, and blameworthy if they fail to do so. Failing to obtain informed consent will prevent the process from progressing further. It is the patient's responsibility to decide whether they accept surgery, and to give informed consent to the surgeon if they agree to the surgery and the use of bespoke surgical tools during the procedure.

As mentioned in the results section, hospitals and medical institutions have responsibilities to implement and follow policies for adopting new technologies, such as bespoke surgical tools. Regulators would also assess the evidence of the tool's effectiveness, and how using bespoke surgical tools compares to existing ones. These responsibilities would occur in the consultation stage.
The post-operative stage covers the data collection and evaluation that participants described in their responses to the original process diagram. This additional stage covers the diagnosis of faults in the tool by the surgeon, and their feedback on improving the creation and use process itself. Regulators are responsible for collecting data about design faults, mechanical failures, and surgeon feedback about the process. This data would be used to establish the tool's effectiveness compared to alternative surgical methods and tools, and to determine whether there are any patterns in patient outcomes that indicate a problem with the tools created using the process. Surgical colleges would also consult with regulators on these matters. The updated list of stakeholders and the revised set of stages where they bear responsibilities is presented in Table 3. The addition of these stages recognises the need to adequately account for the decisions made prior to admitting a patient for this type of treatment, and the need for ongoing monitoring after the surgery has concluded.
Expanding role responsibilities and collaborations

The process to create and use bespoke surgical tools has a significant effect on the role responsibilities of surgeons. The involvement of surgeons throughout the process has the potential to impose a significant burden on them, as their regular responsibilities for patient diagnosis and treatment now extend to supervising the design and fabrication of a surgical tool. This expansion of responsibility is due to the surgeon's responsibility for the patient's welfare. While surgeons may be willing to accept this burden if bespoke tools offer them a significant benefit in performing surgery, the process of creating and using them will need to be efficient and relatively straightforward to incorporate into existing surgical practice for it to be widely adopted. Excessive workloads and inefficient work processes are recognised factors in clinician burnout (West et al., 2018). The benefits of bespoke surgical tools will need to be considered in the context of the existing workloads of surgeons and radiologists, and of the risks to patient care and clinician health posed by any additional burden that the creation and use process will place on those involved.
The responsibilities of regulators throughout the process highlight their significance to adoption. As they are responsible both for regulating each stage of the process (medical imaging, surgical tool design, medical 3D printing, and surgical practice itself) and for the process as a whole, they also need to be involved in its development. During development, regulators serve as consultants and reviewers for the process. Once regulators approve the process, they perform the role of monitoring it in use. This monitoring would reasonably involve reviewing data on the implementation of the process, including the effectiveness of the bespoke tools, that may be provided by the hospitals and medical institutions where the process is adopted, by the developers of the computational design system, or collected by the regulators themselves. In the case of bespoke surgical tools, a blanket regulatory approval will not relieve surgeons and other stakeholders of their responsibilities in assessing the safety and effectiveness of each tool manufactured and used, thus highlighting the importance of ongoing monitoring.

Collaboration between stakeholders is also necessary for the process to be effective. The frequent presence of radiologists as collaborators (with either designers, fabricators, or surgeons) highlights their importance to the process, despite their having responsibilities in only one stage (the scan stage). The significance of collaborations highlights how stakeholders are interconnected throughout the process and suggests that there are also collaborative moments across the system that may contribute to collective responsibility arrangements. Some stakeholders may share specific responsibilities, or a stakeholder's role in the process may depend on the work completed by a stakeholder active in an earlier stage, and these interdependencies should be made explicit.
As mentioned in the results, the participants interpreted shared responsibility either as a collective (or group) responsibility or as individual responsibilities within a group. These interpretations imply collective ethical responsibility (where all group members are equally ethically responsible for the group's actions) or individual ethical responsibilities within a group (Ludwig, 2020). The first interpretation places significant burdens on all the stakeholders involved in the process: negligence by any stakeholder would make all the stakeholders ethically blameworthy, even if they were not active participants in the process. This interpretation may be made less demanding by refining the collective responsibility to only those stakeholders who are active in completing specific responsibilities within the process. For example, a radiologist and a surgeon collaborating on completing the scan stage of the process may share collective responsibility for that stage. The surgeon involved in this process may share responsibilities with other stakeholders (such as the fabricator, for instance) throughout the process. Any given stakeholder would share collective responsibility for the collaborations they are involved in during the process. A surgeon may share collective ethical responsibility for their collaborations with the radiologist and the fabricator, but the fabricator would not share collective ethical responsibility with the radiologist unless they collaborate directly with them during the scan stage. Instead of all the stakeholders involved in the process being considered as a group with collective ethical responsibility for the process, collective ethical responsibility exists for the series of collaborations between stakeholders.

The second interpretation resembles the concept of 'shared responsibility' for passenger safety in aviation, and for patient safety in healthcare (Sittig et al., 2018). For example, Sittig et al.
(2018) describe how responsibility for the safety of electronic health records can be shared between developers, users and healthcare organisations, and regulators. Distributing responsibility within a group of individuals raises the 'distributive question' of how ethical responsibility should be allocated between group members (Ludwig, 2020). Shared responsibility would attribute responsibility to the stakeholder best positioned to respond to problems that emerge (Sittig et al., 2018). For groups with a limited number of members (such as the process for creating and using bespoke surgical tools), the dilution principle, where the ethical responsibility of a group member "is proportional to the causal contribution of that member to the harms (or benefits)" caused by that group, is a useful guide (Ludwig, 2020). In this context, the stakeholders' responsibilities describe their contributions to the outcome of the process. For example, radiologists are ethically responsible for performing their own responsibilities and would not share any ethical responsibility for errors made by other stakeholders, even if they are collaborating with them. The two interpretations will frequently overlap in how they allocate responsibility. When they diverge (for instance, when there are collaborations between stakeholders whose contributions are significantly unequal), the second interpretation (where responsibility is proportional to a stakeholder's causal contribution to the result) should be preferred, as it offers a means of determining ethical responsibility if it is contested between stakeholders. This will encourage stakeholders to address ambiguities about the extent of their role responsibilities.
Changes in role responsibility from R&D to adoption

It became apparent from the interviews that participants identified the role responsibilities of some stakeholders differently depending on the stage of the process, ranging from R&D, through initial human trials and, finally, to regulation and adoption. The roles of designers and fabricators will change as the process is tested and the characteristics of the computational design system and the settings and materials necessary for reliably 3D printing surgical tools become clearer. The changes in role responsibilities for designers and fabricators represent the maturing of the process as it progresses from being experimental to becoming a practical option for wide adoption. While the individual designers, fabricators, and surgeons involved in developing the system would no longer be active stakeholders in the process once the system matures, they would continue to be ethically responsible for their work in developing the system. They would continue to have the backward-looking responsibilities of accountability and blameworthiness. For the forward-looking responsibility of obligation, the stakeholders developing the system may either continue to maintain it and correct any errors in the system revealed during its use, or pass on this obligation to another party able to fulfill it. For example, the R&D team may sell the system to a company that takes responsibility for supporting it.
The designer's role will change from heavy development of the computational design system during testing and clinical trials to maintaining, supporting, and updating the system to keep pace with other changes that may affect the process (such as changes in medical imaging and 3D printing). While the designer and the computational design system are prominent in the development, testing, and early deployment of the process, the computational design system will effectively replace the designer as a participant in the process as it becomes widely adopted. The computational design system itself is unable to bear ethical responsibility for the designs it creates (Douglas et al., 2021). While the designer will not be an active participant in the process once the system is widely deployed, they will continue to be accountable for the computational design system, blameworthy for any negligence in developing it, and obliged to maintain the system to ensure that it continues to be fit for purpose as circumstances change.

Fabricators will continue to have an active role in the process as it moves from development to wider adoption, as they are required to perform the 3D printing itself and the post-processing necessary to create a tool suitable for clinical use. Nonetheless, the fabricator will be able to rely on the settings and materials found to be most effective during the development of the process. The fabricator will have an obligation to revise the settings and materials recommended to those using the system if better alternatives are identified.
The role of surgeons will also change, as the surgeons involved in the process shift from those participating in the research and development team to surgeons interested in using the process to create tools for treating their patients. While surgeons will necessarily have to consider the unique characteristics of each patient and each planned operation, decisions about the range of possible tool designs and other details about the process itself will not need to be repeated for each operation.

The conception of collective ethical responsibility discussed in the previous section captures the shared responsibilities of the development team. The designer, fabricator, and surgeon working together during the R&D phase of the system's development share collective responsibility for the computational design system they develop.

Limitations and further research

A limitation of this study is the absence of medical insurers as a stakeholder group. Medical insurance plays a key role in the adoption of new medical technologies. Insurers' perspective on the responsibilities of those involved in creating and deploying this technology would be a welcome addition to this research. While clinicians, fabricators, and designers were well represented in this study, a greater representation of patient advocates and regulators would also be useful for further research.

There were few unprompted commentaries on the presence of AI within this process. No participants attributed responsibility to the computational design system for designing bespoke surgical tools. Directly asking participants about the significance of AI within this process is necessary to establish whether concern about responsibility gaps is uncommon among stakeholders, or whether responsibility gaps are more prominent in other applications (such as autonomous vehicles).
This research could be expanded by considering other application domains for computationally designed products. While the significant consequences of clinical use of computational design and 3D printing bring questions of responsibility into focus, these consequences may also imply that rigorous regulatory approval methods and testing of the design and use process have limited the risks of harmful designs being produced. This may differ in other application domains. Similarly, considering how stakeholders attribute responsibilities in other high-consequence domains (such as aviation and road vehicles) would also be useful for determining whether the responsibilities of designers, fabricators, and relevant domain stakeholders change significantly. Exploring how responsibilities are shared in other domains of technology development and use is another possible expansion of this research.

Conclusion

Bespoke surgical tools are an example of how computational design using AI and 3D printing may be used to create new products for high-consequence applications. Creating and using these tools will involve a variety of stakeholders, such as surgeons, radiologists, designers, and fabricators, who will bear responsibilities within this process. To better understand how using computational design for product design may affect the responsibilities of those who play a role in the creation and use process, we interviewed 21 representatives of stakeholder groups who would be involved in the deployment and use of bespoke surgical tools.
In this research, we found that the process needs to include role responsibilities that precede and follow the creation and use of bespoke surgical tools. The consultation stage involves surgeons, hospitals, and regulators. Surgeons are responsible for diagnosing the patient's condition and for communicating to the patient the risks of surgery and of using a bespoke surgical tool. Patients need to provide informed consent for a bespoke surgical tool to be used, and the process cannot continue past the consultation stage without patients giving this consent. Hospital policies for assessing new technologies and the responsibility of regulators to assess bespoke surgical tools would also apply in the consultation stage. The post-operative stage involves regulators, surgeons, and surgical colleges. It covers surgeons providing feedback on using the tool and diagnosing faults in the tool that occurred during use, regulators collecting data for assessing the tools' effectiveness and looking for patterns in patient outcomes, and surgical colleges consulting with regulators about the usefulness of these tools.
Apart from radiologists, stakeholders also have role responsibilities across several stages of the process. The collaborations between stakeholders throughout the process also mean that collective ethical responsibility may exist between the collaborating stakeholders involved at various process stages. Role responsibilities will also change as the process itself moves from being experimental and under development to wider adoption by surgeons. The stakeholders who comprise the R&D team behind the computational design system (designers, fabricators, and surgeons) will also share collective ethical responsibility for the system both during its development and once it is deployed, even though their roles change from developers to maintainers of the system. Finally, the role responsibilities of stakeholders (especially surgeons) are expanded by the introduction of bespoke surgical tools. Surgeons will have to perform additional tasks and collaborate with other stakeholders (such as fabricators) to use these tools. For the process of creating and using these tools to be widely adopted, it must be straightforward to incorporate into existing surgical procedures and institutional structures. Otherwise, the burdens of these expanded responsibilities would discourage stakeholders from adopting computational design systems into their practices. The ethical responsibilities of stakeholders also begin before the system is used and endure after the created tools are used in surgery.
The broader implications of this study are twofold. First, the examination of the stakeholder system in this research identified that introducing computational design at one stage of a tool development process led to the identification of a series of collaborative moments among stakeholders that lead to a type of collective or 'shared responsibility' alongside their existing professional responsibilities. This may have been emphasised in the case of bespoke surgical tools because the case study centred on patient safety, but it suggests that changes such as incorporating computational design have 'flow-on' effects within other technology development and deployment pipelines that warrant closer examination. Second, there was also evidence of expanding responsibilities among some of the stakeholders arising from process changes to the overall system. Part of this expansion of responsibilities related to the importance of being able to account for how responsibilities may change or endure 'after the fact'. For example, the transition of a technology process from R&D to approved use is generally accompanied by a shift in the regulatory requirements, and in this case the ongoing monitoring by regulators was identified as a potential change or expansion of stakeholder responsibilities that would be required. There is also a risk that the burdens of these expanded responsibilities would discourage stakeholders from adopting computational design systems into their practices. When making decisions about incorporating computational design and the associated AI and ML techniques, it is essential that responsibility is understood at both the stakeholder and system levels. It is in this way that the interconnected responsibilities of developing, adopting, using, and evaluating such technologies can be made explicit and transparent to all involved.

Fig. 1 The Design and Use Process for Bespoke Surgical Tools (3D printing stages from Geng and Bidana's (2021) model shown in grey)
Fig. 2 The bespoke surgical tool creation and use process diagram presented to participants
Fig. 3 The revised bespoke surgical tool creation and use process
Table 1 Distribution of participants by stakeholder group
Table 2 Stakeholders and stages containing their responsibilities
A deep learning system for detecting diabetic retinopathy across the disease spectrum

Retinal screening contributes to early detection of diabetic retinopathy and timely treatment. To facilitate the screening process, we develop a deep learning system, named DeepDR, that can detect early-to-late stages of diabetic retinopathy. DeepDR is trained for real-time image quality assessment, lesion detection, and grading using 466,247 fundus images from 121,342 patients with diabetes. Evaluation is performed on a local dataset with 200,136 fundus images from 52,004 patients and three external datasets with a total of 209,322 images. The areas under the receiver operating characteristic curves for detecting microaneurysms, cotton-wool spots, hard exudates, and hemorrhages are 0.901, 0.941, 0.954, and 0.967, respectively. The grading of diabetic retinopathy as mild, moderate, severe, and proliferative achieves areas under the curve of 0.943, 0.955, 0.960, and 0.972, respectively. In external validations, the areas under the curve for grading range from 0.916 to 0.970, which further supports that the system is efficient for diabetic retinopathy grading.

It is estimated that approximately 600 million people will have diabetes by 2040, with one-third expected to have diabetic retinopathy (DR), the leading cause of vision loss in working-age adults worldwide 1. Mild non-proliferative DR (NPDR) is the early stage of DR, characterized by the presence of microaneurysms. Proliferative DR (PDR) is the more advanced stage of DR and can result in severe vision loss. Regular DR screening is important so that timely treatment can be implemented to prevent vision loss 2. Early-stage intervention via glycemia and blood pressure control can slow down the progression of DR, and late-stage interventions through photocoagulation or intravitreal injection can reduce vision loss 3.
In the United Kingdom and Iceland, where systematic national DR screening has been carried out, DR is no longer the leading cause of blindness among working-age adults 4,5. Although routine DR screening is recommended by all professional societies, comprehensive DR screening is not widely performed [6][7][8][9][10], owing to challenges related to the availability of human assessors 3,11. China currently has the largest number of patients with diabetes worldwide 12. In 2016, the State Council issued the "Healthy China 2030" planning outline, which provided further guidance on the future direction of Chinese health reform 13. "Healthy China 2030" outlined the goal that all patients with diabetes will receive disease management and intervention by 2030. In China, there are about 40,000 ophthalmologists, a 1:3000 ratio to patients with diabetes. As a cost-effective preventive measure, regular retinal screening is encouraged at the community level. Task shifting is one way the public health community can address this issue head-on, so that ophthalmologists can focus on treatment rather than screening. Task shifting is the name given by the WHO to a process of delegation whereby tasks are moved, where appropriate, to less specialized health workers 14. Recent evidence has established a role for screening by healthcare workers, given prior training in grading DR 3. However, we still face the issues of the insufficiency of their training and where they are placed in the system. Thus, a diagnostic system using deep learning algorithms is required to help with DR screening. Recently, deep learning algorithms have enabled computers to learn from large datasets in a way that exceeds human capabilities in many areas [15][16][17][18]. Several deep learning algorithms with high specificity and sensitivity have been developed for the classification or detection of certain disease conditions based on medical images, including retinal images [19][20][21][22][23].
Current deep learning systems for DR screening have been predominantly focused on the identification of patients with referable DR (moderate NPDR or worse) or vision-threatening DR, meaning patients who should be referred to ophthalmologists for treatment or closer follow-up 21,22,24. However, the importance of identifying early-stage DR should not be neglected. Evidence suggests that proper intervention at an early stage to achieve optimal control of glucose, blood pressure, and lipid profiles could significantly delay the progression of DR and even reverse mild NPDR to a DR-free stage 25. In addition, the integration of these deep learning advances into DR screening is not straightforward because of certain challenges. First, there are few end-to-end and multi-task learning methods that can share the multi-scale features extracted from convolutional layers across correlated tasks and further improve the performance of DR grading based on lesion detection and segmentation, even though DR grading inherently relies on the global presence and distribution of the DR lesions 21,22,[26][27][28]. Second, despite being helpful in DR screening, there are few deep learning methods providing on-site image quality assessment with latency compatible with real-time use, which is one of the most needed additions at the primary DR screening level and would have an impact on screening delivery at the community level. Here we describe the development and validation of a deep learning-based DR screening system called DeepDR (Deep-learning Diabetic Retinopathy), a transfer learning assisted multi-task network that evaluates retinal image quality, retinal lesions, and DR grades. The system was developed using a real-world DR screening dataset consisting of 666,383 fundus images from 173,346 patients.
In addition, we annotated retinal lesions, including microaneurysms, cotton-wool spots (CWS), hard exudates, and hemorrhages, on 14,901 images, and used transfer learning 29 to enhance the lesion-aware DR grading performance. The system achieved high sensitivity and accuracy in the whole-process detection of DR from early to late stages.

Results

Data sources and network design. DeepDR was developed using the fundus images of patients with diabetes who participated in the Shanghai Integrated Diabetes Prevention and Care System (Shanghai Integration Model, SIM) between 2014 and 2017 (Supplementary Table 1). A total of 666,383 fundus images from 173,346 patients with diabetes with complete fundus examination records were enrolled in this study. Two retinal photographs (macular and optic disc centered) 30 were taken for each eye according to the DR screening guidelines of the World Health Organization 31. Image quality (overall gradability, artifacts, clarity, and field), DR grades (non-DR, mild NPDR, moderate NPDR, severe NPDR, or PDR), and diabetic macular edema (DME) were labeled for each image. In addition, 14,901 images were labeled with retinal lesions, including microaneurysms, CWS, hard exudates, and hemorrhages. Among the 173,346 subjects in the SIM cohort (referred to as the local dataset in this study), 121,342 subjects (70%) were randomly selected as the training set, and the remaining 52,004 subjects (30%) served as the local validation set (Fig. 1). In the SIM cohort, each subject was enrolled only once and was recorded with a unique resident ID, so data separation was guaranteed between the training and local validation datasets. The prevalence of DR in the study cohorts is shown in Table 1. In the training dataset, 12.85% of images had DR, among which 27.94% were mild NPDR. In the local validation dataset of 200,136 images, 12.99% of images had DR, among which 27.30% were mild NPDR.
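The patient-level separation described above (a 70%/30% split over unique resident IDs, so that all images from one patient land on the same side of the split) can be sketched as follows; this is a minimal illustration, not the authors' code, and the function name and seed are hypothetical.

```python
import random

def patient_level_split(patient_ids, train_fraction=0.7, seed=0):
    """Split unique patient IDs (not images) into training and
    validation sets, guaranteeing that no patient's images appear
    in both sets."""
    unique_ids = sorted(set(patient_ids))
    random.Random(seed).shuffle(unique_ids)
    cut = int(len(unique_ids) * train_fraction)
    return set(unique_ids[:cut]), set(unique_ids[cut:])
```

Splitting by patient rather than by image is what prevents leakage: two photographs of the same eye are highly correlated, so an image-level split would inflate validation metrics.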
The DeepDR system consisted of three deep-learning sub-networks: an image quality assessment sub-network, a lesion-aware sub-network, and a DR grading sub-network (Fig. 2). All 466,247 images in the training dataset were used to train the image quality assessment sub-network to make a binary classification of whether the image was gradable and to recognize quality issues in terms of artifacts, clarity, and field problems of the retinal images; 415,139 images without quality issues were used to train the DR grading sub-network to classify the images into non-DR, mild NPDR, moderate NPDR, severe NPDR, or PDR, and to make a binary classification of whether there was DME. The lesion-aware sub-network was trained using 10,280 images labeled with retinal lesions to achieve detection and segmentation of microaneurysms, CWS, hard exudates, and hemorrhages. As shown in Fig. 2, our DeepDR system was designed as a transfer learning assisted multi-task network. Specifically, a DR base network was first pre-trained on ImageNet classification and then fine-tuned on our DR grading task using 415,139 retinal images. Next, we utilized transfer learning 32 to transfer the DR base network to the three sub-networks of the DeepDR system, rather than directly training randomly initialized sub-networks. During transfer learning, we fixed the pre-trained weights in the lower layers of the DR base network and retrained the weights of its upper layers using backpropagation. This process worked well because the features were suited to all the DR-related learning tasks (evaluating image quality, lesion analysis, and DR grading). Furthermore, we concatenated the lesion features extracted by the segmentation module of the lesion-aware sub-network with the features extracted by the DR grading sub-network to enhance grading performance. To prevent the network from overfitting, an early stopping criterion 33 was used to determine the optimized number of iterations.
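The transfer and feature-fusion steps above can be sketched schematically. This is a minimal, framework-free illustration of the two ideas (freeze the lower pre-trained layers, then concatenate lesion features with grading features), not the authors' implementation; the layer and feature representations are placeholders.

```python
def transfer(base_layers, n_frozen):
    """Copy pre-trained base-network layers into a new sub-network,
    freezing the lowest n_frozen layers and leaving the upper layers
    trainable (to be retrained via backpropagation)."""
    sub_network = [dict(layer) for layer in base_layers]  # copy weights
    for i, layer in enumerate(sub_network):
        layer["trainable"] = i >= n_frozen
    return sub_network

def fuse_features(lesion_features, grading_features):
    """Concatenate lesion-segmentation features with DR-grading
    features before the final grading head."""
    return lesion_features + grading_features  # list concatenation
```

In a real framework the same pattern corresponds to setting the lower layers' parameters as non-trainable and concatenating feature tensors along the channel dimension.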
For every task, we randomly split the training dataset into two parts: 80% of the data were used to train the network and the rest were used for early stopping. The network was tested on the early stopping dataset every epoch during training, and the performance of the network was recorded. If the area under the receiver operating characteristic curve (AUC) or intersection over union (IoU) increment was less than 0.001 for 5 epochs continuously, we stopped training and selected the best model as the final model.

Performance of the DeepDR system. The image quality assessment sub-network for assessing overall image quality and identifying artifacts, clarity, and field definition problems was tested using 200,136 images in the local validation dataset. DeepDR achieved an AUC of 0.934 (0.929-0.938) for overall image quality. For the identification of artifacts, clarity, and field definition issues, the system achieved AUCs of 0.938 (0.932-0.943), 0.920 (0.914-0.926), and 0.968 (0.962-0.973), respectively. The lesion-aware sub-network was evaluated using 4621 gradable images with retinal lesion annotations from the local validation dataset. The results are shown in Fig. 3 (Fig. 3B). To facilitate usability in clinical settings, a clinical report could be automatically generated for each patient (example report shown in Supplementary Fig. 1). This report showed the original fundus images with highlighted lesions and described the type and location of the retinal lesions along with DR gradings. In addition, we conducted an experiment to evaluate the utility of the lesion-aware sub-network by measuring its effect on the grading accuracy of trained primary healthcare workers from community health service centers. The detailed study design is described in the Supplementary Information (Section "Supplementary Methods"). The results were tested using a one-sided, two-sample Wilcoxon signed rank test and are shown in Table 2.
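The early stopping rule described above (stop once the monitored metric has improved by less than 0.001 over 5 consecutive epochs) can be expressed as a small helper. This is one plausible reading of the criterion, shown as a sketch; the function name and exact comparison are assumptions.

```python
def should_stop(metric_history, min_delta=0.001, patience=5):
    """Return True when the monitored metric (e.g. AUC or IoU) has
    improved by less than min_delta over the last `patience` epochs,
    compared with the best value seen before that window."""
    if len(metric_history) <= patience:
        return False
    best_recent = max(metric_history[-patience:])
    best_before = max(metric_history[:-patience])
    return best_recent - best_before < min_delta
```

In use, the validation metric is appended after each epoch and training halts when `should_stop` first returns True, with the best checkpoint kept as the final model.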
The sensitivities of all DR grades and the specificity of severe DR were significantly improved with the aid of the DeepDR system. This suggested that visual hints of retinal lesions significantly improved the diagnostic accuracy of the primary healthcare workers, which can facilitate the task shifting of DR screening. The DeepDR system achieved whole-process diagnosis of DR from early to late stages based on accurate detection of retinal lesions, and was especially accurate for microaneurysms. In the local validation dataset, 178,907 gradable images were used to test the DR grading sub-network, and the results are shown in Table 3. For the two images per eye, our DR grading sub-network made a separate prediction per image, and we then accepted the more severe DR grade obtained from those images as the grading result for that eye, which was used to calculate the AUC of DR grades. The average AUC was 0.955 for DR grading. In particular, for mild NPDR, the AUC, sensitivity, and specificity were 0.

External validation. To test the generalization of the system, we further evaluated the performance of DeepDR using two independent real-world cohorts and the publicly accessible dataset EyePACS for external validation.

Real-time image quality feedback. We employed DeepDR to provide real-time image quality feedback during the non-mydriatic retinal photography of 1294 elderly subjects from the NDSP cohorts (age over 65 years). Two retinal photographs (macular and optic disc centered) were taken of each eye. If DeepDR determined the quality of the first image of a field to be ungradable, a second image of the same field was recaptured. Only one more photograph was taken of each field to avoid contracted pupils due to the camera flash. The results are shown in Table 4. During this process, 5176 retinal images were initially taken from 1294 patients. Of these, 1487 images (28.7%) were recognized as low-quality with artifacts, clarity, and/or field definition issues.
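Two small protocol rules from the passage above, the per-eye grading rule (take the more severe of the two per-image grades) and the single-retake capture loop, can be sketched as follows; the helper names and the string grade labels are illustrative, not from the paper's code.

```python
GRADES = ["non-DR", "mild NPDR", "moderate NPDR", "severe NPDR", "PDR"]

def eye_grade(grade_image1, grade_image2):
    """Per-eye result: the more severe of the two per-image grades."""
    return max(grade_image1, grade_image2, key=GRADES.index)

def acquire_field(capture, is_gradable, max_retakes=1):
    """Capture one retinal field; if the quality sub-network flags the
    image as ungradable, recapture at most max_retakes more times
    (one retry in the study, to avoid contracted pupils from the
    camera flash). Returns the final image and the retake count."""
    image = capture()
    retakes = 0
    while not is_gradable(image) and retakes < max_retakes:
        image = capture()
        retakes += 1
    return image, retakes
```

The retake cap matters clinically: each flash contracts the pupil, so the protocol accepts the second image even if it is still flagged as low quality.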
Based on the feedback information, a second photograph was taken of these patients. For the 1487 initial low-quality images, 1065 (71.6%) recaptured images were of adequate quality. After replacing the low-quality images with recaptured images, the diagnostic accuracy of each grade of DR was improved. Especially for mild NPDR, the AUC increased from 0.880 (0.859-0.895) to 0.933 (0.918-0.950) (P < 0.001) and sensitivity increased from 78.5% (72.7-83.4%) to 87.6% (83.2-92.3%).

Discussion

The DeepDR system achieved high sensitivity and specificity in DR grading. Rather than just generating a DR grading, it offers visual hints that help users identify the presence and location of different lesion types. Introducing the image quality sub-network and lesion-aware sub-network into DeepDR improved the diagnostic performance and more closely followed the thought process of ophthalmologists. DeepDR can run on a standard personal computer with average-performance processors. Thus, it has great potential to improve the accessibility and efficiency of DR screening. Several previous studies using deep learning approaches have been conducted on the detection of referable or vision-threatening DR. Gulshan

Fig. 2 Visual diagram of the DeepDR system. The DeepDR system consisted of three sub-networks: image quality assessment sub-network, lesion-aware sub-network, and DR grading sub-network. We first pre-trained the ResNet to form the DR base network (top row). The trained weights of the pre-trained DR base network were then shared in the three different sub-networks of the system, indicated by the red arrow. These three sub-networks took retinal images as input and performed different tasks one-by-one. Furthermore, the lesion features extracted by the segmentation module of the lesion-aware sub-network (indicated by the green arrow) were concatenated with the features extracted by the DR grading sub-network (indicated by the blue arrow).
DR, diabetic retinopathy; NPDR, non-proliferative diabetic retinopathy; PDR, proliferative diabetic retinopathy.

DR 24. Although these studies achieved excellent accuracy, they focused only on patients with referable DR, who are then referred for specialist eye care. However, mild DR was classified as non-referable DR and was not distinguished from DR-free subjects 21,22,24. The value of detecting early DR is underestimated, as there is little evidence that ophthalmic treatments, such as photocoagulation or anti-VEGF medications, are indicated at this stage 2. Furthermore, if all cases of DR were referred to ophthalmologists, it would likely overwhelm our medical systems. However, from the perspective of diabetes management, screening for mild DR is of great clinical importance and may improve patients' outcomes. First, the identification of patients with mild DR enables health providers, such as family physicians, general practitioners, and endocrinologists, to participate in patient education and the management of blood glucose, lipid profiles, blood pressure, and other risk factors 2. Secondly, there is no known cure for advanced DR, and some of the damage caused by leakage, oxygen deprivation, and blood vessel growth is permanent 34. But there is evidence showing that optimal glycemic and blood pressure control is strongly correlated with regression from mild DR to a DR-free state 25, and intensive glycemic and lipid control reduces the rate of progression to vision-threatening DR 35. Thirdly, screening for mild DR provides valuable information for clinical decision making. Although intensive glycemic control reduces the rate of photocoagulation, it increases the risk of severe hypoglycemia and incurs additional burden by way of polypharmacy, side effects, and cost 36. The optimal glycemic target is controversial.
The American College of Physicians guideline 37 set HbA1c levels of 7-8% as the optimal target for most patients with diabetes, while the American Diabetes Association guideline 38 set the HbA1c target at 6.5-7.0%. Patients with mild DR could benefit from strict glycemic control 39. Thus, the detection of mild DR can promote personalized diabetes management.

Accurate detection of microaneurysms is still a problem for deep learning systems 40. In this study, to improve the performance of detecting specific retinal lesions and of DR grading, we introduced an efficient retinal lesion-aware sub-network based on ResNet that avoided the problem of vanishing gradients, which made it a more sensitive feature extractor for small lesions compared to other existing network architectures (e.g., VGG and Inception) 41. The lesion-aware sub-network contained a feature pyramid structure designed to capture multi-scale features and mine the relationship between lesion types and position 42. Meanwhile, transfer learning was used in our study, and the lesion-aware sub-network contained the repurposed DR base network layers that were pre-trained on a base DR grading dataset of 415,139 retinal images. This boosted the performance of learning lesion detection and segmentation through the transfer of knowledge from the DR grading task that had already been learned. As a result, the DeepDR system achieved AUCs of 0.901-0.967 for lesion detection, including microaneurysms, CWS, hard exudates, and hemorrhages. Retinal lesion detection and segmentation is of great clinical impact. Detecting different types of retinal lesions can provide guidance for clinical decision making.
For example, fenofibrate may benefit patients with hard exudates 43 , and antiplatelet drugs should be used carefully in patients with retinal bleeding 44 . More importantly, one of the major problems in DR screening is detecting change or progression, as progression of retinal lesions is indicative of developing sight-threatening DR/DME [45][46][47] . Because DR progression can be detected not only between different DR grades but even within the same grade, our lesion-aware sub-network has the potential to capture subtle progression of certain kinds of retinal lesions through follow-up of DR patients. Further studies are needed to evaluate this application in real-world clinical settings. In previous studies, deep learning systems were usually trained directly end-to-end from original fundus images to the labels of DR grades 21,22,24 ; these end-to-end systems might fail to encode the lesion features due to the black-box nature of deep learning 48 . In our study, instead of direct end-to-end training from fundus images to DR grades, an efficient lesion-aware sub-network was introduced to increase the ability to capture lesion features. Because embedding prior knowledge into end-to-end machine learning algorithms can regularize machine learning models and shrink the search space 49 , and because ophthalmologists read fundus images based on the presence of lesions, our DR grading network can leverage lesion features as prior knowledge to enhance the performance of DR grading. Previous studies, such as Michael D. Abràmoff et al.'s work 50 , used multiple CNNs to detect hemorrhages, exudates, and other lesions, and those detected lesion results were used to classify referable DR by a classic feature fusion model. In contrast, our DeepDR network was trained end-to-end with features extracted from both the lesion-aware sub-network and the original image.
In this way, our DR grading sub-network can further exploit the features to minimize the training error, thus improving grading results. As a result, DeepDR achieved a sensitivity of 88.8% and a specificity of 83.9% for mild NPDR detection on the local validation dataset. Notably, DeepDR achieved the diagnosis of all stages of DR with sufficient accuracy in real-world datasets. Despite the continuous optimization of digital fundus cameras, aging, experience, lighting, and other non-biological factors resulting from improper operation still result in a high percentage of low-quality fundus images, and reacquisition is time-consuming and sometimes impossible 51,52 . Previous studies on image quality assessment have focused on post hoc image data processing 21,22 . In this study, a real-time image quality feedback sub-network was implemented to facilitate DR screening. Based on the feedback information, artificial intelligence-assisted image quality assessment reduced the proportion of poor-quality images from 28.7% to 8.2%. Furthermore, with the improvement of image quality, the diagnostic accuracy was significantly improved, especially for mild DR. This real-time image quality feedback function allows operators to identify image quality issues immediately, so the patient does not need to be called back. It is a promising tool to reduce the ungradable rate of fundus images, thus increasing the efficiency of DR screening.

Table 3 Performance of the DeepDR system for diabetic retinopathy grading.

The limitations of this study are, firstly, the single-ethnic cohort used to develop the system. However, we used the publicly available EyePACS dataset from the United States for external validation and achieved satisfactory sensitivity and specificity. Secondly, the lesion-aware sub-network was tested only on the local validation dataset, because of the lack of lesion annotations in external cohorts.
Further external validation in multiethnic and multicenter cohorts is needed to confirm the robustness of lesion detection and DR grading of the DeepDR system. In conclusion, we developed an automated, interpretable, and validated system that performs real-time image quality feedback, retinal lesion detection, and early- to late-stage DR grading. With these functions, the DeepDR system is able to improve image collection quality, provide clinical reference, and facilitate DR screening. Further studies are needed to evaluate deep learning systems in detecting and predicting DR progression.

Methods

Ethical approval. The study was approved by the Ethics Committee of Shanghai Sixth People's Hospital and conducted in accordance with the Declaration of Helsinki. Informed consent was obtained from participants. The study was registered on the Chinese Clinical Trials Registry (ChiCTR.org.cn) under the identifier ChiCTR2000031184.

Image acquisition and reading process. In the SIM project, retinal photographs were captured using desktop retinal cameras from Canon, Topcon, and ZEISS (Supplementary Table 1). All the fundus cameras were qualified by the organizer to ensure sufficient quality for DR grading. The operators of the cameras had all received standard training, and the images were read by a central reading group consisting of 133 certified ophthalmologists. The members of the reading group underwent training by fundus specialists and passed the tests. Original retinal images were uploaded to the online platform, and the images of each eye were assigned separately to two authorized ophthalmologists. They labeled the images using an online reading platform and gave the graded diagnosis of DR (Supplementary Fig. 2). A third ophthalmologist, who served as the senior supervisor, confirmed or corrected the results when the diagnoses were contradictory. The final grading result depended on the consistency among these three ophthalmologists.
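As a minimal sketch, the two-reader-plus-supervisor adjudication described above can be expressed as a small decision function (the function and label names are hypothetical, not from the paper):

```python
def consensus_grade(grade_a, grade_b, supervisor_grade=None):
    """Return the agreed DR grade for one eye.

    grade_a, grade_b: labels from the two primary ophthalmologists.
    supervisor_grade: label from the senior supervisor, consulted only
    when the two primary readings disagree.
    """
    if grade_a == grade_b:
        # Agreement between the two primary graders is final.
        return grade_a
    if supervisor_grade is None:
        raise ValueError("disagreement: supervisor adjudication required")
    # On disagreement, the supervisor confirms or corrects.
    return supervisor_grade

# Agreement case:
assert consensus_grade("moderate NPDR", "moderate NPDR") == "moderate NPDR"
# Disagreement resolved by the supervisor:
assert consensus_grade("mild NPDR", "non-DR", supervisor_grade="mild NPDR") == "mild NPDR"
```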
At least 20% of the grading results would be randomly re-read to check consistency. The total eligibility rate of the spot-check was equal to or greater than 90%. If the reading group encountered difficult cases, they could apply for consultation from superior medical institutions. The overall disagreement rate in the SIM dataset was 18.9%. The primary cause of diagnostic divergence was the decision between mild NPDR and non-DR. For retinal lesion annotation, each fundus image was annotated by two ophthalmologists. For each type of lesion, the two ophthalmologists generated two lesion annotations, respectively. We considered the two annotations to be valid if the IoU between them was greater than 0.85. Otherwise, a senior supervisor would check the annotations and give feedback to provide guidance. The image would be re-annotated by the two ophthalmologists until the IoU was larger than 0.85. Finally, we took the union of the valid annotations as the final ground-truth segmentation annotation.

Diagnostic criteria. DR severity was graded into five levels (non-DR, mild NPDR, moderate NPDR, severe NPDR, or PDR), according to the International Clinical Diabetic Retinopathy Disease Severity Scale (AAO, October 2002) 53 . Mild NPDR was defined as the presence of microaneurysms only. Moderate NPDR was defined as more than just microaneurysms but less than severe NPDR, presenting CWS, hard exudates, and/or retinal hemorrhages. Severe NPDR was defined as any of the following: more than 20 intraretinal hemorrhages in each of the 4 quadrants; definite venous beading in 2+ quadrants; prominent intraretinal microvascular abnormalities (IRMA) in 1+ quadrant; and no signs of PDR. PDR was defined as one or more of the following: neovascularization, vitreous/preretinal hemorrhage 53 . DME was diagnosed if hard exudates were detected within 500 μm of the macular center, according to the standard of the Early Treatment Diabetic Retinopathy Study 54 .
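The IoU-based acceptance test for the double annotations can be sketched as follows (the pixel-set representation and function names are illustrative assumptions):

```python
def mask_iou(mask_a, mask_b):
    """IoU between two binary segmentation masks, each given as a set of
    (row, col) pixel coordinates."""
    a, b = set(mask_a), set(mask_b)
    union = a | b
    if not union:
        return 1.0  # two empty annotations agree trivially
    return len(a & b) / len(union)

def annotations_valid(mask_a, mask_b, threshold=0.85):
    """Two independent annotations are accepted when their IoU exceeds the
    threshold; otherwise the image goes back for re-annotation."""
    return mask_iou(mask_a, mask_b) > threshold

# 100-pixel annotation vs. a 90-pixel subset: IoU = 90/100 = 0.9 > 0.85.
a = {(r, c) for r in range(10) for c in range(10)}
b = {(r, c) for r in range(10) for c in range(9)}
assert abs(mask_iou(a, b) - 0.9) < 1e-9
assert annotations_valid(a, b)
```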
Referable DR was defined as moderate NPDR or worse, DME, or both. Based on the guidelines for image acquisition and interpretation of diabetic retinopathy screening in China 55 , image quality was graded according to standards defined in terms of three quality factors: artifact, clarity, and field definition 56 , as listed in Table 5. The total score was equal to the score for clarity plus the score for field definition minus the score for artifact. A total score less than 12 was considered ungradable.

Table 4 Impact of real-time quality feedback on DR diagnosis using the DeepDR system.

Architecture of the DeepDR system. The DeepDR system had three sub-networks: an image quality assessment sub-network, a lesion-aware sub-network, and a DR grading sub-network. These sub-networks were developed based on ResNet 41 and Mask-RCNN 57 . Both ResNet and Mask-RCNN can be divided into two parts: (1) a feature extractor, which takes images as input and outputs features, and (2) a task-specific header, which takes the features as input and generates task-specific outputs (i.e., classification or segmentation). Specifically, we chose to use Mask-RCNN and ResNet with the same feature extractor architecture, so the feature extractor of one sub-network could easily be transferred to another. The quality assessment sub-network can identify overall quality, including gradability, artifact, clarity, and field issues, for the input images. To train the image quality assessment sub-network effectively, we initialized a ResNet with weights pre-trained on ImageNet and pre-trained this ResNet to form the DR base network. We utilized the weights of the convolution layers in the pre-trained DR base network to initialize the feature extractor of the image quality assessment sub-network.
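The scoring rule stated above (total = clarity + field definition − artifact, with totals below 12 ungradable) can be expressed directly; the function names are hypothetical, and the example scores follow the per-factor scales listed in Table 5:

```python
def total_quality_score(artifact, clarity, field_definition):
    """Total score = clarity + field definition - artifact."""
    return clarity + field_definition - artifact

def is_gradable(artifact, clarity, field_definition):
    """A total score below 12 is considered ungradable."""
    return total_quality_score(artifact, clarity, field_definition) >= 12

# Clear image: Level III arch with all lesions visible (clarity 10), optic
# disc and macula within 1 papillary diameter of the center (field 10),
# no artifacts (0): total 20 -> gradable.
assert is_gradable(artifact=0, clarity=10, field_definition=10)
# Heavy artifacts (8) on a mid-quality image: 6 + 6 - 8 = 4 -> ungradable.
assert not is_gradable(artifact=8, clarity=6, field_definition=6)
```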
We assessed image quality in terms of multiple factors to determine whether: (a) the artifact covered the macular area or the area of the artifact was larger than a quadrant of the retinal image; (b) only the Level II or wider vascular arch and obvious lesions could be identified (the Level II vascular arch was defined as the veins deriving from the first bifurcation); (c) no optic disc or macula was contained in the image; and (d) the image was not gradable. The lesion-aware sub-network can generate lesion presence and lesion segmentation masks for the input images. There were two modules in our lesion-aware sub-network: one was the lesion detection module and the other was the lesion segmentation module. The lesion detection module was a binary classifier that predicted whether any kind of lesion exists in a quadrant of the retinal image, as shown in Supplementary Fig. 3. The lesion segmentation module generated mask images to identify the different lesions existing in the retinal images, as shown in Fig. 3B. We used ResNet and Mask-RCNN to form the lesion detection module and the lesion segmentation module, respectively. We then transferred the pre-trained DR base network to the lesion detection module by initializing the feature extractor of the lesion detection module with the feature extractor of the pre-trained DR base network, followed by fine-tuning the lesion detection module. We then initialized the feature extractor of the lesion segmentation module by reusing the feature extractor of the lesion detection module. The feature extractor layers of the lesion segmentation module were then fixed, and the rest of the layers of the module were updated during training. Non-maximum suppression was used in our lesion segmentation sub-module to select the bounding box with the highest objectiveness score from multiple predicted bounding boxes.
Specifically, we first selected the bounding box with the highest objectiveness score, then compared the IoU of this bounding box with the other bounding boxes and removed the bounding boxes with IoU > 0.5. Finally, we moved to the next box with the highest objectiveness score and repeated this until all boxes were either removed or selected. The DR grading sub-network can fuse features from the lesion-aware network and generate the final DR grading results. To retain as much lesion information from the original retinal image as possible, we combined the pre-trained DR base network with the feature extractor of the lesion segmentation module in order to capture more detailed lesion features for DR grading. The weights in the extractors of the DR grading sub-network were then fixed, and the classification header of the sub-network was updated during training. A transfer learning-assisted multi-task network was developed in our DeepDR architecture to improve the performance of DR grading based on lesion detection and segmentation. Because DR grading inherently relies on the global presence of retinal lesions that contain multi-scale local textures and structures, the central feature of our multi-task learning method was designed to extract multi-scale features encoding the local textures and structures of retinal lesions, with transfer learning used to improve the performance of the DR grading task. Meanwhile, we used hard-parameter sharing in the lesion-aware sub-network, and all the layers in the feature extractors of ResNet and Mask-RCNN were shared. Using hard-parameter sharing was important to reduce the risk of overfitting 58 due to the limited number of lesion segmentation labels. Besides, sharing the pre-trained weights can facilitate the training of both the lesion detection task and the lesion segmentation task. Additional experimental results, shown in Supplementary Table 3, demonstrated that hard-parameter sharing outperformed soft-parameter sharing for lesion segmentation.
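The greedy non-maximum suppression steps described at the start of this paragraph can be sketched as a generic implementation (not the paper's code; boxes are (x1, y1, x2, y2) corner coordinates):

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the box with the highest objectiveness
    score and remove the remaining boxes whose IoU with it exceeds the
    threshold. Returns the indices of the kept boxes."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
# The second box overlaps the first heavily (IoU ~ 0.68 > 0.5) and is suppressed.
assert nms(boxes, scores) == [0, 2]
```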
Recommended computer configuration. Any desktop or laptop computer with an x86-compatible CPU, 10 GB or more of free disk space, and at least 8 GB of main memory is capable of running the DeepDR system. There is no specialized hardware requirement, such as a GPU or any speed-up card, to run the software. A powerful computer with more CPU cores and a GPU will speed up the diagnosis procedure significantly, while the diagnosis time on a typical laptop (i.e., with an Intel i3 processor, no GPU, and more than 8 GB of memory) is also acceptable (less than 20 s per image).

Statistical analyses. The performance of DeepDR in assessing image quality, detecting retinal lesions, and grading DR was measured by the AUC of the receiver operating characteristic curve, generated by plotting sensitivity (the true-positive rate) versus 1-specificity (the false-positive rate). The operating thresholds for sensitivity and specificity were selected using the Youden index. The AUCs were compared using binormal model methods 59 , where a two-sided P value of less than 0.05 was considered statistically significant. For lesion detection, the AUC was calculated as a binary classification to determine whether a quadrant contained a certain kind of lesion. The performance of lesion segmentation was measured by IoU and F-score. For CWS, hard exudates, and hemorrhages, we used the IoU to measure the performance of the segmentation network. The IoU was calculated as IoU(A, B) = |A ∩ B| / |A ∪ B|, where A and B were sets of pixels in the retinal images (e.g., A was the segmented lesion and B was the ground truth). For microaneurysms, the F-score was used instead of the IoU score, because the average diameter of microaneurysms in the retinal image was usually less than 30 pixels, and a minor change in the predicted map would result in a large change in the IoU score.
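Selecting the operating threshold by the Youden index, as stated above, amounts to maximizing J = sensitivity + specificity − 1 over candidate thresholds. A small illustrative sketch (the data values are invented, not from the paper):

```python
def youden_operating_point(thresholds, sensitivities, specificities):
    """Pick the operating threshold maximizing Youden's J = se + sp - 1."""
    j = [se + sp - 1.0 for se, sp in zip(sensitivities, specificities)]
    best = max(range(len(j)), key=j.__getitem__)
    return thresholds[best], j[best]

# Hypothetical ROC operating points:
thr = [0.2, 0.4, 0.6, 0.8]
sens = [0.98, 0.92, 0.80, 0.55]
spec = [0.40, 0.75, 0.88, 0.95]
t, j = youden_operating_point(thr, sens, spec)
# J values are 0.38, 0.67, 0.68, 0.50 -> threshold 0.6 wins.
assert t == 0.6 and abs(j - 0.68) < 1e-9
```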
F-score was calculated as:

F-score = 2|tp| / (2|tp| + |fp| + |fn|),

where P was the set of all predicted microaneurysms produced by the network and G was the set of all microaneurysms annotated by ophthalmologists; tp = {p ∈ P | ∃g ∈ G, IoU(p, g) ≥ 0.5} represented the set of true-positive predictions of microaneurysms, fp = {p ∈ P | ∀g ∈ G, IoU(p, g) < 0.5} represented the set of false-positive predictions of microaneurysms, and fn = {g ∈ G | ∀p ∈ P, IoU(p, g) < 0.5} represented the set of false-negative predictions of microaneurysms. |·| represented the cardinality (size) of a set.

Table 5 Image quality specifications and scores.

Artifact:
- No artifacts: 0
- Artifacts are outside the vascular arch, with scope less than 1/4 of the image: 1
- Artifacts do not affect the macular area, with range less than 1/4: 4
- Artifacts cover more than 1/4 but less than 1/2 of the image: 6
- Artifacts cover more than 1/2 without fully covering the posterior pole: 8
- Artifacts cover the entire posterior pole: 10

Clarity:
- Only the Level I vascular arch is visible: 1
- The Level II vascular arch and a small number of lesions are visible: 4
- The Level III vascular arch and some lesions are visible: 6
- The Level III vascular arch and most lesions are visible: 8
- The Level III vascular arch and all lesions are visible: 10

Field definition:
- Does not include the optic disc and macula: 1
- Contains only either the optic disc or the macula: 4
- Contains both the optic disc and macula: 6
- The optic disc or macula is outside the 1 papillary diameter and within the 2 papillary diameter range of the center: 8
- The optic disc and macula are within 1 papillary diameter of the center: 10

The Level I vascular arch was defined as the first bifurcations of the major trunk veins; the Level II vascular arch was defined as the veins deriving from the first bifurcation; the Level III vascular arch was defined as the veins deriving from the second bifurcation. The total score was equal to the score for clarity plus the score for field definition minus the score for artifact. A total score less than 12 was considered ungradable.
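Given the tp/fp/fn set definitions above, the instance-level F-score (in the standard form 2|tp|/(2|tp| + |fp| + |fn|)) can be computed as follows; the region representation and names are illustrative:

```python
def microaneurysm_fscore(predicted, annotated, iou_fn, match_iou=0.5):
    """F-score over IoU-matched lesion instances, following the tp/fp/fn
    set definitions in the text. `predicted` and `annotated` are lists of
    lesion regions; `iou_fn(p, g)` returns the IoU of two regions."""
    tp = [p for p in predicted if any(iou_fn(p, g) >= match_iou for g in annotated)]
    fp = [p for p in predicted if all(iou_fn(p, g) < match_iou for g in annotated)]
    fn = [g for g in annotated if all(iou_fn(p, g) < match_iou for p in predicted)]
    return 2 * len(tp) / (2 * len(tp) + len(fp) + len(fn))

# Toy regions as pixel sets, with set IoU:
iou = lambda a, b: len(a & b) / len(a | b)
g1, g2 = {1, 2, 3, 4}, {10, 11, 12, 13}
p_good, p_bad = {1, 2, 3}, {100, 101}
# One matched prediction (IoU 0.75), one false positive, one missed lesion:
# F = 2*1 / (2*1 + 1 + 1) = 0.5.
assert microaneurysm_fscore([p_good, p_bad], [g1, g2], iou) == 0.5
```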
Python version 3.7.1 (Python Software Foundation, Delaware, USA) was used for all statistical analyses in this study. The following third-party Python packages were used: OpenCV version 2.4.3 (Intel Corporation, California, USA) was used for loading and decoding images. PyTorch version 1.0.1 (Facebook, Massachusetts, USA) was used for convolutional neural network computing. Scikit-learn version 0.20.0 (David Cournapeau, California, USA) was used for calculating AUC. Pandas version 0.23.4 (Wes McKinney, Connecticut, USA) was used for loading ground truth and metadata. NumPy version 1.15.4 (Travis Oliphant, Texas, USA) was used for calculating IoU and F-score.

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The export of human-related data is governed by the Ministry of Science and Technology of China (MOST) in accordance with the Regulations of the People's Republic of China on the Administration of Human Genetic Resources (State Council No. 717). Requests for the non-profit use of the fundus images and related clinical information in the SIM, NDSP, and CNDCS cohorts should be sent to corresponding author Weiping Jia. A joint application by the corresponding author together with the requester for data sharing will be generated and submitted to MOST. The data will be provided to the requester after approval from MOST. The EyePACS dataset is publicly available at https://www.kaggle.com/c/diabetic-retinopathy-detection/data. The rest of the data are available from the corresponding author upon reasonable request.
Investigation of Phase Transformation of Fe65Ni35 Alloy by the Modified Pulse Method

This paper presents the possibility of using a modified-pulse method (MPM) for determining the temperature characteristics of thermal diffusivity in order to identify phase transformations in metals. The experiment and the attempt at phase identification were conducted for the Fe65Ni35 alloy in the 20-500 °C temperature range during both sample heating and cooling. The estimated error of the discrete thermal-diffusivity measurements was less than 3%. The method allows us to narrow the averaging interval of this value, as a function of temperature, to below 1 K. A recently published analysis of the phase diagrams of Fe-Ni alloys, and the results of the authors' own research into the Fe65Ni35 alloy, showed very good correlation between the changes occurring when heating the alloy and the equilibrium diagram provided by Cacciamani G., Dinsdale A., Palumbo M., and Pasturel A. (Intermetallics 18, 2010, 1148-1162), showing the position of phases with a crystal-lattice structure based on the face-centered cubic (FCC) cell.

Introduction

Investigations of the thermal properties of alloys or materials, such as thermal diffusivity, conductivity, or expansion, play a very important role in today's world. Interesting investigations of the thermal properties of an invar-type material were conducted by Yichun Liu et al. [1]. Wang et al. [2] investigated the thermal properties of ceramic thermal-barrier coatings, using the thermal-conductivity parameter to assess the suitability of the materials for technological applications. Thermal diffusivity is an important thermodynamic property because it is suitable for predicting material behavior in many heat-transfer applications. Reza et al. [3] used this parameter in deuterium-implanted tungsten investigations. Bellucci et al. [4] used thermal diffusivity in research on graphene nanoplatelets using the pulse method.
Our work shows how to assess the thermodynamic properties of a material using thermal diffusivity with a modified-pulse method (MPM). Our first work in this area [39] concerned the study and interpretation of the temperature characteristics a(T) of the Fe61Ni39, Fe52Ni48, and Fe40Ni60 alloys in the temperature range of 20-700 °C. The next work [40] concerned the interpretation of the characteristics a(T) of the metastable Fe80Ni20 alloy, ranging from ambient temperature to about 650 °C. This work concerns the interpretation of the thermal-diffusivity characteristics a(T) of the Fe65Ni35 alloy. The terms flash method and pulse method are used interchangeably; they refer to the same method of creating a surface heat source on the front side of the sample.

Short Description of Method and Test Bench

The MPM for the measurement of thermal diffusivity was previously described in detail in [41,42]. The determination of thermal diffusivity by means of the MPM is based on the theoretical determination of the temperature distribution inside an opaque and adiabatic sample, and of the difference in temperature between its two lateral surfaces after the laser pulse is fired at the front surface. In this case, a one-dimensional model was assumed that approximated the actual heat exchange in the sample-environment system. The next step in the research was to record the temporary temperature difference between the front and back surfaces of the sample, resulting from the one-dimensional temperature-equalization process in the sample. Lastly, the results of the experiment were matched as closely as possible with one of several theoretical curves that solve the problem. The optimization parameter was the thermal diffusivity, and the value corresponding to the best fit is considered accurate.
The theoretical temperature distribution Θ(x, t) = T(x, t) − T0 in an adiabatic sample, where T0 is the thermostating temperature of the sample, is given: when the temperature inside the sample at time t = 0 is equal to T(x, 0), according to [43], by Equation (2); and, when the surface layer (0 ≤ x ≪ l) absorbs the radiation-pulse energy of surface density Q, with respect to the initial conditions, by Equation (3). Then, the temperature difference between the opposite surfaces of the sample is given by:

∆Θ(t) = Θ1(t) − Θ2(t) = 4Θ∞ Σ_{n=1,3,5,…} exp(−n²t/τ), (4)

where τ = l²/(π²a) is the characteristic time; Θ∞ = Q/(ρ c_P l) is the temperature increase of the sample after the end of the equalization of the heat-exchange process; ρ is the density; and c_P is the specific heat of the sample. Temperature changes on the front, Θ1(t) = Θ(0, t), and rear, Θ2(t) = Θ(l, t), surfaces of the test sample, and the difference ∆Θ(t) of these temperatures, are illustrated in Figure 1a. Figure 1b shows how to determine the characteristic time τ and the thermal diffusivity a(T) of the test sample.

After truncation of the infinite series in Equation (4) for n > 1, we obtain:

∆Θ′(t) = 4Θ∞ exp(−t/τ). (5)

The error of this operation was below 1% if t/τ > 0.58. In order to determine the thermal diffusivity a on the basis of records of the relevant part of the temperature difference ∆Θ′(t), the characteristic time τ should first be designated. From Equation (5), we obtain:

ln ∆Θ′(t) = ln(4Θ∞) − t/τ.

Then, the data after logarithmic transformation were approximated in a specified range [t1, t2]. Least-squares approximation was used. Parameters Θ∞ and τ were derived from the linear approximation. On the basis of the characteristic time τ and the sample thickness l, the thermal diffusivity a is calculated from:

a = l²/(π²τ).

The thermal diffusivity determines the theoretical relationship ∆Θ(t) (see Equation (4)). On the basis of this curve, we calculated the square root of the sum of squares of the differences between the recorded and theoretical values of the temperature difference between the two opposite sides of the sample, ∆Θ(t). This parameter is minimized with respect to t1, t2, and the signal shift ∆Θ′(t), in such a manner that the whole procedure is repeated with changed values of these parameters. The process is stopped when satisfactory compliance between the theoretical curve ∆Θ(t) and the recorded experimental one ∆Θ′(t) is achieved.
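Under the truncation in Equation (5), ln ∆Θ′(t) is linear in t with slope −1/τ, so a least-squares line fit over the window [t1, t2] yields τ, and a follows from a = l²/(π²τ). A minimal sketch of this fitting step on noise-free synthetic data (not the authors' code; all numbers illustrative):

```python
import math

def fit_thermal_diffusivity(times, dtheta, thickness):
    """Least-squares fit of ln(dTheta'(t)) = ln(4*Theta_inf) - t/tau over
    the recorded window, then a = l^2 / (pi^2 * tau)."""
    ys = [math.log(v) for v in dtheta]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) / \
            sum((t - tbar) ** 2 for t in times)
    tau = -1.0 / slope
    return thickness ** 2 / (math.pi ** 2 * tau)

# Synthetic single-exponential decay with tau = 0.02 s and l = 2 mm,
# sampled in the validity window t/tau > 0.58:
tau_true, l = 0.02, 2e-3
ts = [0.015 + 0.001 * i for i in range(30)]
dth = [4.0 * math.exp(-t / tau_true) for t in ts]
a = fit_thermal_diffusivity(ts, dth, l)
assert abs(a - l**2 / (math.pi**2 * tau_true)) < 1e-9
```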
Measuring signals ∆Θ′(t) and Θ2(t), which were used to determine the discrete thermal-diffusivity values a(Ti) of the tested sample at temperature Ti, were recorded using thermocouples, as shown in Figure 2a, in the form of the thermoelectric voltages ∆E_th(t) = K∆Θ(t) = K∆Θ′(t) and E_th(t) = KΘ(t) = KΘ2(t). The procedure for determining the typically unknown Seebeck coefficient k_T(T), according to [39,42], is given by: where k_T(T) is the Seebeck coefficient of the junctions "material of the sample-thermoelectric wire"; and k, the Seebeck coefficient of the junction "thermoelectric wire B-thermoelectric wire C", could be obtained from, e.g., [44].

The temperature value Ti at which the thermal diffusivity a(Ti) is determined is Ti = T0 + Θ̄. The value of a(Ti) is its average value in the temperature range Ti ± 0.5∆T (see Figure 1a). In this case, the temperature-averaging range ∆T of the thermal diffusivity a(Ti) is equal to:

Sample and Its Investigation Preparation

Data on the chemical composition, structure, dimensions, and density of the tested sample of the Fe-Ni alloy with the symbol Fe65Ni35 are presented in Table 1. Its location was marked on the phase-equilibrium diagrams of iron-nickel alloys shown in Figure 3. The phase-equilibrium diagram by Reuter et al. [22,23] and Yang et al. [25], presented in Figure 3a, considers both the results for meteorites and for alloys exposed to the electron beam, and the results of the theoretical calculations made by Chuang et al. [21].
Measurements of temperature and temperature changes of ∆Θ′( ) and Θ (t) were carried out by means of thermocouples that were electrically welded to the opposite surfaces of the investigated samples, as shown in Figure 2a. In order to measure thermoelectric-voltage difference ∆Eth(t), i.e., to measure temperature difference ∆Θ′(t), pairs of CuNi or Fe thermocouple wires were used (( Figure 2a)-thermocouple wire B was attached to the extreme surfaces of the sample). However, to measure thermoelectric voltage Eth(t), and thus temperature Θ′2(t), only on the back surface of the sample, the Fe-CuNi thermocouple (see Figure 2a-thermocouple wires B and C) was attached only to the back side of the sample. It was assumed that the temperature range of the tests would be between ambient temperature to approximately 500 °C, and the full measurement cycle would include both heating and cooling of the examined sample. Additionally, time interval Δt between subsequent discrete measurements a(Ti) and a(T(i+1)), required for changing the temperature of the examined sample from Ti to T(i+1), must be experimentally determined each time-observing the dynamics of temperature changes in the sample-thermostating T(0,i)-after setting a new value of T(0,i+1) to the automatic power-supply heater in the vacuum furnace. The next measurement ∆Θ(T(i+1)), and thus a(T(i+1)), was taken when changes dT(0,i+1)/dt reached a small previously assumed value. In [22,23] and Yang et al. [25], and (b) part of the comparison of cluster-variation and spinodal calculations with the equilibrium boundaries proposed by Chuang et al. [21] and Swartzendruber [24]. 
The justification for the adoption of the phase diagram shown in Figure 3a as the basis for the interpretation of experiment results was:
• during the measuring cycle, the sample face was not shielded from the direct interaction of laser-radiation quanta (without oxides or another coating);
• the interaction of the photons of the laser pulse with atoms and free electrons in the surface layer of the examined sample caused heat fluxes below the surface, which were created separately by these two carriers;
• each time the laser pulse impacted on the material of the tested sample, it positively influenced the ordering of the sample's structure.
Measurements of temperature T0 and temperature changes ∆Θ(t) and Θ2(t) were carried out by means of thermocouples that were electrically welded to the opposite surfaces of the investigated samples, as shown in Figure 2a. To measure thermoelectric-voltage difference ∆Eth(t), i.e., to measure temperature difference ∆Θ(t), pairs of CuNi or Fe thermocouple wires were used (Figure 2a; thermocouple wire B was attached to the extreme surfaces of the sample). However, to measure thermoelectric voltage Eth(t), and thus temperature Θ2(t), only on the back surface of the sample, the Fe-CuNi thermocouple (see Figure 2a, thermocouple wires B and C) was attached only to the back side of the sample. It was assumed that the temperature range of the tests would extend from ambient temperature to approximately 500 °C, and that the full measurement cycle would include both heating and cooling of the examined sample. Additionally, the time interval ∆t between subsequent discrete measurements a(Ti) and a(Ti+1), required for changing the temperature of the examined sample from Ti to Ti+1, had to be experimentally determined each time by observing the dynamics of temperature changes in the sample, thermostated at T0,i, after setting a new value T0,i+1 on the automatic power-supply heater in the vacuum furnace.
The next measurement ∆Θ(Ti+1), and thus a(Ti+1), was taken when the changes dT0,i+1/dt reached a small, previously assumed value. In turn, the selection of the value ∆T0 = T0,i+1 − T0,i, i.e., the step between successive thermostating temperatures of the examined sample, depends heavily on the changes in thermal diffusivity da(T)/dT of the sample material as a function of temperature. The larger the changes of this derivative are, the smaller the value ∆T0 that must be selected so as not to overlook the initial or final value of the phase-transformation point.
Results
The structural tests of the material carried out before the thermal-diffusivity tests, in particular microscopic observations by scanning electron microscopy (SEM), supported by microanalysis of the chemical composition by energy-dispersive X-ray spectroscopy (EDS) and by X-ray diffraction phase analysis, showed, for the entire alloy, a homogeneous, fine-grained morphology of the granular structure (Figure 4b,c) of nickel austenite γ(Fe, Ni) with trace amounts of ferrite α(Fe, Ni) (Figure 5a). Supersaturation was observed above 340 °C, i.e., above the eutectoid temperature; no further visible differences in the phase structure of the material were observed, as shown by XRD analysis (Figure 5a). However, it was possible to observe two grain-size populations, fine and coarse (Figure 5b,c), both with the same chemical composition. This observation suggested that the alloy's structure may form areas with different thermal stability related to the spinodal decomposition or the formation of metastable phases [21][22][23][24][25][26][31]. Nevertheless, in order to uniquely identify the phase changes taking place by means of classical metallurgical techniques, it is necessary to use high-resolution transmission electron microscopy (HRTEM) diffraction or to conduct electron-backscatter-diffraction (EBSD) testing.
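The thermostating rule described in the experimental section (the next a(Ti) reading is taken only once the drift dT0,i+1/dt has fallen below a preset value) can be sketched as follows. The sampling interval, threshold value, and `read_temperature` function are hypothetical stand-ins for illustration, not details taken from the paper.

```python
# Sketch of the thermostating wait: after a new set-point is applied, keep
# sampling the furnace temperature at a fixed interval and trigger the next
# a(T_i) measurement only once the drift dT/dt falls below a preset threshold.
# `read_temperature` is a hypothetical stand-in for the real sensor readout.

def wait_until_stable(read_temperature, sample_interval_s=60.0,
                      max_drift_k_per_s=0.01, max_polls=1000):
    """Poll until |dT/dt| drops below `max_drift_k_per_s`; return last reading."""
    t_prev = read_temperature()
    for _ in range(max_polls):
        t_now = read_temperature()  # assumed taken `sample_interval_s` apart
        if abs(t_now - t_prev) / sample_interval_s < max_drift_k_per_s:
            return t_now
        t_prev = t_now
    raise RuntimeError("temperature did not stabilize")
```

In practice the threshold would be chosen from the furnace's time constant; here it is purely illustrative.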
The results of tests of thermal-diffusivity a(Ti) changes of the Fe65Ni35 alloy in the temperature range from room temperature to approximately 500 °C are shown in Figure 6. The qualitative nature of the thermal-diffusivity changes was comparable with results obtained indirectly for the Fe64Ni36 alloy, using the dependence a = λ/(ρcP) (a is thermal diffusivity, λ is thermal conductivity, ρ is density, cP is specific heat) and data from the MPDB database [45].
Figure 6. Results of own investigation of thermal diffusivity a(Ti) for Fe65Ni35 alloy during sample heating and cooling, and results obtained indirectly for Fe64Ni36 alloy using the dependence a = λ/(ρcP) and data from the MPDB database [45].
The results of discrete tests as a function of temperature of thermal-diffusivity values a(Ti) of the Fe65Ni35 alloy by the MPM during the heating and cooling of the sample are shown in Figure 7. The experiment was carried out for the sample during one test cycle, which consisted of results a(Ti) obtained in the course of heating and cooling in the temperature range taken for the tests. During sample heating, the time interval ∆t = ti+1 − ti between consecutive discrete measurements ∆Θ(Ti) was 20 min.
During sample cooling, the time interval ∆t was determined by the time constant of the heat exchange between the sample and the heating element of the vacuum furnace, and was above 20 min. Similarly, as in [39,40], the sample of Fe65Ni35 was subjected to an examination using an external magnetic field of approximate value to identify changes in its structure as a function of temperature (ferromagnetic ↔ paramagnetic). The test was carried out on a stand identical to that in [39] during one cycle, including the heating and cooling of the tested sample. Test-bench testing was carried out on a Ni999 sample with a diameter of 12 mm and a thickness of 1.85 mm. The layout of the test bench and the results of these tests are shown in Figure 8.
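The indirect comparison values for Fe64Ni36 quoted earlier were derived from the dependence a = λ/(ρcP). A one-line helper illustrating that arithmetic; the numerical inputs below are placeholders in SI units, not values taken from the MPDB database.

```python
# Thermal diffusivity from the dependence a = λ/(ρ·c_p) used for the indirect
# Fe64Ni36 comparison. Inputs are illustrative placeholders in SI units.

def thermal_diffusivity(conductivity, density, specific_heat):
    """a [m^2/s] from λ [W/(m·K)], ρ [kg/m^3], c_p [J/(kg·K)]."""
    return conductivity / (density * specific_heat)

a = thermal_diffusivity(13.0, 8100.0, 515.0)  # ~3.1e-6 m^2/s for these inputs
```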
Comparing the TC measurement results of the tested samples using an external magnetic field with the results of other researchers available in the literature, we obtained very narrow temperature hysteresis loops for the magnetic transformations in the case of the Ni999 (TC↑ = 357.5 °C and TC↓ = 356.9 °C) and Fe65Ni35 (TC↑ = 240 °C and TC↓ = 239 °C) alloys. According to [24] and Figure 8, for Ni, TC = 354.3-360 °C, and for FeNi at 35.3 at % Ni, TC = 228-261 °C. After the initial elaboration of the discrete results of our own thermal-diffusivity tests a(Ti), shown in Figure 6, the resulting characteristics are presented in Figure 9. The determination of the temperature values of the magnetic transitions during heating (TC↑) and cooling (TC↓) of the tested sample is shown in Figure 10.
Discussion
The Fe-Ni alloy selected for testing was an alloy with the formula Fe65Ni35, whose changes in thermal diffusivity a(T) were experimentally determined in one measurement cycle during sample heating (from about 50 to 500 °C) and cooling (from approximately 400 to 100 °C). The characteristics of a(T) of this alloy, together with discrete temperature values at points (A, B, C, . . . ) where there were rapid changes in the da(T)/dT derivative, resulting from phase transformations and magnetic transformation, are shown in Figure 9. Preliminary analysis of the effects associated with the changes of a(T) shows that they were dominant during the heating of the examined alloy, Fe65Ni35.
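The characteristic points (A, B, C, . . . ) are located where the derivative da(T)/dT changes rapidly. A hedged numerical sketch of that idea, flagging temperatures where the finite-difference slope of the discrete a(Ti) series jumps well above its typical level; this illustrates the principle only, not the authors' exact procedure.

```python
# Locate candidate transformation points in a discrete a(T_i) series: compute
# finite-difference slopes da/dT and flag the interval midpoints where the
# slope magnitude exceeds `factor` times its median. Illustrative sketch, not
# the paper's algorithm; `factor` is an assumed tuning parameter.

def derivative_spikes(temps, diffusivities, factor=3.0):
    """Return temperatures where |da/dT| exceeds `factor` times its median."""
    slopes = [
        (diffusivities[i + 1] - diffusivities[i]) / (temps[i + 1] - temps[i])
        for i in range(len(temps) - 1)
    ]
    magnitudes = sorted(abs(s) for s in slopes)
    median = magnitudes[len(magnitudes) // 2]
    return [
        0.5 * (temps[i] + temps[i + 1])   # midpoint of the flagged interval
        for i, s in enumerate(slopes)
        if abs(s) > factor * median
    ]
```

On a synthetic series with one abrupt step, the function returns the midpoint of the stepped interval.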
However, the determined characteristic temperatures of changes in da(T)/dT (Figure 7), compared with the transformation temperatures described in the Fe-Ni equilibrium systems listed in Figure 3, are completely different. This led the authors to conduct a deeper analysis of the phase equilibrium, especially in the low-temperature area below 400 °C. An excellent and very thorough analysis of this issue was made by Zemtsowa in [30], which compared the results of investigations within the last 50 years concerning stable, metastable, and spinodal areas, the Curie temperature, and the extent of the occurrence of superstructures L10 and L12 for specific proportions of iron and nickel. In conclusion, it is recommended to use the Fe-Ni phase system proposed by Chamberod et al. [31] and shown in Figure 11a as the most credible one, supported by experimental data. However, the findings obtained in this work with regard to the designated temperature characteristics were not in agreement with the proposed phase diagram.
Figure 11. Investigated Fe65Ni35 alloy marked on fragments of Fe-Ni phase diagrams proposed by (a) Chamberod et al. [31], and (b) Cacciamani et al. [26], covering only phases based on a FCC crystal structure.
Another approach to clarify and validate the phase diagram of the Fe-Ni system is the use of thermodynamic calculations. Connecting thermodynamic modeling with experiment data, most importantly with atomistic calculations (making it possible to designate the thermodynamic functions of metastable states), presented by Cacciamani et al. in [26], the Fe-Ni equilibrium diagram covering phases based on the FCC crystal structure was proposed, split into areas of stable and metastable equilibrium phases, the spinodal region, and the Curie temperature (Figure 11b). Considering previously published data, and comparing them with registered values of thermal diffusivity based on the heating curve (Figure 9), it could be observed that the obtained experimental results of temperature changes were most similar to the equilibrium system presented in Figure 11b.
The temperature of the eutectoid transformation γ-(Fe,Ni)FM → α-Fe + FeNi3 (Point B, Figure 9) occurred in the temperature interval 272.3-275.6 °C, close to the 277 °C shown in Figure 11b. The eutectoid temperature had a constant value. The presented experimental data were included in a range of approximately 3 K. This effect was undoubtedly linked with the registration of thermal diffusivity at a slow yet constant temperature change of 0.5 K/min.
Very high convergence was also obtained in the case of the temperature recorded at Point C (339.8 °C) that, on a comparable equilibrium diagram, corresponded to 342 °C, at which the spinodal area of phases γ-(Fe,Ni)PM + γ-(Fe,Ni)FM lost its domain arrangement and was being rebuilt into a metastable area of variable solubility γ-(Fe,Ni)PM + γ-(Fe,Ni)FM. At 372.7 °C (Point D, Figure 9) and 380 °C, in accordance with Figure 11b, there was a monotectoid transformation of a mixture of γ-(Fe,Ni)PM + γ-(Fe,Ni)FM into the solid solution γ-(Fe,Ni)FM. However, the Curie temperature TC, both during cooling (241.4 °C) and heating (244.4 °C), although clearly differing from the value shown in Figure 11b (~260 °C), was comparable with the data shown in Figure 8, hence acknowledging and validating the possibility of applying the research technique to analyze phase transformations. The observed difference ∆TC of approximately 20 °C may be explained on the basis of research results by M.R. Gallas and J.A.H. da Jornada published in [33]. They demonstrated that the observed effect may be caused by changes in the mutual interactions of Fe-Fe and Fe-Ni in areas of variable solubility and spinodal decomposition, which affect diffusion changes. Curie-temperature changes may also be due to slight structural changes already occurring at a crystallite level, deriving from material purity and the rate of temperature changes. The significant differences in the designation of TC values in Fe-Ni alloys were also discussed in [30]. The authors observed that, for the same alloy of comparable and very high purity, achieved by melting and vacuum casting of high-purity components, despite the application of the same research techniques, TC was in two different cases first registered at 415 °C and then at 270 °C.
The authors claim that the observed effect may have been caused by the presence of interstitial elements in the alloy: e.g., at a carbon content of 0.001%, the temperature TC was equal to 470 °C, while increasing the carbon content to 1.7% resulted in a rise in TC of up to 580 °C. The most difficult feature to interpret seems to be the temperature recorded at Point A. On the basis of the analysis carried out in the available literature, the authors postulate that, at this temperature, in accordance with Figure 3b, the rebuilding of a metastable area of variable solubility γ-(Fe,Ni)PM + γ-(Fe,Ni)FM into a superstructure L12 of phase Fe3Ni occurs. However, this should be confirmed by structural research, which is not the subject of this work. Clarification is also required with regard to the thermal-diffusivity hysteresis loop of the examined Fe65Ni35 alloy, observed between the heating and cooling of the sample. This effect is presumably linked with the lack of transformations registered during cooling, which is undoubtedly connected with the phenomenon of overcooling. The high thermal stability of this material, causing structures compatible with the equilibrium system to be stable only at cooling speeds of approximately 1 K/(10^6 years) [22], results in overcooling, when cooling at a speed of 0.5 K/min, of the γ-(Fe,Ni)PM of the A1 lattice down to the temperature of approximately 200 °C, at which an intermetallic phase Fe3Ni of the L12 structure is formed.
Conclusions
The application of the modified-pulse method to identify phase transformation in the case of the Fe65Ni35 alloy during heating confirmed the theoretical results reported in the phase diagram by Cacciamani et al. [26]. This allows the use of the MPM to identify and verify phase changes occurring in the tested alloys on the basis of changes in their thermal-diffusivity characteristics, a(T).
This can be used as an additional research tool to identify and verify tests carried out by other methods; in some cases, it can be an important aid in analyzing phase transformations occurring at the domain level that are difficult to identify by classical metallurgical methods, or in identifying order-disorder and/or magnetic transformations.
Development of Epirubicin-Loaded Biocompatible Polymer PLA–PEG–PLA Nanoparticles: Synthesis, Characterization, Stability, and In Vitro Anticancerous Assessment
Epirubicin (EPI) is an anti-cancerous chemotherapeutic drug that is an effective epimer of doxorubicin with less cardiotoxicity. Although EPI has fewer side effects than its analog, doxorubicin, this study aims to develop EPI nanoparticles as an improved formulation of the conventional treatment of EPI in its free form. Methods: In this study, EPI-loaded polymeric nanoparticles (EPI-NPs) were prepared by the double emulsion method using a biocompatible poly(lactide)–poly(ethylene glycol)–poly(lactide) (PLA–PEG–PLA) polymer. The physicochemical properties of the EPI-NPs were determined by dynamic light scattering (DLS), transmission electron microscopy (TEM), differential scanning calorimetry (DSC), entrapment efficiency, and stability studies. The effect of EPI-NPs on cancer cells was determined by high-throughput imaging and flow cytometry. Results: The synthesis process resulted in monodisperse EPI-NPs with a size of 166.93 ± 1.40 nm and an elevated encapsulation efficiency (EE) of 88.3%. In addition, TEM images revealed the spherical uniformness of EPI-NPs with no aggregation, while the cellular studies presented the effect of EPI-NPs on MCF-7 cells' viability; after 96 h of treatment, the MCF-7 cells presented considerable apoptotic activity. The stability study showed that the EPI-NPs remained stable at room temperature at physiological pH for over 30 days. Conclusion: EPI-NPs were successfully encapsulated within a highly stable biocompatible polymer with minimal loss of the drug. The used polymer has low cytotoxicity, and EPI-NPs induced apoptosis in an estrogen-positive cell line, making them a promising, safe treatment for cancer with fewer adverse side effects.
Preparation of EPI Polymeric NPs
The NPs were prepared using a double emulsion method.
An amount of 40 mg of PLA-PEG-PLA polymer was dissolved in 2 mL of chloroform, and 100 µM of EPI dissolved in DMSO was subsequently added. To create the first emulsion, the sample was placed in an ice bath and ultrasonicated for five minutes (50 s on, 10 s off) at 65% amplitude. Then, 3 mL of 1.5% PVA (prepared in advance by dissolving PVA powder in dH2O) was added slowly to the solution, and the sample was ultrasonicated using the same previous settings to create the double emulsion. The final formed nanosuspension was stirred for one hour at room temperature (RT) under a fume hood to facilitate the complete evaporation of chloroform. The sample was then centrifuged (Hermle Z 36 HK; HERMLE Labortechnik GmbH, Wehingen, Germany) in an Eppendorf tube at 14,000 rpm for one hour at RT. The precipitated NPs were next washed with 2 mL of distilled water and recentrifuged for 30 min at the same settings. After collecting the supernatant, the NPs were air-dried overnight under a fume hood. For the control, void NPs were prepared by adding free DMSO instead of the drug using the same exact method.
Particle Size and Polydispersity Index Analysis
The average particle size and polydispersity index (PdI) were determined by dynamic light scattering (DLS) using a particle size analyzer (ZetaPALS; Brookhaven Instruments, Holtsville, NY, USA) with an angle of detection of 90°. Both the particle size and PdI were measured five times consecutively. The average of the five instrument runs for each was calculated. The statistical analysis of particle size and polydispersity index was performed through Particle Solutions Software (Brookhaven Instruments, NY, USA).
Measurement of Zeta-Potential
Using the aforementioned particle size analyzer (ZetaPALS; Brookhaven Instruments, NY, USA), the zeta-potential of the EPI-NPs was measured by enacting the laser Doppler velocimetry mode.
The statistical analysis of zeta potential was performed through Particle Solutions Software (Brookhaven Instruments, NY, USA).
Transmission Electron Microscopy (TEM)
TEM images were collected using a JEM-1400 electron microscope (JEOL, Tokyo, Japan) operating at an acceleration voltage of 120 kV. A drop of the sample (1 mg/mL) was placed on a 400-mesh, carbon-coated copper grid. The samples were air-dried at RT prior to recording measurements.
Measurement of Drug Entrapment Efficiency (%EE) and Drug Release Study
The supernatant collected after the centrifugation processes (mentioned in Section 2.2) was measured using ultraviolet spectrophotometry to determine the amount of excess EPI. The concentration of EPI entrapped in the NPs was measured from the precipitated NPs by Equation (1):
%EE = (amount of EPI entrapped in the NPs / total amount of EPI added) × 100 (1)
For the release study, 4 mg of the EPI-NPs resuspended in 1.5 mL of PBS was inserted into a dialysis tube (W 25 mm) made from cellulose membrane, Mwt cut-off = 14,000 (Sigma-Aldrich, Co., MO, USA). The sample-containing tube was immersed in 14 mL of PBS in a small dark glass bottle containing a magnet. The bottle was then closed and placed on a magnetic stirrer at a speed of 4 rpm and a temperature of 37 °C. To determine the amount of the released EPI, UV absorbance was measured; each time, 1 mL of the sample was taken for measurement and replaced by PBS to maintain sink conditions. The data were obtained through SoftMax Pro Software (Molecular Devices, CA, USA).
Stability Study
At this stage, 200 µL of void NPs and 200 µL of EPI-NPs were dispersed in 800 µL of the following five different solutions for analysis: ultra-pure water (pH 7.02), PBS (pH 7.15), DMEM media with FBS and penicillin-streptomycin (pH 7.10), HCl (pH 3.26), and KOH (pH 14.05). The final concentrations of the void-NP and EPI-NP dispersions were 2.2 mg/mL and 7.671 µM, respectively. The NP dispersions were stored in sealed Eppendorf tubes in the dark at RT.
The stability of the NPs was tested daily in all solutions for a duration of 30 days; using the particle size analyzer (ZetaPALS; Brookhaven Instruments, NY, USA), the average of the five instrument runs for each parameter (particle size, PdI, and zeta potential) was calculated.
Differential Scanning Calorimetry
The thermodynamic properties of EPI-NPs were studied by differential scanning calorimetry (DSC 412 Polyma; NETZSCH, Selb, Germany) to identify the purity degree of EPI HCl and the level of epirubicin-copolymer interaction. The DSC system was calibrated using the indium calibration standard. Then, a small amount (5-7 mg) of pure EPI, EPI-NPs, and void NPs was weighed in the DSC aluminum pans separately, to be analyzed in three separate runs. The starting temperature was 30 °C and was gradually increased up to 250 °C at a rate of 10 °C per minute, using nitrogen as a purging gas at a flow rate of 40 mL/min.
2.6. Anticancer Activity of EPI-NPs
2.6.1. Assessment of Anticancer Activity Using Flow Cytometry
An annexin V-FITC apoptosis staining was used to evaluate cell viability as per the manufacturer's recommendations. Briefly, MCF-7 cells at passage number 12 were seeded (0.4 million) in a T25 culture flask in 3 mL of 10% FBS DMEM complete medium. After 90 min of incubation, cells were treated with EPI-NPs at EPI concentrations of 6 nM, 12 nM, 24 nM, and 48 nM. As a control, the cells were also treated with EPI in its free form at the same concentrations. In addition, two T25 flasks containing binding buffer and annexin-V were used as controls. After 90 min of treatment, supernatant and attached cells were collected by centrifugation. The collected cells were washed with PBS, then centrifuged (600× g, 5 min, RT); this step was repeated twice. After that, 5 µL of annexin V-FITC was added to the cell suspension; subsequently, cells were incubated for 10 min at RT and then washed with binding buffer.
Next, 10 µL of propidium iodide (20 µg/mL) was added to the cell suspension, and the cell viability was determined by fluorescence-activated cell sorting (FACS), performed using the FACS Canto II flow cytometry system (BD Biosciences, San Jose, CA, USA).
2.6.2. Fluorescence High-Content Imaging
Polymers 2021, 13, 1212 5 of 18
MCF-7 cells were plated in 96-well plates at a density of 10,000 cells per well in an appropriate amount of DMEM media. Cells were treated with free EPI (5 µM, 10 µM, and 15 µM), EPI-loaded NPs (5 µM, 10 µM, and 15 µM), and void NPs for 0, 24, 48, 72, and 96 h. The cells were incubated at 37 °C under the condition of 5% CO2. Prior to microscopy, cells were stained with calcein acetoxymethyl (2 µg/mL), HOECHST33342 (5 µg/mL), and propidium iodide (2.5 µg/mL) for 20 min at 37 °C and 5% CO2. Then, cells were imaged using the ImageXpress® Micro system and analyzed with the MetaXpress® software (both Molecular Devices, Downingtown, PA, USA). Nuclei were counted in each well and the average fluorescence intensity was calculated. All experiments were performed in triplicate and their outcomes were averaged; resultant values were reported as the mean ± standard deviation.
Statistical Analysis
Data for the stability study were expressed as the mean of five independent experiments ± standard deviation (SD). Linear regression analysis was performed to describe the relationships between a set of independent variables and the dependent variable. Ordinary least squares (OLS) regression tests were performed to compare two groups of quantitative variables: days (X) and zeta potential (Y), and days (X) and particle size (Y). The significance of the results was at the level of p-value < 0.05. Statistical operations and calculations were performed using Microsoft® Excel® 2016.
Results and Discussion
Here, the synthesis of EPI-NPs is described. The EPI-NPs were prepared using a previously reported method [19,41].
The double emulsion method allows for the encapsulation of hydrophobic and hydrophilic drugs [42]. In addition, the double emulsion method provides the nanoparticles with a controlled-release feature, which makes them suitable candidates as sustained-release drug delivery systems [43,44]. The PLA-PEG-PLA amphiphilic polymer was dissolved in chloroform, and the EPI was dissolved in DMSO; both solutions were ultrasonicated to form the first emulsion. The ultrasonication breaks down the polymer, which results in the formation of self-assembled polymeric micelles. The poly(lactic acid) (PLA) block forms the central hydrophobic core, while the poly(ethylene glycol) (PEG) block forms the hydrophilic outer layer of the EPI-NPs [42,43]. Subsequently, PVA was added as an emulsifying agent to form the second emulsion. Figure 1 demonstrates the synthesis method of the EPI-NPs. Furthermore, the drug release was monitored over a period of 432 h (Figure 2). The EPI release is sustained over the whole period of the study. The release profile of EPI shows a sharp release at the beginning followed by a steady sustained release (Figure 2). The release of the EPI in its free form (Supplementary Information) shows the burst release of the EPI within a few hours. The results obtained from the drug release assay show the advantage of the PLA-PEG-PLA EPI-NPs due to their sustained-release properties. PLA-PEG polymer-based NPs provide sustained release for different types of active ingredients, such as growth hormones, and other hydrophilic and hydrophobic drugs [41,42,44,45].
Dynamic Light Scattering
The EPI-NPs were synthesized successfully via the previously described double emulsion method [46]. Figure 3 shows the particle sizes of the EPI-NPs and void NPs, respectively; the EPI-NPs had a mean size of around 166 nm, while the void NPs measured 172 nm. This difference in particle size might be due to the loading of EPI, and its presence in the environment during the synthesis.
Furthermore, the synthesized EPI-NPs had a PdI of 0.23, which suggests uniform monodisperse NPs were present. Additionally, the zeta potential of the EPI-NPs was 4.58 mV, while that of the void NPs was 1.97 mV (Figure 3c) [32]. NPs with a zeta potential within the range of −10 to +10 mV are deemed to be neutral [47]; therefore, the synthesized EPI-NPs will be stable at physiological pH values.
Transmission Electron Microscopy
TEM was used to determine the morphology of the synthesized EPI-NPs. Figure 4 includes images of the EPI-NPs, where spherical, homogeneous NPs with no aggregation are clearly visible. The particle size obtained by TEM was similar to that revealed by DLS. Figure 4B shows a fine membrane surrounding the EPI-NPs; this polymeric membrane represents a protective feature to support the delivery of the active ingredient (EPI) to the target site.
Entrapment Efficiency In this study, the synthesis of the EPI-NPs was optimized to achieve a high encapsulation efficiency, thus improving the biopharmaceutical properties to ensure an enhanced efficacy of EPI. The encapsulation efficiency was obtained by measuring EPI UV absorbance (for both the encapsulated drug and the excess in the supernatant); the %EE was 82% from both measurements. The EE mainly depends on the polymer composition, drug solubility and functional groups. The use of PVA as a surfactant maintains the stability of the emulsion, specifically during solvent evaporation. Some studies have shown that the use of PLA-PEG copolymers and their end-group derivative nanoparticles has the advantage of increasing drug loading and entrapment efficiency. This can be obtained by adjusting the PEG/PLA ratio to increase the efficiency of hydrophobic drugs [44,48]. Furthermore, the emulsifier plays a key role in the EE [19]. For instance, in an attempt to encapsulate EPI, Chang et al. modified the emulsifying agent and the pH values of the polymerization medium to increase the entrapment efficiency of the EPI [49]. In the same context, Esim et al. encapsulated EPI within poly D,L-lactic-co-glycolic acid (PLGA) and used several surfactants to increase the encapsulation of EPI [50].
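The %EE arithmetic behind the 82% figure is straightforward: the drug found in the supernatant is the un-encapsulated excess, and the remainder is taken as entrapped. The masses below are hypothetical, chosen only to reproduce the reported value.

```python
# Sketch of the entrapment-efficiency calculation; the masses are hypothetical
# illustration values, not measurements from the study.
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """%EE computed from the drug left un-encapsulated in the supernatant."""
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

ee = entrapment_efficiency(total_drug_mg=10.0, free_drug_mg=1.8)  # 82.0 %
```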
Stability Study The stability and aggregation behavior of the EPI-NPs and void NPs in different solutions were evaluated. Measurements were taken at different time points over a duration of 31 days. This study evaluates the influence of pH and different solutions on the stability of the EPI-NPs. The stability of the EPI-NPs was evaluated in different pH solutions. The different pH solutions used in this study covered the different environments the NPs might encounter if used in vivo.
The pH ranged from acidic through physiological to basic values; the information reported from this study will determine how best to store EPI-NPs to maintain maximum efficacy [28,44]. The surface charges of the NP suspensions were assessed by measuring the ζ (Tables 1 and 2). Generally, the EPI-loaded NP suspensions showed a steady trend, with similar ζ averages of −2.7 mV and 0.21 mV for days 0 and 31, respectively, although the KOH suspension exhibited high fluctuations among the readings. Moreover, the EPI-NPs in HCl suspension showed a slight decrease in positive charges. On the other hand, the void NP suspensions showed notable changes in ζ with variations in measurements; specifically, the ζ of both the HCl and PBS NP suspensions shifted.
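The neutrality criterion cited above (zeta potential within −10 to +10 mV [47]) is a simple range check; a minimal sketch using the values reported for the freshly prepared particles:

```python
# Neutrality window for zeta potential as used in the text [47].
def is_neutral(zeta_mV, lower=-10.0, upper=10.0):
    """True if the zeta potential falls within the neutral range [-10, +10] mV."""
    return lower <= zeta_mV <= upper

epi_np_neutral = is_neutral(4.58)   # EPI-NPs, as reported
void_np_neutral = is_neutral(1.97)  # void NPs, as reported
```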
Regarding the particle size (Figure 5A,B and Table 3), the EPI-loaded and void NPs showed similar trends. For both EPI-NPs and void NPs suspended in water and HCl, the average particle size increased significantly. EPI-loaded and void NPs immersed in PBS showed a slight increase in the average particle diameter (193 nm when freshly prepared). For the NP KOH suspension, no significant change in particle size was observed even after 31 days of preparation. Meanwhile, the average diameter of both void and EPI-NPs immersed in DMEM media increased significantly to become 495 nm after 31 days. In regard to the polydispersity index, PdI (Figure 5C,D and Table 4), EPI-loaded NPs immersed in water, DMEM media, and KOH showed no significant change, with results being slightly polydispersed. The PdI for EPI-loaded NPs immersed in HCl showed a small increase (Day 0: 0.56 vs. Day 31: 0.76). In contrast, EPI-loaded NPs in PBS suspension showed a very high PdI (4.63) immediately after their addition; however, the PdI decreased significantly (0.24) and was within the monodisperse range by the end of the study. In a similar way to EPI-loaded NPs, after the initial addition of void NPs to PBS, the PdI indicated high polydispersity (1.18) that decreased (0.16) over the course of the study. The PdI results confirm that stable monodispersed NPs remained monodispersed and non-aggregated even after 30 days of synthesis. The DLS results indicated that the NPs were sufficiently stable during 31 days of storage in the dark at RT (Figure 6). In general, EPI-NPs have a better stability profile than void NPs; Figure 7 below shows that the EPI-NPs are stable for the period of 30 days at physiological pH values. The particle size and surface charge of EPI-NPs remained within the acceptable range for a period of 30 days. The increase in size of the EPI-NPs immersed in DMEM media can be attributed to the adsorption of FBS found in the media.
Following the cross-linkage of polymeric NPs with bovine serum albumin (BSA), Palanikumar et al. analyzed the 26 most abundant serum proteins and found that PLGA NPs had higher adsorption than the BSA-PLGA NPs did [51]. However, the increase in size observed could be attributed to other reasons, such as the presence of the hydrophilic PEG layer with low surface charges, which may reduce serum protein adsorption [41]. The increase in the average diameter of particles in HCl is combined with an increase in polydispersity. The acidic environment might lead to an aggregation of particles and cause hydrolysis of the ester bonds in the polymer chain, which leads to the degradation of the particle cores [51]. A study by Lazzari et al. in 2012 showed that polymeric NPs do aggregate in gastric juice [52]. In addition to the pH values, many factors can cause NP aggregation, including salts and enzymes [52]. This may explain the immediate aggregation of NPs when added to PBS and media. The variations found in the measurements of the void NP suspensions in PBS and media and the EPI-NPs' suspension in HCl could be attributed to the same reasons.
In conclusion, from the above results, it can be determined that the EPI-NPs are most stable at physiological pH values, which makes them suitable to be formulated in different dosage forms without restrictions. Figure 6 below represents timepoint particle size and zeta potential measurements for EPI-NPs and void NPs over a period of 30 days. Differential Scanning Calorimetry The thermodynamic properties of EPI-NPs were studied by differential scanning calorimetry (DSC 412 Polyma; NETZSCH, Selb, Germany) to identify the purity degree of EPI HCl and the level of epirubicin-copolymer interaction. The DSC system was calibrated using the indium calibration standard. Then, a small amount (5-7 mg) of pure EPI HCl, EPI-NPs, and void NPs was weighed into separate DSC aluminum pans to be analyzed in three separate runs. The starting temperature was 30 °C and was gradually increased up to 250 °C at a rate of 10 °C per minute using nitrogen as a purging gas at a flow rate of 40 mL/min. The DSC thermograms are presented in Figure 7. Figure 7 shows the thermograms of pure EPI, EPI-NPs, and void NPs. The DSC thermogram of pure EPI elicited an endothermic (melting) peak at 184.6 °C, corresponding to its melting point, which is very close to the reported value (185 °C) [53,54]. The void and EPI-NPs showed mid-broad endothermic peaks at 114.8 °C and 102.7 °C, respectively, due to the melting of the copolymer (PLA-PEG-PLA) matrix of the NP formulations. The difference between the void and EPI-NPs is reflected as a small peak shift, where the presence of EPI within the polymeric matrix led to a reduction in the strength of its intermolecular interaction forces and, consequently, a decrease in the required transition energy [55].
Furthermore, as the melting temperature of PLA is 130-180 °C, the merging of the PEG inside the PLA matrix in the form of a PLA-PEG-PLA triblock copolymer (66.7% PLA) leads to a reduction in its strength and a decrease in the melting temperature to below 115 °C [56]. This explains the appearance of the EPI-NP and void NP peaks at 102.7 and 114.8 °C, respectively. The EPI-NPs' thermogram presented in Figure 7 shows no endothermic peak in the area of 180 °C to 190 °C, which indicates that EPI's crystalline structure has collapsed into a molecularly dispersed state, allowing the EPI to interact with the polymeric matrix system and reinforcing its thermal stability as well. This theory is supported by a previous study by Lavor et al. [57]. Additionally, this theory is supported by the appearance and shifting of the small endothermic peak of the pure EPI HCl from 184.6 °C to 222.1 °C, indicating the high stability of EPI HCl within the polymeric NP system.
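The DSC programme described above fixes the run length by simple arithmetic; every input below comes directly from the stated method (30-250 °C ramp at 10 °C/min, 40 mL/min nitrogen purge).

```python
# Back-of-envelope numbers for one DSC run as described in the method.
start_C, end_C = 30.0, 250.0    # temperature programme [deg C]
ramp_C_per_min = 10.0           # heating rate [deg C / min]
purge_mL_per_min = 40.0         # nitrogen purge flow [mL / min]

run_minutes = (end_C - start_C) / ramp_C_per_min   # 22 min per sample
nitrogen_mL = run_minutes * purge_mL_per_min       # ~880 mL of N2 per run
```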
The Effect of EPI-NPs on Estrogen Positive Cancer Cells MCF-7 Flow cytometry was used in this study to evaluate the effect of EPI-NPs on the MCF-7 cell line and to compare it with the effect of free-form EPI on the same cells (Figure 8). The MCF-7 cells were treated with 6 nM, 12 nM, 24 nM, and 48 nM of EPI-NPs and 6 nM, 12 nM, 24 nM, and 48 nM of EPI for 90 min. Figure 8A-C show the effect of EPI on the MCF-7 cells; notably, early apoptosis, late apoptosis, and necrosis were observed within the 90-min treatment period.
Similarly, in the presence of EPI-NPs, the MCF-7 cells showed an apoptotic profile similar to that triggered by the free-form EPI. In conclusion, the synthesized EPI nanocarriers showed more apoptosis toward the breast cancer cells, which suggests greater cytotoxicity is correlated with the EPI-NPs. Fluorescence Imaging of MCF-7 Cells Treated with EPI-NPs Different concentrations of EPI-NPs were incubated with MCF-7 cells to identify necrotic cells by staining with HOECHST33342/PI. Figure 9 shows the images of the MCF-7 cells after treatment at different time points (0, 48, 72, and 96 h). It is clear that the EPI-NPs achieved more apoptosis at 96 h compared with the control; this is believed to be due to the sustained release properties of the PLA-PEG-PLA polymer used to synthesize the EPI-NPs.
The high-content imaging experiment showed that treatment with EPI-NPs causes cell death as compared with the control and void-NP treatments. Figure 9 shows that the cells tend to condense and decrease in number, especially when incubated for longer durations. The observed cell condensation indicates that apoptosis is occurring among the cells; the dye used to stain the cells, HOECHST, binds to the cellular DNA and therefore reveals the nuclear condensation when cells undergo apoptosis [58][59][60][61]. Moreover, Figure 10 shows a time- and dose-dependent cellular response. At zero hours, the cell count was not affected by the added EPI-NPs, while, on the other hand, at 96 h, the number of cells had dramatically decreased, especially after being dosed with higher concentrations of EPI-NPs. The EPI-NPs are encapsulated with the PLA-PEG-PLA triblock polymer, which gives these NPs the characteristic of sustained release. Conclusions EPI-NPs were successfully synthesized and optimized. The EPI-NPs were characterized by different methods. They possess a relatively small particle size, which allows them to be internalized and delivered into cancerous tissue. With the optimized method of synthesis, the EPI-NPs had an encapsulation efficiency of around 82%, which proves the efficiency of the synthesis method used for the preparation. The flow cytometry studies showed that the EPI-NPs have an apoptotic effect on MCF-7 breast cancer cells. In addition, the high-content imaging studies revealed a gradual decrease in cancer cell numbers after treatment with EPI-NPs. The stability of EPI-NPs was intensively investigated to determine their potential use as DDSs. In addition, the EPI-NPs showed a sustained drug release profile, and they were stable at different pH values and in different conditions. However, they were most stable at physiological pH values, which makes them prime candidates for different pharmaceutical dosage forms.
Further in vivo studies are recommended as a next step to study the pharmacokinetics of the EPI-NPs in animal models. Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Figure S1: In vitro epirubicin release profile.
8,195.6
2021-04-01T00:00:00.000
[ "Materials Science", "Biology" ]
An Ultra-Low-Power K-Band 22.2 GHz-to-26.9 GHz Current-Reuse VCO Using Dynamic Back-Gate-Biasing Technique : An ultra-low-power K-band LC-VCO (voltage-controlled oscillator) with a wide tuning range is proposed in this paper. Based on the current-reuse topology, a dynamic back-gate-biasing technique is utilized to reduce power consumption and increase the tuning range. With this technique, small-dimension cross-coupled pairs are allowed, reducing parasitic capacitance and power consumption. Implemented in an SMIC 55 nm 1P7M CMOS process, the proposed VCO achieves a frequency tuning range of 19.1% from 22.2 GHz to 26.9 GHz, consuming only 1.9 mW-2.1 mW from a 1.2 V supply and occupying a core area of 0.043 mm². The phase noise ranges from −107.1 dBc/Hz to −101.9 dBc/Hz at 1 MHz offset over the whole tuning range, while the total harmonic distortion (THD) and output power achieve −40.6 dB and −2.9 dBm, respectively. Introduction To meet the high data rates demanded by wireless communication systems, millimeter wave (mm-wave) is the most promising candidate due to its wide available bandwidth [1]. It is a very challenging task to design a voltage-controlled oscillator (VCO) with low power consumption, low phase noise and a wide tuning range, as these requirements limit each other. Nowadays, handheld devices are rapidly increasing in number, and circuits with low power dissipation are critical. VCOs account for a large share of the power consumption in PLLs, with more than half of a PLL's power typically coming from the oscillator. Recently, VCOs have evolved from a single LC-tank (e.g., class-B [2] and class-C [3]) to the multi-resonant tank (e.g., class-F) [4][5][6].
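The 19.1% tuning range quoted in the abstract follows directly from the band edges, referenced to the centre frequency; a quick check:

```python
# Frequency tuning range (FTR) computed from the band edges quoted in the
# abstract, referenced to the centre frequency.
f_min_GHz, f_max_GHz = 22.2, 26.9
f_centre = (f_min_GHz + f_max_GHz) / 2                     # 24.55 GHz
ftr_percent = 100.0 * (f_max_GHz - f_min_GHz) / f_centre   # ~19.1 %
```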
In order to tackle the power consumption challenge, the current-reuse topology has become more and more popular. The authors in [7] first demonstrated NMOS-PMOS cross-coupled pairs to form a current-reuse topology. Compared with the conventional structure, the negative-gm pairs are switched on or off simultaneously, halving the power consumption. Based on [7], several topologies have been developed, such as [4,5]. Although this topology can achieve excellent phase noise performance and a wide tuning range, the design of the transformers is difficult and complicated. The authors in [8,9] proposed a transformer feedback technique to realize low power consumption with low phase noise. The authors in [10] achieved low power with a Colpitts VCO and proposed the Gm-boosting and forward-body self-biasing techniques to reduce power consumption. However, the phase noise performance of this structure is not as good. In addition, two capacitors are needed to divide the resonant cavity and the transistor, resulting in more die area. At the same time, multi-core VCOs are an area of focus in mm-wave design [11,12]; they can achieve a wide tuning range and low phase noise, but at the cost of a complicated design flow, power consumption and die area. Therefore, the single-core VCO remains the most popular topology. Based on the current-reuse topology [7], a more effective and simpler method is proposed in this paper to achieve low power and a wide tuning range simultaneously. The method relies on the dynamic back-gate-biasing technique, which periodically adjusts the cross-coupled pairs' bulk to change the threshold voltage in different states. At the same time, the technique can relax the start-up condition with a direct voltage on the cross-coupled pairs, so smaller-dimension negative cross-coupled pairs are allowed, which can significantly reduce power dissipation and parasitic capacitance.
This technique uses the asymmetry of the output in the current-reuse topology, so an error voltage can be extracted from the inductor's central tap and fed back to the bulk. In mm-wave VCOs, parasitic capacitance is a serious problem and small-dimension transistors are necessary. The detailed principle will be discussed in the next section. The paper is organized as follows: Section 2 introduces the proposed topology and its simplified small-signal model, and discusses the relevant design principles. Section 3 demonstrates the post-layout simulation results, and Section 4 draws a conclusion. Design and Analysis of CR_VCO In this part, the topology of the proposed current-reuse VCO is briefly introduced. To explain the start-up condition, a simplified equivalent small-signal model is presented. The behavior model is then shown to explain the principle of the dynamic back-gate-biasing technique. Figure 1 presents the schematic of the proposed VCO. The resonance tank consists of MOS varactors Cvar, inductors L with a central tap and a fixed MOM capacitor Cp. To sustain LC-tank oscillation, MN1 and MP1 form a cross-coupled pair, called a Gm-cell, providing enough negative resistance to compensate for the losses caused by the equivalent resistance Rp of the LC-tank. Cc, connected at the in/output nodes, can filter out noise. To make the circuit schematic more readable, the DC bias at the bulk is omitted. An appropriate bias voltage at the bulk is selected to guarantee that the body-to-source voltage of the NMOS is positive (i.e., V bs > 0) and that of the PMOS is negative (i.e., V bs < 0). In this way, the threshold voltage can be decreased to boost the transconductance of MN1 and MP1, but this consumes more power. To solve this problem, an error voltage (i.e., V E ) extracted from the central tap is used to dynamically adjust the bulks of M N1 and M P1 . The detailed principle will be discussed in the next section.
Due to the asymmetric output waveform, Rs is used to relieve the problem and the VCO operates in current-limited mode [7].
Based on [13], the simplified small signal equivalent circuit is shown in Figure 2.
Yp and Yn represent, respectively, the admittance of the PMOS and the NMOS looking into their drains. Since rop and ron can be ignored, Yp and Yn can be expressed as follows: where gmp and gmn are the transconductances of the PMOS and NMOS, respectively, and ηp and ηn represent the body parameters, with η defined as gmb/gm. Additionally, Vsbp, Vsgp, Vbsn, and Vgsn are the source-to-bulk and source-to-gate voltages of the PMOS and the bulk-to-source and gate-to-source voltages of the NMOS, respectively. Here, the LC-tank is represented as the source: Ix is the current flowing through the LC-tank and Vx is the voltage across it [14]. The effective admittance Ytotal can be calculated as follows: assuming the cross-coupled pairs are NMOS-only or PMOS-only (i.e., Yp = Yn). Additionally, VBP is much smaller than VDD, while VBN is higher than GND. The effective admittance can then be expressed as in Equation (4). From Equation (4), it can be seen that the body effect can increase the admittance of the Gm-cell and thus relax the start-up condition. To compensate for the loss of the LC-tank, the negative resistance must be large enough [14]. Therefore, the start-up condition is as follows: where RS is the equivalent series resistance of the inductor and CT is the equivalent capacitance in parallel with the LC-tank, which depends on the dimension of the transistors. According to (5) and (6), there are several significant ways to improve the start-up condition. Firstly, increasing the dimension of the cross-coupled pairs and the supply can boost the effective admittance, but the circuit will dissipate more power and introduce more parasitic capacitance. Therefore, the dimension of the transistors should be carefully taken into consideration.
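The start-up requirement in (5) and (6) can be sketched numerically. The sketch below is illustrative, not the paper's design procedure: it assumes the standard narrowband series-to-parallel conversion Rp ≈ Rs(1 + Q²) for the inductor loss and models the body-effect boost of the Gm-cell as gm(1 + η); all component values are hypothetical.

```python
import math

def tank_parallel_resistance(L, Rs, f0):
    # Narrowband series-to-parallel conversion of the inductor loss:
    # Q = 2*pi*f0*L / Rs, Rp ~ Rs * (1 + Q**2).
    Q = 2.0 * math.pi * f0 * L / Rs
    return Rs * (1.0 + Q ** 2)

def startup_ok(gm, eta, Rp, margin=3.0):
    # The Gm-cell conductance, boosted by the body effect to gm*(1 + eta),
    # must exceed the tank loss 1/Rp by a safety margin (~3 is a common
    # rule of thumb, not a value taken from the paper).
    return gm * (1.0 + eta) >= margin / Rp

# Illustrative numbers: 200 pH tank inductor with 1.2 ohm series loss at 24 GHz.
Rp = tank_parallel_resistance(L=200e-12, Rs=1.2, f0=24e9)
print(round(Rp, 1))                       # equivalent parallel loss resistance
print(startup_ok(gm=4e-3, eta=0.2, Rp=Rp))
print(startup_ok(gm=1e-3, eta=0.2, Rp=Rp))
```

The margin term mirrors the trade-off described above: a larger gm (bigger transistors) satisfies the condition more easily but costs power and parasitic capacitance, while the body-effect factor (1 + η) relaxes the condition without upsizing.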
Secondly, the optimization of the layout can decrease the parasitic capacitance connected to the LC-tank and the inductor series resistance.

Dynamic Back-Gate-Biasing Technique

Before discussing the principle of the back-gate-biasing technique, it is worth introducing the behavior model and the parasitic effects of the current-reuse VCO. The behavior model of the current-reuse VCO is depicted in Figure 3. The circuit operates in two states periodically. MP1 and MN1 can be simplified as switches SP1 and SN1. Additionally, Cx and Cy represent the equivalent capacitors at node X and node Y, respectively, including the parasitic capacitors connected to the LC-tank. The oscillation frequency f0 is shown as follows: where Cvar(V) is the average capacitance of the MOS varactor over a single period and can be calculated by the method described in [15]. Cvar(V) is determined by the current i(t) of the MOS varactors and the voltage difference V(t) between the tuning voltage (Vtune) and the dynamic node voltage of X or Y (Vx or Vy). Cpar is the parasitic capacitance from the transistors. In mm-wave applications, parasitic capacitors cannot be ignored. When the size of the cross-coupled pair is large, the parasitic capacitance becomes comparable with the variable capacitance Cvar. According to the capacitance distribution of a MOS transistor and the Miller effect, Figure 3 presents the simplified equivalent capacitance model of the cross-coupled pairs. If the substrate is a virtual ground, the parasitic capacitance Cgb should be considered.
When MN1 and MN2 are equal, the equivalent parasitic capacitance can be expressed as follows: It can be seen that the parasitic capacitance introduced by the gate-drain capacitance Cgd is the largest. Therefore, in the circuit layout design process, the layout should be optimized to reduce the gate-drain capacitance. Assuming that the output nodes Vx and Vy are equal, the average capacitance of the varactors equivalent to the LC-tank is Cvar(V)/2. The comparison of the calculation and simulation results at different tuning voltages is shown in Figure 4a. When considering the parasitic capacitance, the calculated frequency is lower than the simulation results; without it, the calculated frequency is higher than the simulation. However, the error increases in the high frequency cases: Figure 4a shows that there is a certain error in the model in the high frequency range. After considering the parasitic capacitance, the trend of the calculation results is close to the simulation results. Figure 4b shows how the average capacitance of the varactors varies with the tuning voltage. As Vtune increases, Cvar(V) decreases. This explains why the frequency drops as the tuning voltage increases.
Electronics 2021, 10, 889
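Equation (7) itself is not reproduced in this extraction; assuming the standard LC resonance f0 = 1/(2π√(L·Ctotal)), with Ctotal grouping the effective varactor capacitance Cvar(V)/2 and the parasitic Cpar, the short numeric sketch below (all values hypothetical) reproduces both trends discussed above: including Cpar lowers the calculated frequency, and the parasitics compress the fractional tuning range.

```python
import math

def f_osc(L, C_total):
    # Standard LC resonance: f0 = 1 / (2*pi*sqrt(L*C)).
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C_total))

L = 200e-12                              # hypothetical tank inductance
Cvar_min, Cvar_max = 180e-15, 320e-15    # hypothetical varactor extremes
Cpar = 60e-15                            # hypothetical transistor parasitics

# Including Cpar lowers the calculated frequency, as discussed above:
print(f_osc(L, Cvar_min / 2) > f_osc(L, Cvar_min / 2 + Cpar))

def tuning_range(Cp):
    # Fractional tuning range for a given parasitic capacitance Cp.
    f_hi = f_osc(L, Cvar_min / 2 + Cp)
    f_lo = f_osc(L, Cvar_max / 2 + Cp)
    return 2.0 * (f_hi - f_lo) / (f_hi + f_lo)

# ...and the parasitics compress the tuning range:
print(round(tuning_range(0.0), 3), round(tuning_range(Cpar), 3))
```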
When Vop is "low" and Von is "high", MP1 and MN1 are turned on simultaneously. A current path flowing from VDD to GND is formed and, as seen from Figure 4, IS1 is this current. When Vop is "high" and Von is "low", the transistors are cut off, so the energy stored in the capacitors is used to compensate for the loss of the LC-tank. From Figure 5, the current IS2 flows from Cx to Cy instead of to the ground, which is an effective way to decrease power consumption.

The start-up condition is a critical problem that limits the ultra-low-power VCO. There are two major ways to decrease power consumption: the first is to decrease the supply voltage; the second is to reduce the dimension of the transistors. In a word, the key to lower power is to limit the current flowing into the LC-tank. In order to limit the parasitic capacitance, the proposed VCO employs small-size transistors. However, the Gm-cell may then not provide a sufficient negative resistance. To solve this problem, an appropriate DC bias is applied to the bulk to boost the transconductance of the transistors. Additionally, the fixed capacitor Cp can improve the quality factor of the LC-tank [16]. However, the threshold voltage will decrease, causing more power consumption. Therefore, the dynamic back-gate-biasing technique is proposed to further reduce power consumption. The basic principle of the technique is as follows. Due to the imbalance of the current-reuse VCO, an error voltage between node X and node Y appears at the central tap of the inductor. Assuming the voltages at X and Y are inverse, the error voltage is expressed as follows:
where ω is the oscillation frequency, Vp is the peak amplitude, and δV is the mismatch of the amplitudes at node X and node Y. Ideally, if the output is symmetric, δV equals zero and the central tap behaves as a virtual AC ground. The proposed VCO employs a capacitive divider to extract the error voltage VE, which is passed to the bulk of the negative-resistance devices, so the threshold voltage periodically varies with the output voltage. To explain in more detail how the threshold voltage changes, Figure 6a,b show the transient simulation of the PMOS and NMOS, respectively. Due to the dynamic back-gate-biasing technique, the threshold voltage of the MOS transistors decreases to approximately 0.3 V. From Figure 6, the voltage at the bulk (i.e., VBN) is in opposite phase with that at the gate (i.e., VGN). This means that the threshold voltage will increase while the cross-coupled pairs are gradually turning on. Accordingly, the transconductance of the MOS decreases and the current is limited, so less power is consumed. Although the technique slows down the turn-off transition, this effect can be ignored. According to [17], the lowest available frequency is limited by the start-up condition, while the highest available frequency is limited by the parasitic capacitance.
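Since Equation (10) is not reproduced in this extraction, the sketch below adopts an illustrative waveform model, Vx(t) = (Vp + δV)·sin(ωt) and Vy(t) = −Vp·sin(ωt), to show that a tapped inductor whose central tap averages its two ends sees a residual error voltage of amplitude δV/2 (and a virtual AC ground when δV = 0). All numbers are hypothetical.

```python
import math

# Hypothetical single-ended node waveforms with an amplitude mismatch dV:
Vp, dV, f = 0.30, 0.04, 24e9          # illustrative values (volts, Hz)
w = 2.0 * math.pi * f

def v_x(t):
    return (Vp + dV) * math.sin(w * t)

def v_y(t):
    return -Vp * math.sin(w * t)

# The central tap of the tapped inductor averages the two ends, so a
# symmetric output (dV = 0) gives a virtual AC ground, while a mismatch
# leaves a residual error voltage of amplitude dV/2:
def v_tap(t):
    return 0.5 * (v_x(t) + v_y(t))

peak = max(abs(v_tap(k * 1e-13)) for k in range(1000))
print(round(peak, 3))   # close to dV/2 = 0.02
```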
In a word, the quality factor of the passive components and the parasitic capacitance limits the tuning range, as shown in Equation (12):

Cmax/Cmin > (Cmax + ΔCpar)/(Cmin + ΔCpar)    (12)

where ΔCpar is the parasitic capacitance. Based on Equation (8), if ΔCpar were comparable to the value of the varactors, the ratio of Cmax to Cmin would be significantly reduced. The proposed VCO can effectively reduce the parasitic capacitance by allowing transistors with small sizes; this is another advantage introduced by this technique. As mentioned above, the threshold voltage will increase when the cross-coupled pairs are switched on and the parasitic capacitance from the transistors slightly decreases, so the tuning range can be extended. To further minimize the parasitic capacitance, the channel lengths L of the transistors are set to the shortest value.

Post-Layout Simulation Results

The modified LC-VCO is implemented in SMIC 55 nm 1P7M CMOS low-power technology. It is essential to minimize the parasitic capacitance, especially that of the cross-coupled pairs. Figure 7 shows the whole layout, with a core area of 0.043 mm2. The oscillation frequency is tuned from 22.2 GHz to 26.9 GHz (19.1%) with the control voltage from 0 to 1.3 V. The MOS varactors operate in accumulation mode. To increase the tuning linearity, the output DC voltage should be set at an appropriate value based on the C-V characteristic of the MOS varactors.
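The inequality in Equation (12) can be checked numerically: adding the same parasitic capacitance ΔCpar to both Cmax and Cmin always shrinks their ratio, and hence the tuning range. The capacitance values below are illustrative, not the paper's.

```python
# Numeric check of the tuning-range inequality: adding the same parasitic
# capacitance dC to both Cmax and Cmin always shrinks the Cmax/Cmin ratio.
# Values are illustrative, not taken from the paper.
Cmax, Cmin = 320e-15, 180e-15

def ratio(dC):
    return (Cmax + dC) / (Cmin + dC)

for dC in (0.0, 20e-15, 60e-15, 120e-15):
    print(round(ratio(dC), 3))   # monotonically decreasing
```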
As shown in Figure 8, the phase noise at 1 MHz offset is −101.9 dBc/Hz at 26.9 GHz and −107.1 dBc/Hz at 22.2 GHz. In the mm-wave band, the Q of the varactors greatly limits the performance of VCOs. When the MOS varactors work in the accumulation region or the depletion region, the Q of the capacitor is proportional to L−2 and L−1, respectively [2]. To optimize the phase noise, the channel length should be the shortest. When the VCO operates at the lowest operating frequency of 22.2 GHz, it has the minimum phase noise of −107.1 dBc/Hz and the lowest power consumption of 1.9 mW.

The transient simulation is depicted in Figure 9. The proposed structure has an asymmetric differential output with an output swing of around 300 mV. The imbalance of the outputs will affect the phase noise performance, so a more effective way to improve the symmetry of the current-reuse topology is needed. From the spectrum of the output waveform, the total harmonic distortion (THD) can be calculated. Since the fundamental wave is much larger than the third harmonic, higher harmonic components are ignored and the THD can be regarded as the difference between the fundamental and the second harmonic component. When the VCO operates at the lowest frequency, the approximate value of the THD is −40.6 dB.

Figure 10 shows the simulated operation frequency and power consumption at different tuning voltages from 0 V to 1.3 V. As shown in Figure 11, the output power is positively related to the tuning voltage Vtune, but the THD deteriorates due to the increase in frequency. The proposed circuit has a THD of −40 dB~−19.7 dB and an output power of −6.9~−2.9 dBm over the tuning range.
Figure 12 presents the phase noise at 1 MHz and 10 MHz offset across the tuning range. To measure the overall performance in terms of operating frequency, phase noise, power, and tuning range, FoM and FoMT are defined as shown in Table 1 and plotted in Figure 13. At 1 MHz offset, the FoM changes from −182.8 dBc/Hz to −185.1 dBc/Hz over the whole tuning range, while the FoMT is 5.6 dB lower than the FoM. The larger the absolute value of the FoM and FoMT, the better the overall performance. If a VCO can operate in the mm-wave band with low power consumption, low phase noise, and a wide tuning range, an excellent FoM and FoMT can be achieved. However, due to the limitations of the passive components and the parasitic parameters, a high FoM and FoMT are difficult to realize.
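The conventional FoM/FoMT definitions (stated here as the standard formulas, not as a quotation of the paper's Table 1) are FoM = PN − 20·log10(f0/Δf) + 10·log10(PDC/1 mW) and FoMT = FoM − 20·log10(FTR/10%). A quick check shows that the FoM-to-FoMT offset depends only on the tuning range; for the 19.1% range reported above it is 20·log10(1.91) ≈ 5.6 dB, consistent with the text.

```python
import math

def fom(pn_dbc, f0_hz, df_hz, p_mw):
    # Conventional definition: FoM = PN - 20*log10(f0/df) + 10*log10(P/1 mW);
    # more negative is better.
    return pn_dbc - 20.0 * math.log10(f0_hz / df_hz) + 10.0 * math.log10(p_mw)

def fom_t(fom_db, ftr_percent):
    # Tuning-range-normalized variant: FoMT = FoM - 20*log10(FTR / 10%).
    return fom_db - 20.0 * math.log10(ftr_percent / 10.0)

# The FoM-to-FoMT offset for the 19.1 % tuning range reported above:
print(round(20.0 * math.log10(19.1 / 10.0), 1))   # ~5.6 dB

# Generic usage with hypothetical inputs (not the paper's Table 1 numbers):
f = fom(pn_dbc=-105.0, f0_hz=24e9, df_hz=1e6, p_mw=2.0)
print(round(fom_t(f, 19.1) - f, 1))
```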
Table 1 shows the performance comparison of different state-of-the-art VCOs.
Compared with these VCOs, the proposed VCO achieves a lower power consumption and a wide tuning range with a simple topology. In the mm-wave band, the Q of the LC-tank is determined by the varactor and the capacitor array, whose Q is extremely low. At the same time, to satisfy the demand for a wide bandwidth, a wide tuning range of the VCO is critical. The proposed VCO realizes lower power consumption with a wide tuning range, without a switched-capacitor array, while keeping the phase noise at −101~−107 dBc/Hz at 1 MHz offset. However, the variation of the phase noise is too large over the tuning range. Due to its having the simplest structure, our VCO's area is the smallest.

Conclusions

A novel ultra-low-power K-band current-reuse LC-VCO with excellent amplitude balance was proposed and implemented in a SMIC 55 nm 1P7M CMOS low-power process. Thanks to the self-adaptive capacitive feedback network, the proposed current-reuse LC-VCO achieves not only ultra-low power consumption but also a very wide tuning range. The DC biasing technique was used to start up the VCO, thereby ensuring that small-dimension transistors can be used. When the cross-coupled pairs are turned on, the feedback network is used to extract the output error voltage VE(ωt) and reduce the threshold voltages, resulting in very low dynamic power consumption.
At the same time, the small-sized transistors reduce the influence of the parasitic capacitors, so a wider tuning range can be gained. This ultra-low-power VCO can be applied in various fields, such as low-power PLLs and frequency triplers. Since the experimental results are only post-layout simulations, a further tape-out test is needed to verify the reliability of the circuit. Conflicts of Interest: The authors declare no conflict of interest.
Privacy-Preserving Self-Helped Medical Diagnosis Scheme Based on Secure Two-Party Computation in Wireless Sensor Networks

With the continuing growth of wireless sensor networks in pervasive medical care, people pay more and more attention to privacy in medical monitoring, diagnosis, treatment, and patient care. On the one hand, we expect public health institutions to provide us with better service; on the other hand, we would not like to leak our personal health information to them. To balance this contradiction, in this paper we design a privacy-preserving self-helped medical diagnosis scheme based on secure two-party computation in wireless sensor networks, so that patients can privately diagnose themselves by inserting a health card into a self-helped medical diagnosis ATM to obtain a diagnostic report, just like drawing money from a bank ATM, without revealing the patients' health information or the doctors' diagnostic skill. It makes secure self-helped disease diagnosis feasible and greatly benefits patients, as well as relieving the heavy pressure on public health institutions.

Introduction

With the rapid development of science, more and more advanced technologies, such as the Internet of Things and cloud computing, are utilized in the area of modern medicine, and this trend further pushes healthcare into the digital era [1][2][3]. Currently, numerous healthcare devices, such as heart rate monitors, blood pressure monitors, and electrocardiographs, are already popular in people's daily life. It is convenient for people to be aware of their health situation by viewing the reports of these devices. In particular, with the growing use of sensor technology in telecare, the new field known as wireless body area networks (WBAN) [1,4] has produced various sensor devices that can be used to supervise critical body parameters and activities anytime and anywhere.
People can easily and conveniently get health data from these advanced sensor devices [5], such as temperature measurement, respiration monitors, heart rate monitors, pulse oximeters (SpO2), blood pressure monitors, pH monitors, glucose sensors, cardiac arrhythmia monitors/recorders, brain liquid pressure sensors, and endoscope capsules. What is more, these devices are becoming more functional and portable, and more and more mobile medical monitors are already in service [2]. Therefore, people no longer worry about how to obtain health data but are concerned about how to securely deal with these sensitive data when seeking a disease diagnosis from a medical institution. Traditionally, the issue of the privacy of medical data has been dealt with primarily as a policy problem [6,7]. Many related laws have been issued to protect the privacy of patients. However, the situation is still far from satisfactory, and people still fear the leakage of their private data. Hence, the most effective solution to this problem is to protect patients' privacy through technology rather than through policy alone. In this respect, most previous works have introduced homomorphic encryption (HE) [8][9][10] to protect patients' privacy in privacy-preserving medical applications [11]. However, HE inevitably introduces tremendous cost and is not applicable to practical large-scale applications. Therefore, in this paper, we focus on building a secure and practical privacy-preserving medical diagnosis system that can serve us in our daily life. Starting from the aspiration of the patient, the most secure and plausible diagnostic method is to use processed data, rather than the original data, to interact with the hospital, which owns a disease database, so as to diagnose the patient's health status privately. Moreover, it is required that, after diagnosis, the hospital learns nothing about the patient's health data and the patient learns nothing about the hospital's disease database.
Inspired by the daily used bank automated teller machine (ATM), we introduce the privacy-preserving self-helped medical diagnosis ATM (MD-ATM): after obtaining a healthcare card that stores information about the health data collected by various sensor medical devices, a patient can privately diagnose themselves by inserting the health card into the MD-ATM to obtain a diagnostic report, just like drawing money from a bank ATM, without revealing the patient's health information or the disease database and doctors' diagnostic skill. When local computing, storing, or inputting of information is needed, the patient uses their own portable device, called the portable medical diagnostic device (PMDD). In this paper, we will show how to realize this modern diagnosis system without HE. The main ideas and technologies used in this scheme are secure two-party computation (STC) and oblivious transfer (OT). Firstly, we assume that patients themselves collect the related data with various wireless sensor medical devices and further process and store them in their own health cards using the PMDD. When diagnosing, the patient first transforms the original data locally and then inserts the card into the MD-ATM of the hospital to check his or her health. Following the instructions of the MD-ATM, the patient finally obtains a diagnostic report through OT and thereby completes the self-helped diagnosis. In brief, our main contributions can be summarized as follows. Our Contributions. (i) We build a new "patient-centered" medical diagnosis model in wireless sensor networks, where patients themselves collect health data with various sensor medical devices while the hospital provides a disease database to help patients complete disease diagnosis by themselves.
Compared with the traditional "doctor-centered" medical diagnosis model, where patients have to depend on the doctor, our system is more appropriate, especially when people pay more and more attention to privacy in wireless sensor networks. (ii) We first propose the privacy-preserving self-helped MD-ATM to construct a secure medical diagnosis scheme following the idea of STC. It makes secure self-helped medical diagnosis feasible and convenient, just like drawing money from a bank ATM. It will greatly benefit patients as well as relieve the heavy pressure on public health institutions. (iii) We construct the self-helped medical diagnosis system based on OT, without expensive HE. It provides another perspective from which to consider the problem of secure medical diagnosis for patients. The rest of this paper is organized as follows. In Section 2, we briefly give an overview of secure two-party computation and oblivious transfer, and then we present our medical diagnosis system model in Section 3. In Section 4, we propose our privacy-preserving self-helped medical diagnosis scheme in detail, and we give a strict proof based on the real-ideal simulation paradigm in Section 5. Finally, we summarize the work of this paper in the last section.

Preliminaries

2.1. Secure Two-Party Computation. Secure multiparty computation (SMC) is dedicated to dealing with the problem of secure computation among distrustful participants. It was first introduced by Yao in 1982 [12] and then extended by Goldreich et al. [13] and many other researchers [14][15][16][17][18][19]. Generally speaking, SMC is a method to implement cooperative computation on participants' private data, ensuring the correctness of the computation while not disclosing additional information beyond the necessary results. It has become a research focus in the international cryptographic community due to its wide applications in various areas, and a mass of research results have been published one after another.
Secure two-party computation (STC) [20] is the special case of SMC with only two participants. The well-known millionaires' problem [12] put forward by Yao is the representative problem of STC. In our discussion, we consider the two-party case. Generally speaking, STC computes a certain function between two mutually distrusting participants on their private inputs without revealing their private information. Informally, assume there are two participants, P1 and P2, each holding a private number, x1 and x2, respectively. They want to cooperate to compute the function f = F(x1, x2). An STC protocol is called secure if no participant can learn more from the description of the public function and the result of the global calculation than what he can learn from his own information. Formally, we usually analyze the security of an STC protocol using the real-ideal paradigm in the semihonest model, where both parties act semihonestly: they follow the protocol but try to gain more information about the other party's inputs, intermediate results, or overall outputs from the transcripts of the protocol [15]. We can overview the real-ideal paradigm as follows. First, in the ideal world, we assume that the computation of the functionality on the users' private inputs is carried out by an additional trusted party, who receives xi from user Pi, i = 1, 2, and returns the result F(x1, x2) to Pi, i = 1, 2. However, there is no trusted party in the real world, so the two parties have to run a protocol Π to get the desired result. While executing protocol Π, both parties act semihonestly. Herein, the view of the ith party during an execution of Π on (x1, x2) is denoted VIEW_i^Π(x1, x2); it contains Pi's input, random tape, and the messages received from the other party. 
For a deterministic private function F, we say that Π privately computes F if there exist probabilistic polynomial-time algorithms S1, S2 such that the simulated distribution {S_i(x_i, F(x1, x2))} is computationally indistinguishable from the real view {VIEW_i^Π(x1, x2)}, for i = 1, 2. Algorithm 1 (a simplified OT_1^n protocol). Inputs: the sender S inputs a set of messages (m1, m2, ..., mn); the receiver R inputs an index i. Outputs: R obtains mi; S obtains nothing. Oblivious Transfer. In cryptography, OT is a type of protocol in which a sender transfers one of potentially many pieces of information to a receiver but remains oblivious as to which piece has been transferred. It was first introduced by Rabin [21] in 1981. Therein, the sender sends a message to the receiver with probability 1/2, while remaining oblivious as to whether or not the receiver received the message. Rabin's oblivious transfer scheme is based on the RSA cryptosystem. In 1985, Even et al. [22] proposed a more useful OT called 1-out-of-2 OT (OT_1^2) to build protocols for secure multiparty computation. It has since been generalized to 1-out-of-n OT (OT_1^n) [23], where the receiver gets exactly one message, without the sender learning which message was queried and without the receiver learning anything about the messages that were not retrieved. OT_1^n has become a fundamental tool in cryptography and is usually used as a black box when constructing protocols. Formally, we can describe an OT_1^n protocol as follows. There are two participants, the sender S and the receiver R. S has n messages, and R has an index i. R wishes to receive the ith of the sender's messages without leaking i to S, while learning nothing about the remaining n − 1 messages. A simplified OT_1^n protocol is presented in Algorithm 1. System Model In this section, we present the system model, including the goals we aim to achieve. In this paper, we consider a privacy-preserving medical diagnosis system with two participants: the patient and the hospital. 
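To make the OT_1^n functionality concrete, the following toy Python sketch plays the role of the ideal-world trusted party described above: the sender contributes n messages, the receiver contributes an index, the receiver gets exactly one message, and the sender gets nothing. This is only an illustration of the functionality, not a secure OT protocol (a real instantiation would use public-key operations so that neither party sees the other's input).

```python
# Toy illustration of the 1-out-of-n OT functionality (the "ideal world"
# trusted party of the real-ideal paradigm). This is NOT a secure OT
# protocol: in a real protocol the sender never sees the index and the
# receiver never sees the other messages.

def ideal_ot(messages, index):
    """Sender inputs a tuple of n messages; receiver inputs an index i
    (1-based). Returns (receiver_output, sender_output): the receiver
    obtains m_i, the sender obtains nothing (None)."""
    if not 1 <= index <= len(messages):
        raise ValueError("index out of range")
    return messages[index - 1], None

receiver_out, sender_out = ideal_ot(("m1", "m2", "m3", "m4"), 3)
# receiver_out == "m3"; sender_out is None
```

A security proof then shows that whatever a semihonest party sees in the real protocol could have been simulated from this ideal interaction alone.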
We assume that each patient can easily collect his own health data, such as heart rate and blood pressure, in the form of a vector, called the query vector, using various advanced medical devices. Herein, we call heart rate, blood pressure, and so forth the parameter items, and the health data corresponding to them the parameter values. For example, Q = (q1, ..., qn) is the query vector of the patient P, where q1, ..., qn are the values of the n parameters the hospital needs for diagnosis; q1, say, is the parameter value of the parameter item heart rate. Each patient has a health card to store the related data and a portable device, the PMDD, to read the data stored in the card and to do the related computations after inserting the card. The hospital has a disease database DB = {Rj}, j = 1, ..., m, which is in fact the standard used to determine which disease the patient has. Each record of the disease database is a triple Rj = (j, Tj, Dj), j = 1, ..., m, where m is the capacity of the disease database; j is the index of a disease; Tj, called the trait vector of disease j, is a vector covering all the parameters the hospital needs for diagnosis; and Dj is the disease diagnostic report, including the disease name, doctors' advice, and prescriptions corresponding to the jth disease. Concerning these parameters, we make the following remarks. (i) Tj: it includes all the parameter items the hospital needs for diagnosis, such as heart rate and blood pressure. Figure 1: Self-helped medical diagnosis model of our scheme. The trait vector should cover as many factors as possible, for example adding personal feelings, symptoms, and previous medical features of the patient as parameter items. Although we can currently diagnose only some simple diseases, it is believed that more complicated diseases will become feasible in the future by extending the dimension of the parameter items. 
(iv) Dj: it includes the disease name, doctors' advice, and prescriptions corresponding to the jth disease. Each report may include many doctors' advices and prescriptions. Herein, we assume that every report obtained from the MD-ATM through the self-helped medical diagnosis is authorized by the hospital and that all advice and prescriptions in a report are signed by the corresponding doctors. After receiving the diagnostic report, the patient can choose one doctor's advice and prescription to treat himself. In this paper, the system makes a medical diagnosis according to the Euclidean distances between two vectors. Specifically, given a patient's query vector Q = (q1, ..., qn) and a disease trait vector Tj = (t_{j,1}, ..., t_{j,n}), j ∈ {1, ..., m}, their Euclidean distance [3], denoted dist_{Q↔Tj}, is dist_{Q↔Tj} = sqrt((q1 − t_{j,1})^2 + ... + (qn − t_{j,n})^2) (1). Herein, we compare the squares of the Euclidean distances, dist²_{Q↔Tj} = (q1 − t_{j,1})^2 + ... + (qn − t_{j,n})^2 (2). It is obvious that we can figure out which trait vector is closer to the patient's query vector just by checking the sign of the difference dist²_{Q↔Tj} − dist²_{Q↔Tl} (3), without the exact values of dist_{Q↔Tj} or dist_{Q↔Tl}. Assuming that the report corresponding to the trait vector Tj, j ∈ {1, ..., m}, is the diagnosed disease report, we have, for all l = 1, ..., m, l ≠ j: dist²_{Q↔Tj} ≤ dist²_{Q↔Tl} (4). In our scheme, we will compare the squares of the Euclidean distances between the query vector and the trait vectors to find the diagnostic report satisfying (4). In a real application, the hospital provides an MD-ATM, which is connected to the disease database and can read the data on the card, to direct patients through the self-helped disease diagnosis. Specifically, we assume that each patient P registers with the hospital the first time and gets a health card. The hospital provides a self-helped MD-ATM in public, just like a bank ATM. Whenever P wants a diagnosis, he inserts his health card into the MD-ATM and, following the instructions, completes the self-helped diagnosis by himself. 
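The diagnosis rule above, stripped of its privacy protections, can be sketched in a few lines of Python: compute the squared Euclidean distance from the query vector Q to each trait vector Tj and return the index of the closest record. The vector values below are made up for illustration.

```python
# Plaintext sketch of the diagnosis rule: pick the database record whose
# trait vector minimizes the squared Euclidean distance to the query
# vector Q. Square roots are never needed, since only the sign of
# dist^2(Q,T_j) - dist^2(Q,T_l) matters (equation (3) above).

def sq_dist(q, t):
    return sum((qk - tk) ** 2 for qk, tk in zip(q, t))

def diagnose(query, trait_vectors):
    dists = [sq_dist(query, t) for t in trait_vectors]
    return min(range(len(dists)), key=dists.__getitem__)

Q = (72, 120, 80)                                 # hypothetical parameter values
DB = [(60, 110, 70), (75, 118, 82), (90, 150, 95)]  # hypothetical trait vectors
best = diagnose(Q, DB)                            # index of best-matching record
```

In the actual scheme, of course, this comparison is carried out obliviously so that neither the query vector nor the trait vectors are revealed; the sketch only fixes the arithmetic being protected.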
The basic model is illustrated in Figure 1. In addition, to enable a privacy-preserving medical diagnosis system, our scheme should simultaneously fulfill the following two security goals: (i) the confidentiality of the disease database should be protected during the self-helped diagnosis process; (ii) the confidentiality of the patient's private health data should be protected during the self-helped diagnosis process. Our Scheme In this section, we propose our privacy-preserving self-helped medical diagnosis scheme (PP-SH-MDS) in detail, showing how a patient can diagnose himself using his PMDD and the self-helped MD-ATM. The core of our construction is summarized in Figure 2. Specifically, the patient P proceeds as follows to make a self-helped diagnosis using his PMDD and the MD-ATM. In the setup phase, P registers with a hospital, as in traditional medical diagnosis, and gets a health card. The diagnosis phase has three subphases. (1) Local Preprocessing. Whenever P wants a diagnosis, he first conducts the following two transformations locally on the PMDD. (ii) Vector-to-Matrix. After completing the above steps, P stores the matrix in the health card. After executing the OT_1^n protocol, P gets the diagnostic report Dj corresponding to disease j according to the index j, while the MD-ATM learns nothing. Analysis In this section, we analyze our scheme in detail. We first examine the correctness and then give a strict security proof following the real-ideal simulation paradigm of STC in the scenario of semihonest adversaries. Correctness. Here we follow the steps of our scheme and verify that the patient indeed finds the most probable disease in the hospital's disease database from his health data by comparing Euclidean distances. Security. In this subsection, we strictly prove the security of our scheme. From the whole process, the two parties in our system are the patient and the hospital. 
They cooperate to compute the function F(Q, (R1, R2, ..., Rm)) = Dj, where Dj is the disease diagnostic report corresponding to disease j and the distance satisfies dist²_{Q↔Tj} = min{dist²_{Q↔Tl}}, l = 1, ..., m. As mentioned in Section 3, we must achieve two security goals, namely, keeping both parties' inputs private. We apply the real-ideal simulation paradigm to prove that our scheme achieves the two goals in the scenario of semihonest adversaries, assuming the OT_1^n protocol we use is secure. Theorem 1. Our privacy-preserving self-helped medical diagnosis scheme is secure against semihonest adversaries if the OT_1^n protocol is secure. Proof. Notice that the view of Pi in the real execution, {VIEW_i^Π(Q, {R1, ..., Rm})}, i = 1, 2, consists of three parts: the private input, the random tape, and the messages received from the other party, including the output. We thereby obtain the views of P1 and P2 in the real execution, denoted (11) and (12), respectively. In the following discussion, we follow the real-ideal simulation paradigm to construct probabilistic polynomial-time algorithms S1, S2, treating the case where P2 is semihonest and the case where P1 is semihonest separately. Case 1 (P2 is semihonest). In this case, we only need to construct a simulator S2 so that, given P2's input {Rj}, j = 1, ..., m, and its output, S2 can simulate P2's view in the real execution, (12). First, since we assume that the OT_1^n protocol used in our scheme is secure and can be treated as a black box, there exists an algorithm that simulates P2's view of the OT execution. Next, since S2 is given ({Rj}, j = 1, ..., m), it can easily simulate the remaining parts of (12) by randomly choosing an m × (n + 2) matrix, which is indistinguishable from the blinded matrix. Then S2 outputs the simulated view, and we conclude that it is computationally indistinguishable from P2's real view. Case 2 (P1 is semihonest). 
Similar to Case 1, we only need to construct a simulator S1 so that, given P1's input Q and output Dj, S1 can simulate P1's view in the real execution, (11). Conclusions In this paper, we consider the problem of how to securely make a diagnosis without leaking the patient's health data, the diagnosed result, or the hospital's disease database in wireless sensor networks. By applying the idea of secure two-party computation and the technology of oblivious transfer, we propose a privacy-preserving self-helped medical diagnosis scheme in which patients can privately diagnose themselves by inserting a health card into a self-helped MD-ATM to obtain a diagnostic report, just like drawing money from a bank ATM. We also provide a detailed correctness analysis and strictly prove the security following the real-ideal simulation paradigm. We hope to provide another perspective on future medical care.
The role of alcohol extract of cranberry in improving serum indices of experimental metaproterenol-induced heart damage in rats Abstract Cranberry offers numerous cardiovascular benefits. According to several studies, this fruit reduces the oxidation of low-density lipoprotein, raises high-density lipoprotein, reduces platelet coagulation, and improves vascular activity. Albino male rats were divided into five groups (n = 5 per group). The control group received intraperitoneal administration of normal saline. The second group was injected with metaproterenol (MET) 3 days a week for 4 weeks. The third, fourth, and fifth groups were given cranberry extract at doses of 75, 100, and 150 mg/kg, respectively, along with the heart-damaging drug. Blood samples were collected and sent to the laboratory at the end of the fourth week and two weeks after completing the injections (at the end of the sixth week) for analysis of serum factors such as cardiac creatine kinase MB, cardiac troponin I (cTnI), and aspartate aminotransferase (AST). The serum activity of the cardiac evaluation parameters in the fourth week demonstrated a highly significant difference among the groups with respect to AST and cTnI (p < .001). Additionally, a significant relationship was observed between AST and cTnI within the target groups (p < .05). Ultimately, the findings indicated that consumption of cranberry extract, through its impact on heart function, could effectively modify serum indicators associated with heart damage. The extract was effective, albeit with variable and non-lasting effects. Therefore, it is recommended to use cranberry extract synergistically with other chemical and herbal medications to achieve more sustained effects. 
Cranberry, belonging to the Ericaceae family, is consumed in most countries and possesses antioxidant activity. Cranberry decreases the risk of cardiovascular diseases and has countless cardiovascular benefits (Diarra et al., 2020). According to previous studies, this fruit reduces the oxidation of low-density lipoprotein, raises high-density lipoprotein, reduces platelet coagulability, and improves vascular activity (El-Belbasy et al., 2021). Myocardiotoxic drugs such as allylamine, cyclosporin A, doxorubicin (DOX), isoproterenol (ISO), and MET cause skeletal muscle damage, which can be detected by AST and creatine kinase (CK) activities (Oda & Yokoi, 2021). In recent years, micro-RNAs have become potential biomarker candidates for tissue damage. MiRNA-208 is expressed in the heart, and miR-1 and miR-133a/b are enriched in skeletal and heart muscle compared to other tissues. A previous study compared the levels of miRNA-208, miR-1, and miR-133a/b with traditional tissue-damage biomarkers, namely cardiac (cTnI and FABP3) and skeletal muscle serum biomarkers (MYL3, sTnI, and AST), in rats administered several heart and muscle toxicants, including ISO, MET, allylamine, and mitoxantrone. ISO and MET are catecholamines and nonselective β-adrenergic receptor agonists that cause heart and skeletal muscle necrosis with long-term use (Calvano et al., 2016). As a result of damage to the myocardium, a large concentration of diagnostic myocardial infarction markers is released into the extracellular fluid (Ebenezar et al., 2003). These enzymes and macromolecules leaked from the damaged tissue are the best diagnostic indicators of tissue damage (Hearse, 1979). Due to the high prevalence of cardiovascular diseases in Iran, further studies are recommended to control and prevent such diseases (Sarrafzadegan & Mohammmadifard, 2019). 
This study aimed to investigate the effect of cranberry extract on the serum levels of cTnI, creatine kinase MB (CK-MB), and AST in rats with experimental MET-induced heart damage. | Chemicals Cranberry (Vaccinium macrocarpon) extract: this formulation was prepared from dried cranberry fruits using an alcoholic solvent. | Experimental animals Albino male rats weighing 120-150 g and 6 weeks old were obtained from the Faculty of Veterinary Medicine of the Islamic Azad University of Tabriz. The animals were kept under standard temperature and humidity conditions in the research center of the Islamic Azad University of Tabriz, and a suitable diet was provided. | Experimental design Albino male rats were divided into five groups (n = 5 per group). In the control group, normal saline was administered intraperitoneally. The second group was injected with metaproterenol (MET) 3 days a week for 4 weeks. The third, fourth, and fifth groups received 75, 100, and 150 mg/kg doses of cranberry extract along with the heart-damaging drug. Blood samples were then taken and sent to the laboratory at the end of the fourth week and two weeks after finishing the injections (at the end of the sixth week) to check serum factors, including cardiac CK-MB, cardiac troponin I (cTnI), and aspartate aminotransferase (AST) (Calvano et al., 2016; Galal et al., 2019; Hussien et al., 2015). | Serum collection for analysis Twenty-four hours after the last dose of the specific treatment, half of the animals were anesthetized; blood samples were obtained, and serum was separated by centrifugation for 10 min at room temperature. Two weeks later (in the sixth week), the process was repeated for the other half of the rats. Cranberry extract was not administered from week 4 to week 6 in order to evaluate the stability of the cranberry's effect in the rats. 
| Biochemical assays Following serum separation, the levels of cTnI, CK-MB, and AST were evaluated using commercial kits (Pars Azmoun, Iran) and an autoanalyzer (WS-ROCHE 912, Roche Hitachi, Japan). | Measurement of serum cTnI The quantitative measurement of troponin is based on an immunometric chemiluminescence sandwich method. A monoclonal antibody coats the solid phase (magnetic particles), and a polyclonal antibody is used as the tracer. During incubation, the troponin in the calibrator, sample, or control is bound by the monoclonal antibody on the solid phase. Subsequently, the conjugated antibody reacts with the troponin attached to the solid phase. This test must be performed in the LIAISON® analyzer. The analysis comprises the following steps: 100 μL of serum sample, calibrator, or control; 200 μL of tracer conjugate; 20 μL of coated magnetic particles; a 10-min incubation and subsequent washing cycle; and measurement within 3 s. | Measurement of serum cardiac CK-MB CK-MB is a dimeric enzyme consisting of two subunits, M (muscle) and B (brain), which combine to form the CK-MM, CK-MB, and CK-BB isozymes. In this method, the activity of the M subunit is inhibited using a specific antibody, and the CK-MB activity corresponding to the remaining B-subunit activity is measured by the CK-NAC method. Since the B subunit contributes half of the CK-MB activity, the total activity is obtained by multiplying the measured value by 2. The CK-MB measurement procedure is otherwise similar to the cTnI measurement. | Measurement of serum AST This reaction is based on the optimized method proposed by the ECCLS, which is the same as the IFCC method except that no pyridoxal phosphate is used. | Data analysis The collected data were analyzed with SPSS software version 24. The ANOVA test was used to compare the group means. In this study, p < .05 was set as the significance level. 
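The group comparison described above (one-way ANOVA across the five treatment groups) can be sketched without statistical packages by computing the F statistic directly. The study used SPSS; the sketch below implements the same F statistic from first principles, and the group values are made up, not the study's data.

```python
# Minimal one-way ANOVA sketch: F = MS_between / MS_within.
# Illustrative only; the paper's analysis was run in SPSS.

def one_way_anova_f(groups):
    """Return the F statistic for a list of groups of measurements."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # between-group sum of squares: group sizes times squared mean offsets
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares: deviations from each group's own mean
    ssw = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    msb = ssb / (k - 1)   # between-group mean square
    msw = ssw / (n - k)   # within-group mean square
    return msb / msw

f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # F = 3.0
```

The resulting F would then be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain a p-value, with p < .05 taken as significant as in the paper.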
| RESULTS Tables 1 and 2 present the serum activity of AST, cTnI, and CK-MB 4 and 6 weeks after consuming the cranberry alcoholic extract in the positive control and negative control groups and in the groups receiving the 75, 100, and 150 mg/kg doses. Duncan's post hoc test results also confirmed the analysis-of-variance results. According to the findings, there was no significant relationship between the time of consumption of the cranberry alcoholic extract (4 or 6 weeks) and the serum variations of the measured parameters (Table 3). | DISCUSSION The present study evaluated the beneficial effect of cranberry extract on the cardiotoxicity caused by metaproterenol in laboratory rats. Similar to some of their reported antitumor activities, the antioxidant properties of the phenolics in cranberry fruit play a major role in the observed ability to reduce cardiovascular and age-related diseases. Cranberry's role in preventing oxidative processes includes a decrease in the oxidation of lipoproteins. Cranberry ranks high among fruits in antioxidant quality and quantity owing to its inherent flavonoid content, including proanthocyanidins, anthocyanins, flavonols, and phenolic acids (Galal et al., 2019). Table 1: Comparison of mean serum parameters 4 weeks after consuming cranberry alcoholic extract in rats with heart damage induced by MET. These enzymes do not include contractile proteins and are not normally found in the bloodstream (Muhammad et al., 2011). In the case of myocardial necrosis, they are released into the blood (Acikel et al., 2005). 
In previous studies, the administration of ISO, MET, and allylamine to rats caused heart damage (Calvano et al., 2016). In the present study, the administration of MET to rats increased the parameters indicating cardiac damage. In some cases, increased cTnI represents ischemic damage induced by increased oxygen consumption, decreased blood pressure (perfusion), and decreased oxygen supply to the heart muscle (Ammann et al., 2001). The superiority of cardiac troponins over other commonly used indicators has made them the gold standard for diagnosing myocardial infarction. Troponins are sensitive and specific indicators even for small amounts of myocardial necrosis (Coudrey, 1998). In the present study, cTnI levels significantly increased in the negative control group compared to the positive control group in the fourth and sixth weeks. Protective effects of plant extracts have likewise been reported on cardiotoxicity caused by DOX (Eman et al., 2011). Many studies have reported that rutin has a protective effect on the heart in myocardial infarction induced by ISO (Annapurna et al., 2009). In the present study, cranberry extract had favorable protective effects on the hearts of rats damaged by metaproterenol. This healing effect is reflected in the reduced serum parameters in the groups receiving the extract. Troponin I is one of the cardiac regulatory proteins and supports the contractile function of the heart (Sharma et al., 2004). In the present study, cTnI was taken to be present only in the myocardium; therefore, it was used as a marker of cardiac damage. When a heart cell dies, this protein is released from the heart into the bloodstream. Hence, the level of this serum parameter was higher in animals receiving MET than in the group not receiving the drug. The level of cTnI, however, was low in the groups receiving both the heart-damaging drug and cranberry extract, suggesting that cranberry extract can improve cardiac function. 
Cardiac marker enzymes include AST, CK-MB, and cTnI and act as markers for diagnosing myocardial damage (Lim et al., 2013). Our study showed a significant decrease in the levels of these serum parameters in the group receiving cranberry extract at a dose of 150 mg/kg compared to the negative control group. Cranberry extract has an effect even at a low dose; as the dose increases, its effect on improving cardiac function also increases. Accordingly, cranberry plays a comprehensive role in preventing cardiac damage and reducing the parameters of cardiac damage. This study also showed the oxidative damage caused by increased free radicals in the heart tissue after MET administration. Damage to the heart myocardium leads to the release of serum cardiac indicators such as AST, CK-MB, and cTnI into the blood, enabling the diagnosis of cardiac damage (Nimbal & Koti, 2017). Cardiac cell damage significantly decreased in rats receiving the different doses of cranberry extract compared to those receiving MET alone. As a result of heart damage, the level of cTnI, one of the most reliable and common biomarkers, increased. However, the effects of cranberry extract were not permanent: after stopping the administration of the extract, the cTnI values in the group receiving the 150 mg/kg dose were higher in the sixth week than in the negative control group. This implies that the effect of cranberry extract is short-lived, and it should be consumed for a more prolonged period (Henri et al., 2016; O'Brien et al., 2006). 
In Kharadi et al.'s study, the administration of Allium cepa aqueous extract at a dose of 400 mg/kg resulted in the recovery of the increased parameters (troponin I, CK-MB, and AST; Kharadi et al., 2016). In our study, the administration of cranberry extract at 75, 100, and 150 mg/kg doses led to the recovery of the aforementioned parameters, especially in the fourth week. The recovery of troponin I, CK-MB, and AST by this extract was significant in the present study. CK enzymes, especially CK-MB, convert ATP into ADP and transfer energy to cardiac myosin filaments. The sensitivity of CK-MB measurement is 95% in many studies, and it is a highly specific marker for confirming cardiac damage (Hettling & van Beek, 2011; Muralidharan et al., 2008). A remarkable increase in this enzyme was observed in the studied rats with heart damage, while its level decreased in rats receiving the different doses of cranberry extract. This implies that cranberry extract is a strong cardiac protector, inhibiting the cardiac necrosis caused by MET. | CONCLUSION It can be concluded that consuming cranberry extract, through its effect on heart function, can effectively correct serum indicators related to heart damage. The tested extract improved heart damage by reducing the release of these serum factors. As the effect of this extract was not stable, the serum factors unfortunately increased again in the sixth week, after administration of the extract had been stopped for 2 weeks; the extract was thus effective, but its effects were not lasting. Accordingly, it is recommended that it be used synergistically with other chemical and herbal medicines to achieve more prolonged effects. Kalın et al. 
conducted a study in 2015 under the title "Antioxidant activity and polyphenol content of cranberries (Vaccinium macrocarpon)". Kalın et al.'s study helps in understanding the antioxidant properties of cranberry extract and the presence of specific polyphenolic compounds, while our study provides insights into the effect of cranberry extract on cardiac function and its potential as a modifier of serum indices associated with cardiac injury. Both studies support the idea that cranberry extract offers health benefits, particularly in relation to cardiovascular health and antioxidant effects. Studies show that cranberry extract can be used synergistically with other drugs or antioxidants to increase effectiveness and create lasting effects. More research and exploration into the potential health benefits of cranberry extract are warranted (Kalin et al., 2015). The diagnosis of heart damage and myocardial infarction is made by evaluating cardiac marker enzymes, including CK, CK-MB, AST, alanine aminotransferase (ALT), lactate dehydrogenase (LDH), alkaline phosphatase (ALP), and cholesterol (Chrostek & Szmitkowski, 1989). Compared to the control group, an excessive increase in serum CK and CK-MB concentrations caused by DOX indicates myocardial damage. The present findings are consistent with Afsar et al.'s reports, indicating that myocardiotoxic drugs, including DOX, increase the serum activity of the mentioned parameters, the most basic biomarkers of myocardial cell damage (Afsar et al., 2017). The normalization of CK, CK-MB, and AST serum values in the tested groups receiving cranberry compared to those receiving the heart-damaging drug suggests that cranberry extract can improve cardiac function. The findings of this study are in line with previous findings regarding the protective effect of plant extracts. Table 3: Correlation between measured serum parameters and time of receiving cranberry alcoholic extract in rats suffering from heart damage induced by MET.
Longer Telomere Length in COPD Patients with α1-Antitrypsin Deficiency Independent of Lung Function Oxidative stress is involved in the pathogenesis of airway obstruction in α1-antitrypsin deficient patients. This may result in a shortening of telomere length, resulting in cellular senescence. To test whether telomere length differs in α1-antitrypsin deficient patients compared with controls, we measured telomere length in DNA from peripheral blood cells of 217 α1-antitrypsin deficient patients and 217 control COPD patients. We also tested for differences in telomere length between DNA from blood and DNA from lung tissue in a subset of 51 controls. We found that telomere length in the blood was significantly longer in α1-antitrypsin deficient COPD patients compared with control COPD patients (p = 1×10^−29). Telomere length was not related to lung function in α1-antitrypsin deficient patients (p = 0.3122) or in COPD controls (p = 0.1430). Although mean telomere length was significantly shorter in the blood when compared with the lungs (p = 0.0078), telomere length was correlated between the two tissue types (p = 0.0122). Our results indicate that telomere length is better preserved in α1-antitrypsin deficient COPD patients than in non-deficient patients. In addition, measurement of telomere length in the blood may be a suitable surrogate for measurement in the lung. Introduction Chronic obstructive pulmonary disease (COPD) is a complex trait with both genetic and environmental risk factors that is characterized by non-reversible airway obstruction and chronic inflammation. The morphologic manifestations of this disorder include small airway remodeling and emphysema. The predominant environmental risk factor for COPD is cigarette smoking, [1,2] although other factors such as air pollution [3,4] and respiratory infections [5] play a role. 
While several novel susceptibility genes for COPD have been identified in recent years, the underlying mechanisms are largely unknown. In contrast, the association between deficiency of α1-antitrypsin and emphysema has been known for several decades [18,19] and the pathophysiology is understood [20,21]. α1-Antitrypsin is a proteinase inhibitor and acute phase reactant, and its major role is the inhibition of neutrophil elastase. α1-Antitrypsin deficiency is caused by alleles of the SERPINA1 gene. Severe deficiency of α1-antitrypsin is most often caused by homozygosity for the Z allele (Glu342Lys) of SERPINA1 and is a risk factor for early-onset emphysema, although the clinical manifestations are highly variable [22,23]. A recent focus of COPD research has been the role of premature aging of the lung and other organs. Emphysema is characterized by reduced cell proliferation [24] and increased markers of cellular senescence [25], including shortened telomeres [26]. COPD patients are at increased risk for cardiovascular disease [27], osteoporosis [28], depression [29], and skin wrinkling [30], all of which have been associated with premature senescence [31][32][33]. Telomeres shorten with each round of cell division, and this results in replicative cell senescence. Telomere length is reduced during DNA replication because of the "end replication problem", i.e., the 5′ end of the lagging strand cannot be fully replicated. This loss of telomeric DNA is predicted to be ~10 base pairs (bp) per cell cycle. However, the observed rate of loss can be higher and in humans has been estimated to be 50-200 bp per division [34,35]. Oxidative stress is one of the main factors causing this higher rate of loss [36,37]. 
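The attrition figures quoted above (50-200 bp lost per division) make a simple back-of-the-envelope calculation possible: given a starting telomere length and a senescence threshold, the number of divisions before replicative senescence follows directly. The starting length and threshold in the sketch below are illustrative assumptions, not values from this study.

```python
# Back-of-the-envelope replicative-senescence arithmetic using the
# quoted per-division loss rate. Starting length and threshold are
# hypothetical round numbers for illustration.

def divisions_until_senescence(start_bp, threshold_bp, loss_per_division):
    """Count divisions possible before telomere length would drop
    below the senescence threshold."""
    divisions = 0
    length = start_bp
    while length - loss_per_division >= threshold_bp:
        length -= loss_per_division
        divisions += 1
    return divisions

# e.g., ~10 kb of telomere, senescence near 4 kb, 100 bp lost per division
d = divisions_until_senescence(10_000, 4_000, 100)   # 60 divisions
```

The same arithmetic shows why the loss rate matters: at 200 bp per division the cell's replicative budget is halved relative to 100 bp per division, which is the mechanism by which oxidative stress is hypothesized to accelerate senescence.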
Several studies have examined telomere length in the context of COPD but there is little consistency in the results [26,[38][39][40][41][42][43][44][45], and the relationship between telomere length and lung function in α1-antitrypsin deficiency has not been previously studied. Oxidative stress plays an important role in the pathogenesis of airway obstruction in α1-antitrypsin deficient patients [46]. Furthermore, α1-antitrypsin has anti-apoptotic effects [47,48] and anti-inflammatory effects on cytokine production [49,50]. Therefore, cell senescence due to reduction in telomere length may be particularly important in patients with α1-antitrypsin deficiency. We investigated telomere length in a group of COPD patients with α1-antitrypsin deficiency and a group of COPD controls. We also determined whether the length of telomeres in peripheral blood DNA is correlated with that in lung tissue samples in order to test the hypothesis that COPD is a "systemic" disease, and that telomeric shortening in this condition affects both the lung and the hematopoietic system. Subjects We studied 217 α1-antitrypsin deficient patients and 217 COPD control patients (Table 1). Approval for the project was obtained from the University of British Columbia - Providence Health Care Research Ethics Board (REB H09-02042 and H11-02780). The α1-antitrypsin deficient patients were selected from the Alpha-1 Foundation (AATF) DNA and Tissue Bank located at the University of Florida (IRB 659-2002). The COPD controls were selected from the Lung Health Study (LHS), a clinical trial sponsored by the National Heart, Lung and Blood Institute [51]. Participants in the LHS were cigarette smokers between 35-60 years of age with mild to moderate airflow obstruction, defined by a ratio of forced expiratory volume in one second (FEV1) to forced vital capacity ≤0.70 and FEV1 between 55-90% of predicted. 
Selected LHS samples were matched to the α1-antitrypsin deficient samples for age, gender, ethnicity and pack years. An additional 51 patients were selected from the lung tissue biobank at the James Hogg Research Centre (JHRC). For the JHRC samples, both lung tissue and blood samples were obtained from patients admitted to St. Paul's Hospital who underwent lobar or lung resection surgery for localized lung cancer. The lung tissue samples were taken from a site distant from the tumor. All subjects provided written informed consent. DNA samples A sample of peripheral blood DNA was used to measure telomere length in the AATF DNA and Tissue Bank samples. Measurement of telomere length in the LHS samples was performed as previously reported [43]. For the JHRC samples, we measured telomere length in DNA samples from both blood and lung tissue in 51 subjects. DNA was extracted from these tissues using the QIAamp DNA Mini Kit (Qiagen, Mississauga, ON, Canada). Measurement of Telomere Length Telomere length was measured using a previously published qPCR based protocol [43,52]. Briefly, DNA samples were quantified using the Nanodrop 8000 spectrophotometer (Thermo Scientific, Wilmington, DE, USA). Telomere length measurement was performed in triplicate using 5 ng of DNA. Intra-plate coefficients of variance (CV) were calculated between the replicates, and samples with CV ≥5% were excluded from further analysis. Reference DNA samples obtained from the Coriell Institute (Camden, NJ) were assayed as calibrator samples in triplicate on each PCR plate to control for variation between plates. Inter-plate CV for the calibrator sample was calculated to be 16%. 36B4 was used as a reference gene. The primer sequences used were: tel 1: GGTTTTTGAGGGTGAGGGTGAGGGTGAGGGTGAGGGT; tel 2: TCCCGACTATCCCTATCCCTATCCCTATCCCTATCCCTA; 36B4u: CAGCAAGTGGGAAGGTGTAATCC; and 36B4d: CCCATTCTATCATCAACGGGTACAA. Six qPCR reactions (i.e. 
triplicates of the telomere and reference gene assays) were performed for each individual in 20 µL reactions including 10 µL QuantiTect SYBR Green PCR Master Mix (QIAGEN), and final primer concentrations of tel 1: 270 nM, tel 2: 900 nM, 36B4u: 300 nM, and 36B4d: 500 nM. Reactions were performed on the ViiA 7 Real-Time PCR Instrument (Life Technologies). Cycling conditions for the measurement of telomere length were as follows: 50°C for 2 min, 95°C for 2 min, 40 cycles of 95°C for 15 sec and an annealing temperature of 54°C for 2 min. Cycling conditions for measurement of the 36B4 reference gene were the same except that 35 cycles, with an annealing temperature of 58°C for 1 min, were used. Telomere length was calculated as a ratio of telomere to 36B4, using Cawthon's formula [52]. Statistical Analysis Telomere length measurements were log10 transformed to approximate a normal distribution. Student's t-test was used to compare mean telomere length between groups. Multiple linear regression was performed to test for the effect of α1-antitrypsin deficiency on telomere length and lung function, with adjustments for significant confounders including age, gender and pack years. JMP software (SAS, Cary, NC, USA) was used for all statistical analyses. Effect of α1-antitrypsin deficiency on telomere length To test for association between α1-antitrypsin deficiency and telomere length in peripheral blood cells, telomere length was compared between the two groups (Figure 1). The mean log10-transformed telomere lengths in α1-antitrypsin deficient patients and COPD controls were −0.1882 with a standard deviation of 0.2074 and −0.5639 with a standard deviation of 0.2074, respectively. The difference in telomere length between α1-antitrypsin deficient patients and COPD controls remained statistically significant after adjustment for age, gender and pack years. 
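Cawthon's formula mentioned in the protocol above reduces to a relative 2^−ΔΔCt quantification of the telomere (T) signal against the 36B4 single-copy (S) signal, normalized to the calibrator DNA run on each plate. A minimal sketch (the triplicate Ct values are invented for illustration; this is the standard reading of the formula, not code from the study):

```python
import statistics as stats

def cv_percent(cts):
    """Intra-plate coefficient of variance (%) across replicate Ct values."""
    return stats.stdev(cts) / stats.mean(cts) * 100

def relative_ts_ratio(sample_tel, sample_ref, cal_tel, cal_ref):
    """Relative telomere length as 2^-ddCt: telomere (tel) vs 36B4 (ref) Ct
    values, normalized to the calibrator DNA assayed on the same plate."""
    d_sample = stats.mean(sample_tel) - stats.mean(sample_ref)
    d_cal = stats.mean(cal_tel) - stats.mean(cal_ref)
    return 2.0 ** -(d_sample - d_cal)

# Made-up triplicates: the sample's telomere product crosses threshold two
# cycles earlier (relative to 36B4) than the calibrator's does, giving a
# T/S ratio of about 2.0.
sample_tel, sample_ref = [20.0, 20.1, 19.9], [22.0, 22.1, 21.9]
cal_tel, cal_ref = [21.0, 21.0, 21.0], [22.0, 22.0, 22.0]
assert cv_percent(sample_tel) < 5  # replicates with CV >= 5% were excluded
print(relative_ts_ratio(sample_tel, sample_ref, cal_tel, cal_ref))
```

The normalization to a common calibrator is what makes ratios comparable across plates, which is why the inter-plate CV of the calibrator is reported separately.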
As a replication cohort, a second set of 217 COPD controls was selected who were matched to the α1-antitrypsin deficient patients for ethnicity, gender, age and pack years. Mean telomere length was again significantly longer in the α1-antitrypsin deficient patients compared with COPD controls (p = 1×10⁻³³). Relationship between tissue type and telomere length To test for a correlation between telomere length in the blood and in the lungs, telomere length was measured in lung and blood DNA from 51 patients from the JHRC. There was a significant correlation between telomere length in the blood and telomere length in the lungs (Pearson's r = 0.348, p = 0.012) (Figure 2). On average, however, median telomere length was 1.53-fold shorter in the blood when compared with the lungs (p = 0.008) (Figure 3). Relationship between telomere length and lung function Lung function data were available for 157/217 α1-antitrypsin deficient patients and all COPD controls. The effect of telomere length in the blood on FEV1 % predicted was tested. There was no significant association between telomere length in the blood and lung function in either the α1-antitrypsin deficient patients (p = 0.3122), or in the controls with adjustment for age and pack years (p = 0.2503). In addition, lung function data were available for 49 patients from the JHRC with telomere length measurements in the lung. There was no association between telomere length in the lung and FEV1 % predicted (p = 0.8057). Discussion The most important finding of this study was that COPD patients with α1-antitrypsin deficiency have longer telomere lengths in peripheral leukocytes compared with COPD patients who do not have α1-antitrypsin deficiency. However, there was no significant relationship between telomere length in blood and lung function as measured by FEV1 % predicted in α1-antitrypsin deficient patients or in COPD controls. 
We also found that within subjects, there was a significant relationship of telomere length in peripheral leukocytes with that in lung tissue, although on average the telomere length of peripheral leukocytes was shorter than that in lung tissue. Telomere length has been positively correlated with lung function in some studies [26,40,44,45] but not others [41,43]. A recent study examined 46,396 individuals and the results suggested that the association of telomere length with lung function was, though significant, only modest after correction for confounding factors such as age [44]. Shorter telomere length has also been associated with COPD in some studies [39,41,42,44,45] but not in others [25,38]. This is the first study to examine the role of telomere length in α1-antitrypsin deficient patients, a group whom we hypothesized to be particularly susceptible to accelerated reduction in telomeres and the subsequent cellular senescence. The role of premature aging in COPD has been shown by studies of explanted lung fibroblasts from emphysema patients that showed a reduced proliferation rate in vitro [24] and markers of cellular senescence [25]. Similarly, alveolar type II cells and endothelial cells from emphysema patients showed elevated levels of senescence markers including shortened telomeres [26]. In our study we found a relationship between α1-antitrypsin deficiency and telomere length. However, the direction of effect was contrary to our hypothesis. Telomere length was longer in patients with α1-antitrypsin deficiency, despite the fact that they are likely exposed to higher levels of oxidative stress than usual COPD patients, as measured by oxidation of nucleic acids [46]. Oxidation of DNA is a general marker of oxidative stress and may directly promote telomere shortening [53], and therefore our results appear counterintuitive. 
On the other hand, patients with α1-antitrypsin deficiency have lower levels of myeloperoxidase (MPO) and neutrophil counts in sputum than non-deficient COPD patients [54]. MPO is the most abundant protein in neutrophils and catalyzes the formation of hypochlorous acid, a potent oxidant. Therefore, MPO likely plays an important role in oxidative stress in the lung, and the lower MPO levels in α1-antitrypsin deficient patients [54] may explain the longer telomere length we observed in these patients. We found that there was a significant correlation between telomere length in the blood and telomere length in lung tissue. Many studies of telomere length are performed using DNA from peripheral blood cells, and results are extrapolated to biological processes occurring in other tissues. For example, the majority of studies investigating telomere length in COPD patients have been performed in DNA from blood cells [24,38,45,47,48]. Our results indicate that telomere length in the blood is correlated with telomere length in the lungs, suggesting that telomere length in the blood may be an appropriate surrogate for telomere length in the lungs. The correlation between blood and lung telomere length may reflect the nature of COPD as a systemic disease [55]. Thus, exposure to cigarette smoke in the lungs may affect leukocyte telomere length due to translocation of proinflammatory mediators [56] and reactive oxygen species from the lung into the circulation. We also demonstrated that the telomere length of peripheral leukocytes was shorter than that in lung tissue. Telomere length is known to vary between different human tissues [57], with leukocyte telomeres generally shorter than those in other tissues [58], presumably reflecting greater rates of proliferation in blood cells. Interestingly, Daniali et al. 
[58] studied adults (age >18 years) and the rate of telomere shortening was similar between the tissue types, suggesting that the length differences between tissues were established in childhood. This may explain why the telomeres in the lung samples from our patients were longer than those in blood cells, despite the presumably greater exposure of the lung tissue to oxidative stress via cigarette smoke. The telomere length differences established early in life may overwhelm any effect of exposure to smoke occurring mainly in adulthood. One limitation of our study is that lung function was only measured at one time point; therefore we could not test the effect of telomere length on the rate of decline in α1-antitrypsin deficient patients. Another limitation is that all of the α1-antitrypsin deficient patients included in this study were current or ex-smokers. Therefore, it was not possible to test for the effect of α1-antitrypsin deficiency in non-smokers compared with smokers. Finally, our telomere length measurements in the lung were performed using only a small piece of lung tissue, therefore the telomere length measured may not reflect the whole lung. Our data indicate that in α1-antitrypsin deficient patients, replicative senescence does not appear to play a significant role in the pathogenesis of COPD. Importantly, for the respiratory community, we found that telomere length of peripheral leukocytes is a good biomarker of telomere length in lung tissue.
New Stability Criterion for the Dissipative Linear System and Analysis of the Bresse System: In this article, we introduce a new approach to obtain the property of the dissipative structure for a system of differential equations. If the system has a viscosity or relaxation term which possesses a symmetric property, Shizuta and Kawashima in 1985 introduced a suitable stability condition, called Classical Stability Condition in this article, for the corresponding eigenvalue problem of the system, and derived the detailed relation between the coefficient matrices of the system and the eigenvalues. However, there are some complicated physical models which possess a non-symmetric viscosity or relaxation term, and we cannot apply Classical Stability Condition to these models. Under this situation, our purpose in this article is to extend Classical Stability Condition to such complicated models and to make the relation between the coefficient matrices and the corresponding eigenvalues clear. Furthermore, we shall explain the new dissipative structure through several concrete examples. Introduction We are interested in the profile of solutions for a system of differential equations. To investigate the profile, our first step is to analyze the eigenvalues of the corresponding linearized system. If the coefficient matrices of our system have a good property, it might be easy to analyze the eigenvalue problem. However, there are a lot of physical models which do not have enough properties to analyze the corresponding eigenvalue problem. (We will study several problems in Sections 3 and 4.) Under this situation, we focus on a general linear system with weak dissipation and try to construct a useful condition which induces the notable property of eigenvalues in this article. 
Precisely, we consider a general linear system. Here, u = u(t, x) over t > 0, x = (x1, ..., xn) ∈ Rⁿ is an unknown vector function, and A0, Aj, Bjk and L are m × m constant matrices for 1 ≤ j, k ≤ n and m ≥ 2. Here and hereafter, we use notations that where ω = (ω1, ..., ωn) is a unit vector in Rⁿ, which means ω ∈ Sⁿ⁻¹. Then, throughout this paper, we assume the following condition for the coefficient matrices of (1). Under the symmetric property for B(ω) and L, Umeda et al. [2] and Shizuta and Kawashima [3] introduced the useful stability condition called the Kawashima-Shizuta condition, or Classical Stability Condition in this article. Precisely, they introduced the following conditions. On the other hand, Kalman et al. [4], Coron [5] and Beauchard and Zuazua [6] discussed a different condition for the system (1), called the Kalman Rank Condition, which reads as follows. Under this situation, the following theorem is obtained. Furthermore, if Bjk (1 ≤ j, k ≤ n) is the zero matrix, the above four conditions are equivalent to the following. (v) Classical Kalman Rank Condition (CR) holds. Remark 3. Beauchard and Zuazua [6] considered the system (1) with Bjk ≡ O for 1 ≤ j, k ≤ n, and assumed that L satisfies (5). We note that the assumption (5) is a sufficient condition for L ≥ 0 and Ker(L) = Ker(Lᵀ). Thus, we regard the assumption (5) as an essentially symmetric property. We will discuss this in detail in Lemma 1. We emphasize that the physical examples in Section 4 do not satisfy (4) (and (5)). We remark that the typical feature of the type (1,1) is that the high-frequency part decays exponentially while the low-frequency part decays polynomially with the rate of the heat kernel (see Figure 1). A lot of physical models satisfy these conditions and can be treated by applying Theorem 1. 
For example, the model system of the compressible fluid gas and the discrete Boltzmann equation were studied by Kawashima [7] and by Shizuta and Kawashima [3], respectively. In the last ten years, some complicated physical models which possess the weak dissipative structure called the regularity-loss structure have been studied. For example, the dissipative Timoshenko system was discussed in [8][9][10], the Euler-Maxwell system was studied in [11,12], and the hybrid problem of plate equations in [13][14][15][16]. We would like to emphasize that these physical models do not satisfy (4) but do satisfy Condition (A). Namely, we can no longer apply Theorem 1 to these models. Under this situation, Ueda et al. [1] introduced the new condition called Condition (S) for the system (1) with Bjk ≡ O (1 ≤ j, k ≤ n) as follows. Condition (S): There is a real compensating matrix S with the following properties: (SA0)ᵀ = SA0 and for each ω ∈ Sⁿ⁻¹. Then they derived a sufficient condition, which is a combination of Conditions (K) and (S), for the uniform dissipativity of the type (1,2), which is the regularity-loss type. We remark that the dissipative structure of the regularity-loss type is weaker than that of the standard type. Precisely, Reλ(r, ω) may tend to zero as r → ∞ (see Figure 2). This structure requires more regularity for the initial data when we derive the decay estimate of solutions. This is the reason why this structure is called the regularity-loss type. Indeed, the dissipative Timoshenko system, the Euler-Maxwell system and the thermoelastic plate equation with Cattaneo's law have the weak dissipative structure of type (1,2). For the details, we refer the reader to [8,9,11,12,16]. However, the stability condition constructed in [1] is not enough to understand the regularity-loss structure. In fact, some physical models which possess the regularity-loss structure do not satisfy the stability condition in [1] (e.g., [16][17][18]). 
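Although the displayed definition is not reproduced above, uniform dissipativity of type (α, β) is commonly expressed, in the convention of [1] (stated here as an assumption), by the pointwise eigenvalue bound

```latex
\operatorname{Re}\lambda(r,\omega)\;\le\;-\,\frac{c\,r^{2\alpha}}{(1+r^{2})^{\beta}},
\qquad r\ge 0,\ \ \omega\in S^{n-1},
```

so that for (α, β) = (1, 1) the bound stays away from zero as r → ∞ (exponential decay of the high-frequency part), while for (α, β) = (1, 2) it behaves like −c/r² as r → ∞, which is exactly the regularity-loss behavior described above.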
Moreover, we can construct artificial models which have several kinds of the regularity-loss structure (for details, see [19]). Furthermore, recently, Ueda et al. [20] succeeded in extending Conditions (K) and (S), and analyzed a more complicated dissipative structure. This situation tells us that it is difficult to characterize the dissipative structure of the regularity-loss type. In fact, there is no related result. Under this situation, we try to extend Classical Stability Condition (CSC) and Classical Kalman Rank Condition (CR), and derive sufficient and necessary conditions for the strict dissipativity of (1) in Section 2. Furthermore, we will extend our main theorem to apply to a system under constraint conditions in Section 3. In Section 4, we introduce several physical models and apply our main theorems to them. Finally, we focus on the Bresse system as an interesting application of our main theorems in Section 5. New Stability Criterion We introduce the new stability condition for (1) in this section. The following conditions are important to characterize the dissipative structure of (1). Here and hereafter, we use notations that R+ := (0, ∞) and Indeed, (4) and the second property of (6) give us φ ∈ Ker(B(ω)ᵀ) ∩ Ker(Lᵀ) for each ω ∈ Sⁿ⁻¹. (ii) It is easy to check that the system (1) under Condition (A) satisfies Condition (CSC) if the system is strictly dissipative. Namely, Condition (SC) is a sufficient condition for Condition (CSC). To prove Theorem 2, we shall reduce our system. We introduce the new function ũ := (A0)^{1/2} u. Then (1) is rewritten as (7), where we define Ãj accordingly. Similarly as before, we use notations that Remark that the matrices of (7) satisfy Condition (A) if the matrices of (1) satisfy Condition (A). In this situation, the eigenvalue problem (2) is equivalent to (8). For the problem (8), we consider the contraposition of Theorem 2. 
More precisely, we introduce the complement conditions of Conditions (SC) and (R), and prove the contraposition of Theorem 2. Theorem 3. Suppose that the system (7) satisfies Condition (A). Then, for the system (7), the following conditions are equivalent. Condition (R)′: Then we show that Condition (R) is equivalent to Condition (R)′. Indeed, Condition (R)′ means In the rest of this section, we study the relations between the assumption in Theorem 1 and (5). Lemma 1. Let X be an m × m matrix and m1 ≤ m. Then, is a sufficient condition for X ≥ 0 and Ker(X) = Ker(Xᵀ). New Stability Criterion under Constraint Condition In this section, we consider the system (1) under the constraint condition (14), where Pjk, Qj and R are m̃ × m real constant matrices. In fact, a lot of physical models are described as (1) under (14). For example, the linearized system of the electro-magneto-fluid dynamics and the Euler-Maxwell system are described as (1) under (14). For the details, we refer the reader to [2,12]. Similarly as before, we study the corresponding eigenvalue problem for the system (1) under the constraint condition (14). Namely, we look for the eigenvalues and the eigenvectors of the eigenvalue problem (2) under the condition (15) for r ≥ 0 and ω ∈ Sⁿ⁻¹. Here, we introduce a notation for r ≥ 0 and ω ∈ Sⁿ⁻¹, so that (15) can be expressed as φ ∈ X_{r,ω}. Then, the strict dissipativity and the uniform dissipativity under the constraint condition are defined as follows. Definition 2. (Strict dissipativity and uniform dissipativity under constraint) (i) The system (1) under the constraint condition (14) is called strictly dissipative under constraint if the real parts of the eigenvalues of (2), whose eigenvectors are in X_{r,ω}, are negative for each r > 0 and ω ∈ Sⁿ⁻¹. 
(ii) The system (1) under the constraint condition (14) is called uniformly dissipative under constraint of the type (α, β) if the eigenvalues λ(r, ω) of (2), whose eigenvectors are in X_{r,ω}, satisfy the corresponding bound for each r ≥ 0 and ω ∈ Sⁿ⁻¹, where c is a certain positive constant and (α, β) is a pair of non-negative integers. Under the constraint condition (15), we introduce the modified stability condition and the modified Kalman rank condition as follows. Theorem 4. Suppose that the system (1) satisfies Condition (A). Then, for the system (1) under the constraint condition (14), the following conditions are equivalent. The strategy of proof is almost the same as before. Namely, we consider the contraposition for (7) under (14) as follows. Theorem 5. Suppose that the system (7) satisfies Condition (A). Then, for the system (7) under the constraint condition (14), the following conditions are equivalent. If the constraint is trivial, then X_{r,ω} is equivalent to Cᵐ. Thus Condition (SCC) is equivalent to Condition (SC), and Theorem 4 is also equivalent to Theorem 2. In the rest of this section, we discuss a relation between the constraint condition and the initial data. More precisely, we introduce the following condition. Condition (C): The matrices P(ω), Q(ω) and R satisfy Condition (C) implies that (14) holds at an arbitrary time t > 0 for the solution of (1) if it holds initially. For the details, we refer the reader to [1]. Therefore, it is reasonable for the Cauchy problem to assign the constraint condition (14) which satisfies Condition (C). If we suppose Condition (C) for the system (1) under (14), we can relax Condition (SCC). Remark 6. Theorem 6 tells us that if the system does not satisfy Condition (SC) for some µ ∈ R\{0}, then it is difficult to find a useful constraint condition and apply Condition (SCC). On the other hand, if the system satisfies Condition (SC) for µ = 0, it might be possible to find a useful constraint condition and apply Condition (SCC) (or (MSCC)) to the system. 
We will explain the situation by using concrete examples in Sections 4.3, 4.4, 5.2 and 5.3. Application to Physical Models In this section, we introduce several physical models as applications of Theorems 2, 4 and 6. Timoshenko System In this subsection, as an application of Theorem 2, we consider the following dissipative Timoshenko system (18), where a and γ are positive constants, and φ = φ(t, x) and ψ = ψ(t, x) are unknown scalar functions of t > 0 and x ∈ R. The Timoshenko system above is a model system describing the vibration of the beam called the Timoshenko beam, and φ and ψ denote the transversal displacement and the rotation angle of the beam, respectively. Here we only mention [8,9] for related mathematical results. As in [8,9], we introduce the vector function u = (φx + ψ, φt, aψx, ψt)ᵀ. Then the Timoshenko system (18) is written in the form of (1) with coefficient matrices, where I is the 4 × 4 identity matrix and O is the 4 × 4 zero matrix. Here the space dimension is n = 1 and the size of the system is m = 4. Notice that the relaxation matrix L is not symmetric. From the above matrices, we have, for ω ∈ {−1, 1}, and the relaxation matrix L is decomposed into the sum of two parts. It is obvious that these matrices satisfy Condition (A), and we can apply Theorem 2 to the dissipative Timoshenko system. Corollary 1. The dissipative Timoshenko system (18) satisfies Condition (SC). Therefore, this system is strictly dissipative. Thermoelastic Plate Equation with Cattaneo's Law In this subsection, we consider the following linear thermoelastic plate equation (19) in Rⁿ, where heat conduction is modeled by Cattaneo's (Maxwell's, Vernotte's) law. Here, v describes the elongation of a plate, while θ and q denote the temperature and the heat flux, respectively. For Cattaneo's law, the relaxation parameter τ is a positive constant. We have a lot of known results for the system (19). In particular, the system (19) was analyzed in detail in [16]. 
The authors of [16] obtained the sharp dissipative structure for the system (19), which is also of the regularity-loss type. We can rewrite (19) as a general system (1). To this end, we introduce new functions z and w as z = ∆v and w = vt. Then our equation (19) can be rewritten as (20). Now, we introduce an unknown vector function u = (z, w, θ, q)ᵀ and (n + 3)-dimensional coefficient matrices Aj, Bjk and L, where I is the n × n identity matrix and δjk denotes Kronecker's delta. Then the problem (20) can be rewritten as (1). Remark that the matrices Aj and L are symmetric but Bjk is skew-symmetric. From the above matrices, we obtain the symbols for ω ∈ Sⁿ⁻¹. Under this situation, it is easy to check that our system satisfies Condition (A), and we can get the following property. Coupled System of Wave and Heat Equations We treat a coupled system of wave and heat equations as one of the concrete examples in this subsection: vtt − ∆v + aθ = 0. Here v = v(t, x) and θ = θ(t, x) over t > 0, x ∈ Rⁿ are unknown scalar functions, and a and γ denote constants which satisfy a ∈ R\{0} and γ > 0. The system (21) is one of the typical examples of regularity-loss type equations. Indeed this system was considered in [21], where the authors derived the weak dissipative structure in a bounded domain. Moreover, Liu and Rao [22] analyzed this equation to derive the stability criterion for regularity-loss type problems in a bounded domain. Recently, the author of [23] also considered this problem in Rⁿ and obtained the detailed dissipative structure. To employ our main theorem, we rewrite (21) as a general system. Introduce new functions z and w as z = ∇v and w = vt. Then (21) can be rewritten as (22). Here we remark that, by the fact that z = ∇v, the solution z should satisfy ∂zj/∂xk = ∂zk/∂xj for arbitrary j and k with 1 ≤ j, k ≤ n, where zj denotes the jth component of the vector z. Thus, we assign the constraint condition (23) for the system (22). 
We remark that the constraint condition (23) is trivial in R, and is the same as rot z = 0 in R³. We introduce an unknown vector function u = (z, w, θ)ᵀ and (n + 2)-dimensional coefficient matrices Aj, Bjk and L such that A0 = I, where I is the (n + 2) × (n + 2) identity matrix and δjk denotes Kronecker's delta. Then the problem (22) can be rewritten as (1). We note that the matrices Aj and Bjk are symmetric. However, the matrix L is skew-symmetric. From these matrices, we obtain the corresponding symbols. On the other hand, the constraint condition (23) can be expressed as (14) with Pjk = O, R = O and Q(ω) = Qn(ω), where Q̃n(ω) is defined by Q̃2(ω) = (−ω2 ω1) and analogously for ω ∈ Sⁿ⁻¹ and n ≥ 3. Here, Q̃n(ω) is an n(n − 1)/2 × n matrix, and explicit forms can be written down for small n. We can check that these matrices satisfy Condition (A). Moreover, it is not difficult to check that Qn(ω) satisfies Condition (C). Therefore, we can also apply our main theorems to this problem. Namely, we obtain the following corollary. Euler-Maxwell System As a next application of Theorem 4, we deal with the following Euler-Maxwell system (26). Here the density ρ > 0, the velocity v ∈ R³, the electric field E ∈ R³, and the magnetic induction B ∈ R³ are unknown functions of t > 0 and x ∈ R³. Assume that the pressure p(ρ) is a given smooth function of ρ satisfying p′(ρ) > 0 for ρ > 0, and ρ∞ is a positive constant. The Euler-Maxwell system above arises from the study of plasma physics. The authors of [11,12] derived the asymptotic stability of the equilibrium state and the corresponding decay estimate. Furthermore, they analyzed the dissipative structure and concluded that the Euler-Maxwell system is of regularity-loss type, namely of type (1,2). To get the structure of uniform dissipativity, they applied a complicated energy estimate. On the other hand, we suggest a different approach to obtain information on the dissipative structure of the Euler-Maxwell system in this subsection. 
From the analysis in [11,12], we already know that the system (26) can be written in the form of a symmetric hyperbolic system. Precisely, we introduce u = (ρ, v, E, B)ᵀ and u∞ = (ρ∞, 0, 0, B∞)ᵀ, which are regarded as column vectors in R¹⁰, where B∞ ∈ R³ is an arbitrarily fixed constant. Then the Euler-Maxwell system (26) is rewritten as (28), where the coefficient matrices are given explicitly. Here I denotes the 3 × 3 identity matrix, ξ = (ξ1, ξ2, ξ3) ∈ R³, and Ωξ is the skew-symmetric matrix defined for ξ = (ξ1, ξ2, ξ3) ∈ R³, so that we have ΩξEᵀ = (ξ × E)ᵀ (as a column vector in R³) for E = (E1, E2, E3) ∈ R³. We note that (28) is a symmetric hyperbolic system because A0(u) is real symmetric and positive definite and Aj(u) with j = 1, 2, 3 are real symmetric. Also, the matrix L(u) is non-negative definite, so that it is regarded as a relaxation matrix. Moreover, we have L(u)u∞ = 0 for each u, so that the constant state u∞ lies in the kernel of L(u). However, the matrix L(u) or L(u∞) has a skew-symmetric part and is not real symmetric. Consequently, our system is not included in the class of systems considered in Theorem 1. Next, we consider the linearization of (28) with (27) around the equilibrium state u∞. If we denote u − u∞ by u again, then the linearization of the system (28) with (27) can be written in the form of (1) with (14), where the coefficient matrices are given by Bjk = O and (29), and Pjk = O and (30), where a∞ = p′(ρ∞)/ρ∞ and b∞ = p′(ρ∞) are positive constants. Here the space dimension is n = 3 and the sizes of the systems are m = 10 and m̃ = 2. For this linearized system it is easy to check that the system satisfies Condition (A). Furthermore, using the expression (30), we can also check Condition (C) for the constraint condition. Therefore we can apply Theorems 4 and 6 to (1), (14) with (29), (30), and get the following result. Remark 7. 
When we check Condition (CSC) for the linearized Euler-Maxwell system, we do not need to use the first condition in (32). Bresse System In the last section, we introduce an important application of Condition (SC). The Bresse system is one of the good examples for which Condition (CSC) is not enough to check whether the physical model is strictly dissipative. Dissipative Bresse System We consider the dissipative Bresse system (35), where a, γ, κ1 and κ2 are positive constants, the curvature parameter is a non-zero constant, and φ = φ(t, x), ψ = ψ(t, x) and w = w(t, x) are unknown scalar functions of t > 0 and x ∈ R. If we put this parameter to 0, the dissipative Bresse system (35) is equivalent to the dissipative Timoshenko system (18) together with a simple wave equation. Now, we introduce new functions v := κ1(φx + ψ + w), s := φt, z := aψx; then (35) is rewritten as (36). [Symmetry 2018, 10, 542] Namely the system (36) is described as (1), where u = (v, s, z, y, q, p)ᵀ, and the matrices A0, A1, B11 and L are defined by A0 = I and B11 = O. The space dimension is n = 1 and the size of the system is m = 6. Notice that the relaxation matrix L is not symmetric. It is clear that these matrices satisfy Condition (A). Thus we can apply Theorem 2 and get the following result. Theorem 7. The dissipative Bresse system (35) does not satisfy Condition (SC). Therefore, this system is not strictly dissipative. The proof of Theorem 7 tells us that the real part of some eigenvalue of (2) coming from the dissipative Bresse system (36) touches the imaginary axis at a finite frequency r. Namely, we can expect that the real parts of the eigenvalues are located in the gray region of the corresponding figure. Comparing Corollary 1 with Theorem 7, we can predict that the difficulty of the analysis for (36) comes from the additional coupling terms. Therefore we focus on the effect of these terms and analyze the structure of strict dissipativity in the next subsections. 
Reduced Bresse System (I) Inspired by the analysis in the previous subsection, we set p ≡ 0 in (36) and study the reduced system (39). Then the problem (39) can be rewritten as (1), where u = (v, s, z, y, q)^T, and the matrices are defined by A^0 = I, B^11 = O and the expressions displayed below. Hence, we get the following. It is obvious that the system (39) satisfies Condition (A). In this situation, Theorem 2 yields the following result. Theorem 8. The reduced Bresse system (39) does not satisfy Condition (SC). Therefore, this system is not strictly dissipative. Reduced Bresse System (II) Based on a similar motivation to that of Section 5.2, we instead set q ≡ 0 in (36). This yields the system (45). Here, we note that our problem (1) with (46) satisfies Condition (A). Therefore, we can apply Theorem 2 and get the following result. Theorem 10. The reduced Bresse system (45) does not satisfy Condition (SC). Therefore, this system is not strictly dissipative. Conclusions In this article, we succeeded in introducing new stability conditions. By virtue of Stability Condition (SC), it is easy to check the dissipative structure of the general system (1), and there are many applications. However, if the system has the symmetric property (4), Classical Stability Condition (CSC) is equivalent to uniform dissipativity. Inspired by this situation, we predict that the system (1) is uniformly dissipative under Stability Condition (SC). If this conjecture can be answered in the affirmative, Stability Condition (SC) will be applicable to nonlinear problems.
5,780.6
2018-10-25T00:00:00.000
[ "Mathematics" ]
The Application of the Usability Testing Method for Evaluating the New Student Acceptance (NSA) System The new student acceptance (NSA) system is a system designed to automate the selection of new student admissions, from the registration process through the selection process to the online announcement of the selection results. A usability evaluation was carried out to determine its level of effectiveness, efficiency, and user satisfaction. This study focuses on testing the usability of the NSA system using the usability testing method on the aspects of effectiveness, efficiency, and user satisfaction at SMAN 1 Pringgarata. This study is quantitative, with a descriptive research method approach. The population in this study was 40 people, and the number of samples was eight people. Performance measurement, retrospective think-aloud, and questionnaire techniques were used for data collection. The task scenario and the System Usability Scale (SUS) questionnaire with a Likert scale were the instruments used. Data analysis used descriptive statistics, the user success rate, and the Mann-Whitney U-test. The results showed that (1) the NSA system is effective, with a success rate of 98.5%; (2) the system is efficient, with a Mann-Whitney U value of 6 against a table value of 0; and (3) users are satisfied with the NSA system (SUS score 80 > 68). The recommendation given is the addition of images and icons on the main verification page so that the pages of the system are more interesting and varied. Introduction The rapid emergence of technology forces its users to keep abreast of developments so that they stay up to date. Current technological developments are so swift that users must be ever more active in following new information. These developments mean that information is delivered not only through offline media but also through online media, such as information systems. Online information systems, typically in the form of websites, are used to convey information widely. 
Information systems have had a major influence on the development of technology, including in the world of education. The development of a system that provides electronic services to users, organized by the Ministry of Education and Culture, namely the new student acceptance (NSA) system, is proof of technological development in education. The NSA system is a system designed to automate the selection of new student admissions, from the registration process through the selection process to the online announcement of the selection results. The NSA system has been implemented in several educational institutions, including SMAN 1 Pringgarata. Since the NSA system was implemented at SMAN 1 Pringgarata, much of the school's management has changed, from the data collection of new students to academic information. This makes it easy for the institution to conduct the selection of new students, but problems remain in its use, namely data-entry errors such as duplicate records, and acceptance results that do not comply with the applicable regulations. To improve the quality, profitability and service of the product (system), action is needed to ensure that users will accept the website, so it is important for the website to have good usability [1]. Based on this, in addition to effectiveness, profitability, quality and satisfaction, a usability evaluation is necessary, since it can assess and improve a product's usability; usability evaluation is also an important element of systems development and software development [2]. Usability is a main concept in human-computer interaction (HCI). The ISO/IEC 25000 series of standards was developed to replace and extend ISO/IEC 9126 and ISO/IEC 14598. The main goal of this update is to organize, enhance and unify the concepts relevant to two main processes: software quality requirements specification, and systems and software quality evaluation [3]. 
Usability is typically defined as the "capability of being used"; in other words, the capability of an entity to be used. Standard ISO 9241-11 has been successful in providing an internationally accepted definition of what usability is and of its application in several fields [4]. Usability evaluation helps improve the predictability of user interactions with products, leading to greater productivity with fewer user errors and savings in development time and cost [1]. Usability is one of the important quality characteristics of software systems and products [5]. The methods used to evaluate systems or software include: model/metrics-based, inspection, testing, and inquiry [6]. This study used the testing and inquiry methods. Usability testing is one of the most used methods for determining the level of usability of a software product [7]. Testing methods are used to assess the effectiveness and efficiency of the software or system [8]. Within the testing method, the researchers used the performance measurement technique to measure the effectiveness and efficiency of the NSA system. Meanwhile, the inquiry model uses a questionnaire technique to measure user satisfaction with a product or system that has been applied. Performance measurement is a technique used to obtain quantitative data about the performance of users performing tasks during usability testing; the task processing times are then compared to assess the efficiency of the product or software. The questionnaire technique is used to measure the level of user satisfaction with the software or product [6], [9]. Meanwhile, the level of effectiveness of the product is seen from the user success rate, or completion rate, in completing the tasks given during the usability testing process [10]. 
Several researchers have previously carried out usability evaluations to determine the level of efficiency and effectiveness of a system or software [11], while others have conducted research to determine the level of user satisfaction [9], [16]. Thus, several researchers had previously evaluated the usability of systems or software applied or developed in various fields of industry and education, using several methods and techniques. Meanwhile, this research focuses on evaluating the NSA system using the performance measurement technique and the questionnaire technique to determine the levels of efficiency, effectiveness, and user satisfaction. Method This study is quantitative research using a descriptive method, which focuses on describing phenomena as they exist, in the present or in the past. The location of this study was SMAN 1 Pringgarata, with a population of 40 people, while the number of samples in this study was eight people. According to [17], testing with more than five users tends only to reveal the same problems repeatedly. Therefore, the researchers used eight samples, consisting of four novice users and four expert users, to test the NSA system. The data collection techniques were performance measurement and a questionnaire. The performance measurement technique was used to obtain quantitative data about the performance of users performing tasks during usability testing, in order to assess the efficiency and effectiveness of the implemented system. The task scenario used to measure the performance of this system can be seen in Table 1. Meanwhile, the questionnaire was used to assess user satisfaction with the applied software or product, using the System Usability Scale (SUS) [18]. This study used the SUS questionnaire, which comprises 10 statements rated on a Likert scale. 
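For reference, the SUS questionnaire mentioned above is conventionally scored with Brooke's standard formula: odd-numbered (positively worded) items contribute (response − 1) points, even-numbered (negatively worded) items contribute (5 − response), and the raw sum is multiplied by 2.5 to give a 0-100 score. The sketch below shows this standard scoring; the response vectors are made up for illustration and are not data from the study.

```python
def sus_score(responses):
    """Standard SUS score from 10 Likert responses (1-5), in questionnaire order.

    Odd items (1, 3, 5, 7, 9) score (r - 1); even items score (5 - r);
    the 0-40 raw sum is rescaled to 0-100 by multiplying by 2.5.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A neutral respondent (all 3s) lands exactly at the midpoint:
print(sus_score([3] * 10))  # -> 50.0
```

A mean score at or above 68 is commonly taken as the acceptability threshold, which is the comparison used later in this study (80 > 68).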
Furthermore, the data analysis techniques in this study were descriptive statistics, the user success rate, and the Mann-Whitney U-test. Table 1. Task scenario: (1) Log-in: the user logs in with the username and password that have been provided. (2) Data Read: the user reads the data displayed. (3) Data Verification: the user reads the data details. (4) Export Data: the user receives the exported data. Effectiveness The effectiveness of the NSA system was analyzed using the performance measurement technique, measured by the number of successes (success rate) achieved when respondents completed the task scenario during the usability testing process. The results show that respondents in the novice group completed almost all tasks, although 1 person failed to complete task 3, giving a success rate of 87.5% on that task. Meanwhile, in the expert group, all users were successful (100%) in completing all tasks when using or accessing this system, with no errors. The overall success rate obtained was 98.5%, so the NSA system can be considered effective (see Table 2). These results show that the expert group successfully completed the tasks with no errors. In tasks 1, 2, and 4, novice and expert respondents logged in, read data and exported data successfully. In task 3, namely data verification, 1 respondent from the novice group made a mistake. Based on the retrospective think-aloud conducted by interviewing the respondent in the novice group who made a mistake on task 3, there are two buttons on the verification page whose functions were not yet understood, so respondent 4 mistakenly pressed the verification button. Respondents in the expert group, and the novices who successfully completed the tasks, were of the opinion that the NSA system has buttons that are simple and easy to use, so they stated that they did not need help from others in doing the tasks. 
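The success-rate (completion-rate) measure used above is simply the fraction of completed task attempts. A minimal sketch, with the 7-of-8 example mirroring the task 3 result reported above (one failure among eight respondents):

```python
def success_rate(outcomes):
    """Completion rate (%) for a list of task outcomes (True = completed)."""
    return 100.0 * sum(outcomes) / len(outcomes)

# Task 3: 7 of the 8 respondents completed it successfully.
print(success_rate([True] * 7 + [False]))  # -> 87.5
```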
Efficiency The following findings result from the research conducted for the usability evaluation of the new student acceptance system. The table below presents the results of a study of system efficiency using the performance measurement technique, comparing the working times of expert users and novice users. The data analysis techniques used to determine the efficiency of the NSA system were descriptive statistics and the Mann-Whitney U-test. Descriptive statistics describe the average time difference between users, both novice and expert, during usability testing. Meanwhile, the Mann-Whitney U-test is used to test the hypothesis: if the Mann-Whitney U-test value is greater than (>) the Mann-Whitney table value, then there is no difference in working time between the expert group and the novice group. Table 3 shows the times taken to work on the task scenario using the new student acceptance system. These results show that in the novice group, user 1 completed the tasks in 12.1 seconds, user 2 in 11.9 seconds, user 3 in 11.4 seconds, and user 4 in 12.3 seconds. In the expert group, user 1 completed the tasks in 11.5 seconds, user 2 in 11.3 seconds, user 3 in 11.9 seconds, and user 4 in 12.1 seconds. The results show that the average times of the novice and expert groups are essentially the same; there are differences, but they are not significant. We analyzed the task scenario times using the Mann-Whitney U-test. The results show that the Mann-Whitney U-test value is greater than (>) the Mann-Whitney table value, so there is no difference in working time between the expert group and the novice group (6 > 0) (see Table 4). 
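To make the U-test computation concrete, the pure-Python sketch below computes the Mann-Whitney U statistic (smaller of U1 and U2, with mid-ranks for ties) from the Table 3 times quoted above. This is an illustrative reimplementation, not the study's analysis code; note that the statistic it yields (5.0) differs slightly from the reported value of 6, plausibly due to different tie-handling or rounding conventions.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic (min of U1, U2), using average ranks for ties."""
    combined = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # mid-rank of tied block
        i = j
    r1 = sum(ranks[v] for v in x)            # rank sum of the first sample
    u1 = len(x) * len(y) + len(x) * (len(x) + 1) / 2 - r1
    u2 = len(x) * len(y) - u1
    return min(u1, u2)

novice = [12.1, 11.9, 11.4, 12.3]  # per-user completion times from Table 3
expert = [11.5, 11.3, 11.9, 12.1]
print(mann_whitney_u(novice, expert))  # -> 5.0
```

Either way, the statistic exceeds the critical table value of 0 for two samples of size 4, consistent with the paper's conclusion of no significant time difference.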
Therefore, it can be concluded that the time spent working on the tasks by the novice group and the expert group is not different, even though one respondent in the novice group did not complete task 3, namely data verification. This happened because the respondent felt that there are two buttons on the verification page whose use was not yet understood, so the respondent mistakenly pressed the verification button, which resulted in the individual time spent by respondent 4 being longer than that of the other respondents. Satisfaction The data obtained from the usability evaluation process with the questionnaire technique are the participants' subjective statements about their level of satisfaction in using the new student acceptance system. The scores obtained from the respondents averaged 80 (see Table 5). This score is above the standard SUS threshold. As stated by [11], if the SUS score is 68 or more, the product is considered to fall in the normal category, or eligible for use. Based on these results, we can conclude that users feel satisfied with the NSA system. 
In addition, 73.80% of respondents stated that they would use this system application frequently; 69.98% stated that they were very confident in using the system application; 48.78% stated that they needed to learn before using the system application; 69.79% stated that most people would learn to use this system application quickly; 45.09% stated that they found the system application unnecessarily complex; 76.27% stated that the website application is easy to use; 66.80% stated that the system application is well integrated; 47.55% stated that there is inconsistency in the system application; 72.12% stated that the system application was very cumbersome; and 61.54% stated that they needed the support of a technical person to use the system application (see Figure 1). Recommendation Improvements to the interface design of the NSA system were recommended using the retrospective think-aloud (RTA) technique combined with the SUS questionnaire. This technique was carried out with respondents after the performance measurement process, in which they worked on the task scenarios in the instrument. Based on the results of the performance measurement, the recommended improvements to the interface design of this system are to clarify the coloring of the MS and TMS buttons, so that users no longer make mistakes (errors) during data verification. In addition, a cancel button, as well as images and icons, should be added to the main verification page so that the user interface is more user-friendly, and respondents feel satisfied and comfortable in using the NSA system. 
Conclusion Based on the results of the usability evaluation of the new student acceptance system using the performance measurement and questionnaire techniques, the conclusion of this research is that there is no significant time difference in accessing the NSA system between novice users and expert users: the results show that the Mann-Whitney U-test value is greater than (>) the Mann-Whitney table value, so there is no difference in working time between the expert group and the novice group (6 > 0). Thus, it can be said that the application of the NSA system has been efficient. Testing the NSA system by counting the errors made during task completion gave a success rate of 98.5%, so the system is effective in its use. Meanwhile, the SUS questionnaire results gave a score of 80, which is more than the threshold of 68, so it can be said that users feel satisfied in using this system.
3,375.4
2020-05-01T00:00:00.000
[ "Computer Science" ]
On the Statistical Significance of Foreshock Sequences in Southern California Earthquake foreshocks may provide information that is critical to short-term earthquake forecasting. However, foreshocks are far from ubiquitously observed, which makes the interpretation of ongoing seismic sequences problematic. Based on a statistical analysis, Trugman and Ross (2019, https://doi.org/10.1029/2019GL083725) suggested that as much as 72% of all mainshocks in Southern California are preceded by foreshock sequences. In this study, we reassess the analysis of Trugman and Ross (2019, https://doi.org/10.1029/2019GL083725), and we evaluate the impact of the assumptions made by these authors. Using an alternative statistical approach, we find that only 15 out of 46 mainshocks (33%) are preceded by significantly elevated seismicity rates. When accounting for temporal fluctuations in the background seismicity, only 18% of the analyzed foreshock sequences remain unexplained by the background seismicity. These results imply that even in a highly complete earthquake catalog, the majority of earthquakes do not exhibit detectable foreshock sequences. Introduction Prior to large earthquakes, precursory signals, such as accelerating aseismic creep (see Roeloffs, 2006) and elevated tremor and foreshock activity (Dodge et al., 1996; Jones & Molnar, 1979; Marsan & Enescu, 2012), may be recorded by GPS and seismic stations. However, the majority of observations of transient creep and elevated seismicity have only been identified as precursory signals in hindsight. Without prior knowledge of the timing and location of the mainshock, unambiguous identification of earthquake precursors remains elusive. 
While it has been argued that some, if not most, large earthquakes are preceded by detectable foreshock sequences (Bouchon et al., 2013), (locally) elevated seismicity rates do not uniquely signify the advent of an earthquake, as many of these excursions simply return to the (average) background seismicity rate. These observations have so far prohibited the use of seismicity rates in earthquake forecasting. Recently, Trugman and Ross (2019, T&R from here on) performed an in-depth statistical analysis of the seismicity rates preceding 46 mainshock events in Southern California, employing a highly complete earthquake catalog (the QTM catalog; Ross et al., 2019). By comparing short-term seismicity rates with the background rate over 1 year prior to a selected mainshock, T&R concluded that over 70% of all analyzed mainshocks were preceded by a statistically significant increase in seismicity rates. These authors further alluded to the possibility that in practically all cases foreshock sequences may be detected, provided that the earthquake catalog is sufficiently complete. If this claim holds true, it implies that the nucleation process of (large) earthquakes emits detectable information with potential application in short-term earthquake forecasting. However, the reported results do not immediately illuminate the significance of elevated seismicity rates when viewed against the ubiquitous fluctuations in the background seismicity rate: if similar elevations in the seismicity rate are observed at random 70% of the time, the presence of a foreshock sequence may simply be due to random chance. In this work, we revisit the analysis of T&R to assess the significance of their findings, additionally taking into account fluctuations in the background seismicity rate. We first describe the procedure to reproduce the results reported by these authors and comment on some of the assumptions made in this procedure. 
We subsequently propose alternative approaches that relax these assumptions, and we reassess the statistical significance and interpretation of elevated seismicity rates prior to large earthquakes. We find that the number of mainshocks preceded by statistically significant foreshock activity (taken in a broad sense) is substantially less than that reported by T&R (20-30% versus 70%, respectively). Only about half of these foreshock sequences can be explained by random fluctuations in the background seismicity rate, which suggests that in some cases elevated seismicity rates are uniquely associated with periods preceding mainshocks. Reproducing the Results of Trugman and Ross (2019) To set a reference, we begin by reproducing the results reported by Trugman and Ross (2019). Although the approach taken by these authors has been described extensively in their main manuscript and in their supporting information, certain details of the procedure were omitted for brevity. Below we briefly describe the procedure including these details, based on personal communication with D. Trugman. First, we extract the 46 mainshock events as identified by T&R from the QTM catalog. For each of these events, we collect all earthquakes that are located within a rectangular box extending ±10 km around the mainshock, with no depth cutoff. We then compute the interevent time (IET) between each pair of subsequent earthquakes that occurred within 380 days and up to 20 days prior to each mainshock. A time window of 20 days prior to the main event is excluded to avoid biasing the estimation of the background rate with potential foreshock activity. The distribution of IET tends to follow a gamma distribution, the probability density function of which is defined as f(τ; μ, α) = (μ^α / Γ(α)) τ^(α−1) e^(−μτ), where τ is the interevent time, α the shape parameter, μ the background rate, and Γ(α) the gamma function. The ratio of independent events over the total number of events (including clustered events) is represented by the shape parameter α. 
A shape parameter of α = 1 indicates that all events in the catalog are mutually independent, and accordingly equation (1) reduces to the exponential distribution (associated with a Poisson process). When fitting equation (1) to the earthquake catalog, the maximum likelihood estimate of the background rate is simply μ = α/⟨τ⟩, where ⟨τ⟩ is the mean IET. The corresponding maximum likelihood estimate of α results from maximizing the log-likelihood function (2). Using these maximum likelihood estimates, T&R fitted the gamma distribution to the IETs of events within −380d ≤ t < −20d prior to each individual mainshock, yielding values of μ and α that characterize the background seismicity rate associated with each mainshock event. Next, T&R assumed a Poisson distribution for observing N_obs earthquakes in a given 20-day time interval, the probability mass function of which is P(N; λ) = λ^N e^(−λ)/N!, where λ = μT, with T being the observation time window of 20 days. The probability of observing at least N_obs events over the 20 days preceding the mainshock is given by the survival function (4). Low values of p imply that the number of events in the 20-day window is unlikely to be observed given the background rate μ, and hence are an indication of anomalously high seismicity rates. Following T&R, we adopt a value of p < 0.01 as the threshold for statistical significance of the seismicity rates. Alternative Approaches Based on Random Sampling A key assumption in the above procedure for estimating the expected 20-day event count (i.e., equation (4)) is that the seismicity rate follows a Poisson distribution. This implicitly requires that each event in the analyzed sequence be statistically independent (and accordingly α = 1). While this requirement may be met for individual mainshock events, the QTM catalog contains numerous occasions of (short) aftershock sequences associated with events of magnitude M_w < 4 (which are therefore not considered as mainshocks). 
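The Poisson survival probability described above can be evaluated directly from the Poisson probability mass function, p = 1 − Σ_{k=0}^{N_obs−1} λ^k e^(−λ)/k! with λ = μT. The sketch below implements this; the background rate used in the example is illustrative, not a value fitted to the catalog.

```python
import math

def poisson_p_at_least(n_obs, mu, T=20.0):
    """P(N >= n_obs) for N ~ Poisson(lambda = mu * T): the survival function
    used by T&R to flag anomalously high 20-day event counts."""
    lam = mu * T
    cdf = sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(n_obs))
    return 1.0 - cdf

# Illustrative background rate of 0.5 events/day, i.e. lambda = 10 expected
# events per 20-day window:
print(poisson_p_at_least(5, 0.5))   # most windows reach 5 events (p close to 1)
print(poisson_p_at_least(20, 0.5))  # p < 0.01: flagged as significant
```

The sharp decay of this survival function with increasing N_obs is precisely what the alternative sampling approaches below are designed to relax.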
[10.1029/2019GL086224] To estimate the background seismicity rate based on the interevent times does not require special treatment of correlated events (Hainzl et al., 2006), and so the estimates of μ obtained by fitting a gamma distribution are representative (for any α ≤ 1). On the other hand, to subsequently assess the significance of observing N_obs events in a given time window based on a Poisson survival function, declustering of the earthquake catalog is essential (e.g., Reasenberg, 1985). To circumvent this, we propose two alternative methods below that do not assume Poissonian behavior. First, after fitting the gamma distribution to the IET data (see previous section), we draw N random samples of IET from the resulting probability density function. The total duration Δt of a sequence of N earthquakes is therefore Δt = Σ_{i=1}^{N} τ̂_i, where τ̂_i is a random sample drawn from f(τ; μ, α) (equation (1)). The number of events that are observed within a 20-day time window T is thus defined as the largest value of N for which Δt ≤ T. Note that each sequence begins with an earthquake at t = 0, but that this does not affect the statistics of N. By generating 50,000 realizations of Δt, we obtain a distribution of N, that is, the distribution of the number of earthquakes occurring in a time window of 20 days based on the measured background seismicity rate μ and shape factor α. Since Δt results from the summation of random samples drawn from a gamma distribution, we make use of the following property (for a proof, see Appendix A): Σ_{i=1}^{N} τ̂_i ∼ f(Δt; μ, Nα), that is, the summation of N samples drawn from a gamma distribution itself follows a gamma distribution whose shape factor is multiplied by N. By introducing the criterion that Δt ≈ T = const., it is to be expected that N also follows a gamma distribution (which we will demonstrate numerically in the next section). 
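The first random-sampling approach can be sketched with only the standard library. The gamma parameterization below (shape α, scale 1/μ, so that the mean IET is α/μ, consistent with μ = α/⟨τ⟩) is our reading of equation (1); α, μ, the number of realizations, and the seed are illustrative, not values from the catalog.

```python
import random

def sample_counts(alpha, mu, T=20.0, n_real=5000, seed=42):
    """Monte Carlo distribution of the event count N in a window of length T,
    built by summing IETs drawn from a gamma distribution (shape alpha,
    scale 1/mu) until the cumulative duration exceeds T."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_real):
        t, n = 0.0, 0
        while True:
            t += rng.gammavariate(alpha, 1.0 / mu)  # one random IET
            if t > T:
                break
            n += 1
        counts.append(n)
    return counts

def p_at_least(counts, n_obs):
    """Empirical survival function P(N >= n_obs) from the sampled counts."""
    return sum(1 for n in counts if n >= n_obs) / len(counts)

# With alpha = 1 the IETs are exponential, i.e. a Poisson process with rate mu,
# so the mean count should be close to mu * T = 10 for mu = 0.5:
counts = sample_counts(alpha=1.0, mu=0.5)
print(sum(counts) / len(counts))
```

For α < 1 the sampled counts spread out well beyond the Poisson prediction, which is the effect discussed in the Results section.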
Under the assumption that N is gamma distributed, we fit a gamma distribution to the random realizations of N and calculate the probability of observing at least N_obs events in a 20-day window from the corresponding survival function. A probability lower than the threshold of 0.01 signifies an elevated seismicity rate that is not expected on the basis of the background seismicity rate. Second, while a gamma distribution may be a significant improvement over a Poisson distribution in capturing the IET statistics, it may not be fully adequate in accounting for temporal clustering. Ideally, the true distribution underlying the observed number of earthquakes is sampled to assess the significance of a given earthquake sequence comprising N_obs events. To this end, we create an empirical distribution of N by counting the number of events observed during a random 20-day period within −380d ≤ t < −20d prior to each mainshock event. We draw 50,000 random samples uniformly distributed over the year leading up to the mainshock, which unavoidably oversamples cases in which the background seismicity rate is low (mainshock #14601172, with a total of 18 events, being an extreme example). For any empirical distribution, the discrete survival function can be obtained from the notion that, in an ordered set N, the number of elements N > N_i decreases by 1 for each increment of i, which leads to the survival function (7), with {N} being the number of elements in N (i.e., {N} = 50,000). For an observed number of events N_obs, we bisect the ordered set N to find the smallest i such that N_i ≥ N_obs, and subsequently compute the corresponding p value. Depending on the total number of background events within each mainshock region, the empirical distribution may not be robust or representative of the underlying "true" distribution. Nonetheless, it is useful to compare the results from the empirical distributions with those assuming a parameterized (gamma or Poisson) distribution. 
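The bisection step described above amounts to locating N_obs in the sorted sample and returning the fraction of elements at or beyond that position. A minimal sketch with a toy sample (not the 50,000 catalog draws):

```python
import bisect

def empirical_p_at_least(samples, n_obs):
    """P(N >= n_obs) from an empirical sample of 20-day event counts:
    sort the sample, bisect to the smallest index i with samples[i] >= n_obs,
    and return the fraction of elements at or beyond that index."""
    ordered = sorted(samples)
    i = bisect.bisect_left(ordered, n_obs)
    return (len(ordered) - i) / len(ordered)

# Toy sample of window counts: 3 of the 7 counts are >= 3.
print(empirical_p_at_least([0, 1, 1, 2, 3, 5, 8], 3))
```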
Results The IET distributions and the corresponding fits with the gamma distribution for all 46 events are given in supporting information Figure S1. The values of μ and α obtained from the fitting procedure are listed in Table S1. Examples of four selected events are given in Figure 1. Throughout the remainder of this work, the same four selected events are considered. In most cases, the quality of the fit is good whenever the number of events in the catalog is sufficient to produce robust statistics, indicating that the IET statistics are indeed captured reasonably well by a gamma distribution. The seismicity rate parameters inferred in this study are overall similar to those estimated by T&R, though our values are systematically higher. The minor discrepancies between our estimates of α and those of T&R possibly arise from a difference in the selection procedure for the background seismicity (or foreshocks). Using the maximum likelihood approach, the solution of equation (2) in terms of α is unique, and therefore the differences between the estimates of α in the present study and in T&R must result from a difference in the population statistics of the IETs. One possible origin could lie in the preprocessing of the QTM catalog, as T&R accounted for (occasional) station outages (p.c. D. Trugman), which was not done in this study. Furthermore, we exclude values of τ = 0, that is, events that occur simultaneously in the catalog, since such cases are not permitted by the gamma distribution. Based on random sampling of the best fit gamma distributions, distributions of the 20-day event counts (N) are obtained (see Figure 2). As discussed in section 2.2, N itself approximately follows a gamma distribution, which is in strong contrast to the Poisson distribution assumed by T&R. As can clearly be seen in Figure 2, the probability density given by a Poisson distribution rapidly decays toward zero with increasing N, often well before the median value of N is reached. 
This has tremendous implications for the resulting p value estimates, as the integral over the tail of the Poissonian N-distribution is effectively zero for any range of observed N. Somewhat surprisingly, even though a gamma distribution describes the IETs well, the resulting distribution of N obtained in this way does not match the empirical distribution of N. As is clearly seen in Figure 2 (and Figure S2), the empirical distributions are often much broader and more uniform than predicted on the basis of the IETs. The mismatch between the two sampled distributions is unlikely to be attributable to the number of events and the statistical robustness of the empirical distribution, since event #37374687 comprises close to 25,000 background events. Nonetheless, the heavy tails of the empirical distributions further disqualify a Poisson distribution as a description of the 20-day event counts. To highlight this, we plot the probability curves (i.e., survival functions) of observing at least N events in a 20-day window for the four selected mainshock events (Figure 3). While the survival functions based on the gamma and empirical distributions plotted in Figure 2 remain well above the significance threshold in three out of four cases, the survival function based on the Poisson distribution (as adopted by T&R) sharply drops to zero in all selected cases, suggesting that none of the observed earthquake sequences can be attributed to the background seismicity (i.e., p < 0.01 in all cases). In our analysis based on the gamma distribution, only 15 out of 46 mainshocks (32.6%) are characterized by elevated seismicity rates that are statistically significant (p < 0.01). This estimate decreases to 10 out of 46 (21.7%) when interrogating the empirical distributions. 
Temporal Fluctuations of Background Seismicity While the analysis of the seismicity over the 20 days prior to each mainshock has indicated that 33% of all mainshocks exhibit significantly elevated seismicity rates (compared to the average background rate), it is at this point not known whether this enhanced seismicity is uniquely associated with the mainshock event, or whether similar excursions from the background seismicity also occur in the absence of mainshock events. These excursions are expected to occur either in the form of swarm activity, or as aftershocks of smaller (M_w < 4) earthquakes. To this end, we compute the p value at a given moment in time by sampling the gamma distribution (as described in section 2.2) over a 20-day sliding window. This sliding window traverses the full duration of 380 days prior to the mainshock with strides of 1 day, so that the seismicity rate over this period is continuously evaluated relative to the (average) background seismicity rate. As before, we consider a p value less than 0.01 to define a statistically significant increase in seismicity. The result of this analysis for the four selected mainshocks is presented in Figure 4 (the analysis of all 46 mainshocks is given in Figure S3). The variability in the background seismicity rate can be quantified by computing the fraction of time windows for which p < 0.01, for all mainshock events combined. Doing so gives a fraction of 15.7%; that is, at any given moment in time there is a 15.7% probability of observing an elevated seismicity rate. When comparing this value with the estimate that 33% of all mainshocks are preceded by significantly elevated seismicity rates, we conclude that about half of these "foreshock sequences" would have been expected purely on the basis of a fluctuating background rate. This leaves only a small number of mainshocks (about 8) in the catalog that are truly preceded by a foreshock sequence associated with the nucleation process of the earthquake. 
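The 1-day-stride sliding-window event count that underlies this analysis can be sketched as follows; the toy event times and window bounds are illustrative, not catalog data.

```python
import bisect

def sliding_counts(event_times, t_start, t_end, window=20.0, stride=1.0):
    """Event count in each window [t, t + window], advanced in strides of
    `stride` days over [t_start, t_end], as in the sliding-window analysis."""
    times = sorted(event_times)
    counts = []
    t = t_start
    while t + window <= t_end:
        lo = bisect.bisect_left(times, t)            # first event at or after t
        hi = bisect.bisect_right(times, t + window)  # first event after t + window
        counts.append(hi - lo)
        t += stride
    return counts

# Toy catalog: two early events and one at day 25; the first window [0, 20]
# contains the two early events.
print(sliding_counts([0.5, 1.5, 25.0], t_start=0.0, t_end=40.0)[0])  # -> 2
```

Each count would then be converted to a p value against the sampled background distribution, and the fraction of windows with p < 0.01 gives the background variability quoted above.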
In a way, the analysis of the background fluctuations gives a measure of the "false positive" rate, as none of the time windows that exhibit a significantly elevated seismicity rate are associated with a mainshock (according to the definition of T&R). From a brief sensitivity analysis (see Text S1; Agnew & Jones, 1991; Michael & Jones, 1998), we find that the false positive rate trades off almost perfectly with the detection rate, so that, after correction for the background fluctuations, the percentage of mainshocks with statistically significant foreshock sequences falls in the range of 10% to 20%. Statistical Significance of Foreshock Activity When relaxing the assumption of Poissonian behavior, and when considering the temporal fluctuations in the background seismicity rate, only a minority of mainshock events is preceded by foreshock sequences. This is in strong contrast to the 70% estimated by T&R, who further alluded to the possibility of detecting more foreshock sequences in catalogs of higher completeness. The sample of mainshock events is small (46 in total), so it can be questioned whether the results presented here are significant given the variation between mainshocks. However, a first-order sensitivity analysis (see Text S1) suggests that the results are robust with respect to several arbitrary choices. Moreover, our findings are within the (wide) range of estimates given by previous studies (e.g., Abercrombie & Mori, 1996; Chen & Shearer, 2016; Jones & Molnar, 1976; Reasenberg, 1999), providing some confidence in the approach adopted in this study. The quantitative differences between this study and that of T&R can also be seen qualitatively by considering the seismicity over the full 380 days preceding the mainshock.
For instance, events #37374687 and #37644544 (top and bottom rows in Figure 3) seem to exhibit seismicity rates over the 20 days immediately prior to the mainshock that are not unusually high considering the preceding 380 days. It is therefore counterintuitive that the p value should be practically zero, as given by the Poissonian survival function. On the other hand, event #14571828 (second row in Figure 3) does seem to exhibit a short burst of seismic activity on the same day as the mainshock event. Unfortunately, owing to the extent of the time window over which the p value is considered (20 days), this clear burst of activity is too short in duration to bring the p value below the significance threshold. Therefore, to decide whether or not a particular mainshock exhibits significant foreshock activity, visual inspection of the catalog is recommended, rather than relying purely on the computed p values. Alternatively, other seismological metrics, such as a changing Gutenberg-Richter b value, could be considered to alleviate the limitations of a fixed temporal window (Gulia & Wiemer, 2019). Conclusions In this study, we reassessed the significance of foreshock activity in Southern California, following the analysis of Trugman and Ross (2019). While the characterization of the background seismicity rate based on the interevent time (IET) method is valid, the subsequent assumption that all earthquakes in the catalog are statistically independent (and may therefore be described by a Poisson distribution) is overly restrictive. Consequently, the number of events expected to be observed in a 20-day time interval directly prior to a mainshock event is severely underestimated, and the number of mainshocks exhibiting statistically significant foreshock activity is overestimated.
Based on random sampling approaches that do not invoke the assumption of Poissonian behavior, we estimate that only 33% (15 out of 46) of all mainshocks are preceded by elevated seismicity rates, while about half of that fraction is a priori anticipated based on the ubiquitous fluctuations in the background seismicity rate. In other words, we expect that only about 15% of all mainshocks exhibit a foreshock sequence uniquely associated with the earthquake preparation process.
4,641
2019-11-04T00:00:00.000
[ "Geology" ]
Characterization of B0-field fluctuations in prostate MRI Abstract Multi-parametric MRI is increasingly used for prostate cancer detection. Improving information from current sequences, such as T2-weighted and diffusion-weighted (DW) imaging, and additional sequences, such as magnetic resonance spectroscopy (MRS) and chemical exchange saturation transfer (CEST), may enhance the performance of multi-parametric MRI. The majority of these techniques are sensitive to B0-field variations and may result in image distortions including signal pile-up and stretching (echo planar imaging (EPI) based DW-MRI) or unwanted shifts in the frequency spectrum (CEST and MRS). Our aim is to temporally and spatially characterize B0-field changes in the prostate. Ten male patients are imaged using dual-echo gradient echo sequences with varying repetitions on a 3 T scanner to evaluate the temporal B0-field changes within the prostate. A phantom is also imaged to consider no physiological motion. The spatial B0-field variations in the prostate are reported as B0-field values (Hz), their spatial gradients (Hz/mm) and the resultant distortions in EPI based DW-MRI images (b-value = 0 s/mm2 and two oppositely phase encoded directions). Over a period of minutes, temporal changes in B0-field values were ≤19 Hz for minimal bowel motion and ≥30 Hz for large motion. Spatially across the prostate, the B0-field values had an interquartile range of ≤18 Hz (minimal motion) and ≤44 Hz (large motion). The B0-field gradients were between −2 and 5 Hz/mm (minimal motion) and 2 and 12 Hz/mm (large motion). Overall, B0-field variations can affect DW, MRS and CEST imaging of the prostate. Introduction Prostate cancer (PCa) is the second largest cause of male cancer deaths in the UK (Caul and Broggio 2016) making PCa assessment a necessity. 
Following clinical suspicion of localised PCa, it is common practice to use diagnostic multi-parametric magnetic resonance imaging (mpMRI) combined with standardised reporting such as the Likert score (Dickinson et al 2013) or PI-RADS version 2.1 (Turkbey et al 2019). mpMRI involves T2-weighted (T2W), dynamic contrast-enhanced (DCE) and diffusion-weighted (DW) MRI. Although mpMRI may prevent 27% of men from having invasive biopsies, its specificity is only 41% compared to 96% for the biopsies (Ahmed et al 2017). Improving the quality of existing imaging sequences in mpMRI and adding extra information using other MRI techniques (such as magnetic resonance spectroscopy (MRS) and chemical exchange saturation transfer (CEST)) (Jia et al 2011, Li et al 2013, Roethke et al 2014) can potentially enhance PCa assessment. Echo planar imaging (EPI) based DW-MRI sequences are an integral part of mpMRI due to their high tumour contrast and short acquisition time (Kirkham et al 2013, Metens et al 2012). However, they often exhibit shifts, shears and geometric distortions in the phase encoding (PE) direction, caused by a combination of low bandwidth in the PE direction and the presence of off-resonance effects, such as B0-field inhomogeneities and susceptibility differences at tissue-air interfaces (e.g. the rectum-prostate interface). Stretching distortions result from regions where the gradient of the B0-field lies along the PE direction, and pile-up distortions occur when the in-plane gradient of the B0-field opposes the PE direction (Jezzard and Balaban 1995, Jezzard 2012). EPI-based DWI, CEST and MRS are prostate MR techniques that are affected by B0-fields. A B0-field map can be calculated from the phase difference between the two echoes of a dual-echo gradient echo scan. In a distorted EPI image, this field map can be used in a correction scheme to move the warped EPI image pixels into their correct positions.
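The field-map relation just described (the off-resonance frequency equals the echo-to-echo phase difference divided by 2πΔTE) can be sketched as follows; this is an illustrative reimplementation using the study's ΔTE of 2.3 ms, not the scanner's actual reconstruction code.

```python
import numpy as np

def b0_map_hz(phase_te1, phase_te2, delta_te_s):
    """B0-field map (Hz) from the phase images of a dual-echo gradient echo
    scan: delta_f = delta_phi / (2*pi*delta_TE), with the phase difference
    wrapped into (-pi, pi]. The maximum unaliased offset is 1/(2*delta_TE),
    i.e. about +/-217 Hz for delta_TE = 2.3 ms."""
    dphi = np.angle(np.exp(1j * (phase_te2 - phase_te1)))  # wrap to (-pi, pi]
    return dphi / (2.0 * np.pi * delta_te_s)

# Synthetic check: a 10 Hz off-resonance voxel with TE1 = 4.6 ms, dTE = 2.3 ms.
delta_te = 2.3e-3
true_hz = 10.0
p1 = 2 * np.pi * true_hz * 4.6e-3
p2 = 2 * np.pi * true_hz * (4.6e-3 + delta_te)
print(b0_map_hz(p1, p2, delta_te))  # ≈ 10.0
```

In practice the phase images would be 2D or 3D arrays, and the same expression applies voxelwise.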
Such distortion correction methods based on the spatially varying B0-field maps are either simple to use (Jezzard and Balaban 1995, Jezzard 2012) and/or can correct for difficult distortions (pile-ups) (Usman et al 2018), especially in the prostate. However, potential temporal B0-field changes due to patient motion (Alhamud et al 2016) can result in incorrect pixel shifts across a DW dataset, leading to an inaccurately computed apparent diffusion coefficient (ADC) map, possibly hindering PCa assessments (Nketiah et al 2018). Temporal B0-field changes may cause incorrect frequency shifts in CEST (Sun et al 2007), whereas in MRS both temporal changes and spatially varying B0-fields may cause spectral line broadening (Scheenen et al 2007); these result in overlapping signals, leading to a loss of accuracy of the imaging method. Hence, knowledge of the B0-fields is important in prostate MRI. The purpose of this paper is to characterize B0-fields within the prostate by providing a measure of temporal changes in B0-field values (Hz) over a specific time (minutes), as well as measures of the spatial B0-field, such as representative B0-field values within the prostate, their spatial gradients (Hz/mm) and their impact on distortions in EPI images. Our findings may inform the MR community when developing sequences and processing methods for prostate MRI, particularly those involving DW-MRI, CEST and MRS. Materials and methods All experiments were performed on a 3 T Philips Achieva TX system (Philips Healthcare, Best, The Netherlands) equipped with a 16 anterior and 16 posterior channel cardiac receive coil array. Images were acquired for ten male patients and a prostate phantom to differentiate observations resulting from physiological motion. The study was approved by the London-Central Research Ethics Committee (REC# 16/LO/1440) and all subjects gave informed consent.
Prostate phantom 50 g of agarose was stirred into 2.1 l of tap water at room temperature and heated until the agarose dissolved. Half of the mixture was poured into a plastic container (Sainsbury's Home Klip Lock Storage Square 5 l, dimensions 24 × 24 × 12.5 cm) and allowed to cool, whilst the remaining half was gently heated. The container held a drinking glass (dimensions 3 × 5 cm), which was filled with weights to prevent it from floating. Similar to Bude and Adler (1995), once the first layer of agarose had lightly set, a peeled kiwi fruit (the 'prostate' phantom (Mueller-Lisse et al 2017)) was placed on top of the layer near the glass. The remaining mixture was poured into the container and allowed to set overnight; 4-5 h prior to the experiment, the glass was removed to create the air-filled 'rectum'. The prostate phantom is shown in figure 1. Subjects Ten male patients (median (range) weight 84 (68-98) kg and age 68 (57-79) years) were recruited from the clinical prostate imaging pathway. Patients were placed in a supine, feet-first position in the scanner and imaging was carried out during free breathing for all patients. No antispasmodic agent was administered. Patient 2 had previously been treated with High Intensity Focused Ultrasound (HIFU) therapy and patient 3 had eaten 15 min prior to the scanning session. Imaging Temporal and spatial characterization of B0-fields was carried out using dynamic fast dual-echo gradient echo (FFE) sequences. Axial images were acquired using sequences with the following parameters: flip angle = 6°, first echo time (TE) = 4.6 ms, TE difference = 2.3 ms, repetition time (TR) = 8.6 ms, axial field-of-view (FOV) = 230 × 230 mm2, number of slices dependent on the prostate size of the patient, slice thickness = 4 mm, volume shim and right-to-left PE direction. The dynamic B0-field maps were automatically computed by the scanner in Hz.
Temporal B0-field variations were evaluated for different time scales. A single-slice 2D scan acquired every 1.75 s over 53 s was used as a short time scale. For longer time scales (≥150 s), multiple 3D sequences with varying SNR were compared and only one was chosen for subsequent analysis. SNRs were varied by changing the bandwidth and voxel size, and the resultant SNR change was estimated by the Philips scanner. Table 1 summarises the sequence parameters for the different gradient echo sequences.
Figure 1. Photo of the prostate phantom. The prostate phantom consists of a peeled kiwi (the 'prostate') and a cylindrical air gap (the 'rectum'), both embedded in the agarose. The phantom is scanned in a position similar to a patient lying supine in the foot-to-head direction, i.e. the kiwi is anterior to the air-filled cylinder. The red cross demonstrates the direction of the main static B0-field of the MR scanner relative to the phantom.
B0-field maps were also related to the distortions in EPI based DW-MRI images. As the distortions are linked to the imaging gradients and not the diffusion encoding gradients, two EPI sequences with only the b = 0 s/mm2 acquisition of a DW sequence were used with opposite PE gradients: one with anterior-to-posterior PE direction (PE:AP) and vice versa (PE:PA). The remaining DW sequence parameters are: resolution = 2 × 2 × 4 mm3, FOV = 180-220 × 180-220 × 4 mm3, SENSE factor = 2, TR = 2000 ms, TE = 80 ms, bandwidth in the PE direction ∼21 Hz/pixel (10.5 Hz/mm). For reference purposes, an axial T2W image was acquired using a turbo spin echo sequence with the following parameters: resolution = 2 × 2 × 4 mm3, FOV = 180-220 × 180-220 × 60-92 mm3, SENSE factor = 2, TR = 4700 ms, TE = 100 ms. Image analysis A single slice of the 3D gradient echo magnitude image from sequence 3 was chosen such that it was closest to the single 2D slice from sequence 1.
ROIs were placed using the magnitude images and the reference T2W image to best visualise the prostate position by a radiologist with 25 years of experience. Inspection of all datasets did not suggest that prostate motion caused the ROI to include non-prostate areas. However, if severe physiological motion were to occur, the ROIs could be shifted out of the prostate, introducing errors into the analysis. The B0-field values within the ROI were extracted for each case to characterise the temporal B0-field variation. The spatial B0-field variation across the prostate was characterized in three separate slices: the original mid-axial slice and two additional slices inferior and superior to the mid-axial slice. The centre-to-centre separation between the slices is 8 mm. The pixelwise B0-field values within the ROI of the slices were extracted from the first dynamic of Sequence 3 in table 1. Line profiles within the prostate ROI (posterior to anterior (PA) and right to left (RL)) were also drawn to evaluate the B0-field values across the prostate for the mid-axial slice. Additionally, gradients of the B0-field in the anterior-posterior direction were computed for the prostate ROIs at the three slices. Gradient values at the posterior edge of the prostate (where the B0-field varied considerably) were recorded by selecting the last three rows of pixels within the ROI at the posterior of the prostate. A two-sided Wilcoxon signed rank test was used to determine whether the B0-field gradients at the posterior edge were significantly different from zero, and the sign of the gradient was noted. Distortions in the reverse phase encoded DW images were then compared to these B0-field gradients using the T2W image as reference.
Figure 2. Images of the B0-field map for a single axial slice of the phantom (first row) and two example patients (patients 5 (second row) and 7 (third row)). Images displayed are from sequences 1 and 3 from table 1.
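The gradient extraction and signed-rank test described above might look like the following sketch; the ROI here is synthetic (a nominal 4 Hz/mm anterior-posterior ramp plus noise) rather than patient data, and `scipy` is assumed for the Wilcoxon test.

```python
import numpy as np
from scipy.stats import wilcoxon

def posterior_edge_gradients(b0_roi_hz, pixel_mm=2.0, n_edge_rows=3):
    """Pixelwise B0-field gradient (Hz/mm) in the anterior-posterior
    direction (image rows), keeping the last rows at the posterior edge."""
    gy = np.gradient(b0_roi_hz, pixel_mm, axis=0)  # Hz/mm along AP
    return gy[-n_edge_rows:, :].ravel()

# Synthetic ROI: a linear 4 Hz/mm AP ramp plus noise.
rng = np.random.default_rng(2)
rows, cols, pix = 12, 10, 2.0
b0 = 4.0 * pix * np.arange(rows)[:, None] + rng.normal(0, 0.5, (rows, cols))
edge = posterior_edge_gradients(b0, pixel_mm=pix)
stat, p = wilcoxon(edge)  # two-sided test against a zero median gradient
print(round(float(np.median(edge)), 1), p < 0.01)
```

With a genuine ramp the test rejects the zero-median hypothesis; with a flat ROI it would not, which is the distinction drawn in the results between significantly positive, negative and zero gradients.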
Results Figure 2 displays example B0-field maps for Sequence 1 and Sequence 3 from table 1. The B0-field map shows a large variation in the B0-field across the image plane, dependent on the material/tissue type. It also visually demonstrates that low SNR leads to an apparent increase in B0-field variation. The first row in figure 3 demonstrates the changes in B0-field within the ROI of the phantom and two example patients (patients 5 and 7) over the duration of the dynamic sequences 1 and 3 from table 1. Sequence 1 of figure 3 shows that the B0-fields are consistent throughout the duration of the sequence (51 s) for the phantom and patient 7; for patient 5, however, large fluctuations occur initially, potentially due to rectal size changes. As expected, the SNR increases for the 3D sequence and the range of measured B0-field values within the prostate reduces. While only results from two example patients are shown here, the measured distribution was consistently smaller in the 3D sequence for all patients. Unlike the patients, the measured B0-field range is higher in the 3D sequence for the phantom, possibly because the underlying signal from the kiwi phantom was lower in the 3D sequence. The second row in figure 3 summarises the changes in the median B0-field of the ROI across time for the phantom and all patients. The largest range of the median B0-field of the ROI is observed for patient 5 (52 Hz across ≈1 min) in Sequence 1. However, the ranges of the median B0-field values, which indicate the temporal changes, are much lower across the other patients; their minimum-maximum ranges are 2.5-14 Hz (Sequence 1, i.e. over a duration of 0.9 min) and 1.4-19 Hz (Sequence 3, i.e. across 2.8 min). The B0-field ranges are smaller for the phantom (between 2.0 and 3.6 Hz over a duration of <3 min) regardless of the sequence used.
Figure 4 summarises the B0-field distribution within the prostate ROI for all patients for the first dynamic scan of Sequence 3. The minimum and maximum median B0-field values across the prostate are between −25 and 6.3 Hz for all patients except patient 9. The interquartile ranges (IQRs) are ≤18 Hz for all patients except patient 2 (whose prostate contains a fluid-filled region following HIFU treatment) and patients 8 and 9 (large bowel motion was observed during Sequence 3 and could also have occurred within its first dynamic for both patients), for whom the interquartile ranges are as high as 44 Hz. Figures 5(b) and (c) demonstrate an example of B0-field gradients and their effects on two reverse phase encoded non-diffusion-weighted images in comparison to the reference T2W image (figure 5(d)). Figure 5(e) shows the B0-fields along the PA profile for the mid-axial slice of the prostate, where the B0-fields increase/decrease until they reach similar values at the anterior of the prostate. In contrast, the B0-field profiles along RL were generally flat with small fluctuations (results not shown). Figure 5(f) displays the range of numerical gradients at the posterior edge of the prostate for the same slices as in figure 4. The values range approximately from −20 to 20 Hz/mm. Significantly positive, negative and zero B0-field gradients are observed for ∼50%, <15% and ∼30% of the dataset, respectively. Additionally, for some patients (patients 1, 3 and 6), the polarity of the gradients varies between slices of the same prostate. Visual comparison of the B0-field gradients to the distortions observed in the DW images with respect to the T2W images shows that negative B0-field gradients correspond to pile-up distortions and positive gradients to stretching distortions when imaging in the PE:AP direction, and vice versa in the PE:PA direction.
As expected, patients 4, 5 and 10 have small B0-field gradients and show little distortion in their DW images. Discussion In this study, we characterized the temporal and spatial variations in the B0-field. The temporal B0-field changes in the prostate are higher in patients than in the phantom. Typically, B0-field values fluctuated by 1-19 Hz over a time period of <3 min, and in-plane median B0-field values in the prostate were between −25 and 6 Hz (with an interquartile range of up to 18 Hz) for cases of very little to no bowel motion. In an EPI based DW-MRI dataset acquired with a PE bandwidth of 21 Hz/pixel (10.5 Hz/mm) on a 3 T MR scanner, these correspond to shifts of 0.1-0.9 pixels or 0.1-1.8 mm (compared to <0.2 pixels (<0.3 mm) for the phantom) between subsequent DW measurements, and an additional shift of <1 pixel (<2 mm) in each DW measurement. For larger B0-field changes (for instance when a fluid-filled lesion was included in the prostate ROI or when large bowel motion occurred), shifts between subsequent DW measurements can be 1-2.5 pixels or 2-5 mm (with an additional average shift of ≈2 pixels (≈4 mm) within the prostate per measurement), resulting in misaligned 'corrected' DW data and miscalculated ADC maps. In the 30 slices analysed over the 10 patients as part of this study, stretching occurred more frequently than pile-up distortion at the posterior edge of the prostate when the patients were imaged supine, feet first and with phase encoding in the AP direction than when the phase encoding direction was reversed. Stretching can be easier to correct (Jezzard and Balaban 1995, Embleton et al 2010), or less harmful to image interpretation, so consideration of the phase encode direction may be beneficial. Other than EPI based DW-MRI, the B0-field changes in the prostate area can potentially affect other prostate MR modalities.
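The displacement arithmetic above, the off-resonance divided by the PE bandwidth per pixel, is a one-line calculation; the default bandwidth and pixel size are the values quoted in this paragraph.

```python
def epi_shift(delta_b0_hz, pe_bandwidth_hz_per_pixel=21.0, pixel_mm=2.0):
    """EPI displacement in the phase-encode direction caused by an
    off-resonance of delta_b0_hz, given the PE bandwidth per pixel."""
    pixels = delta_b0_hz / pe_bandwidth_hz_per_pixel
    return pixels, pixels * pixel_mm

# The study's upper bound for minimal bowel motion: 19 Hz.
px, mm = epi_shift(19.0)
print(round(px, 2), round(mm, 1))  # prints: 0.9 1.8
```

The same call with 44 Hz (the large-motion interquartile range) gives roughly 2 pixels, or about 4 mm, matching the per-measurement shift quoted above.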
For instance, in CEST, heterogeneous spatial B0-fields can alter the z-spectrum but can be corrected using B0-field maps computed from pre-acquisition methods (Kim et al 2009, Schuenke et al 2017). However, CEST imaging is lengthy (≈3-6 min for a single slice (Evans et al 2019, Liu et al 2019)). Temporal B0-field changes of 30-50 Hz (0.23-0.40 ppm) spanning 1-3 min (observed in figure 3 on a 3 T MR scanner) and potential system drift (≈10 Hz) (Liu et al 2019, Windschuh et al 2019) may lead to wrongly corrected z-spectra, possibly increasing the chance of overlapping CEST signals from amides (≈3.5 ppm) and fast exchanging amines (≈3 ppm) and reducing the specificity of the method (Zhang et al 2018) to detect protein levels that are linked to PCa (Jia et al 2011). Another important prostate MR modality is MRS. If the B0-field within the prostate is shimmed perfectly to allow accurate water and fat suppression, the spectral data should show four frequency peaks: choline-containing compounds (3.2 ppm), polyamines (3.1 ppm), creatine (3.0 ppm) and citrate (2.5-2.8 ppm) (Li et al 2013). However, our findings suggest that after volume-based shimming, B0-field values can change by up to 0.15 ppm (19 Hz) within 1-3 min, and the range of B0-field values within the prostate can be up to 0.14 ppm (18 Hz) for minimal bowel motion. For large bowel motion, the values are much higher, both temporally (≥0.23 ppm or ≥30 Hz over ≈1-3 min, from figure 3) and spatially (≤0.35 ppm or ≤44 Hz, from figure 4). These may cause spectral line broadening of the metabolites, preventing accurate assessment of the citrate and choline concentrations, the main metabolites for determining PCa. In this study, we purposely used realistic imaging parameters. Even with the largest pixel size of 2 mm in the B0 map, acquisition times were too long to correlate with the breathing and cardiac cycles.
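The Hz-to-ppm conversions used in this paragraph follow from the proton Larmor frequency at 3 T (γ/2π ≈ 42.58 MHz/T, i.e. ≈127.7 MHz):

```python
# Proton gyromagnetic ratio / 2*pi: 42.577 MHz/T; at 3 T this gives ~127.73 MHz.
LARMOR_3T_MHZ = 42.577 * 3.0

def hz_to_ppm(delta_hz, larmor_mhz=LARMOR_3T_MHZ):
    """Convert a B0 offset in Hz to ppm of the scanner centre frequency."""
    return delta_hz / larmor_mhz

print(round(hz_to_ppm(19.0), 2))  # prints: 0.15  (the value quoted in the text)
print(round(hz_to_ppm(30.0), 2))  # prints: 0.23
```

Since ppm scales with field strength while a given susceptibility-induced offset in ppm is field-independent, the same bowel-motion effect would correspond to roughly half the Hz offset at 1.5 T.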
However, temporal B0-field fluctuations were lower for the stationary phantom, suggesting that physiological motion affects the prostate. No antispasmodic agent was administered for this study. It is possible that antispasmodics, as often used for clinical scans to reduce bowel motion, could reduce the B0-field variations. However, the effectiveness of the drug can be variable and short-lived (Roethke et al 2013, Slough et al 2018), hence we would still expect some variation in B0-fields near the rectum after administration of antispasmodic agents. A phased array coil was used for prostate imaging in this study. Prostate imaging is also possible through the use of endorectal coils (ERC) with PFC or barium sulfate to reduce susceptibility differences (Rosen et al 2007). They may offer lower spatial field variation and lower temporal field variation (Husband et al 1998), but at the expense of patient comfort. A recent comparison study suggested that there is little difference in cancer detection using either a body phased array coil or the ERC (Tirumani et al 2019). Recent heavy activity on the MR scanner could potentially make our results specific to the Philips Achieva MR scanner. A frequency drift of ∼10 Hz, caused by heating effects, can be expected on a 3 T Philips MR scanner when using rapid gradient switching sequences such as EPI in combination with diffusion gradients associated with high b-values (Liu et al 2019, Vos et al 2017). If the frequency is not re-adjusted, the effect is a constant offset to the B0-field. This does not cause image distortions, but in EPI it leads to an image shift in the phase-encode direction. However, our B0-field maps (acquired with FFE sequences, a less intense sequence than EPI) show temporal variations of >10 Hz, suggesting that our findings would hold regardless of the drifts. Additionally, it would be interesting to perform this study on other MR scanners to test the reproducibility of our results.
Our study produced a prostate phantom to simulate the artefacts in DW-MRI based on B0 variations in the absence of physiological motion. The phantom geometry resembled an axial slice of the prostate and created similar B0-field maps and resultant distortions. Although the measured T1 and T2 of the agar (T1 ∼1800 ms and T2 ∼60 ms) and kiwi regions (T1 ∼1600-1900 ms and T2 ∼200-400 ms) in the phantom (data not shown) were not very similar to those of the prostate (T1 ∼1400-1700 ms and T2 ∼80 ms) and its surrounding organs (T1 ∼900-1500 ms and T2 ∼27-44 ms) (Bojorquez et al 2017), we do not expect these values to affect the B0-field maps and distortions. This phantom is easy to create, similar to Bergen et al (2020), and may be useful for testing implementations of new DW-MR sequences (Hutter et al 2017, Kakkar et al 2017) on clinical scanners. Finally, we would like to offer some guidelines that may help with prostate MRI:
• The temporal change in B0-field can be 1-19 Hz with minimal bowel motion and 30-50 Hz with large bowel motion, over durations of 1 to 3 min.
• Median B0-field values in the prostate can be between −25 and 6 Hz, with an interquartile range of ≤18 Hz for minimal bowel motion and ≤44 Hz for large bowel motion.
• The average B0-field gradient at the posterior edge of the prostate can range from −2 to +5 Hz/mm in the presence of no/small bowel motion and from +2 to +12 Hz/mm for large bowel motion.
• In this study, EPI using a phase encoding gradient that is positive in the anterior-to-posterior direction gave more images with stretch distortions than pile-up. As stretch distortions are easier to correct, and may be less intrusive than pile-up, further consideration of the phase encode gradient sign may be beneficial.
Conclusion Overall, this study should inform decisions for prostate MRI applications based on CEST, MRS and, more specifically, EPI based DW-MRI: techniques that can potentially offer additional information and/or improve the quality of the mpMRI dataset for assessing the extent of PCa.
5,479.6
2020-09-29T00:00:00.000
[ "Physics" ]
Cortical Plasticity Induced by Short-Term Multimodal Musical Rhythm Training Performing music is a multimodal experience involving the visual, auditory, and somatosensory modalities as well as the motor system. Therefore, musical training is an excellent model to study multimodal brain plasticity. Indeed, we have previously shown that short-term piano practice increases the magnetoencephalographic (MEG) response to melodic material in novice players. Here we investigate the impact of piano training using a rhythm-focused exercise on responses to rhythmic musical material. Musical training with non-musicians was conducted over a period of two weeks. One group (sensorimotor-auditory, SA) learned to play a piano sequence with a distinct musical rhythm; another group (auditory, A) listened to, and evaluated the rhythmic accuracy of, the performances of the SA-group. Training-induced cortical plasticity was evaluated using MEG, comparing the mismatch negativity (MMN) in response to occasional rhythmic deviants in a repeating rhythm pattern before and after training. The SA-group showed a significantly greater enlargement of the MMN and P2 to deviants after training compared to the A-group. The training-induced increase of the rhythm MMN was bilaterally expressed, in contrast to our previous finding that the MMN for deviants in the pitch domain showed a larger right- than left-hemisphere increase. The results indicate that when auditory experience is strictly controlled during training, the involvement of the sensorimotor system, and perhaps the increased attentional resources needed to produce rhythms, lead to more robust plastic changes in the auditory cortex than when rhythms are simply attended to in the auditory domain in the absence of motor production. Enjoyment of music relates to familiarity with musical genres that help the listener form and develop perceptual expectations for musical events.
In that sense, pitch, harmony, timbre and rhythm establish a musical predictive template that produces musical expectations [20]. Violations of those expectations are reflected in an electrophysiologically measurable event-related response, the mismatch negativity (MMN). Musical pitch expectations can be quickly formed by short-term musical piano training that shapes brain activation within the auditory cortex [21]. After eight sessions of multimodal piano training in the form of learning to play short melodic chord sequences on a keyboard, non-musicians showed an increased MMN in response to pitch incongruence, especially in the right hemisphere. A control group, which merely listened carefully to and made judgements about the music played by the experimental group, showed no MMN enhancement. Thus, the multimodal integration, the co-activation of auditory and sensorimotor areas, and the attentional mechanisms that are involved in musical training likely contribute to the brain plasticity effects that have been shown in musicians. In the same way that the chord structure of a musical piece shapes expectations about upcoming melodic events, the temporal structure of a musical piece induces anticipation of rhythmic events in the listener. A number of studies indicate that interactions between the auditory and motor systems may be particularly strong when rhythm is involved [22][23][24][25][26][27][28][29][30]. We therefore hypothesized that musical training that is focused on the rhythm of a melody should lead to enhancements of rhythm perception and, correspondingly, to enhancements of the cortical responses to deviations in rhythmic musical material. Since several studies suggest that the left hemisphere is specialized for temporal processing [31,32,33], we further hypothesized that the neural activation increases induced by the rhythmic training should be particularly pronounced in the left hemisphere.
Thus, the goal of this study was to investigate how rhythmic expectations within a musical context can be changed by short-term musical training involving either listening or learning to play the piano in a highly controlled laboratory environment. Methods To investigate this hypothesis we measured the MMN to rhythmic deviants before and after sensorimotor-auditory or auditory musical training. Musical training strengthens expectations for musical events, which is reflected in the auditory system as better performance on discrimination of tonal frequencies [34] and temporal events [35]. This ability can be quantified electrophysiologically in humans by means of completely noninvasive electro- or magnetoencephalographic measurements of the MMN (MMN in EEG, MMNm in MEG). The MMN is a preattentive fronto-central negative component of the event-related field, measured at latencies between 120 and 250 ms after stimulus onset, with brain sources within the primary and secondary auditory cortices [21,36]. The MMN component can be elicited by changes in auditory features such as the frequency, intensity or duration of a sound, but it can also reflect violations of more complex aspects of auditory input [17,37]. In the present study the duration mismatch negativity was used to determine changes in cortical response strength after a rhythmic incongruency. Subjects Twenty-four non-musicians (14 females) between 24 and 38 years of age participated in the study. Participants had no formal musical training, except for their compulsory music lessons at school. The data of four subjects had to be excluded because of a very low signal-to-noise ratio (insufficiently pronounced MMN); thus 20 subjects were included in the analyses. Subjects were all right-handed as assessed by the Edinburgh Handedness Inventory [38]. None of the subjects had a history of otological or neurological disorders. We used pure tone audiometry to confirm normal audiological status.
Subjects were informed about the nature of the study, which was approved by the Research Ethics Board of the University of Münster. Based on a clear understanding of what participation involved, subjects gave informed consent to take part in this study. Subjects were randomly assigned to the different experimental groups (sensorimotor-auditory, SA, and auditory, A). The SA-group learned to play a musical sequence on the piano, whereas the A-group merely listened carefully to the music that was played by the participants of the SA-group and evaluated whether the sequences were rhythmically correct or not.

Stimuli

The musical stimuli for the MEG measurement before and after training comprised six-tone piano sequences generated in a realistic piano timbre with a digital audio workstation (Figure 1). The sequences were composed of a d-minor broken chord in root position followed by an A-major chord in first inversion: d' (293.66 Hz) - f' (349.23 Hz) - a' (440.00 Hz) - c sharp' (277.18 Hz) - e' (329.63 Hz) - a' (440.00 Hz). These are the two most important chords (tonic and dominant) in the key of d-minor, the key of the training exercises described below. The standard stimulus was composed of two rhythmic figures, each with an eighth note (400 ms) at the beginning followed by two sixteenth notes (200 ms each), for a total duration of 1600 ms. The deviant stimulus (cf. Figure 1a) was identical to the standard except that the fifth tone was shortened by 100 ms, advancing the onset of the sixth note by 100 ms and reducing the total sequence duration to 1500 ms. The two sequences (standard and deviant) were presented in an oddball paradigm with two runs of 400 trials separated by a short break. Each run consisted of 320 standards and 80 deviants presented in a quasi-random order such that at least three standards occurred between two deviants.
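The quasi-random ordering constraint just described (at least three standards between any two deviants, with 320 standards and 80 deviants per run) can be sketched in a few lines. The block-construction scheme and the function name below are illustrative assumptions; the study does not report how its trial orders were actually generated.

```python
import random

def make_oddball_run(n_standards=320, n_deviants=80, min_gap=3, seed=0):
    """Quasi-random oddball trial order with at least `min_gap`
    standards ('S') between consecutive deviants ('D')."""
    rng = random.Random(seed)
    # Standards that are forced by the gap rule (between deviants only).
    forced = (n_deviants - 1) * min_gap
    extra = n_standards - forced
    assert extra >= 0, "not enough standards to satisfy the gap rule"
    # Randomly distribute the spare standards into n_deviants + 1 slots:
    # before each deviant and after the last one.
    slots = [0] * (n_deviants + 1)
    for _ in range(extra):
        slots[rng.randrange(n_deviants + 1)] += 1
    trials = []
    for i in range(n_deviants):
        if i > 0:
            trials += ['S'] * min_gap     # enforced minimum gap
        trials += ['S'] * slots[i]        # random padding
        trials.append('D')
    trials += ['S'] * slots[n_deviants]   # trailing standards
    return trials
```

Building the sequence from guaranteed gap blocks plus randomly distributed spare standards makes the constraint hold by construction, rather than by rejection sampling.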
Note that the rhythmic motive used during the MEG measurement was not identical to that used during training, so that we could test the training effects under conditions requiring some generalizability. Specifically, the order of the long and short notes was reversed. Our previous melody study showed that participants were able to abstract harmonic rules from training material and to transfer them to new musical material. Thus, a similar effect in the present study would allow us to draw conclusions about generalization of training effects in the rhythmic domain as well. Although Western music offers a large variety of melodic and also rhythmic material, it is nevertheless based on a comparatively small rule catalogue, and it lies in the nature of musical training that it generalizes to different musical material. We wanted to demonstrate the potential of musical training for musical learning in general.

Training procedure

The first 16 measures of an exercise from a piano workbook for beginners [39] were used for the piano training (Figure 1b). In order to avoid possible differential plasticity effects in the two hemispheres due to dissimilar movements of the two hands, we chose a piano piece where both hands were similarly involved. The piano exercise was in d-minor with a metrical time signature of 3/8. The melody was built from a recurring small rhythmic motive consisting of two sixteenth notes on the first beat followed by two eighth notes on the second and third beats. The rhythmic motive did not change during the whole piece. In the first 8 measures the melody was in the right hand, whereas in the last 8 measures the left hand played the melodic line. In each case the other hand played an interval on the first beat of each measure. Numbers represent the fingers (thumb, 1; index finger, 2; and so on) with which the subjects were supposed to press the corresponding piano keys. The rectangles indicate that the left hand is used; the circles mark the right hand.
The numbers that were depicted in one horizontal line had to be played simultaneously. The small circles indicated that notes had to be played at double speed. During the first 8 bars the motive was played on successively higher scale steps each bar, and during the final 8 bars on successively lower scale steps each bar. In order to facilitate training, we did not use the musical notation of the piano exercise, but visual templates instead (Figure 1c). On each template the image of the piano keyboard was depicted and the finger placement was marked. The SA-group was instructed in how to play the piano exercise. The piano sequence was demonstrated by the experimenter at the beginning of the first training session. Training sessions were scheduled on 8 days within two weeks, each session lasting 30 minutes. A computer recorded the keystrokes of the subjects during the training through a MIDI connection. The MIDI data of the SA-group provided the stimuli for the training of the A-group. This included also the first training sessions, when piano performance of the SA-group was still poor. Consequently, musical exposure of the SA- and A-groups was identical. Each participant of the A-group was paired with a participant of the SA-group and listened to all the training sessions of that subject. Prior to training the A-group received a short introduction to the correct piano exercise. As in the SA-group, training sessions for the A-group were scheduled on 8 days within two weeks. During auditory training subjects of the A-group were seated in front of the piano. However, they received no visual information regarding the keys that had been pressed. Subjects in the auditory group were instructed to press the right foot-pedal whenever they noticed that the rhythm was played incorrectly. This task ensured that subjects of the auditory group listened thoroughly and participated actively in the experiment.
Behavioral test

To evaluate the effect of the training on a behavioral level, all subjects participated in an auditory discrimination test before and after the two weeks of training. For this test we extracted the first two measures of the piano exercise and recorded them via MIDI connection to a computer. Thus, we obtained a sequence that contained 8 notes in the melody part and two accompanying intervals on the first beat of each measure. During the behavioral test this sequence or temporally altered sequences were presented. In temporally altered sequences a randomly chosen note of the melody was played earlier or later than expected by 10, 20, 30, 40 or 50 ms. These temporal offsets were chosen through pilot testing, which had revealed that a 50 ms time shift is easy to detect even for non musicians, whereas a 10 ms or even 20 ms time shift is very hard to detect. Sequences with temporal errors were presented randomly interleaved with correct sequences, and subjects responded by pressing the left foot-pedal of the piano whenever they detected a temporal advance or delay of a note. If they did not detect a temporal error they had to press the right foot-pedal to start the next trial. The test contained 243 trials and lasted about 20 minutes. The first three trials were correct sequences to accustom the subject to the task. The remaining 240 trials contained 120 error sequences and 120 correct sequences. Each time shift was thus repeated 12 times during the test.

MEG data acquisition

The auditory MMN responses were measured from all participants before and after training. Training-induced plasticity was evaluated by comparing the MMN differences before and after training between the SA- and A-groups. Magnetic field responses were recorded with a 275-channel whole-cortex magnetometer system (Omega 275, CTF Systems). The MEG signals were low-pass filtered at 150 Hz and sampled at a rate of 600 Hz.
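The composition of the behavioral test described above (three practice trials with the correct sequence, then 240 randomly interleaved trials: 120 correct and 12 repetitions of each signed time shift from plus or minus 10 to 50 ms) could be generated along these lines. The function name and the uniform shuffling are assumptions, not the study's actual procedure:

```python
import random

def make_discrimination_test(seed=0):
    """Trial list for the temporal-discrimination test: 3 practice
    trials with the correct sequence (shift 0), then 240 randomly
    interleaved trials (120 correct, plus 12 repetitions of each
    signed shift of +/-10, 20, 30, 40, 50 ms)."""
    rng = random.Random(seed)
    shifts = [s * sign for s in (10, 20, 30, 40, 50) for sign in (1, -1)]
    test_trials = [0] * 120 + [s for s in shifts for _ in range(12)]
    rng.shuffle(test_trials)
    return [0, 0, 0] + test_trials  # 0 = temporally correct sequence
```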
For each individual subject, epochs of 2 s for the standard and 1.9 s for the deviant stimulus, beginning 0.2 s before the last tone of the stimulus and ending 0.4 s after stimulus offset, were extracted from the continuous data set. The total recording time was 35 min. The recordings were performed in a magnetically shielded and acoustically silent room. The subjects were in an upright position, seated as comfortably as possible while ensuring that they did not move during the measurement. Three localization coils, fixed to the nasion and the entrances of both ear canals, were used to check the subject's head position at the beginning and end of each recording block. Subjects were instructed to move and blink as little as possible, to stay relaxed but awake during the measurement, and to pay no attention to the sound stimuli. Alertness and compliance were verified by video monitoring. To control for confounding changes in attention and vigilance, subjects watched a soundless movie of their choice, which was projected on a screen placed in front of them.

MEG data analysis

The recorded magnetic field data were averaged separately for the standard and the deviant stimuli. Epochs contaminated by muscle or eye blink artifacts, containing field amplitudes greater than 3 pT in any MEG channel, were automatically rejected by the averaging procedure. The MMN was expected to be elicited in the deviant sequences after the onset of the sixth tone. Therefore, standard and deviant datasets were temporally aligned to the onset of the sixth tone of the sequence and subtracted to generate difference waveform data sets, representing the MMN.
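A minimal numpy sketch of the averaging and subtraction steps just described: automatic rejection of epochs exceeding 3 pT in any channel, then a deviant-minus-standard difference after aligning both averages to the onset of the sixth tone. The array layout, alignment indices, and function names are assumptions, not the lab's actual analysis pipeline:

```python
import numpy as np

def average_epochs(epochs, reject_thresh=3e-12):
    """Average MEG epochs (trials x channels x samples), dropping any
    epoch whose absolute field amplitude exceeds `reject_thresh`
    (3 pT, as in the text) in any channel."""
    keep = np.abs(epochs).max(axis=(1, 2)) <= reject_thresh
    return epochs[keep].mean(axis=0)

def mmn_difference(std_epochs, dev_epochs, align_std, align_dev, n_keep):
    """Align the standard and deviant averages to the onset sample of
    the sixth tone and subtract to obtain the MMN difference waveform."""
    std_avg = average_epochs(std_epochs)[:, align_std:align_std + n_keep]
    dev_avg = average_epochs(dev_epochs)[:, align_dev:align_dev + n_keep]
    return dev_avg - std_avg
```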
Although the alignment ensures that the onset time and duration of the sixth tone were identical in both sequences, there is still the possibility that the fifth tone, which is shorter in the deviant than in the standard sequence, provides additional MEG components that interfere with the subtraction procedure. Since the fifth tone of the deviant sequence is of shorter duration than the standard one, its corresponding N1 response will be closer to the onset of the sixth tone, and this N1 component could then be mistakenly interpreted in the deviant-standard difference waveform as an MMN component [37]. In two test measurements with four musically experienced subjects, we therefore tested a different subtraction procedure. The two stimuli were presented in two blocks as standard and deviant as described above. Then, in two further measurement blocks the roles of standard and deviant were reversed, i.e., the standard became the deviant and vice versa. This procedure enabled subtraction of physically identical stimuli, namely the shorter stimulus that was the standard in the latter measurement from the identical shorter stimulus that was the deviant in the earlier blocks. We compared the results of this subtraction procedure with those of the direct subtraction procedure, in which the shorter deviant and the aligned longer standard were subtracted. Both subtraction methods yielded the same results; that is, the obtained MMN components were nearly identical. Since the direct subtraction method required a much smaller number of trials, it was employed in the main experiment. For the MMN source analysis a baseline correction was performed relying on the 100 ms time interval prior to the onset of the piano tone sequences. Then, a source analysis model of two equivalent current dipoles (ECD), one in each hemisphere, was applied to the MMN component identified in the data between 120 and 180 ms after tone onset.
The two spatiotemporal dipoles, defined by their dipole moment, orientation, and spatial coordinates, were fitted simultaneously to the MMN derived from the difference waveforms for both hemispheres and for each recorded dataset before and after training. The source space projection method [21,40] was applied, collapsing the 275-channel data to one source waveform for each dipole. Finally, grand average waveforms for the dipole moments were computed for pretraining and posttraining data, groups (SA and A), and hemispheres (left and right). To evaluate the MMN source strength across participants, the MMN dipole moment peaks were determined from the corresponding waveforms of each individual participant and subjected to statistical analysis by means of a repeated measures mixed-model ANOVA with factors group, pretraining/posttraining, and hemisphere. In all statistical tests, the alpha-level was set at 0.05, and all tests were two-tailed unless otherwise stated.

Training

Prior to the training procedure, participants of both groups listened to a correct version of the piano exercise. The subjects of the SA-group started in their first training session to play the upper line of the piece with the right hand only. The left hand was added in later training sessions after subjects were able to play the first part with the right hand correctly. After finishing the first line of the piano piece the same procedure was applied for the second line. The employment of the left and right hand was reversed, however, since the melody in the second part of the piece was in the left hand. The transfer of the melody to the left hand was difficult for most of the subjects, but was eventually mastered by all participants. Due to differences in learning progress among subjects, the different steps during the training process (inclusion of the second hand, switching to the second line) were performed at different times during the training of each individual subject.
At the end of the training, after 8 sessions, all subjects of the SA-group were able to play the piece at an acceptable speed with few mistakes. However, two subjects only reached successful performance of the second line with a reduced accompaniment in the right hand. Instead of playing the complete intervals, they simply played a single accompanying tone in each measure.

Behavioral test data

Performance on the behavioral test was evaluated by computing the detection rate for the error trials of each absolute time shift (10, 20, 30, 40, or 50 ms). Positive and negative time shifts were analyzed together. The data from one subject in the A-group (due to technical failure) and two subjects from the SA-group (due to misunderstanding the task) had to be excluded, so that overall the data of 9 subjects of the A-group and 8 subjects of the SA-group were analyzed. The detection rates were fitted with a Weibull function and the 75% detection threshold was determined. A 2 x 2 mixed-model ANOVA with factors group and pretraining/posttraining revealed a significant interaction of group x pretraining/posttraining (F(1,17) = 5.098; p = 0.039), demonstrating that rhythmic discrimination ability improved more strongly in the SA-group than in the A-group. On average, the detection threshold in the SA-group improved by 9 ms (Figure 2). No threshold improvement was observed in the A-group. Main effects of group and session did not reach statistical significance.

MEG data

The MEG data showed a clear MMN dipolar pattern in most of the individual subjects, which justified the use of a single equivalent current dipole model for the cortical source analysis of the data (Figure 3a).

Discussion

Rhythm-focussed sensorimotor-auditory training in non musicians results in representational changes in the auditory cortices. The SA-group that had received sensorimotor-auditory piano training showed a significant post-training enhancement of the MMN to temporal deviants in rhythmic sequences.
The A-group that had received only auditory training showed no significant training effect on the MMN. This is consistent with the behavioral finding that thresholds for detecting temporal errors improved in the SA-group but not in the A-group. However, both groups showed significant enhancement of the P2 component between deviant and standard after training, although the enhancement was larger in the SA-group, indicating that even the auditory-only training led to some plastic changes in auditory cortex. Previous studies indicate that the P2 component is larger in skilled musicians [14] and is highly neuroplastic with frequency discrimination training [41,42]. Our data extend these findings by showing that the P2 difference component, which is mainly based on the training effect on the P2 response to the deviants, is also sensitive to rhythmic processing after short-term training. In sum, in comparison to the A-group, the SA-group showed significantly larger training effects on behavioral thresholds, MMN amplitude, and P2 amplitude. Training and test stimuli were not identical, indicating that the multimodal effects of the training generalized. Because MMN and P2 are generated primarily in auditory cortices [43], the results point to strong effects of sensorimotor practice on auditory representations for musical rhythm. These results extend the findings of our previous study, which described increased neural activation within the auditory cortex in the form of MMN enhancement after musical training that focussed on melodic chord progressions [21]. Whereas the present study examined brain responses to deviants in the temporal domain, the previous study examined responses to deviants in the pitch domain.
Perhaps most interesting is that for both pitch-based and rhythm-based training, larger plastic enhancements of responses from auditory cortex were seen after sensorimotor-auditory training than after auditory-alone training, suggesting that multimodal stimulation has a larger effect on auditory cortex than auditory stimulation alone. Several studies show different brain responses in musicians and non musicians to pitch-, melodic-, or harmonic-based deviants [16,17,18,19,21] or during rhythm perception [23,32,44,45,46,47]. However, when comparing adult musicians and non musicians, it is difficult to determine definitively whether the differences seen are primarily the result of the extensive experience of the musicians in practicing their instruments or whether they are largely a result of pre-existing congenital differences that led to the decision to undertake extensive musical training. The design of our study isolates the effects of experience directly. Because we randomly assigned subjects to different training groups, and because we controlled the experience and measured responses before and after training, we can conclude that the effects that we report are the result of the experience itself. Our finding of superior learning with multimodal training is in line with other evidence that the brain is very sensitive to relations across modalities. The interaction and integration of different sensory modalities is especially important when playing a musical instrument. The multimodal effects that we observed likely involved both somatosensory and motor interactions with auditory processing. Previously, Schulz et al. [48] found evidence for auditory/somatosensory reorganization of cortical functions in musicians by comparing trumpet players and control subjects who had never played an instrument.
In the trumpet players, concurrent stimulation of the lips and presentation of a trumpet tone led to a stronger cortical activation compared to the sum of the responses to the two types of uni-modal stimulation, either trumpet tone or tactile lip stimulation. In the present study, it is likely that in the SA-group the concurrent experiences of touching the keys (somatosensory) and hearing the piano tones (auditory) led in part to the enhanced learning seen in this group. As for the importance of the motor aspect of our training protocol, the concept of a strong link between the auditory and motor systems has a long history [49]. Musical stimuli give rise to rhythmically organized motor behavior [27,28,30], and synchronized movement to music is found in all cultures [50]. Even 5- to 25-month-old infants coordinate their movement to musical rhythmic stimuli and adapt the tempo of their rhythmic movement to the tempo of the auditory rhythmic stimuli [51]. Executing rhythmic movements involves a network of brain areas spanning the basal ganglia, cerebellum, motor cortex, premotor cortex, and supplementary motor cortex [30]. Recent fMRI studies have shown, however, that these movement-related areas are also activated during auditory perceptual tasks [24,52]. In particular, the cerebellum [53] and the premotor cortex [22] are activated during auditory discrimination, and disruption of auditory feedback affects motor execution [54]. In addition, the results of the present study indicate that the interaction between auditory and motor areas is bidirectional, suggesting that movement can affect auditory processing. Phillips-Silver & Trainor [25,26] showed that for both infants and adults, bouncing on every second beat of an auditory metrically ambiguous rhythm pattern biased listeners to hear the ambiguous pattern as a march, whereas bouncing on every third beat of the same pattern biased them to hear the same ambiguous pattern as a waltz.
Recent physiological evidence also indicates strong bidirectional connections between auditory and movement-related areas [30]. For example, auditory cortex is activated when musicians observe someone else play a keyboard [55]. Furthermore, similar auditory and motor areas are activated when pianists play a piece without being able to hear it and when they listen to it without playing it [56,57]. The results of the present paper are consistent with all of these findings, demonstrating that sensorimotor training affects auditory cortical areas. Since the mismatch negativity is mainly generated in the auditory cortex, the mismatch paradigm is not suited to directly investigate response changes in motor-related areas to musical stimuli after musical training. A different experimental design would be needed to demonstrate this connection directly. However, we suggest that auditory-motor interaction is bidirectional because the auditory input was identical for both groups, the only difference being motor execution with the associated attentional mechanisms in the SA-group. We therefore reason that auditory-motor interactions are likely involved in the generation of the increased mismatch negativity after sensorimotor training. Whereas many studies show a larger MMN in the right compared to the left hemisphere, the present results showed no difference in hemispheric involvement. It is possible that the MMN tends to be right-hemisphere dominant for pitch-based discriminations, but not for duration-based discriminations. Indeed, our previous training study involving melodic chord sequences showed a greater plasticity effect in the right hemisphere, whereas the present rhythm training study showed plastic changes of similar magnitude in both hemispheres.
The strong involvement of the right hemispheric auditory cortex in the melody study, and the relatively well-pronounced involvement of the left auditory cortex in the rhythm study, are consistent with data showing preferential encoding of spectral information on the right and temporal encoding on the left [31,32,58,59,60]. It has also been suggested that musical expertise could lead to a higher degree of analytical processing, which is believed to favor left hemispheric mechanisms [61,62]. The results from our study are somewhat more complicated in that we found statistically equivalent effects of training in the right and left hemispheres in the SA-group for the MMN, but significantly greater effects of training in the right than left hemisphere for the P2 component in both the SA- and A-groups. One reason for our findings might be that our stimuli also included pitch and melodic variation, which might have interacted with the rhythmic elements [63]. Another reason might be that the sensorimotor-auditory training was too short to reveal differential hemisphere effects. Playing the piano is a motivating and demanding task. It is thus conceivable that the participants in the SA-group were more motivated or more engaged in their task than the participants of the A-group. The stronger MMN plasticity in the SA-group, therefore, might also originate from a stronger involvement of motivation or attention in this group. Attention, and other top-down modulatory signals, can increase plasticity effects in the auditory cortex [64,65,66]. However, the participants of the A-group also had to concentrate on a task that demanded alertness and attention, namely, the detection of the rhythmic errors in the auditory material of the SA-group. Thus, their attention was directed to the same stimulus feature (rhythmic correctness) as in the SA-group, such that the level of attention on the auditory input was likely comparable in both groups.
The groups differed in that the SA-group performed motor behavior and acted to create the acoustic material, while the A-group merely listened attentively to the created material. Thus, while we cannot rule out that attentional or motivational factors differed between the groups, any difference in that regard would be driven by the active involvement and the motor behavior of the piano playing. We conclude that sensorimotor-auditory training of rhythmic material can increase the neural responses to a temporal mismatch in non musicians. The response increase was achieved after only eight training sessions. The enhancement of cortical activity was based on new musical material, suggesting strong generalization effects of sensorimotor-auditory training. Since cortical activity was enhanced to a lesser degree in the control group after auditory-only training, we conclude that rhythm-focussed multimodal piano training has causal effects on auditory cortex plasticity. Together with our previous study [21], we conclude that harmonic and rhythmic expectations can be shaped by short-term experience, and that multimodal sensorimotor-auditory training is much more effective than auditory-alone training at inducing plastic changes in auditory areas. These results have educational implications in that they show that multimodal training leads to more effective and faster learning than unimodal training.
SAC-TA: A Secure Area-Based Clustering for Data Aggregation Using Traffic Analysis in WSN

Clustering is one of the most significant tasks in Wireless Sensor Networks (WSN); data are aggregated through each Cluster Head (CH), which leads to a reduction in traffic cost. Due to the deployment of WSNs in remote and hostile environments for the transmission of sensitive information, the sensor nodes are prone to false data injection attacks. To overcome these issues and enhance network security, this paper proposes a Secure Area-based Clustering approach for data aggregation using Traffic Analysis (SAC-TA) in WSN. Here, the sensor network is partitioned into small clusters, such that each cluster has a CH to manage and gather the information from the normal sensor nodes. The CH is selected based on a predefined time slot, the cluster center, and the highest residual energy. The gathered data are validated based on traffic analysis and a one-time key generation procedure to identify the malicious nodes on the route. This provides a secure data gathering process with improved energy efficiency. The performance of the proposed approach is compared with the existing Secure Data Aggregation Technique (SDAT). The proposed SAC-TA yields a lower average energy consumption rate, lower end-to-end delay, higher average residual energy, higher data aggregation accuracy, and a higher false data detection rate than the existing technique.
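A rough sketch of how the CH selection criteria stated above (proximity to the cluster center and highest residual energy) might be combined into a single score. The weighting scheme is purely an assumption for illustration; the paper names the criteria but not their exact combination, and the time-slot criterion is omitted here:

```python
import math

def select_cluster_head(nodes, w_energy=0.7, w_dist=0.3):
    """Pick a cluster head by favoring high residual energy and
    proximity to the cluster's geometric center.
    `nodes` maps node id -> (x, y, residual_energy).
    The weights are illustrative assumptions, not from the paper."""
    cx = sum(x for x, _, _ in nodes.values()) / len(nodes)
    cy = sum(y for _, y, _ in nodes.values()) / len(nodes)
    max_e = max(e for _, _, e in nodes.values())
    max_d = max(math.hypot(x - cx, y - cy)
                for x, y, _ in nodes.values()) or 1.0
    def score(item):
        x, y, e = item[1]
        d = math.hypot(x - cx, y - cy)
        # Reward normalized residual energy, penalize normalized distance.
        return w_energy * (e / max_e) - w_dist * (d / max_d)
    return max(nodes.items(), key=score)[0]
```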
Introduction

WSNs are progressively deployed for applications such as battlefield surveillance in the military, industrial control, wildlife habitat monitoring, healthcare monitoring, forest fire prevention, etc. In these applications, the data gathered by sensor nodes from their physical environment require to be assembled at a data sink (base station) for further analysis. An aggregate value is calculated at the data sink by applying the corresponding aggregate operator, such as COUNT, MAX, MEDIAN or AVERAGE, to the gathered data. Figure 1 shows the general architecture of the data aggregation model. There are two categories of data aggregation schemes [1]: tree-based data aggregation [2]-[5] and cluster-based data aggregation [6]-[8]. Recently, various data aggregation protocols have been developed to reduce the data redundancy among the nodes in the network. This reduces the amount of energy consumed for the collection of data from the nodes. Hence, the communication cost is reduced. In the existing data aggregation process, the nodes are organized as a tree hierarchy rooted at the base station (BS). The non-leaf nodes act as data aggregators and gather the data from their child nodes in the tree structure before transmitting the collected data to the BS. Based on this process, the data are processed and collected at each hop on the communication path to the BS. Thus, the communication overhead in the network is largely reduced.
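The hop-by-hop, tree-based aggregation described above can be sketched as a simple recursion, where each non-leaf node combines its own reading with its children's partial aggregates before forwarding a single value toward the BS. The node names and dictionary representation are illustrative:

```python
def aggregate_tree(node, readings, children, op):
    """Hop-by-hop in-network aggregation on a tree rooted at the BS:
    each node applies `op` to its own reading plus the partial
    aggregates received from its children, and forwards one value."""
    values = [readings[node]]
    for child in children.get(node, []):
        values.append(aggregate_tree(child, readings, children, op))
    return op(values)
```

Note that this direct recursion is only valid for distributive operators such as MAX, MIN, or SUM; a holistic operator like MEDIAN, or an algebraic one like AVERAGE, requires forwarding extra state (e.g., a (sum, count) pair) instead of a single partial value.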
The hop-by-hop data aggregation, however, opens a new gateway for false data injection attacks. The sensor nodes are deployed in unattended and open environments. Malicious actors can physically attack the nodes and retrieve confidential information from the compromised nodes. They can also reprogram the compromised nodes into malicious sensors. A compromised node reports a false aggregation result to its parent node in the tree structure, which causes the final aggregation result to deviate substantially from the actual measurement result. This attack becomes more severe when multiple compromised nodes collude. Data encryption is an essential factor in WSN when the sensors are subjected to different types of attacks. Without encryption, malicious nodes can monitor and inject false data into the sensor network. During the encryption process, the nodes should encrypt the data packets on a hop-by-hop basis. An aggregator node possesses the keys for all the sending nodes. It gathers all the received data and finally decrypts the collected data for transmitting the original data to the base station. Encryption techniques can solve the security challenges in WSN, but the aggregation of data decides the overall network performance.
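The hop-by-hop encryption pattern described above, in which the aggregator holds the keys of all sending nodes, decrypts each packet, aggregates, and re-encrypts the result for the next hop, can be illustrated with a toy XOR keystream. This is a deliberately simplified sketch for illustration only, not a secure cipher, and all key values are hypothetical:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic toy keystream derived from a shared key.
    For illustration only -- not a secure cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR with the keystream (encryption == decryption)."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def aggregate_hop(ciphertexts, child_keys, parent_key):
    """Hop-by-hop pattern from the text: the aggregator decrypts each
    child's packet with that child's key, aggregates (here: sum of
    single-byte readings), and re-encrypts for the next hop."""
    readings = [xor_crypt(c, k)[0] for c, k in zip(ciphertexts, child_keys)]
    total = sum(readings) % 256
    return xor_crypt(bytes([total]), parent_key)
```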
Our previous works [9] [10] also studied data aggregation protocols in WSN. However, there are some security concerns in these protocols. To overcome the security issues in data aggregation, this paper presents a secure data gathering approach using area-based clustering with traffic flow and energy analysis. The CH is selected based on the cell center and the energy among the nodes, which improves the network lifetime. The traffic behavior between the nodes is analyzed at the time of data transmission. If the traffic flow is dramatically increased, the route is recognized as containing malicious nodes that might have injected false data. These nodes then need to be identified and eliminated from the path for secure data aggregation. Hence, this paper introduces a one-time key generation step to isolate the malicious nodes and add them to the block list. The major contributions are described as follows: 1) Identification of the false injection attack based on traffic analysis at the time of the route discovery process. 2) A one-time key generation step that eliminates the malicious nodes from the network. 3) Evaluation of the efficiency of the proposed data aggregation scheme by comparing it with the existing secure data aggregation scheme. The remaining sections of the paper are structured as follows. Section 2 presents a brief outline of the conventional research works relevant to secure data aggregation protocols. Section 3 describes the proposed SAC-TA scheme. Section 4 presents the results and a comparative discussion of the proposed scheme and existing methods. Section 5 discusses the conclusion and future scope of the proposed work.

Related Works

This section describes the various research works related to secure data aggregation techniques. Mantri et al.
presented a bandwidth-efficient cluster-based data aggregation method. It considers a network with heterogeneous nodes, based on energy, and a dynamic sink to gather the data packets; optimality was attained through inter-cluster and intra-cluster aggregation on randomly disseminated nodes with an adaptable data generation rate [11]. Rout et al. formulated an analytical approach for adaptive data aggregation, in which network coding was used to improve energy efficiency in a cluster-based duty-cycled WSN; the traffic within a cluster was reduced, and the energy efficiency of the bottleneck region was improved [12]. Govind et al. proposed an energy- and trust-aware mobile agent migration protocol for data aggregation, introducing a framework for trust validation to detect malicious nodes [13]. Roy et al. discussed secure data aggregation for WSN and formulated a synopsis diffusion approach in which compromised nodes contribute false sub-aggregate values; a lightweight verification algorithm was used to determine whether the aggregation includes any false contribution [14]. Li et al. introduced an energy-efficient and secure data aggregation protocol that achieved accurate aggregation of the data received from the nodes while reducing the overhead on the sensors [15]. Licheng et al. proposed a discrete logarithm-based method to realize fully additive or multiplicative homomorphism and secure data aggregation [16]. Groat et al. introduced k-indistinguishable privacy-preserving data aggregation in WSN, where sensitive measurements are obscured by hiding them among a set of disguise values; the scheme supports a wide range of aggregation functions [17]. Huang et al. proposed a secure data aggregation method that eliminates redundant sensor readings without using encryption, while maintaining data privacy and security during transmission; the scheme is resilient to chosen-plaintext, man-in-the-middle, and ciphertext-only attacks [18].
Chein et al. implemented secret data aggregation for data integrity in WSN, where the base station can recover all sensing data even after aggregation; this property was called recoverability [19]. Ozdemir et al. formulated hierarchical secret data aggregation for WSN, which permits the aggregation of encrypted packets with diverse encryption keys [20]. Liu et al. introduced a highly energy-efficient and secure data aggregation scheme for WSN that reduced the communication overhead and improved the data accuracy [21]. Suat and Hasan proposed a combined approach for the detection of false data along with the aggregation and transmission of confidential data in WSN. Small-size Message Authentication Codes (MACs) were computed for the verification of data integrity at the pair level, and integrity verification was performed on the encrypted data rather than on the plain text, to enable confidential data transmission. The simulation results showed a reduction in the transmitted data rate through data aggregation and early false data detection [22]. Aldar et al. designed a security and privacy-preserving framework for data aggregation in WSN; the security model is sufficient to cover most application scenarios and constructions of data aggregation [23]. Lu et al. presented an optimal allocation strategy for data aggregation in WSN, designing a distributed algorithm for joint rate control and scheduling based on decomposition; near-optimal performance was achieved with their approximate solution [24]. Li et al. designed a secure and energy-efficient data aggregation protocol with identification of malicious data aggregators. All aggregation results are signed with the private keys of the data aggregators and thus cannot be modified by others; additionally, the nodes on each link use a pairwise shared key for secure transmission [25]. Piyi et al.
introduced an efficient data aggregation method with constant communication overhead, facilitating efficient countermeasures against passive and active privacy attacks. The method was proven to be robust to data loss and to reduce the transmission cost, making it suitable for large networks [26]. Arumugam and Ponnuchamy introduced an energy-efficient LEACH protocol for data gathering that provides a better packet delivery ratio and improves the network lifetime [27]. The limitations of the existing secure data aggregation techniques and the benefits of the proposed SAC-TA approach are shown in Table 1.

SAC-TA: A Secure Area Based Clustering for Data Aggregation Using Traffic Analysis

In this section, we propose a novel secure area based clustering scheme for data aggregation in WSN. In many applications, a physical phenomenon is sensed by the sensors, which report the sensed information to the Base Station (BS). Data aggregation is used to solve the disintegration and overlapping problems that occur during data-centric routing in WSN: data with the same attribute are aggregated when they reach the same routing node on the path back to the BS. Security, data confidentiality, and integrity are vital factors for the data aggregation process when the sensor network is deployed in a hostile environment. Figure 2 shows the overall flow of the proposed approach.

To reduce the energy utilization, the application should incorporate in-network aggregation before the data reach the BS. Compromised nodes can perform malicious actions that affect the aggregation results. Before the detection of the malicious nodes, a secure aggregation protocol safeguards the data packets and forwards them along a secured route. The sensor nodes are separated into different clusters, each with a CH. The CH acts as an aggregator, collecting the data received from the sensor nodes.
Network Model

Let us assume a WSN of "N" sensor nodes that are randomly distributed over an area M × M. The proposed network model is based on the following assumptions: • The sink, with an infinite energy level, is located within the monitoring area. • The sensor nodes are deployed within the specified area. • The nodes can dynamically adjust their communication power according to the distance between the sink and the other nodes. • The communication between the nodes is reliable and symmetric.

Area Based Clustering

The area based clustering method uses the location information of each sensor node. The coordinates (x_i, y_i) of each sensor node are used to calculate the distance between two sensor nodes. The nodes are clustered so as to minimize energy consumption and maximize the lifetime of the nodes and the network. The CH is selected based on the cluster center and the highest residual energy. The cluster center is estimated based on the minimum distance between the cluster node and the centroid. Data gathering is performed from the cluster members to the CHs, and from the CHs to the BS. The distance between node i and a reference node with coordinates (x, y) is calculated using the following equation:

D(i) = sqrt((x_i − x)^2 + (y_i − y)^2),

where (x, y) are the coordinates of the reference node.

Table 1. Comparison of the limitations of the existing secure data aggregation techniques and the benefits of the proposed SAC-TA approach.

Existing secure data aggregation techniques: 1) The cluster head can easily be attacked by a malicious attacker. 2) The base station cannot ensure the correctness of the aggregate data sent to it if the cluster head is compromised. 3) The power consumption at the nodes is increased, due to the transmission of several copies of the aggregate data to the base station. 4) Moreover, data aggregation results in changes to the data received from the sensor nodes. 5) It is a really challenging task to provide data authentication along with data aggregation. 6) Due to these contradictory objectives, data aggregation and security protocols must be designed together, to enable aggregation of data without sacrificing security.

Proposed SAC-TA approach: 1) Identification of the false injection attack is performed based on traffic analysis during the route discovery process. 2) A one-time key generation step is introduced to eliminate the malicious nodes from the network. 3) It helps to identify false data among the gathered data, providing a secure data gathering environment. 4) The proposed approach reduces the energy consumption rate for gathering data from diverse sensor nodes. 5) It automatically improves the residual energy, because half of the sensor nodes are still alive at the end of the gathering process. 6) The data aggregation accuracy is improved, which results in an improvement of the overall network performance.

Node Deployment

The nodes are deployed around the sink using a two-dimensional Gaussian distribution. The deployed nodes are battery powered with an initial energy E_a. σ denotes the standard deviation for the "x" and "y" dimensions of a node. For a deployment centered at (x_0, y_0), the Gaussian distribution is defined as

f(x, y) = (1 / (2πσ²)) · exp(−((x − x_0)² + (y − y_0)²) / (2σ²)).

Cluster Formation

After the deployment of the nodes, the clusters are formed. A non-cluster node considers the size S_k(i) of cluster "i" to decide which cluster to join in the k-th round. In most cases, non-cluster nodes rely on the signal strength of the CHs to decide which cluster to join, under the assumption of a uniform distribution of the sensor nodes in the monitoring region; this does not hold in practical scenarios. The energy the CH of a relatively big cluster dissipates to aggregate data from its nodes is much greater than that of the CH of a smaller cluster. This leads to an imbalance of energy consumption among the CHs and adversely impacts the network lifetime. A new criterion for cluster formation is introduced in this work, for
achieving better load balancing among the CHs. When a non-cluster node decides to join a cluster, it considers the residual energy of the CH and the size of the cluster. The criterion is defined in terms of E_i^k, the energy of node "i" in the k-th round, and β, a factor used to adjust the relative impact of the cluster size and of E_i^k. When a sensor node is located far away from all CHs, this criterion is used to choose a CH with high residual energy and a smaller cluster size than the other CHs. This ensures better load balancing among the CHs, improving the overall lifetime and performance of the WSN.

Optimal CH Selection

The CH is chosen based on a distance measurement approach. The node located at the center of the cell is selected as the CH, because of its minimum distance to the other cluster member nodes. The residual energy of a node is the main condition for selecting the CH. The nodes forming a cluster are distributed within a small region, and the sink is located far away from the nodes. Data aggregation has an energy saving effect, since the nodes require only a minimum amount of energy to transmit their data directly to the CH, rather than sending the data to the sink. Hence, nodes located close together within a cluster are preferred because of their minimal energy consumption, and selection of the CH is performed based on the residual energy of the sensor nodes.

Let us consider a WSN with "N" sensor nodes. D_k(i) is defined as the concentration degree of node "i", i.e., the number of sensor nodes it senses during the k-th round. W(a, k) is defined as the selection weight of node "a" in the k-th round.
Here, "C" is the number of clusters; E_a is the initial energy of node "a"; E̅_k is the average residual energy of the network in the k-th round; ρ is the residual energy of node "a" in the k-th round; and γ is an adaptive factor that adjusts the influence of the residual energy of the node and of the concentration degree on the selection weight. The adaptive factor increases gradually as the residual energy decreases, to adapt to the decreasing number of effective nodes in the WSN.

The CH aggregates and compresses data before transmitting the data to the sink. The optimal probability of a node being selected as a CH is a function of the spatial density when the nodes are uniformly distributed over the sensor field. Optimal clustering is achieved when the energy consumption across all sensor nodes is low. Assume the distance D between a cluster member and its CH satisfies D ≤ D_0, so the free-space model applies within a cluster, while the CH-to-BS link uses the multipath model. The energy dissipated by a CH to transmit a "B"-bit message in one round is

E_CH = B · E_D · (N/k − 1) + B · E_P · (N/k) + B · E_D + B · ε_mp · D_BS⁴,

where E_D is the energy dissipated per bit; "D" is the distance between the sender and receiver nodes; "k" is the number of clusters; ε_fs is the energy dissipation of the transmitter amplifier circuit in the free-space model; E_P is the processing cost of a bit reported to the sink or base station; and D_BS is the average distance between the sink and a CH. The energy used by a non-CH node is

E_nonCH = B · E_D + B · ε_fs · D_CH²,

where D_CH is the average distance between a cluster member and its CH. If the nodes are uniformly distributed, the expected value of D_CH² is

E[D_CH²] = ∫∫ (x² + y²) · ρ(x, y) dx dy = M² / (2πk),

where ρ(x, y) is the node distribution and M² is the area in which the nodes are distributed. The total energy dissipation in the network is

E_total = B · (2N · E_D + N · E_P + ε_mp · k · D_BS⁴ + ε_fs · N · M² / (2πk)).

The optimal number of CHs is estimated by differentiating the total energy dissipation with respect to "k" and equating it to zero. The probability of a node becoming a CH is then

P_opt = k_opt / N, with k_opt = sqrt(N / (2π)) · sqrt(ε_fs / ε_mp) · M / D_BS²,

where ε_mp is the energy dissipated by the transmitter amplifier
circuit in the multipath model. If the residual energy level of a node is greater than the threshold value, optimal CH selection is performed; otherwise, the residual energy calculation process is repeated. After the selection of the CH, data aggregation is performed and the data are forwarded to the BS. If the clusters are not constructed optimally, the total energy consumption of the network increases exponentially when the number of constructed clusters is greater or smaller than the optimal number.

The CH is used for authenticating the nodes located inside its cluster. The CH validates the sensor nodes in the cluster, shares the secret key with the nodes, validates the data received from the nodes, and forwards the data to the BS. The CH forwards the data received from the cluster member nodes only after validating the signature of the data. If a sensor node is compromised by an attacker, the data sent by that node reach the CH, which verifies their signature. If there is a mismatch in the data signature, the CH detects the node as a compromised node and stops communicating with it. After the successful transmission of data from the CH to the sink, the selected CH will have lost energy for the transmission, so a new CH with the highest residual energy and minimum distance to the other cluster member nodes is chosen. Figure 3 shows the flow diagram of the optimal CH selection process. Let X = {x_1, x_2, …, x_n} be the set of nodes and consider the set of clusters formed above. Step 1: Estimate the distance between each node and the cluster center. Step 2: Allocate the time slot t. Step 3: Estimate the maximum residual energy of each node. Step 4: Select the node having the maximum residual energy that is located closest to the cluster center, and choose that node as the CH.
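The selection steps above, together with the LEACH-style energy model, can be sketched in Python; the dictionary-based node representation and the parameter values are illustrative assumptions, not part of the original scheme:

```python
import math

def optimal_cluster_count(n_nodes, area_side, d_bs, eps_fs, eps_mp):
    """LEACH-style optimal number of clusters, obtained by minimizing the
    total energy dissipation with respect to k."""
    return (math.sqrt(n_nodes / (2 * math.pi))
            * math.sqrt(eps_fs / eps_mp)
            * area_side / d_bs**2)

def select_cluster_head(nodes, center):
    """Steps 1-4: pick the node with the highest residual energy,
    preferring nodes closer to the cluster center on ties."""
    def distance(node):
        return math.hypot(node["x"] - center[0], node["y"] - center[1])
    # Highest energy first; among equal energies, the smallest distance wins.
    return max(nodes, key=lambda node: (node["energy"], -distance(node)))

nodes = [
    {"id": 1, "x": 3.0, "y": 4.0, "energy": 0.42},
    {"id": 2, "x": 1.0, "y": 1.0, "energy": 0.80},
    {"id": 3, "x": 0.5, "y": 0.5, "energy": 0.80},
]
ch = select_cluster_head(nodes, center=(0.0, 0.0))  # node 3: same energy, closer
```

With typical radio parameters (for example ε_fs = 10 pJ/bit/m² and ε_mp = 0.0013 pJ/bit/m⁴, values commonly assumed in LEACH-style analyses), the optimal cluster count for a few hundred nodes over a 100 m field comes out in the single digits.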
Each node maintains its energy and the distance to the cluster center in the routing table. For the predefined time slot t, the CH is selected and the data aggregation process takes place. Through the periodic reassignment of the CH role to various nodes, a single point of failure due to the depletion of node energy is prevented. Avoiding node failures caused by early energy depletion is a critical task for ensuring a long network lifetime. After the selection of the CH, the traffic behavior is analyzed. If the traffic is found to be normal, the data transmission process is performed. Upon detection of abnormal traffic, the one-time key is generated and checked: if the key is valid, the data transmission is executed; otherwise, the node is added to the block list.

Traffic Analysis

The amount of traffic between the sensor nodes is estimated over a specific period. During this period, the amount of traffic emitted from all other regions and the average size of the sent packets are recorded. If the difference between the estimated and actual traffic values exceeds a threshold, a probable attack is detected. This involves three main steps: 1) Assess the amount of traffic generated by the nodes located adjacent to the suspected node. 2) Estimate the traffic amount and compare it with the actual values. 3) Compare the average size of the sent packets with the pre-stored information.
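The three steps above can be prototyped as a simple threshold test; the relative-deviation threshold and the rate representation are assumptions made for illustration:

```python
def is_suspicious(node_rate, neighbor_rates, expected_rate, threshold=0.5):
    """Return True when a node's traffic deviates strongly from the expected
    amount while its neighbours' traffic does not (steps 1-3 above)."""
    node_dev = abs(node_rate - expected_rate) / expected_rate
    if node_dev <= threshold:
        return False  # traffic within normal bounds
    # A genuine event in the covered area raises the neighbours' traffic too,
    # so a lone spike at one node points at misbehavior.
    avg_neighbor = sum(neighbor_rates) / len(neighbor_rates)
    neighbor_dev = abs(avg_neighbor - expected_rate) / expected_rate
    return neighbor_dev <= threshold
```

For example, a node sending at 25 packets/s against an expected 10 is flagged when its neighbours stay near 10, but not when the neighbours show the same increase (a real event).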
If the increase in the generated traffic is the result of an event detected in the area covered by the sensor node, the neighboring nodes should experience the same increase. This allows normal and misbehaving nodes to be properly differentiated: if the adjacent nodes do not show a noticeable increase in traffic volume, the occurrence of an attack is detected more precisely. The packet size comparison is based on the observation that if an attacker tries to increase the packet sending rate, the power consumption of the nodes will be high. Hence, the misbehaving node is identified through traffic analysis.

One-Time Key Generation

In the symmetric key generation process, a static secret key is generated using the cipher text. In symmetric key cryptography, if there are N nodes, then (N − 1) keys must be generated in the network; if N is large, this requires a lot of memory for storing the key values. A public key cryptography system requires huge computational power, whereas the sensors have little processing power, so public key cryptography is not efficient for WSN applications. Hence, dynamic key generation is preferred for security in WSN, since dynamic keys are single-use symmetric cryptographic keys forming a sequence of keys. The probability of breaking a dynamic key is low. Dynamic key management does not involve a central key controller such as the BS or a third party in the rekeying process of the nodes; the key management is handled by dynamically assigned key controllers. The cryptographic keys are provided securely, while preventing the activities of attacker nodes inside the network. Upon detection of a compromised sensor node, the current secret key of the compromised node is revoked and a new key is generated. This new key is distributed to its associated nodes, except the compromised node. Moreover, it is highly desirable
for a dynamic key management scheme to maintain the security of the keys and avoid collusion between compromised nodes and newly joined nodes. Dynamic key management schemes prevent a single point of failure and ensure high network scalability. However, these schemes are prone to design errors, because compromised nodes can participate in the node removal process.

Key Generation

Several security parameters are defined during the construction of the Secure One-Time (SOT) key [28]. For signing B bits, 'n' and 'k' are first selected, the security parameter "p" is chosen, and a one-way hash function H_F that operates on p-bit strings is selected. The p-bit strings (s_1, s_2, …, s_n) are generated randomly to produce the public key, and the private key is SK = (k, s_1, s_2, …, s_n).

Signature Generation

Let H = H_F(M) for signing a message "M" with the secret key. Then "H" is split into "k" substrings H = h_1, h_2, …, h_k, and each h_z is interpreted as an integer for 1 ≤ z ≤ k. The resulting signature is the corresponding tuple of key components.

Signature Verification

Signature verification proceeds in the same way as signature generation. Given the message "M", the signature, and the public key, the verifier accepts the signature only if the check holds for each "z"; otherwise the signature is rejected. In this scheme, a public key component can be used multiple times. Signature generation requires only a single call to the hash function, but the verification process requires "k" calls. The main advantage of this scheme is the smaller signature size.

Key Generation Procedure

1) Each node chooses "n" secret key components. 2) Each node creates "m" hash chains of length "l". 3) The public key components are obtained through a one-way function, where H denotes the hash function.
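The key generation procedure above can be sketched as follows, assuming SHA-256 as the one-way hash function H (the original only requires some p-bit one-way function, so this choice is an assumption):

```python
import hashlib

def H(data: bytes) -> bytes:
    """One-way hash function operating on byte strings (SHA-256 assumed)."""
    return hashlib.sha256(data).digest()

def make_hash_chain(seed: bytes, length: int) -> list:
    """Step 2: a hash chain seed, H(seed), H(H(seed)), ... of the given length."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(H(chain[-1]))
    return chain

def keygen(n: int, master_seed: bytes):
    """Steps 1 and 3: derive n secret components and publish their hashes."""
    secrets = [H(master_seed + bytes([i])) for i in range(n)]
    publics = [H(s) for s in secrets]
    return secrets, publics
```

Publishing only the hashes lets a verifier check a revealed secret component with a single hash call, which matches the scheme's cheap signing / k-call verification trade-off.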
Public Key Handling

Each node generates a set of one-time keys that are distributed locally among the nodes. Since the one-time keys are valid once, or for a limited period of time, the nodes' one-time keys need to be updated. Initially, the public key, called the initial key, is distributed safely to the nodes during the system setup. This guarantees that each neighboring node holds an authentic copy of the public key. When a new node enters the network, it receives the public key and broadcasts it to its neighbors. Successive one-time public keys are then distributed efficiently through the periodic broadcasting of Hello messages. The first secret key SK_1 is derived from the hash chains of the secret key components, and the corresponding public key is obtained by applying the one-way function to them.

System Setup

When a node enters the network, it is notified of the security parameters in use. The node then selects its secret key components and creates a hash chain (Figure 4).

Route Discovery

If the source node (I) initiates a route discovery to a certain destination node, it generates a signature over the Route Request (RREQ). When CH1 receives the RREQ, the signature of the source node is first verified. If the signature is correct, CH1 hashes the received message and generates its own signature using the SOT key generation scheme. The whole message is retransmitted to the next CH2. Once CH2 receives this double-signed RREQ, it first verifies the previous hop using the public key of CH1 received through the Hello messages. If the one-time signature is correct, CH2 hashes the signature again and creates a signature over the hash, replacing the signature of CH1. This new message is broadcast towards the BS only if both signatures are correct. These operations are repeated until the RREQ reaches the destination node (BS). When the RREQ reaches the BS, the BS performs the same verification operations for each CH. Then, a route reply (RREP) is generated and signed in a
same manner as the RREQ. Each CH transmits the RREP towards the source node along the reverse route, and the same operations are performed along the route.

In our proposed work, the SOT key is generated dynamically based on the source ID, a random number, and the position coordinates of the nodes. The main idea behind generating a one-time key is to avoid the distribution of long-term shared cryptographic keys. Because of its randomness, the one-time key cannot be broken by compromised nodes. Hence, a fresh secret key is generated for each packet transmission between the nodes. One-time key generation prevents a compromised node or third party from extracting the key; even if the source node is compromised, the one-time key cannot be recovered by it. The security of the one-time key depends on its randomness.

As the sensor nodes report their data, the direction of the data movement is habitually towards the BS. This asymmetric communication pattern can assist a malicious node in tracking down the location of the BS, which can result in the malicious node launching serious attacks on the BS and eventually bringing down the entire network. There are several ways to track the location of the BS: 1) If a malicious node can understand the information in the packets being transmitted, it can correlate the packets travelling towards the BS. This permits the malicious node to follow the direction of these packets towards the locality of the BS, leading to the discovery, jamming, and destruction of the BS.
2) If there is a time correlation between when a node receives and forwards a packet, a malicious node can use this correlation to estimate the direction towards the BS. Hence, it is necessary to exclude malicious nodes from the data aggregation and data transmission process. The misbehavior of the nodes is checked by analyzing the traffic flow between them. Before the key generation process, the location of the node is updated. The key generation step is initiated if the traffic flow is found to be abnormal. If the one-time key of an encrypted packet is invalid, the current packet is dropped and the node is added to the block list. Data processing is performed only if the one-time key is valid. After the BS receives the aggregated data from all the child nodes, it decrypts the data and validates the signature. It then computes the final aggregation result, just like a normal intermediate node, and the final aggregation result is stored at the BS. The following algorithm is proposed to block the malicious nodes from the network.

The SOT key is generated based on the key generation scheme adopted in [28]. Here, S_ID denotes the source ID, RP is the random prime number generated for the CH, (Pos_x, Pos_y) are the node coordinates, and OT_K_i denotes the SOT key generated for the corresponding node. An Exclusive-OR operation is performed in the SOT key generation process. Figure 5 shows the flowchart of the one-time key generation algorithm.

Performance Analysis

In this section, the performance of the proposed secure area based clustering for data aggregation using traffic analysis (SAC-TA) scheme is evaluated.

Energy Analysis for Data Aggregation

The energy consumption of the sensor network over a period can be estimated as follows: • Energy spent for sensing the channel. • Energy spent to send packets from the sensor nodes to the CHs. • Energy spent to receive the gathered data from the CHs at the BS. • Average energy consumption and remaining energy for the entire data gathering process.
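The XOR-based SOT key derivation described above (combining S_ID, RP, and the node coordinates) can be sketched as follows; the way the coordinates are packed into a single integer is an assumption made for illustration, as the paper does not specify it:

```python
def sot_key(source_id: int, random_prime: int, pos_x: int, pos_y: int) -> int:
    """Derive a one-time key OT_K from the source ID S_ID, the per-CH random
    prime RP and the node position (Pos_x, Pos_y) via Exclusive-OR."""
    position = (pos_x << 16) | pos_y  # hypothetical packing of the coordinates
    return source_id ^ random_prime ^ position

# A fresh key is derived for every transmission; changing any input changes it.
k1 = sot_key(source_id=5, random_prime=104729, pos_x=12, pos_y=34)
k2 = sot_key(source_id=5, random_prime=104729, pos_x=12, pos_y=35)
```

Because XOR is self-inverting, a receiver that knows RP and the sender's position can strip those terms back off the key, while a node missing the fresh random prime cannot reproduce it.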
Scenario 1: In total, 600 packets are gathered by the CHs from the various sensor nodes. The CHs collect the packets and validate them to check whether the gathered information contains any malicious content. Figure 6(a) shows the average energy consumption for a number of packets gathered from the sensor nodes by the corresponding CHs. The results are compared with the existing Secure Data Aggregation Technique (SDAT). The proposed SAC-TA results in lower energy consumption than the existing SDAT.

Scenario 2: The BS collects the gathered data from the CHs. If any misbehaving activity is identified, a one-time key is generated to determine whether the packet contains falsely injected data or securely gathered data. After the process is completed, the remaining energy for the corresponding packet reception is calculated to determine the energy utilization rate of each node; the energy efficiency of a node is high if its remaining energy is high. Figure 6(b) compares the remaining energy of the SAC-TA and SDAT approaches. The proposed approach yields higher remaining energy than the existing SDAT.

Scenario 3: The process is evaluated for a simulation time of up to 100 s. The remaining energy is noted every 20 seconds and compared with the existing method. Since the highest-energy nodes are used as the CHs to aggregate the packets, the remaining energy is high, which helps to increase the network lifetime. Figure 6(c) shows the average remaining energy for the proposed and existing methods. The proposed SAC-TA results in higher residual energy than the existing SDAT.

Scenario 4: In our analysis, 200 sensor nodes are used for the data aggregation process, and the average energy consumption is examined while varying the number of nodes. Figure 6(d) compares the proposed SAC-TA with the existing SDAT as the number of nodes varies up to 200. The results show that the proposed approach consumes less energy than the existing method.
Time Taken for Secure Data Aggregation

The time taken to complete the secure data aggregation refers to the time taken to transfer a packet across the sensor network from the sensors to the CHs and from the CHs to the BS. The End-to-End (E2E) delay is estimated as

E2E delay = N · (d_t + d_pb + d_pr),

where d_t denotes the transmission delay, d_pb the propagation delay, d_pr the processing delay, and "N" the number of links. The CH is selected based on the predefined time slot, improving the system performance without causing unwanted link failures; this automatically reduces the E2E delay of data aggregation. Figure 7 shows the E2E delay for the proposed SAC-TA and the existing SDAT; the proposed scheme's delay is considerably lower than that of the existing method.

False Data Detection

A malicious node hacks the network and steals the gathered data; false data are injected by malicious nodes to degrade the network performance. Such malicious nodes need to be isolated and removed from the network. Here, the traffic flow is analyzed and the abnormal behavior of the sensor nodes is identified. A one-time key generation procedure is introduced to validate the encrypted information about the gathered data. This helps to identify false data among the gathered data and provides a secure data gathering environment. Figure 8 shows the false data detection rate of the proposed SAC-TA and the existing method; the proposed approach achieves a better detection rate than the existing method as the transmitted data vary.

Data Aggregation Accuracy

The data aggregation accuracy is an important factor in predicting successful system performance. The accuracy measure is taken while varying the simulation time up to 100 s. At the final stage, the process completes with an accuracy of about 89.5% for receiving the gathered data at the BS. In total, 600 packets are transmitted by the sensor nodes to the CHs, and approximately 576 packets are gathered and collected by the BS.
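The E2E delay expression from the timing analysis above (per-link transmission, propagation, and processing delays summed over N links) can be evaluated directly; the sample per-link delay values are illustrative assumptions:

```python
def e2e_delay(n_links: int, d_t: float, d_pb: float, d_pr: float) -> float:
    """End-to-end delay over n_links hops: N * (d_t + d_pb + d_pr)."""
    return n_links * (d_t + d_pb + d_pr)

# e.g. 3 links (sensor -> CH -> CH -> BS) with illustrative per-link delays in seconds
delay = e2e_delay(3, d_t=0.010, d_pb=0.002, d_pr=0.001)
```

The model treats every link identically; heterogeneous links would instead sum per-link terms individually.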
Figure 9 shows the aggregation accuracy for the proposed approach with the existing method.The comparative analysis of the resultant aggregation accuracy values for the proposed SAC-TA approach and existing SDAT method are shown in Table 3.It shows the proposed SAC-TA approach achieves better accuracy rate than the existing SDAT method. Alive Nodes Analysis The number of alive nodes is calculated for each round to find the energy efficiency of the network.After com- pletion, the proposed approach contains approximately 84 alive nodes, whereas the existing holds only 30 alive nodes.Figure 10 shows the comparison of alive nodes for proposed and existing methods.The proposed approach holds higher alive nodes than the existing method.The proposed SAC-TA method improves the network lifetime.Hence, it is clearly evident that the proposed approach achieves better performance than the existing methods. Conclusion and Future Work In this paper, a SAC-TA approach is presented for data aggregation in WSN.The false injection attack is identified based on the traffic analysis at the time of route discovery process.One-time key generation algorithm is introduced to eliminate the malicious nodes from the network.The efficiency of the proposed data aggregation scheme is evaluated by comparing it with the existing secure data aggregation scheme (SDAT).It results in the lower energy consumption to gather the data from diverse sensor nodes.It automatically improves the residual energy, because half of the sensor nodes are alive at the end of the gathering process.The proposed approach also achieves low end-to-end delay and better false detection rate and aggregation accuracy, when compared to the existing method.In the future work, the behavior of the WSN on internal attackers is investigated, and the SAC-TA scheme is extended to resist such attacks.This will guarantee the secure data aggregation under the presence of attacks. Figure 1 . 
Figure 1. General architecture for the data aggregation model in WSN.
Figure 2. Flow of the proposed SAC-TA approach.
Figure 3. Flow diagram of the optimal CH selection process.
Figure 4. Hash chain of secret key components.
Figure 6. Energy analysis for the proposed SAC-TA and the existing SDAT: (a) average energy consumption for sending the packets to CHs, (b) average remaining energy after receiving the packets, (c) average remaining energy across the simulation time (100 s), and (d) average energy consumption across varying numbers of nodes.
Figure 7. Comparison of end-to-end delay for SAC-TA and SDAT.
Figure 8. False data detection across the transmission data for SAC-TA and SDAT.
Figure 10. Number of alive nodes after data aggregation completion.
Table 3. Comparative analysis for aggregation accuracy.
8,773.8
2016-06-02T00:00:00.000
[ "Computer Science", "Engineering" ]
D-(+)-Galactose-induced aging: A novel experimental model of erectile dysfunction
Erectile dysfunction (ED) is defined as the inability to achieve and/or maintain penile erection sufficient for satisfactory sexual relations, and aging is one of the main risk factors involved. The D-(+)-galactose aging model is a consolidated methodology for studies of cardiovascular aging; however, its potential for use with ED remains unexplored. The present study proposed to characterize a new experimental model for ED using the D-(+)-galactose aging model. For the experiments, the animals were randomly divided into three groups receiving: vehicle (CTL), D-(+)-galactose 150 mg/kg (DGAL), and D-(+)-galactose 150 mg/kg + sildenafil 1.5 mg/kg (DGAL+SD1.5), administered daily for a period of eight weeks. All of the experimental protocols were previously approved by the Ethics Committee on the Use of Animals at the Federal University of Paraíba, n° 9706070319. During the treatment, we analyzed physical, molecular, and physiological aspects related to the aging process and implicated in the development of ED. Our findings demonstrate for the first time that D-(+)-galactose-induced aging represents a suitable experimental model for ED assessment. This was evidenced by the observed hyper-contractility in corpora cavernosa, significant endothelial dysfunction, increased ROS levels, increased cavernous tissue senescence, and the loss of essential penile erectile components.

Introduction
Erectile dysfunction (ED) is defined as the inability to achieve and/or maintain sufficient erection for satisfactory sexual relations [1]. Its prevalence tends to increase throughout the individual's life, affecting mainly men over 40 years old [2].

Animals
Forty male Wistar rats (Rattus norvegicus), eight weeks old, from the Animal Production Unit of the Institute for Research in Drugs and Medicines (IPeFarM) of the Federal University of Paraíba (UFPB) were used.
The animals were kept under appropriate environmental conditions: temperature of 22 ± 1 ˚C, a 12-hour light-dark cycle (6–18 hours), and free access to water and food (Nuvilab CR-1, Quimtia®), with the physical and mental health of the animals recorded on a daily basis. After confirmation of anesthesia induced by the intraperitoneal administration of xylazine and ketamine (10 and 75 mg/kg, respectively), the animals were euthanized by exsanguination. All experimental protocols were carried out according to the guidelines established by the Brazilian National Council for Animal Experiment Control (Conselho Nacional de Controle de Experimentação Animal, CONCEA), obeying law No. 11.794/2008, and were submitted to and previously approved by the Ethics Committee on the Use of Animals (Comissão de Ética no Uso de Animais, CEUA) of the UFPB, n˚9706070319.

Experimental design
The animals were randomly assigned into three experimental groups: the control group (CTL), which received physiological saline solution (NaCl 0.9%) intraperitoneally (IP); the D-galactose group (DGAL), which received D-(+)-galactose at 150 mg/kg via IP; and the sildenafil group (DGAL+SD1.5), which received both D-(+)-galactose at 150 mg/kg via IP and sildenafil at 1.5 mg/kg by oral gavage. All of the animals were subjected to eight weeks of treatment with daily administration. The IP administrations were standardized at a volume less than or equal to 2 mL/kg [16]. The administered dose of D-(+)-galactose (150 mg/kg) was chosen based on a review of the literature citing doses sufficient to induce aging in the animals [17,18]. Sildenafil was administered at 1.5 mg/kg, corresponding approximately to a dose of 100 mg administered to an adult man of 70 kg body weight [19].

Monitoring body weight and blood glucose
Variations in animal body weight were assessed throughout the treatment. The animals were weighed individually three times a week, always before administration of their respective treatments.
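The stated sildenafil dose correspondence can be checked by simple per-kilogram scaling. Note this assumes a direct mg/kg equivalence (the paper cites [19] for the conversion; formal allometric scaling would give a different number):

```python
# Direct per-kilogram dose scaling (an assumption; see lead-in).

def human_dose_mg(dose_mg_per_kg: float, body_weight_kg: float) -> float:
    """Total dose for a subject of the given body weight."""
    return dose_mg_per_kg * body_weight_kg

total = human_dose_mg(1.5, 70)  # 105.0 mg, i.e. roughly the 100 mg clinical dose
```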
The values were expressed as average weekly weight in grams (g). Glycemic analysis was performed at the end of treatment, on the day of euthanasia. For this, one drop of blood was collected from the end of the caudal vein and applied to a strip attached to an Accu-Chek Guide glucometer (Roche®, Brazil). Glycemic values were expressed in mg/dL.

Erectile function measurements: ICP/MAP ratio
Erectile function was assessed using the ICP/MAP (intracavernous pressure/mean arterial pressure) ratio methodology, adapted from that previously described by Kim and colleagues [20]. Briefly, at eight weeks of treatment, the animals were anesthetized with a mixture of xylazine and ketamine (10 and 75 mg/kg, respectively, via IP). A polyethylene (PE) catheter filled with heparinized saline (200 IU/mL) was then implanted into the right common carotid artery to measure the mean arterial pressure (MAP). To record intracavernous pressure (ICP), a 30G needle, connected to a PE tube (10 mm) filled with heparinized saline (200 IU/mL), was inserted in the crural region of the left corpus cavernosum. Subsequently, the cavernous nerve was identified and a bipolar bronze stimulating electrode (Animal Nerve Stimulating Electrode, MLA0320, ADInstruments, United States of America) was placed and electrically stimulated with 1 millisecond (ms) pulses, at 6 volts (V) and 16 Hz, lasting 60 seconds (s). Two cycles of electrical stimulation were performed, with an interval of at least 5 minutes between stimulations. MAP and ICP variations were measured using pressure transducers (Disposable BP Transducer, MLT0699, ADInstruments) coupled to the PowerLab® data acquisition system (LabChart® software, version 8.1; ADInstruments, USA) [21].
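From recorded traces, the ICP/MAP summary can be computed as below. This is a hedged sketch: the paper follows Kim et al. [20], and the exact summary statistic (peak ICP vs. area under the curve) is an assumption here, as are all numeric values.

```python
# One common ICP/MAP summary: peak intracavernous pressure normalised by
# mean arterial pressure over the stimulation window (an assumption).

def icp_map_ratio(icp_trace, map_trace):
    """Peak ICP divided by mean MAP, both sampled over the same window."""
    peak_icp = max(icp_trace)
    mean_map = sum(map_trace) / len(map_trace)
    return peak_icp / mean_map

icp = [12, 30, 65, 70, 68, 40]        # mmHg during stimulation (illustrative)
mart = [100, 102, 98, 100, 101, 99]   # mmHg arterial samples (illustrative)
ratio = icp_map_ratio(icp, mart)      # 70 / 100 = 0.7
```

A lower ratio indicates weaker erectile responses, which is how the DGAL-group deficit and the sildenafil rescue are quantified.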
Corpora cavernosa were suspended vertically in isolated organ baths (Panlab Multi Chamber Organ Baths, ADInstruments, Australia) by two stainless steel rods and immediately submerged in 10 mL of Krebs-Ringer solution at 37 ˚C, bubbled with a carbogenic mixture (95% O2 and 5% CO2), maintained at pH 7.4, under a stabilizing tension of 0.5 g, for 60 minutes. Tension changes were measured using isometric transducers (MLT020, ADInstruments, Australia) and recorded in a PowerLab® data acquisition system (ML870/P, LabChart version 7.0, ADInstruments, Australia). The contractility of the corpus cavernosum was assessed against an increasing and cumulative addition of Phe (10 nM–300 μM) and via electrical field stimulation (EFS) using different frequencies (1, 2, 4, 8, and 16 Hz) with 50 V electrical pulses of 1 ms duration. The treated groups' corpus cavernosum relaxing responses were evaluated by increasing and cumulative addition of ACh (1 nM–10 μM) and SNP (100 pM–100 μM).

ROS measurements
The redox-sensitive fluorescent dye DHE was used to evaluate the formation of ROS (reactive oxygen species). The corpus cavernosum was isolated and embedded in OCT compound, then immediately frozen in liquid nitrogen for 5 minutes before being transferred to and stored in a freezer at -80 ˚C until further experimentation. Microtomy of the tissue was performed in a cryostat at -20 ˚C, obtaining cuts of 8 μm thickness. The tissue was fixed on slides, washed with phosphate-buffered saline (PBS) (161.0 mM NaCl, 1.8 mM NaH2PO4·H2O, and 15.8 mM Na2HPO4), and incubated with DHE (5 μM) for 30 minutes, at 37 ˚C, in a humid chamber protected from light [22]. Subsequently, the sections were washed (twice) before being mounted under coverslips in Fluorescence Mounting Medium (DAKO©). Images were obtained with a Nikon® Eclipse Ti-U fluorescence microscope (Japan). Quantification of staining levels was performed using NIS-Elements® software.
The data were normalized to the CTL group and expressed as percentage fluorescence.

Morphometric analysis
For histological sections, tissue from the mid-transversal part of the penis was fixed in buffered formaldehyde (10%) and embedded in paraffin blocks, from which 5 μm sections were cut. Hematoxylin-eosin staining was used for morphometric measurement. The images were obtained using an Olympus BX-60 microscope and an Olympus camera coupled with the Olympus CellSens Dimension digital image capture program (USA). The morphometric areas were acquired using the "polygon area" function of the Olympus CellSens Dimension program according to the given methodology, as modified by Correa et al. [23].

Histochemical analysis of SA-β-galactosidase
Analysis of senescence-associated β-galactosidase (SA-β-galactosidase) was adapted from that previously described by Chang and colleagues [14]. The animal penile segments were embedded in OCT compound and immediately frozen in liquid nitrogen (3 min). After freezing, cryostat sections (5 μm) of the tissue were cut at -20 ˚C. Subsequently, the tissue was washed with PBS and then fixed with a solution of formaldehyde (2%) and glutaraldehyde (0.2%) for a period of 5 minutes. In sequence, the tissues were washed with PBS and incubated with the X-gal staining solution (150 mM NaCl, 2 mM MgCl2, 5 mM potassium ferrocyanide, 5 mM potassium ferricyanide, 1 mg/mL X-gal, in 40 mM citrate-phosphate buffer, pH 6.0) for a maximum period of 18 h, at 37 ˚C, in a humid chamber protected from light [24]. Subsequently, the sections were washed with PBS solution to remove the excess X-gal staining solution and taken immediately for analysis under a microscope (Nikon Eclipse Ti-E, Nikon, Japan).

Statistical analysis
The data were expressed as mean ± standard error of the mean (SEM).
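The summary statistic just described, mean ± SEM, amounts to dividing the sample standard deviation by the square root of the sample size; a minimal sketch with illustrative data:

```python
# mean ± SEM, where SEM = sample standard deviation / sqrt(n).
import math
import statistics

def mean_sem(data):
    """Return (mean, standard error of the mean) of a sample."""
    n = len(data)
    return statistics.mean(data), statistics.stdev(data) / math.sqrt(n)

m, sem = mean_sem([1.0, 2.0, 3.0, 4.0, 5.0])  # mean 3.0, SEM ≈ 0.707
```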
For statistical analysis of the concentration-response curves, the maximum effect (Emax) values calculated from non-linear regression of the obtained responses were used. Student's t-test and two-way analysis of variance (ANOVA) with the Bonferroni post-test were applied. The data were considered significant when p < 0.05. All analyses were performed using the GraphPad Prism® version 7.0 statistical program.

Evaluation of physical characteristics, body weights, and blood glucose levels
The animals studied presented differences in their appearance at the end of the treatment (Fig 1). The rats in the CTL group had smooth, healthy-looking, and shiny hair with uniform color; however, the animals in the DGAL group presented curly, coarse, and opaque hair, with darker regions and severe hair loss (Fig 1A and 1B). The animals in both the CTL and DGAL groups presented similar gradual increases in their body weights, without statistical differences (n = 5; p > 0.05) (Fig 1C). At the end of the eight-week treatment, glycemic levels in the CTL and DGAL animal groups (121.2 ± 4.09 mg/dL and 118.8 ± 5.73 mg/dL, respectively) were similar and without statistical differences (n = 5; p > 0.05). The relaxation response induced by the increasing and cumulative addition of SNP (100 pM–100 μM) did not result in a significant difference in maximum effect (p > 0.05).

The D-(+)-galactose accelerated aging model induced increased levels of superoxide anions in the corpus cavernosum isolated from rats
Superoxide anion measurements were performed in corpus cavernosum isolated from Wistar rats. Redox-sensitive DHE fluorescent dye was used in both the CTL and DGAL groups. The animals in the DGAL group presented a significant increase in fluorescence intensity (233.58 ± 13.69%, n = 4) when compared to the CTL group (100.00 ± 13.16%, n = 4; p < 0.05) (Fig 4).
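The percentage-fluorescence values quoted above are intensities normalised to the CTL group mean (so the CTL group centres on 100% by construction); a minimal sketch with illustrative numbers, not the study's data:

```python
# Express each intensity as a percentage of the control-group mean.

def percent_of_control(values, control_values):
    """Normalise raw intensities to the mean of the control group, in %."""
    ctl_mean = sum(control_values) / len(control_values)
    return [100.0 * v / ctl_mean for v in values]

ctl = [10.0, 11.0, 9.0, 10.0]   # arbitrary intensity units (illustrative)
dgal = [22.0, 25.0]
normalised = percent_of_control(dgal, ctl)  # roughly a two-fold increase
```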
Discussion
In the present study, a novel ED model associated with mimetic aging induced by D-(+)-galactose in Wistar rats was characterized. The daily administration of 150 mg/kg D-(+)-galactose via IP (eight weeks) reduced erectile function in vivo, promoting hyper-contractility and endothelial dysfunction in isolated corpus cavernosum, as well as promoting oxidative stress, reducing the proportion of erectile components, and increasing senescence markers in penile tissue. Chronic administration of D-(+)-galactose for a period of six to ten weeks is well described as a model to accelerate the natural aging process [25,26]. Physiologically, the monosaccharide is converted to glucose by galactose-1-phosphate uridyltransferase and galactokinase [27]. Yet if in excess, deleterious metabolic disturbances are generated, with several effects such as immune system cell dysfunction, sexual hormone deficiencies, increases in inflammatory cytokines, increases in cellular apoptosis, decreases in total antioxidant capacity, and increases in oxidative stress (via oxidative metabolism) [26,28,29]. Taken together, these effects, mainly mediated by persistent oxidative stress, favor the development of disease by affecting both structure and function in pertinent tissues and organs [26,30]. Despite D-(+)-galactose being widely used for aging research, its potential association with ED remains unexplored. To test the hypothesis that the aging model induced by D-(+)-galactose can trigger ED, we treated Wistar rats with chronic daily administration of D-(+)-galactose (150 mg/kg) for eight weeks. Initially, we observed the rats' physical appearance: at the end of the treatment period, the animals in the DGAL group presented physical characteristics such as severe hair loss and curly or opaque hair with darker regions. This was in contrast to animals in the CTL group, which presented smooth hair with a healthy look and a bright, uniform color.
Such aging characteristics were also observed in a study developed by Zhao and colleagues [29] in rats treated with D-(+)-galactose for eight weeks. The animals' body weights were also monitored during the eight weeks of treatment. During this period it was observed that the animals of the experimental groups all presented a similar gradual increase in their body weights, demonstrating that administration of D-(+)-galactose did not interfere with the animals' body weights. This was also observed in studies developed by Cardoso and colleagues [31]. There was also no significant change in glycemic levels among animals in the treated groups, demonstrating that D-(+)-galactose does not interfere with glucose metabolism. After the treatment period, the most widely used method for in vivo evaluation of erectile function in rats, the ICP/MAP ratio, was assessed [32]. Electrical stimulation of the cavernous nerve promotes nitrergic discharge, inducing relaxation of the corpus cavernosum with consequent elevation of ICP [33]. The ICP/MAP ratio in the DGAL group was significantly reduced as compared to the CTL group, demonstrating, for the first time in the literature, that the D-(+)-galactose-induced aging model is effective in promoting ED. Similar results demonstrating ED were observed in a study showing that elderly rats (physiological aging) presented a decreased ICP/MAP ratio [34]. Treatment with sildenafil, in animals in the DGAL+SD1.5 group, promoted a significant increase in the ICP/MAP ratio as compared to animals in the DGAL group, demonstrating that the treatment prevented ED. This result can be explained by the increase in cGMP via PDE-5 inhibition in the corpus cavernosum, as well as by decreases in oxidative stress and restoration of the pro-oxidant/antioxidant equilibrium, which reduces endothelial damage and increases nitric oxide (NO) bioavailability [10,35,36]. These mechanisms favor relaxation of the trabecular smooth muscle and result in penile erection.
Given this in vivo observation of changes in erectile function, the next step was to assess whether changes in the contractile and relaxing reactivity of corpus cavernosum isolated from the rats are involved in this process. These results are important, since erectile function is a hemodynamic process, and any imbalance is closely related to ED [37]. Therefore, knowing that noradrenergic discharge and stimulation of α-adrenergic receptors favor increases in corpus cavernosum smooth muscle tone, and consequently impair the state of erection [38], the contractile reactivity of the corpus cavernosum was evaluated using cumulative Phe and EFS curves. After the treatment period, in response to Phe and EFS, rats of the DGAL group presented hyper-contractility of the corpus cavernosum as compared to the CTL group. This effect may be related to up-regulation of the contractile pathways in the corpus cavernosum, autonomic neuropathy (caused by exacerbation of sympathetic activity), and/or greater noradrenergic receptor sensitivity [39]. NO is another important factor and plays a key role in corpus cavernosum tonus regulation. Changes in NO synthesis or bioavailability can favor corpus cavernosum contraction and, consequently, the development of ED [40]. We therefore evaluated whether NO release was affected by the treatments through the action of ACh on the endothelial cells. ACh, an endothelial muscarinic agonist, was evaluated for its role in endothelium-dependent relaxation impairment. We observed that endothelium-dependent relaxation mediated by ACh was significantly impaired in the DGAL corpus cavernosum strips as compared to the CTL strips. Age-related changes result in altered endothelial cell function and cause reductions in cellular nitric oxide levels, with subsequent impairment of penile smooth muscle relaxation. To induce its vasorelaxant effect, ACh targets muscarinic (M3) receptors on endothelial cells, triggering NO release.
Under our experimental conditions, animals of the DGAL group presented a significantly impaired relaxation response to ACh as compared to the CTL group. This effect reveals an endothelial dysfunction that may be associated with decreased NO bioavailability, yielding impaired corpus cavernosum relaxation [10,36,39]. Lafuente-Sanchis and colleagues [41] have demonstrated that the reduction in endothelium-dependent vasodilation in response to ACh in elderly animals is likely related to endothelial dysfunction in the cavernous trabeculae. In addition to assessing endothelium-dependent relaxation, we also investigated impairment of pathways directly involved in relaxation of corpus cavernosum smooth muscle tissue. For this, SNP was used; the relaxation it induced did not present statistical differences between the groups in the maximum response, but did show a reduction in the potency of the relaxation response in the DGAL group compared to the CTL group, suggesting that the functionality of the smooth muscle cells of the corpus cavernosum may thus be altered. Recent studies suggest that endothelial dysfunction in age-induced ED is likely related to oxidative stress [42]. Similarly, the D-(+)-galactose accelerated aging model revealed an increase in ROS levels leading to oxidative damage [26]. Further, increased oxidative stress has also been linked to lower NO concentrations. In age-related ED, ROS have been postulated as a principal cause of impaired cavernous function. We thus evaluated whether ROS would also increase in corpus cavernosum isolated from rats, using histological sections from both the CTL and DGAL groups and measuring the fluorescence intensity emitted by a DHE probe. In these experiments, the animals in the DGAL group presented a significant difference in fluorescence intensity as compared to the CTL group. This suggests an increase in superoxide anion levels, contributing to cavernous tissue remodeling, a key event in the pathophysiology of ED.
Corroborating our findings, Gur and his group [43] have demonstrated an increase in ROS levels in the smooth muscle and endothelium of the corpus cavernosum in elderly rats as compared to young animals. In addition to functional abnormalities, age-related ED is associated with structural changes resulting in the loss of essential penile erectile components [44]. Morphologically, a significant reduction in the muscle cell layer was observed in the DGAL group as compared to the CTL group, suggesting a loss of erectile components essential for penile erection. This reduction functionally alters the smooth muscle of the corpus cavernosum, as revealed by the significant reduction in SNP potency in the DGAL group as compared to the CTL group. Yet it is likely that such morphological changes do not modulate functionality sufficiently to alter the maximum SNP response. These data are in agreement with several previous studies, which reveal that both in aged men and in aged animals, a decline in erectile capability is associated with a diminishing number of smooth muscle tissue cells [45][46][47][48]. Similar data were also observed in an ED model induced by diabetes [49]. Reduction of erectile function with aging has been extensively reported and related to multiple functional, morphometric, molecular, and cellular changes that lead to a significant loss of erectile capability. Accumulation of senescent cells is a biological marker of aging and is associated with increased lysosomal SA-β-galactosidase activity. We found that, in cavernous tissue, the DGAL group presented an increase in SA-β-galactosidase activity when compared to the CTL group, suggesting an accumulation of senescent cells. Similar results have been demonstrated in the cardiac tissue of animals receiving the same treatment with D-(+)-galactose [14].
D-gal is a normal substance in the body; however, at high levels, accumulating free D-gal is converted into secondary metabolites such as galactitol, hydrogen peroxide, and Schiff's bases, which in turn induce inflammation, cellular apoptosis, and degenerative changes, resulting in aging and age-related disorders. Further, this model is characterized by increased inflammatory cytokines and up-regulated P16, P53, and P21 gene expression [13,26,50]. One of the main limitations of the present study is that the model only partially reflects the real physiological and biochemical changes of natural aging. In addition, in the present study, inflammatory mediators and the P53-P21, PI3K/Akt, and AMPK/ULK1 pathways were not measured. Nevertheless, due to its ability to mimic the senescent characteristics of natural aging, D-galactose-induced aging is potentially an ideal model for anti-aging therapeutic intervention studies. In summary, our results demonstrate for the first time that the D-(+)-galactose aging model was able to promote ED in Wistar rats, through hyper-contractility and endothelial dysfunction in the rat corpus cavernosum. These effects may be related to oxidative stress, decreased erectile components, and accumulation of senescent cells in the corpora cavernosa of these animals.

Conclusion
The present study reports on a novel ED rat model, successfully induced by D-(+)-galactose (daily, for 8 weeks) and validated based on functional, cellular, molecular, and morphometric analyses. The D-(+)-galactose-induced aging model was able to mimic ED in Wistar rats. The present study found, in isolated rat corpus cavernosum, that ED is associated with hyper-contractility and endothelial dysfunction. These effects appear to be associated with β-galactosidase activity, through an increase in oxidative stress, loss of erectile components, and increased cell senescence.
4,825.8
2021-04-15T00:00:00.000
[ "Biology" ]
Defect in Gauge Theory and Quantum Hall States
We study the surface defect in $\mathcal{N}=2^*$ $U(N)$ gauge theory in four dimensions and its relation to quantum Hall states in two dimensions. We first prove that the defect partition function becomes the Jack polynomial of the variables describing the brane positions by imposing the Higgsing condition and taking the bulk decoupling limit. Further tuning the adjoint mass parameter, we may obtain various fractional quantum Hall states, including Laughlin, Moore-Read, and Read-Rezayi states, due to the admissible condition of the Jack polynomial.

Introduction
The relation between the low-energy physics of supersymmetric gauge theories and integrable systems has been an active research topic for decades [1][2][3]. One of the best-known stories is that the Seiberg-Witten curve of an N = 2 supersymmetric gauge theory can be identified with the spectral curve of an integrable system. This correspondence was later extended to the quantum level by Nekrasov and Shatashvili in [4,5], with the gauge theories subjected to the Ω-deformation. This deformation introduces two parameters (ε1, ε2) associated with rotations on the two orthogonal planes in R4 = C2. The partition function Z and BPS observables can be computed exactly by localization techniques for a variety of gauge theories [6]. In the limit (ε1, ε2) → (0, 0), the classical integrable system is recovered. The Nekrasov-Shatashvili limit (NS-limit for short), ε2 → 0 with ε1 kept finite, results in an N = (2, 2) supersymmetry being preserved in the fixed plane, and one expects to obtain the quantum integrable system.
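The NS prescription can be stated compactly as follows; this is the standard form from [4,5], added for reference, and normalisations and signs may differ from this paper's conventions:

```latex
% Effective twisted superpotential extracted in the NS-limit:
\widetilde{\mathcal{W}}(a;\varepsilon_1)
  \;=\; \lim_{\varepsilon_2 \to 0}\, \varepsilon_2 \log Z(a;\varepsilon_1,\varepsilon_2),
\qquad
\exp\!\Big( \frac{\partial \widetilde{\mathcal{W}}}{\partial a_\alpha} \Big) = 1 ,
% i.e. the vacua of the effective 2d N=(2,2) theory solve Bethe-type equations
% of the associated quantum integrable system.
```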
From gauge theory to integrable model
One is naturally led to ask how to compute the wavefunctions of the integrable system. The stationary-state wavefunctions, in the context of the Bethe/gauge correspondence, correspond to the vacua of the two-dimensional N = (2, 2) theory. In order to obtain the stationary wavefunction, we compute the expectation value of a special observable in the two-dimensional theory, namely a surface defect in the four-dimensional theory [7][8][9][10][11][12]. It turns out that the introduction of a codimension-two surface defect provides a powerful tool in the study of the Bethe/gauge correspondence. The parameters of the defect become the coordinates on which the wavefunction depends. The four-dimensional theory with a codimension-two surface defect can be realized as a theory on an orbifold, and the localization computations extend so as to compute the defect partition function and the expectation values of BPS observables.

Our scope is the class of qq-character observables in the gauge theory [6]. The main statement in [13] proves certain vanishing conditions for the expectation values of the qq-observables, both with and without defects. These vanishing conditions, called non-perturbative Dyson-Schwinger equations, can be used to construct KZ-type equations [14] satisfied by the partition function [15,16]. In the NS-limit, the KZ-equations become a Schrödinger-type equation satisfied by the partition function.
Jack polynomial and quantum Hall state
The Laughlin wavefunction has provided a key to understanding the quantum Hall effect (QHE). It models the simplest abelian FQH state and is the building block of the model wavefunctions of more general states, both abelian and non-abelian, such as the Moore-Read and Read-Rezayi states. The wavefunctions of such models, aside from the Gaussian factor which we will drop, are conformally-invariant multivariable polynomials. All three of the Laughlin, Moore-Read, and Read-Rezayi state wavefunctions are proven to be special cases of the Jack polynomial, with the Jack parameter κ taking negative rational values [17][18][19].

Summary and organization
In this paper we will establish the relations between three objects: the surface operator in the 4-dimensional N = 2* theory, the Jack polynomials, and fractional quantum Hall states. The main end-result is to realize the fractional quantum Hall states as the instanton partition function of the 4-dimensional N = 2* gauge theory in the presence of a full-type surface defect, in the following simultaneous limits:
(i) the Nekrasov-Shatashvili limit ε2 → 0,
(ii) the bulk-decoupling limit q = e^{2πiτ} → 0,
(iii) Higgsing the Coulomb moduli parameters {a_α} to sums of the adjoint mass m and the Ω-deformation parameter ε1,
(iv) tuning the ratio between the adjoint mass m and ε1 to control the filling factor of the quantum Hall states.

The paper is organized as follows:
• In section 2 we review the instanton partition function of N = 2* and prove that in the Nekrasov-Shatashvili limit ε2 → 0 (i) the defect partition function is an eigenfunction of the elliptic Calogero-Moser system.
• In section 2.3, we show that in the trigonometric limit τ → i∞ (ii) the Calogero-Moser Hamiltonian becomes the Laplace-Beltrami operator after a canonical transformation. The Jack polynomials are the eigenfunctions of the Laplace-Beltrami operator.
• In section 3 we review some basic properties of Jack polynomials.
• In section 4 we impose the Higgsing condition (iii) on the N = 2* supersymmetric gauge theory. The Higgsing truncates the infinite summation of the instanton partition function. By using the Young tableaux representation of the instanton configuration, we prove that the defect partition function becomes the Jack polynomial after Higgsing.
• In section 5 we recover both the Laughlin and Moore-Read quantum Hall states from the defect partition function with a fine tuning of the adjoint mass m (iv). We also discuss the admissible condition satisfied by the Jack polynomial.
• We end this paper with a discussion of potential future work in section 6.

2 Four Dimensional N = 2* Gauge Theory
We consider N = 2* U(N) gauge theory in four dimensions with adjoint mass m. The vacuum of the theory is characterized by the Coulomb moduli parameters a = (a_1, . . ., a_N) and the exponentiated complexified gauge coupling q = e^{2πiτ}. The instanton partition function can be calculated via supersymmetric localization in the presence of an Ω-background, whose deformation parameters are (ε_1, ε_2). The instanton configuration is labeled by a set of Young diagrams λ = (λ^{(1)}, . . ., λ^{(N)}), each recording the number of boxes in the rows of the diagram. We define the formal sum of the exponentials

Σ_{α=1}^{N} Σ_{(i,j)∈λ^{(α)}} e^{a_α + (i−1)ε_1 + (j−1)ε_2}.  (2.3)
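The formal sum above attaches the weight a_α + (i−1)ε_1 + (j−1)ε_2 to each box (i, j) of each Young diagram. Purely as bookkeeping, this can be sketched as follows (illustrative helper, not the paper's code):

```python
# Enumerate the exponents a_alpha + (i-1)*eps1 + (j-1)*eps2 over the boxes
# (i, j) of the Young diagrams lambda^(alpha), using 1-based box labels.

def box_weights(a, lam, eps1, eps2):
    """a: Coulomb moduli (length N); lam: N partitions as tuples of row lengths."""
    weights = []
    for a_alpha, part in zip(a, lam):
        for i, row_len in enumerate(part, start=1):
            for j in range(1, row_len + 1):
                weights.append(a_alpha + (i - 1) * eps1 + (j - 1) * eps2)
    return weights

# U(2) example with lambda = ((2, 1), (1,)):
# boxes of lambda^(1): (1,1)->0.0, (1,2)->0.5, (2,1)->1.0; lambda^(2): (1,1)->5.0
w = box_weights([0.0, 5.0], [(2, 1), (1,)], eps1=1.0, eps2=0.5)
```

The total number of weights equals the instanton number |λ|, which is the quantity graded by the counting parameter q.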
The pseudo-measure associated to the instanton configuration is defined using the index functor E, which converts the additive Chern-class character into a multiplicative class; here n_a ∈ Z is the multiplicity of the Chern root x_a, and θ(z; p) is the theta function defined in (A.8). We remark on the hierarchical structure of conventions built on θ(e^{−x}; p). In this paper we mostly apply the rational convention, which corresponds to four-dimensional gauge theory. The pseudo-measure associated to the instanton configuration λ is computed by (2.5), where q_i = e^{ε_i} are the exponentiated Ω-deformation parameters, with P_i = 1 − q_i. Given a virtual character X = Σ_a n_a e^{x_a}, we denote by X* = Σ_a n_a e^{−x_a} its dual virtual character. Supersymmetric localization equates the supersymmetric partition function of the Ω-deformed A_0 U(N) theory with the grand canonical ensemble (2.6). The pseudo-measure Z[λ] can be expressed in terms of products of Γ-functions, with suitable parameters x. The 1-loop contribution to the partition function can be expressed in terms of x by (2.9). In the product of the 1-loop and instanton contributions, the x terms cancel completely (2.10).

Introducing Surface defect
Recent developments in the BPS/CFT correspondence [6,7,11] show that differential equations of two-dimensional conformal field theories, such as the KZ equation [7,16,20] and the KZB equations, can be verified by adding a regular surface defect to the supersymmetric gauge theory. These conformal equations become eigenvalue equations of the integrable model in the Nekrasov-Shatashvili limit (NS-limit for short) ε2 → 0.
See also [21,22] for a relation to the (q-)hypergeometric function. Moreover, the surface defect is also used to discuss non-perturbative aspects of N = 2 * theory [23][24][25][26], and its relation to the isomonodromic system [27,28]. The codimension-two surface defect is introduced in the form of a Z l orbifolding acting on R 4 = C 1 × C 2 by (z 1 , z 2 ) → (z 1 , ζz 2 ) with ζ l = 1. The orbifold modifies the ADHM construction, generating a chainsaw quiver structure [29]. Such a defect is characterized by a coloring function c : [N ] → Z l that assigns the representation R c(α) of Z l to each color α = 1, . . ., N . Here and below R ω denotes the one-dimensional complex irreducible representation of Z l , where the generator ζ is represented by multiplication by exp 2πiω l for ω ≡ ω + l. In general one can consider the Z l orbifold for any integer l. A surface defect is called a full-type/regular surface defect if l = N and the coloring function c is bijective. Hereafter, we consider this case with the coloring function of the form (2.11). In the presence of the surface defect, the complex instanton counting parameter q fractionalizes into N couplings (q ω ) ω=0,...,N −1 (2.12). The coupling q ω is assigned to the representation R ω of the quiver. We also define the fractional variables z α , α = 1, . . ., N . From the string theory point of view, these variables {z ω } ω=1,...,N are interpreted as the (exponentiated) brane positions, whereas the couplings {q ω } ω=0,...,N −1 are (exponentiated) distances between the branes. The defect instanton partition function is an integration over the Z N -invariant fields (2.14). Here k ω denotes the number of squares in a colored Young diagram that is in the R ω representation of the Z N orbifold.
For the convenience of later calculation, we scale ε 2 → ε 2 N and define the shifted moduli, which are neutral under the orbifolding. All the ADHM data can now be written in terms of the shifted moduli (2.17). The expectation value of the defect partition function Z defect in the NS-limit ε 2 → 0 has the asymptotics [30,31], with the singular part identical to the bulk instanton partition function (2.6). The leading order contribution Z surface is the surface partition function [10].

qq-character and eigenvalue equation

As we have stated previously, differential equations from conformal field theories such as the KZ-equation and KZB equations can be verified with the introduction of a regular surface defect. The key to these verifications relies on an observable called the qq-character [6]. The fundamental qq-character of the N = 2 * ( A 0 quiver) U (N ) gauge theory is given by [6-8], where µ is a single Young diagram µ = (µ 1 , µ 2 , . . . ) obeying µ 1 ≥ µ 2 ≥ · · ·. One may realize µ as a "dual" instanton configuration in the eight dimensional gauge origami construction [32]. Each square in µ is labeled by (i, j). Here a ij = µ i − j denotes the "arm" associated to a given box (i, j) in the Young diagram µ, and l ij = µ T j − i is the leg of the same box. We also define h ij = a ij + l ij + 1. The qq-character X (x)[λ] is a Laurent polynomial in Y (x) with shifted arguments, defined on a specific instanton configuration λ. The most important property of the qq-character is that its expectation value is a polynomial in x. In the presence of a regular surface defect, the argument x is assigned to the R ω representation of the orbifold and shifted to x + ω N ε 2 . The fractional qq-character is built from the fractional Y -function (2.30). The factor B µ ω is the orbifolded version of q |µ| B[µ]. We denote the ensemble over all dual partitions µ of each ω as follows. The fractional qq-character X ω (x) shares the same property as the bulk qq-character: its expectation value is a degree one polynomial in x. We
expand the RHS in the large x limit and denote by [x −I ]X ω (x), I = 1, 2, . . ., the coefficient of the x −I term in the Laurent expansion of X ω (x). The following equation can be translated to differential equations acting on the defect partition function Z defect . See [7] for details. For our interest, we will look at the I = 1 case. The large x expansion of Y ω (x) is as follows, where k ω is defined in (2.15) and (2.36). The summation in K ω (2.15) runs through the colored squares in the Young diagram that are in the R ω representation of the Z N orbifold. The large x expansion of the fractional qq-character follows. Here we define the differential operator for ω = 0, . . ., N − 1. By summing over ω and taking the expectation value, we obtain a second order differential equation for the defect partition function, where ∇ z ω+N = ∇ z ω and ν = m ε 1 is the ratio between the adjoint mass and the Ω-deformation parameter ε 1 . The function Q is defined in (A.20). Here Θ A N −1 ( z; τ ) is the rank N − 1 theta function defined as a product of Jacobi theta functions; ρ is the Weyl vector of the SU (N ) root system, whose entries are given in (2.42). See section A for definitions of the theta function and eta function. We use the heat equation for Q in (A.22) to rewrite the ∇ q ω -derivative term in (2.39) as a ∇ z ω -derivative. The defect partition function now obeys the resulting equation. In the NS-limit ε 2 → 0, the shifted moduli approach the bulk moduli ãα → a α . Eq.
(2.39) becomes an eigenvalue equation in the NS-limit (2.45). The Hamiltonian takes the form (2.46), with the eigenvalue (2.47). The differential operator on the right hand side of the eigenvalue equation (2.44) can be rewritten as the elliptic Calogero-Moser (eCM) Hamiltonian (2.48) after a canonical transformation. The complexified gauge coupling τ = 4πi g 2 + ϑ 2π plays the role of the elliptic modulus. This is the Bethe/gauge correspondence between the elliptic Calogero-Moser system and four dimensional A 0 U (N ) supersymmetric gauge theory in the presence of a regular surface defect [7,33]. See also [34,35] for a more geometric interpretation. The coupling constant ν is identified as the ratio between the adjoint mass and the Ω-deformation parameter. The parameter matching between the gauge theory and the Calogero-Moser integrable system is summarized in the following.

Bulk decoupling limit

For the purpose of this paper, we will focus on the trigonometric Calogero-Moser system instead of the elliptic version. We have shown that the complex gauge coupling acts as the complex modulus of the elliptic function. From this point of view, the bulk decoupling limit 1 g 2 → ∞ (Im τ → ∞; q → 0) corresponds to the trigonometric limit of the ℘-function. The elliptic Calogero-Moser Hamiltonian (2.48) becomes the trigonometric Calogero-Moser (tCM) Hamiltonian. On the gauge theory side, the bulk decoupling limit q → 0 becomes q N −1 → 0 in the presence of the regular surface defect. The bulk instanton, which is now labeled by the R N −1 representation of the Z N orbifold and counted by q N −1 , only has the trivial (no instanton) configuration counted toward the ensemble in (2.6) in the bulk. It gives a vanishing superpotential (2.51). Even though the bulk instanton now becomes trivial, there can be non-trivial instanton configurations on the surface. The defect partition function now consists solely of the surface defect contribution (2.52). The width of the colored partition is thereby constrained. For later convenience, let
us take a transpose of all Young diagrams labeling the instanton configurations (2.55). The defect Nekrasov instanton partition function consists of only the surface defect term (2.56), where we have defined the quantities below. Here we multiplied by the one loop factor Z 1-loop ({a α }, ε 1 ) to simplify the expression in (2.56). In the bulk decoupling q → 0 limit, the theta function reduces to a trigonometric function, lim q→0 . The second order differential operator Ĥ in (2.46) in the decoupling limit simplifies, and we identify Ĥ as half of the Laplace-Beltrami operator, with the following identification of the parameters. We identify the defect partition function (with a suitable pre-factor) as an eigenfunction of the Laplace-Beltrami operator (2.63).

Center of mass frame

In a two-body system, the center of mass frame can be separated. The Laplace-Beltrami operator can be rewritten in a variable z 2 = z 2 /z 1 and a center of mass variable u 2 = z 1 z 2 . The wave function Ψ(u, z) takes the separated-variable form with a constant b ∈ C. After decoupling the center of mass, we denote z = e x ; the Laplace-Beltrami operator becomes an operator acting on f (x). We consider the following test function to find the eigenfunction f (x) of the Laplace-Beltrami operator. The ansatz is chosen such that it takes the form of a polynomial in z 1 and z 2 . In order for f A,B (x) to be an eigenfunction, we choose A = 0, 1 − 2κ and B = 0, 1, which annihilate the cosh 2 x sinh 2 x and sinh 2 x cosh 2 x terms. There are four cases in which f A,B (x) is an eigenfunction of H LB :
• the Jack polynomial defined on a single partition of integer 1, with eigenvalue E 0,1 = κ + 1 2 .
• Here we choose b such that Ψ is a polynomial in z 1 and z 2 .

Defect partition function

The defect instanton configuration is given by a single-column Young diagram, λ (1) = (k). Let us recall that we have chosen the coloring function c(α) = α − 1 in (2.11).
(2.69) The defect instanton partition function (2.56) takes the form shown, where 2 F 1 is the hypergeometric function. The eigenfunction Ψ in (2.45) is given by (2.71). As we have proven before, the eigenvalue is given by (2.63). In the context of gauge theory, it is natural to consider a positive adjoint mass ν = m ε 1 ≥ 0. One may also consider the limit ν → 0, in which the N = 2 * gauge theory recovers the N = 4 symmetry. In such a case, all instanton configurations share the same pseudo-weight in the ensemble. The eigenvalue can be found by (2.74). An interesting case we would like to investigate is the ν = −1 case. By the identification κ = ν + 1 = 0, the Laplace-Beltrami operator (2.61) is nothing but the Hamiltonian of free particles. On the gauge theory side, we notice that the only instanton configuration with a non-vanishing pseudo-measure is the no-instanton configuration. The wave function Ψ is indeed that of free particles, with the eigenvalue being nothing but the kinetic energy (2.77). The hypergeometric function in (2.70) truncates when κ ∈ Z <0 . The defect partition function Z defect becomes a degree −κ polynomial in q 0 = z 2 /z 1 .

3 Jack Polynomial

In this section we collect some facts about the Jack polynomial. See, e.g., [36,37] for more details. A Jack polynomial J 1 κ n (z 1 , . . ., z N ) is a symmetric polynomial in the variables {z 1 , . . ., z N } labeled by the partition n = (n 1 , n 2 , . . ., n N ). Let M n be the orbit sum, where σ ∈ S N is a permutation of the set {1, . . ., N }. When κ → 0, J n → M n , the monomial wavefunction of the free boson state with occupancy number l(n).
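The truncation of the hypergeometric series for κ ∈ Z <0 mentioned above is easy to check directly from the series definition. The parameters in this sketch are illustrative, not the paper's specific arguments of 2 F 1 :

```python
from sympy import symbols, rf, factorial, Rational, expand, degree

z = symbols('z')

def hyp2f1_poly(a, b, c, z, nmax=20):
    """Partial sum of the 2F1 series; it terminates exactly when the
    first parameter a is a negative integer (rf(a, n) hits zero)."""
    terms = []
    for n in range(nmax):
        coeff = rf(a, n) * rf(b, n) / (rf(c, n) * factorial(n))
        if coeff == 0:
            break
        terms.append(coeff * z**n)
    return expand(sum(terms))

# Illustrative parameters: a = -2 stands in for a negative-integer coupling,
# so the series truncates to a degree-2 polynomial.
p = hyp2f1_poly(-2, Rational(1, 2), Rational(3, 2), z)
print(p)  # 1 - 2*z/3 + z**2/5
```

The same mechanism is what reduces the defect partition function to a degree −κ polynomial in the fractional coupling.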
The Dunkl operator D i is defined as follows. It involves the operators that exchange the i-th and the j-th variables and the differentiation operators with respect to those variables. Any two Dunkl operators commute with each other. It is known that the eigenfunctions of the Laplace-Beltrami operator are the Jack polynomials. The energy spectrum is given by (3.8), with the notation defined in (3.9). Jack polynomials with a negative rational value of κ are used to construct wavefunctions of the fractional quantum Hall effect. In particular the Laughlin, Moore-Read, and Read-Rezayi fractional quantum Hall wave functions can be explicitly written as single Jack symmetric polynomials, whose partitions n obey the (k, r)-admissible condition [17][18][19] and whose coupling κ = − r−1 k+1 is set to a negative rational number, where r − 1 and k + 1 are coprime. See [38] for properties of the Jack polynomial at negative rational coupling.

Concrete expressions

We define the power sum polynomial. Here we list a few of the Jack polynomials J 1 κ n (z) given in terms of the power sum polynomials. The Jack polynomial J 1 κ n (z) can have divergent coefficients when κ is a negative rational number.

4 Higgsing the Coulomb Moduli Parameters

The wave function Ψ (2.45) built from the defect instanton partition function Z defect was proven to be an eigenfunction of the Laplace-Beltrami operator (2.61) in section 2. Ψ is a function of the fractional couplings {z α }, the adjoint mass m, the Ω-deformation parameter ε 1 , and the Coulomb moduli parameters {a α }. In this section we will demonstrate how the wave function Ψ becomes a Jack polynomial: by fine-tuning the Coulomb moduli parameters {a α } with respect to the adjoint mass m, the infinite instanton summation is reduced to a finite number of terms. This finite summation can be recast as a summation over Young Tableaux. The summation is identified as the combinatorial formula for the Jack polynomial [39].
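The statement above that Jack polynomials are eigenfunctions of the Laplace-Beltrami operator can be checked symbolically for small N. The operator normalization and eigenvalue formula below follow one common convention, which may differ from the paper's (3.8)-(3.9) by rescalings:

```python
import sympy as sp

z1, z2, kappa = sp.symbols('z1 z2 kappa')

def H_LB(f):
    """One common form of the Laplace-Beltrami operator for N = 2:
    sum_i (z_i d_i)^2 + kappa * (z1+z2)/(z1-z2) * (z1 d_1 - z2 d_2).
    (Normalization assumed; conventions vary between references.)"""
    E1 = z1 * sp.diff(f, z1)
    E2 = z2 * sp.diff(f, z2)
    euler2 = z1 * sp.diff(E1, z1) + z2 * sp.diff(E2, z2)
    cross = kappa * (z1 + z2) / (z1 - z2) * (E1 - E2)
    return sp.simplify(euler2 + cross)

# Jack polynomial P_(2,0) in the alpha = 1/kappa convention
P20 = z1**2 + z2**2 + 2*kappa/(1 + kappa) * z1*z2
lhs = H_LB(P20)
# Eigenvalue: sum_i n_i^2 + kappa * sum_i (N + 1 - 2i) n_i with n = (2, 0)
E = 4 + 2*kappa
assert sp.simplify(lhs - E * P20) == 0
print("P_(2,0) is an eigenfunction with eigenvalue", E)
```

The same check goes through for the other low-degree Jack polynomials listed in terms of power sums.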
To put the system on a circle, we denote z ω = e 2πixω with a periodic variable x ω ≡ x ω + 1. A quantization condition on the moduli parameters a α shall be imposed for the wave function Ψ to be single valued; this gives the condition (4.2). The combination −m − ε 1 might seem odd at first. One can understand it by putting the gauge theory in the framework of gauge origami [32]. In gauge origami, the adjoint mass m is realized by the Ω-deformation parameter ε 3 on the third complex plane C 3 . The combination −m − ε + = ε 4 is the Ω-deformation parameter on the fourth complex plane C 4 , which becomes −m − ε 1 in the ε 2 → 0 limit. Hence, in terms of the Ω-deformation parameters, the Jack polynomial parameter is written as follows. The condition (4.2) imposes a locus on the Higgs branch where it meets the Coulomb branch, known as the root of the Higgs branch for N = 2 * theory [40]. The physical interpretation of n α is turning on a magnetic flux in the 23-direction for the α-th U (1) factor in the U (N ) gauge group. We denote the set of these U (1) fluxes by n = (n α ) α=1,...,N . We can realize the quantization in the D-brane construction of N = 2 * gauge theory. Let us first consider the case with no magnetic flux. The mass of the adjoint hypermultiplet is realized by the Ω-deformation on the R 2 45 = C 3 space. The two ends of the D4 brane on the NS5 no longer align, due to the twisted boundary condition. In particular, this allows all D4 branes to join together to form a single helical D4, or a coil wrapping along the x 4 and x 6 directions. See Figure 1 for an illustration. We now turn on the magnetic flux. The quantized magnetic flux can be realized as n α D2 branes "dissolving" into the α-th D4 brane. To minimize the energy, the D2 branes prefer to stay inside the D4 brane. The compact helical structure of the D4 branes hence makes the engineering of these D2 branes subtle. It is done in the following way: we take n α D2 branes and stretch them from the single NS5 brane to the D4 brane in the x 7 direction,
then stretch them along the x 4 and x 6 directions at a fixed x 7 inside the toroidal D4 brane, and finally stretch them back to the NS5 brane in the x 7 direction. See Figure 2 for an illustration. Near the region transverse to the D4 brane, there are n α D2 branes with one orientation in x 7 and n α+1 D2 branes of the opposite orientation. They locally annihilate each other, leaving only n α − n α+1 net D2 branes stretching along the interval. The net magnetic flux from the D2 branes cancels the net magnetic charge in the D4 brane, which is n α − n α+1 by the opposite orientation of the adjacent D4 branes. We would like to argue that, without loss of generality, we can consider the case of non-increasing fluxes and choose the coloring function c(α) = α − 1. For generic coloring functions, the U (1) fluxes (n α i ) i=1,...,N can be arranged in a non-increasing order. By renaming each n α i as n i , we arrive at the case of non-increasing fluxes with the coloring function c(α) = α − 1. Furthermore we can set n N = 0 with an overall boost. The wave function Ψ in (2.56) can be simplified with the Higgsing condition (4.2). Here we assume n 1 ≥ n 2 ≥ • • • ≥ n N = 0. In order to see that Ψ consists of only a finite number of terms, we notice that an instanton configuration λ that counts toward the ensemble must obey (4.11) for any α = 1, . . ., β < ω. This restricts the length of each row in the Young diagram λ (α) , leaving only a finite number of terms in the ensemble in (4.10). The wave function Ψ in (4.10) has an eigenvalue which matches the Jack polynomial spectrum (3.8).

Example 1: Let us consider the simplest case n = (1, 0, . . ., 0). The only instanton configurations that count toward the ensemble are single-column Young diagrams; the instanton configuration is a single column of length one for l = 0, 1, . . ., N − 2. We find the wave function is of the form of the first power sum symmetric polynomial of (z 1 , . . ., z N ), which agrees with the corresponding Jack polynomial J (1) (z 1 , . .
., z N ).

Example 2: With some deliberate calculation, we find the wave function takes the following form. Defining the power sum polynomial, Ψ can be rewritten in terms of power sums. The wave function is identified as the Jack polynomial defined on the corresponding partition n.

Example 3: The defect instanton partition function of the U (2) theory is an ensemble over single-row Young diagrams. Here we list the value of Ψ n 1 for the first few values of n 1 . For general n 1 , it has poles at particular values of κ.

Example 4: Here we consider n = (n 1 , n 2 , n 3 ) = (1, 1, 0) for the N = 3 case. The instanton configuration λ must satisfy the constraint to have a non-vanishing contribution toward the ensemble. We obtain a result which agrees with the Jack polynomial defined on the partition n = (1, 1, 0).

Example 5: We now consider N = 3, n = (n 1 , n 2 , n 3 ) = (2, 1, 0). The instanton configuration needs to satisfy the constraint to have a non-vanishing pseudo-weight in the ensemble. There are seven instanton configurations that meet the above requirements. Again we see the wave function Ψ is the Jack polynomial defined on the same partition n = (2, 1, 0).

Young Tableaux representation

We will now introduce an alternative way to denote the instanton configurations λ that the ensemble in (4.10) sums over. Let us consider a semi-standard Young Tableau T n [λ = ∅] of shape n. The initial reading of each box in the α-th row is α. We define the Young Tableau T n [λ] based on an instanton configuration λ by the following procedure: starting with T n [λ = ∅], we increase the reading of the last λ (α) i squares in the α-th row (with the counting starting from the left) by one, and repeat the process for i = 1, . .
., N − α. On each individual row this process guarantees that the reading stays non-decreasing when moving towards the right, since λ (α) i ≥ λ (α) i+1 . For the j-th square in the α-th row, the final reading is given below, and the j-th square in the (α + 1)-th row will have the corresponding reading. The constraint on the instanton configuration (4.11) ensures that the reading of the squares in the Young Tableau T n [λ] is always non-decreasing when moving rightward and always strictly increasing when moving downward. Thus T n [λ] is semi-standard for any instanton configuration λ. The ensemble in (4.10) now sums over the Young Tableaux T n of shape n = (n 1 , . . ., n N ). We denote the reading of the j-th square in the α-th row as T αj . These readings can be translated back to the corresponding instanton configuration. Here λ T,(α) denotes the conjugate of the instanton configuration λ (α) (also known as the transpose of λ (α) ). By the construction of the Young Tableau T n , the counting of the instanton configuration λ, which is the power of z α , is as given. We now argue that this value is the weight t α of T n [λ]; in other words, t α counts the occurrences of the number α in T n [λ]. The first two terms are straightforward: they count the occurrences of the number α after we increase the readings of the squares in the α-th column. In the β-th column with β < α, a square that can have a reading of α needs to be increased α − β times but not any further. Since each λ (α) i only increases the reading by 1, the β-th column will have exactly λ α+1−β readings of the number α. The wave function (4.10) can be rewritten as an ensemble over the semi-standard Young Tableaux T n whose reading at the (α, j) square satisfies the stated bound. Given a Young Tableau T n [λ] whose largest reading is less than or equal to N (not necessarily equal to N ), we can define a series of sub-Young Tableaux. The sub-Young Tableau T (i) has its readings less than or equal to i; by its construction, no reading j > i appears. The instanton configuration λ can be obtained accordingly. The weight t α of the Young Tableau T n [λ]
equals the expression below, with the partial weights defined accordingly. The wave function (2.56) from the defect instanton partition function can now be written in terms of an ensemble over the Young Tableaux, where z Tn = ∏ N i=1 z t i i . Eq. (4.39) is the combinatorial formula for the Jack polynomial [36, Chapter VI, §10]. In the massless limit m → 0, which translates to the κ → 1 limit (Schur limit) of the Jack parameter under the Bethe/gauge correspondence, all instanton configurations λ satisfying (4.11) share a common pseudo-measure in the ensemble. The wave function Ψ is an ensemble over the instanton configurations that satisfy (4.11). Using the Young Tableaux representation, we immediately identify Ψ as a Schur polynomial.

Example: Let N = 3, n = (2, 1, 0). We start with a Young Tableau T n [∅] that represents the no-instanton configuration λ = (∅, ∅, ∅). Here we list out all Young Tableaux denoting the instanton configurations λ. There are only eight semi-standard Young Tableaux of shape n, each of which corresponds to an instanton configuration. One can check that in each case the weight of T n [λ] equals the instanton counting.
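The count of eight tableaux in the example, and the Schur-limit identification, can be verified by brute force. This is a small enumeration sketch using the standard semi-standard tableau conditions (rows weakly increasing, columns strictly increasing), assumed to match the paper's conventions:

```python
from itertools import product
import sympy as sp

def ssyt(shape, nmax):
    """Enumerate semi-standard Young tableaux of the given shape with
    entries in {1, ..., nmax}."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    for values in product(range(1, nmax + 1), repeat=len(cells)):
        T = dict(zip(cells, values))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)]
                      for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)]
                      for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            yield T

tabs = list(ssyt((2, 1), 3))
print(len(tabs))  # 8, matching the count in the text

# Summing z^T over the tableaux gives the Schur polynomial s_(2,1)(z1, z2, z3)
z = sp.symbols('z1:4')
s21 = sum(sp.prod(z[v - 1] for v in T.values()) for T in tabs)
print(sp.expand(s21))
```

The expanded sum is z1²z2 + z1²z3 + z1z2² + z2²z3 + z1z3² + z2z3² + 2 z1z2z3, the Schur polynomial for the partition (2, 1, 0).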
Higher dimensions

We have discussed the surface defect in four dimensional gauge theory and its relation to the Jack polynomial. From the gauge theory point of view, one can generalize this setup to higher dimensions. Imposing codimension-two defects, we would obtain 5d/3d and 6d/4d coupled systems, which correspond to the hierarchy of rational/trigonometric/elliptic integrable systems. Based on a similar setup in five dimensions, it has been shown that the defect partition function can be identified with the Macdonald polynomial, which is an eigenfunction of the Ruijsenaars-Schneider operator [41,42]. In fact, the Macdonald polynomial also has a tableau formula, which is a trigonometric analog of (4.39) (see [36, Chapter VI, §7] and [43]). The previous formula (4.39) is reproduced by putting t = q κ and then taking the limit q → 1. This expression is obtained in parallel from the defect partition function by replacing the index functor (2.4) with the trigonometric version (5d/3d theory convention), although its derivation from the qq-character would be more involved. We obtain an elliptic analog of the formula (4.44) from the 6d/4d setup with the elliptic index, where Γ(z; q, p) is the elliptic Γ-function (A.11). This defines an elliptic analog of the Macdonald polynomial [44,45].
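The degeneration t = q κ , q → 1 mentioned above works factor by factor: the (q, t)-weights in the Macdonald tableau formula are built from ratios such as (1 − t)/(1 − q), and with t = q κ each such ratio tends to a rational function of κ. A minimal symbolic check of the basic building block (illustrative, not the full tableau weight):

```python
import sympy as sp

q = sp.Symbol('q', positive=True)
kappa = sp.Symbol('kappa', positive=True)

# With t = q**kappa, the elementary ratio (1 - t)/(1 - q) tends to kappa
# as q -> 1; products of such ratios produce the kappa-dependent
# coefficients of the Jack tableau formula.
ratio = (1 - q**kappa) / (1 - q)
lim = sp.limit(ratio, q, 1)
print(lim)  # kappa
```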
5 Quantum Hall States

In this section, we discuss a possible connection between four dimensional gauge theory and the two dimensional fractional quantum Hall (FQH) effect. The idea is as follows. On the four dimensional gauge theory side, we apply the Ω-background on each complex plane, which plays the role of a background U (1) magnetic field. See, e.g., [46,47]. Then, imposing the surface defect and taking the bulk decoupling limit, we may focus on the two dimensional system with the background field, which realizes the QH effect. In fact, it has been shown that a wide class of FQH wave functions, for the ground state and with quasi-hole excitations, are realized using the Jack polynomial (up to a trivial Gaussian factor which we will drop here) that we have already obtained from the gauge theory analysis. We pursue the direction suggested by Nekrasov [48] and construct more generic FQH wave functions.

Laughlin State

The lowest Landau level (LLL) wave function is in general given by a product of a conformally-invariant holomorphic multi-variable polynomial ψ LLL (z 1 , . . ., z N ) and a trivial Gaussian factor that we will drop here. The Laughlin wave function is a key to understanding the physics of the FQH effect. It models the simplest abelian FQH state and is the building block for more general cases, both abelian and non-abelian. The Laughlin wave function is the eigenfunction of the Laplace-Beltrami operator (2.61) with the parameter identification below. This can be easily verified by noticing that the Laughlin state ψ (r) L is annihilated by the Dunkl operator. Hence the Laughlin state is annihilated by i z i D (1) i , which we identify accordingly. For the Laughlin state to be a polynomial in (z α ) α=1,...,N , i.e.
r ∈ Z >0 , it would require the Laplace-Beltrami coupling κ to be negative. On the gauge theory side, this means the adjoint mass must be a negative half integer (5.6). We impose the quantization condition (4.2) such that the Young diagram n is (1, r, N )-admissible. Indeed, we find the eigenvalue of the gauge theory instanton partition function matches that of the Laughlin state.

We consider the following quantization condition for the U (4) gauge theory: (5.16). The corresponding partition n is (2, 2)-admissible (5.17). An instanton configuration that has a non-zero pseudo-measure must obey the constraints below. The λ (2) instanton configurations are one of the following, and λ (1) is always dominated by λ (2) . Here we list out all the instanton configurations and their contributions toward the ensemble. As we have demonstrated earlier, each instanton configuration λ = (λ (1) , λ (2) , λ (3) = ∅) can be expressed using semi-standard Young Tableaux. We start with (5.20).

In general, the Jack polynomial can have poles at negative rational values of κ. However, if the partition n is (k, r)-admissible then the Jack polynomial will not have a pole at κ = − r−1 k+1 [38]. This is known as the admissible condition. The admissible condition concerns the pole structure at a handful of particular values of the Jack parameter κ; it does not mean the Jack polynomial is an entire function of κ. It has been pointed out [17][18][19] that this admissible condition properly captures the clustering property of the FQH states, so that the corresponding wave functions are generally obtained as Jack polynomials with negative couplings. For example, the Laughlin state and the MR state correspond to k = 1 and 2, respectively. The higher k cases are the Read-Rezayi states, which are associated with the Z k -parafermion CFT, while the k = 2 case corresponds to the Ising CFT. One can also apply this formalism to the FQH states with spin degrees of freedom [53,54].
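The (k, r)-admissibility used above can be stated concretely as n i − n i+k ≥ r for all valid i. A small checker, using this standard form of the condition (assumed to match the paper's conventions):

```python
def is_admissible(n, k, r):
    """A non-increasing partition n is (k, r)-admissible if
    n_i - n_{i+k} >= r for every valid index i."""
    return all(n[i] - n[i + k] >= r for i in range(len(n) - k))

print(is_admissible((2, 0), 1, 2))        # True: the N = 2 example in the text
print(is_admissible((1, 0), 1, 2))        # False: the gap 1 is below r = 2
# A Laughlin-type (k = 1, r = 2) partition for N = 4 (illustrative):
print(is_admissible((6, 4, 2, 0), 1, 2))  # True
```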
We have seen the example in the N = 2 case with partition n = (2, 0). This partition n is (1, 2)-admissible (5.27). The Jack polynomial defined on this partition (5.28) does not have a pole at κ = −1/2.

Instanton sum formula

In the previous section we expressed the Jack polynomial as an ensemble over instanton configurations (4.10). It is easier to see the pole structure of the wave function Ψ (2.45) by rewriting the pseudo-measure in the form of Γ-functions, multiplying by the 1-loop factor in (2.58): (5.30). The last line comes from the 1-loop factor and does not depend on the instanton configuration λ. In order for Ψ to be finite, the poles coming from the 4 Γ-functions must be canceled by the Γ-functions in the denominators. Let κ → − s t be a negative rational number, with s, t ∈ Z >0 coprime. We isolate the Γ-functions whose arguments are integers in the limit κ → − s t . The two terms that have no dependence on the instanton configuration λ come from the 1-loop contribution Z 1-loop ; their combined contribution is (5.32). It is clear that this does not give poles, as the Γ-function in the numerator has a larger argument than its denominator counterpart. For the Γ-functions in the first line, a similar argument applies, where we parametrize the Ω-background parameters as q i = e ε i for i = 1, . . ., 4. This expression implies that the admissible condition may be interpreted as the Higgsing process in C 1 × C 4 from the point of view of gauge origami. A similar situation has been studied in [55], which points out that such a Higgsing condition is interpreted as the resonance condition [56,57] in the context of quantum toroidal algebras.
6 Discussion and Future Direction

In this paper we have established relations between three objects: the surface operator of 4d gauge theory, the Jack polynomials, and fractional quantum Hall states. In particular, the main result is to realize the fractional quantum Hall states as the instanton partition function of four dimensional N = 2 * theory in the presence of a full-type surface defect (up to an overall Gaussian factor) in the following simultaneous limits:
• Using the qq-character we are able to identify the instanton partition function of four dimensional N = 2 * supersymmetric gauge theory in the presence of surface defects as the eigenfunction of the N -body elliptic Calogero-Moser system in the Nekrasov-Shatashvili limit ε 2 → 0 (i).
• The trigonometric limit q = e 2πiτ → 0 (ii) of the elliptic Calogero-Moser system, which translates to the bulk decoupling limit in the gauge theory, simplifies the gauge theory partition function to the surface contribution. The defect instanton partition function is then proven to be an eigenfunction of the Laplace-Beltrami operator.
• With the proper Higgsing condition (iii) imposed on the Coulomb moduli parameters, the defect supersymmetric partition function is identified with the Jack polynomial, with the defining partition given by the quantization condition. On the four dimensional gauge theory side, the presence of both the orbifolding and the Higgsing can be understood as two different types of codimension-two surface defects introduced simultaneously.
• We also explored the reconstruction of the Laughlin and Moore-Read states from the defect instanton partition function (iv). It is well known that the Laughlin and Moore-Read states serve as models for the study of both the abelian and non-abelian quantum Hall effect (up to an overall Gaussian factor) with the filling fraction given by ν = ε 3 /ε 1 .
The translation from the defect partition function (with the bulk decoupled) to the FQH state wavefunction requires a Gaussian factor shared by all FQH states; here B is the magnetic field, e is the electron charge, and ℏ = ε 1 is the Planck constant. In section 4 we placed the trigonometric Calogero-Moser system on a circle with z α = e 2πixα . The Gaussian factor is nothing but an overall constant, since |z α | = 1 for all α = 1, . . ., N − 1. It would be nice to see in a more general case whether the Gaussian factor can be realized physically from the gauge theory side. We would like to note that our construction of the FQHE from 4d N = 2 * theory is similar but not equivalent to the construction in [58]. In the latter, the ADE N = (2, 0) theory in 6d lives on S 3 ε 2 /ε 1 × R × Σ, where S 3 ε 2 /ε 1 is a squashed 3-sphere and Σ is a 2d Riemann sphere known as the Gaiotto curve. In particular, the FQHE filling fraction is identified as ν − 2 ε 2 ε 1 in [58].

The qq-character observables (2.23) are known to have an analytic property in their argument. We would like to know if one can use the analytic property of the qq-character to prove the admissible condition of the Jack polynomial: a Jack polynomial J 1 κ n is regular at κ = − r−1 k+1 when the partition n is (k, r)-admissible. The admissible condition has been proven using the clustering properties of the Jack polynomial [59]. The difficulty lies in the fact that the analytic property of the qq-character is associated to its argument x, which in the context of gauge origami is the moduli parameter on the auxiliary space C 2 34 . On the other hand, the Jack parameter κ that the admissible condition addresses is associated to the adjoint mass m and the Ω-deformation parameter ε 1 in the gauge theory. Both are free parameters in the gauge theory. It would be very helpful if the gauge theory could provide a proof of the admissible condition of the Jack polynomial. It is known that the supersymmetric gauge theory instanton partition function has five and six
dimensional extensions. Using the same strategy with the qq-character, one should recover the Macdonald polynomial and its elliptic uplift. Furthermore, if a proof of the admissible condition for the Jack polynomial can be found from the corresponding four dimensional gauge theory, it is possible that an admissible condition could be established for both the Macdonald polynomial and its elliptic uplift using the five and six dimensional gauge theories.

A Jack polynomial is labeled by a partition n and a parameter κ. In the context of the QHE, the partition n can be represented as a (bosonic) occupation number configuration l(n) = {l m (n), m = 0, 1, 2, . . .} of the lowest Landau level (LLL) orbits of angular momentum L = mℏ, where for m > 0 the number l m (n) is the multiplicity of m in n. Given a partition n = (n 1 , n 2 , . . ., n N ), let β < ω ≤ N to have a non-vanishing pseudo-measure. Remember that the instanton Young diagram is limited in height, λ (β) N +1−β = 0. By iteration, we obtain
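The occupation-number description quoted above is easy to make concrete. A small sketch; note that the m = 0 entry depends on the number of particles N through the trailing zeros of n:

```python
from collections import Counter

def occupation(n):
    """Occupation numbers l_m(n): for m > 0, the multiplicity of m in the
    partition n, i.e. how many particles sit in the LLL orbit of angular
    momentum m (l_0 counts the zeros of n, so it depends on the chosen N)."""
    counts = Counter(n)
    return {m: counts[m] for m in range(max(n) + 1)}

# The partition (4, 2, 0) puts one particle in each of the m = 0, 2, 4 orbits:
print(occupation((4, 2, 0)))  # {0: 1, 1: 0, 2: 1, 3: 0, 4: 1}
```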
Graphene-Based Photo-Fenton Catalysts for Pollutant Control

Water pollution is a global environmental issue with multi-dimensional influences on human life. Some strategies, such as the photo-Fenton reaction, have been employed to remove recalcitrant pollutants. Two-dimensional (2D) graphene and its three-dimensional (3D) configurations have attracted considerable attention as emerging carbon-based catalysts in photo-Fenton fields owing to their alluring properties in electron transfer, reactant adsorption, and light response. This review summarizes the recent developments in 2D and 3D graphene-based catalysts for photo-Fenton reactions. Their structures, characteristics, activity, and mechanisms are discussed. Conclusions and outlooks are proposed for a profound understanding of the challenges and future directions.

Introduction

With the continuous development of human societies and industries, water pollution is becoming increasingly serious and riveting attention among researchers. Although only a small fraction of pollutants is present in aquatic environments, most of them bio-accumulate and resist traditional physical removal methods [1]. Consequently, innovative strategies, such as advanced oxidation processes (AOPs), have been introduced; the Fenton reaction, discovered in 1894, is one of the most studied [2][3][4][5][6][7][8][9][10]. The main reactions stated below [Eqs. (1), (2)] reveal the major oxidation mechanism, in which ferrous ions (Fe2+) continuously react with hydrogen peroxide (H2O2) to produce hydroxyl radicals (·OH) with strong oxidizing ability, which play a vital role in the subsequent degradation of pollutants:

Fe2+ + H2O2 → Fe3+ + OH− + ·OH (1)

Fe3+ + H2O2 → Fe2+ + HO2· + H+ (2)

The concurrent reduction of ferric ions (Fe3+) to Fe2+ closes the circulation of iron ions. However, Eq. (2) is the rate-determining step, whose rate is approximately 1/6000 that of Eq. (1), which greatly abates the effectiveness of the Fe3+/Fe2+ circulation [11].

The invalidation of the Fe3+/Fe2+ circulation induces not only an insufficient utilization efficiency of H2O2 but also the aggregation of Fe3+, which precipitates to form ferric hydroxide (Fe(OH)3), namely iron sludge, when the pH is above 3, leading to thorny secondary pollution [12]. Traditional Fenton reactions possess four major drawbacks: (1) low utilization efficiency of H2O2, (2) narrow pH range, (3) excess iron ion loss and secondary pollution from iron sludge, and (4) difficulties in recycling powder catalysts [13][14][15][16]. As a branch of Fenton reactions, the photo-Fenton reaction improves the utilization efficiency of H2O2 after the introduction of ultraviolet (UV) or visible light because of the photo-induced reduction of Fe3+ to Fe2+ [Eq. (3)] and the production of ·OH [Eq. (4)] [11]. In traditional photo-Fenton reactions, ferrous compounds, such as ferrous sulfate (FeSO4), are added directly into aqueous catalytic systems to react with H2O2 in the ionic state [17]. This type of homogeneous reaction exacerbates the formation of iron sludge, resulting in blocked iron recycling and secondary pollution [18]. Iron-based catalysts may contribute to interface reactions between iron ions and H2O2, suppressing the formation of iron sludge and the waste of iron sources [19]. Various methods to introduce iron-based catalysts have been investigated to ameliorate the pH application range, the formation of iron sludge, and catalyst recycling in photo-Fenton reactions [20,21]. In photo-Fenton reactions, a myriad of catalysts display outstanding potential in degrading recalcitrant pollutants. In recent years, graphene-based materials have become promising candidates owing to their unique merits of theoretical specific surface area, electron mobility, wide light response, and mechanical strength [22,23].
Numerous reports have proved the overwhelming benefits of graphene in various AOPs, including photo-Fenton reactions [24]. As early as 2011, Fu and Wang [25] loaded ZnFe2O4 on graphene and applied the ZnFe2O4-graphene hybrid catalyst in the photo-Fenton degradation of methylene blue (MB). The salient improvement over pristine ZnFe2O4 is ascribed to the enhanced light absorbance and inherent π-conjugation of graphene, which is conducive to the separation of photo-generated electrons and holes, thus extending the lifetime of the photo-generated electrons. Moreover, graphene introduced into the photo-Fenton reaction can quickly transfer electrons to Fe3+, accelerating its reduction to Fe2+ and enabling the Fe3+/Fe2+ circulation. In addition, the large specific surface area endows graphene-based catalysts with preeminent adsorption of pollutants and retards the agglomeration of catalyst nanoparticles [26][27][28]. Figure 1 illustrates the general photo-Fenton scheme of graphene-based materials. Given the peculiar properties and alluring prospects of graphene in photo-Fenton reactions, this review summarizes the recent advances in graphene-based photo-Fenton catalysts and categorizes them into 2D and 3D graphene-based photo-Fenton systems, whose structures, characteristics, activity, and mechanisms are discussed in detail. Moreover, attempts to overcome the four aforementioned drawbacks that hamper the practical application of photo-Fenton reactions are presented. On the basis of this analysis, a perspective on the direction and emphasis of the field is provided to chart the possible path of photo-Fenton reactions toward practical industrial wastewater treatment.

Two-Dimensional Graphene-Based Photo-Fenton Systems

Graphene oxide (GO), obtained by the exfoliation and oxidation of graphite, possesses oxygen-containing functional groups on its surface [29,30].
These functional groups allow GO to chemically bond with various active materials and form highly stable 2D graphene-based photo-Fenton systems. Consequently, the high stability, great dispersity, and rapid reaction of 2D graphene-based photo-Fenton systems have triggered a myriad of studies. Iron-based catalysts have been extensively applied in recent research to restrain the formation of iron sludge. Accordingly, recently developed 2D graphene-based photo-Fenton systems can be divided into 2D graphene/iron oxides, 2D graphene/spinel ferrites, and 2D graphene/iron-based metal organic frameworks, whose characteristics, advantages, and mechanisms are introduced below.

2D Graphene/Iron Oxides

As an earth-abundant n-type semiconductor, α-Fe2O3 stands out as the most discussed crystalline polymorph of Fe2O3 because of its low cost, heat endurance, and chemical stability. With a relatively narrow band gap of 1.90-2.20 eV, α-Fe2O3 can absorb approximately 43% of solar light [32,36] and releases few iron ions as a heterogeneous Fenton catalyst [31]. Nevertheless, the inherently high recombination rate of photo-generated electrons and holes and the inefficient conversion between Fe2+ and Fe3+ in α-Fe2O3 hamper its overall catalysis in photo-Fenton reactions [32]. With its fascinating electron transport property and large specific surface area, graphene is regarded as a promising support to overcome the demerits of α-Fe2O3. Guo et al. [37] first synthesized a GO-Fe2O3 composite through a simple impregnation method and used it as the photo-Fenton catalyst in degrading Rhodamine B (RhB). As shown in Fig. 2a, Fourier-transform infrared (FT-IR) spectroscopy was employed to analyze the oxygen-containing functional groups before and after impregnation to identify the chemical structures of GO and GO-Fe2O3.
The peaks at 1719, 1621, 1417, 1223, and 1053 cm−1, ascribed to C=O, aromatic C=C, carboxyl C-O, epoxy C-O, and alkoxy C-O, respectively, appeared in both GO and GO-Fe2O3 but differed slightly in position and sharpness, suggesting a change in the coordination environment of these groups. Specifically, the peak at 1719 cm−1 was weaker in GO-Fe2O3 than in GO because of the formation of -COO− after loading Fe2O3. In addition, the extra peak at 535 cm−1, ascribed to Fe-O in Fe2O3, elucidated the connection between Fe2O3 and -COO− on the edge of GO, corroborating that Fe2O3 could form bonds with the oxygen-containing functional groups on the GO surface and be firmly fixed on GO. Degradation tests revealed that GO-Fe2O3 greatly accelerated photo-Fenton reactions (Fig. 2b): 99% of the RhB was degraded in 40 min. The removal of 60% of the RhB in 80 min under dark conditions also demonstrated the preeminent adsorption ability of GO-Fe2O3. Moreover, 90.9% of the RhB was eliminated in 80 min at a pH as high as 10.09, suggesting that the electronegativity and the oxygen-containing functional groups of the GO surface extended the pH application range to some extent. Liu et al. [31] subsequently investigated the mechanism of degrading different organic pollutants in the α-Fe2O3@GO photo-Fenton system. Ultraviolet and visible diffuse reflectance spectroscopy (UV-Vis DRS) (Fig. 2c) revealed that α-Fe2O3@GO showed enhanced light absorption and a slight redshift compared with α-Fe2O3, indicating that the former has greater photo-Fenton efficiency than the latter. This slight redshift might be credited to the formation of an Fe-O-C bond between α-Fe2O3 and GO. The degradation rate of MB at pH 3-12 reached 99% in 80 min, suggesting that the π-π stacking and electronegativity of the GO surface allowed α-Fe2O3@GO to adsorb MB in large quantities.
To validate this theory, the authors conducted degradation and adsorption experiments on different organic pollutants (Fig. 2d). The results showed that the degradation and adsorption of cationic compounds (MB and RhB) were conspicuously quicker than those of anionic compounds (Orange II and Orange G) and neutral compounds (phenol, 2-nitrophenol, and the endocrine-disrupting compound 17β-estradiol), confirming the aforementioned speculation. Moreover, the degradation rate of MB remained 99% after 10 cycles, and no detectable iron leaching was identified by inductively coupled plasma (ICP)-optical emission spectroscopy, suggesting the outstanding catalytic stability and inhibition of iron sludge formation in this α-Fe2O3@GO photo-Fenton system. Bio-friendly Fe3O4 nanoparticles are promising photocatalysts because of their unique magnetic, electronic, and catalytic properties. The coexistence of Fe2+ and Fe3+ in the octahedral structure endows Fe3O4 with an enhanced iron-ion cycling ability [33]. The inherent magnetic property allows Fe3O4 composite catalysts to be separated from wastewater with minor loss in the presence of an external magnetic field, which is a possible solution to the difficulty in recycling photo-Fenton catalysts. However, Fe3O4 nanoparticles agglomerate and form large particles, which reduces the specific surface area and solubility, thereby suppressing catalytic activity [27]. To this end, researchers have loaded Fe3O4 nanoparticles on the GO surface and employed the composite catalysts in photo-Fenton reactions. Qiu et al. [20] reported a feasible Stöber-like method using Fe(III) acetylacetonate as the precursor to synthesize ultra-dispersed Fe3O4 nanoparticles on the surface of reduced graphene oxide (rGO). This method could be applied in large-scale synthesis without the need for reducing agents and organic surfactants. High-resolution transmission electron micrographs (Fig.
3a, b) showed the highly dispersed Fe3O4 nanoparticles on the rGO surface. In the Fe3O4/GO photo-Fenton system, the photo-induced electrons generated from the dyes and Fe3O4 migrated to the GO sheets owing to their superior conductivity, so that Fe3+ could capture electrons and be reduced to Fe2+, continuing to react with H2O2 to produce ·OH. The proposed mechanism was concordant with the fact that no change in the Fe3O4/GO catalyst was observed before and after the photo-Fenton reactions in the transmission electron microscopy (TEM) and field-emission scanning electron microscopy (FESEM) images. β-FeOOH has also attracted attention because of its natural abundance as a biocompatible catalyst. With a narrow band gap of 2.12 eV, β-FeOOH can effectively absorb visible light. However, β-FeOOH minerals usually exist in the form of very small particles, which tend to agglomerate, causing inactivation. In addition, the inherently poor electron-hole separation results in a short lifetime of photo-generated electrons and poor catalytic ability in photo-Fenton reactions [35,38]. To overcome these defects, Su et al. [35] prepared a β-FeOOH@GO nanocomposite through moderate hydrolysis for MB degradation. UV-Vis DRS (Fig. 4a) revealed that β-FeOOH@GO was more capable than β-FeOOH of absorbing UV and visible light owing to the interface interaction between β-FeOOH and the GO sheets. In the photoluminescence (PL) spectra (Fig. 4b), the peak of β-FeOOH@GO was lower than that of β-FeOOH, suggesting a stronger inhibitory effect on the recombination of photo-generated electrons and holes and a longer lifetime of the photo-generated electrons. As a result, the degradation rate of MB with β-FeOOH@GO reached 99.7% in 60 min, and the calculated pseudo-first-order rate constant was 0.6322 min−1, much higher than that of β-FeOOH (0.2148 min−1). Moreover, the pH application range and catalytic stability of β-FeOOH@GO were satisfactory. As shown in Fig.
4c, the degradation of MB with β-FeOOH@GO proceeded at a high rate at pH 2.67-12.11. This result can be ascribed to the fact that the point of zero charge of β-FeOOH@GO was 2.25, so the negatively charged surface at higher pH values (pH > 2.25) augmented the adsorption of cationic MB. As illustrated in Fig. 4d, the degradation and adsorption rates over different cycles showed no significant differences. The maximum concentration of dissolved iron was 0.277 mg/L, accounting for only 0.4% of the loaded β-FeOOH. These results indicate that the β-FeOOH@GO catalyst has the potential to overcome the narrow pH application range and the generation of iron sludge. Based on the results of X-ray photoelectron spectroscopy and electron spin resonance (ESR) spectroscopy, the authors proposed the following degradation mechanism for MB. First, MB was adsorbed on the surface of β-FeOOH@GO through electrostatic interaction and π-π stacking. Second, H2O2 reacted with Fe2+, generated through the photoreduction of Fe3+ on the catalyst surface, to generate ·OH. Third, GO guaranteed an increase in active sites and the effective enrichment of MB molecules, which were then attacked by ·OH. Fourth, GO effectively captured the photo-generated electrons from the semiconductor conduction band or the LUMO of the dye and quickly transferred them to the active sites of β-FeOOH because of the heterojunction between β-FeOOH and GO. Finally, Fe3+ could be reduced to Fe2+ by light (hν), H2O2, or the transferred electrons, facilitating the Fe3+/Fe2+ circulation. Table 1 summarizes the catalytic performance of some typical photo-Fenton catalysts based on 2D graphene/iron oxides and bare iron oxides. Graphene-based catalysts clearly hold an edge over those without graphene owing to the outstanding physical and chemical properties of graphene.
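The pseudo-first-order rate constants quoted throughout this section (e.g., 0.6322 min−1 for β-FeOOH@GO versus 0.2148 min−1 for bare β-FeOOH) come from fitting degradation curves to ln(C0/C) = kt. A minimal sketch of such a fit is shown below; the concentration data are synthetic, generated for illustration, and are not measurements from any cited study:

```python
import math

def pseudo_first_order_k(t_min, c_over_c0):
    """Least-squares slope (through the origin) of ln(C0/C) versus t,
    i.e. the pseudo-first-order rate constant k in 1/min."""
    num = sum(t * (-math.log(c)) for t, c in zip(t_min, c_over_c0))
    den = sum(t * t for t in t_min)
    return num / den

# Synthetic degradation curve generated with k = 0.20 1/min.
times = [5.0, 10.0, 20.0, 40.0]
conc = [math.exp(-0.20 * t) for t in times]
k = pseudo_first_order_k(times, conc)
print(f"k = {k:.3f} 1/min")  # → k = 0.200 1/min
```

With real data, a fast initial adsorption step often produces a nonzero intercept, in which case only the post-adsorption points are fitted.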
2D Graphene/Spinel Ferrites

Spinel ferrites are face-centered-cubic oxides (MFe2O4, M = Co, Cu, Zn, Ni, Mn, etc.) that have been widely applied in magnetic resonance imaging, electronic equipment, heavy-metal waste processing, and chemical sensors [26]. The properties of spinel ferrites largely depend on the position, nature, and quantity of the metal incorporated in the structure. Apart from thermal and chemical stability, spinel ferrites often possess magnetic properties and superior light absorption. Many spinel ferrites, such as ZnFe2O4 and NiFe2O4, have a narrower band gap (1.90-2.20 eV) than traditional photocatalysts, such as CdS (2.40 eV) and WO3 (2.80 eV), implying wider visible light absorption [39]. Spinel ferrites are effective photocatalysts and remain active in neutral and alkaline Fenton systems. Loading spinel ferrites on graphene hampers the agglomeration of spinel ferrite nanoparticles and the leaching of poisonous ions. Given the large specific surface area and robust light absorption of graphene, graphene/spinel ferrite composites are expected to exhibit enhanced catalytic activity in photo-Fenton reactions. Zinc ferrite (ZnFe2O4) is characterized by visible light response, light stability, and low cost. Owing to its narrow band gap of 1.90 eV, ZnFe2O4 is particularly popular in solar energy conversion, photocatalysis, and photochemical hydrogen production. As early as 2011, Fu and Wang [25] synthesized ZnFe2O4-graphene via a facile one-step hydrothermal method and employed it for the photo-Fenton degradation of MB under visible light. The FESEM image in Fig. 5a shows that ZnFe2O4 nanoparticles (7-10 nm) were uniformly loaded on the surface of the exfoliated graphene sheets. ZnFe2O4-graphene with 20 wt% graphene exhibited much greater photo-Fenton activity (99% degradation of MB in 90 min) than pristine ZnFe2O4 (20% degradation of MB in 90 min).
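The band-gap values quoted above translate directly into absorption-edge wavelengths via λ = hc/Eg ≈ 1240 nm·eV / Eg, which is why the 1.90-2.20 eV ferrites cover more of the visible spectrum than CdS or WO3. A quick sketch of the conversion (standard optics, not a formula from the review):

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def absorption_edge_nm(band_gap_ev):
    """Longest wavelength (nm) a semiconductor with the given
    band gap (eV) can absorb: lambda = h*c / Eg."""
    return HC_EV_NM / band_gap_ev

# Band gaps quoted in the text (eV).
for name, eg in {"ZnFe2O4": 1.90, "NiFe2O4": 2.10,
                 "CdS": 2.40, "WO3": 2.80}.items():
    print(f"{name}: absorbs up to ~{absorption_edge_nm(eg):.0f} nm")
```

ZnFe2O4 (1.90 eV) absorbs out to roughly 650 nm, whereas WO3 (2.80 eV) cuts off near 440 nm, consistent with the wider visible response claimed for the ferrites.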
To probe the cause of this great enhancement, electrochemical impedance spectroscopy measurements were performed to assess the electrical resistivity. As shown in Fig. 5b, the Nyquist curves of ZnFe2O4-graphene had a much smaller radius than those of ZnFe2O4 and GO, implying lower electrical resistance and higher electronic mobility. In short, graphene is a unique 2D material with a zero band gap and a π-conjugated structure, on whose surface the carriers behave as massless fermions. Thus, the photo-generated electrons produced by ZnFe2O4 could quickly transfer from its conduction band to graphene, improving the activity of the photo-Fenton reactions. Moreover, the catalysts were easily recovered because of the magnetic property of ZnFe2O4 (Fig. 5c). The unchanged degradation rate of MB over 10 subsequent cycles exemplified the catalytic stability of ZnFe2O4-graphene. With a narrow band gap of 2.10 eV, NiFe2O4 also has the potential to be a photocatalyst because of its stable structure, high electrical resistance, and photochemical stability. Its ferromagnetism, stemming from the magnetic moment of the antiparallel spins of the Ni2+ and Fe3+ ions, reduces the cost of catalyst recovery [40]. Liu et al. [41] loaded NiFe2O4 on GO sheets through a facile hydrothermal method and performed the degradation of MB with a GO-NiFe2O4 composite, visible light, and oxalic acid. The authors selected oxalic acid rather than H2O2 because of the increased light absorption arising from the formation of ferric oxalate and the large rate constant of the rate-determining step in the ferric oxalate-based photo-Fenton system. The photo-Fenton experiments (Fig. 5d) confirmed the efficient catalytic performance and showed that GO increased the light absorbance rather than degrading MB by itself. A recycling degradation experiment was also readily carried out owing to the ferromagnetism of NiFe2O4.
The results revealed that the degradation rate remained above 90% even after eight cycles (Fig. 5e). A schematic of the degradation is provided in Fig. 5f. Researchers have also disclosed the effect of GO on other spinel-ferrite-based photo-Fenton systems, such as manganese ferrite (MnFe2O4) and cobalt ferrite (CoFe2O4); in one such study, CoFe2O4 nanoparticles of ~17 nm were loaded on the surface of rGO, and malachite green was taken as the model pollutant for photo-Fenton reactions at neutral pH. The introduction of rGO improved the degradation rate in 30 min from ~79 to ~99%, corresponding to pseudo-first-order kinetic rate constants of 11.20 and 18.42 h−1, respectively. The high valence band of CoFe2O4 enabled the photo-generated electrons to transfer spontaneously from CoFe2O4 to rGO, thereby increasing the photo-Fenton activity.

2D Graphene/Iron-Based Metal Organic Frameworks

Metal organic frameworks (MOFs) are crystalline porous materials composed of multidentate organic ligands and metal ions or clusters. Given their large specific surface area, MOFs have been widely used in gas storage, adsorption, and separation [44,45]. In addition, MOFs respond extensively to UV and visible light owing to their ligand-metal charge transfer (LMCT) property, making them promising candidates as novel photocatalysts. In particular, iron-based MOFs have attracted wide attention because of their Fe3-μ3-oxo clusters and low toxicity [46]. However, the application of MOFs as photo-induced catalysts is limited by the poor separation and migration of photo-induced carriers; common amelioration methods include surface modification and doping [47]. Considering the preeminent visible light absorption and electron mobility of graphene, MOF materials can be loaded on graphene to prepare hybrid photo-Fenton catalysts. The introduction of graphene is also believed to improve structural stability and inhibit photocorrosion [48]. In 2018, Liu et al. [47] polymerized ultrathin GO on the surface of MIL-88A(Fe) to prepare MIL-88A(Fe)/GO.
As shown in the SEM image in Fig. 6a, MIL-88A(Fe) appeared as needle-shaped nanorods 1-3 μm long with high crystallinity, with GO distributed evenly on the surface. UV-Vis DRS (Fig. 6b) revealed that all the samples absorbed light effectively at wavelengths of 200-600 nm. For pristine MIL-88A(Fe), the characteristic absorption peak at about 250 nm was ascribed to the LMCT from O(II) to Fe(III), and the bands at 300-500 nm were attributed to the d-d transitions of Fe(III). The absorption edges of MIL-88A(Fe)/GO and MIL-88A(Fe) were similar, but the introduction of GO considerably increased the absorption intensity, especially in the visible light region, implying increased photo-Fenton catalytic ability. The results of the photo-Fenton degradation of RhB were consistent with this speculation: the pseudo-first-order kinetic rate constant of the degradation of RhB with MIL-88A(Fe)/GO was 0.0645 min−1, 8.4 times that with MIL-88A(Fe). The composite retained its high photo-Fenton performance over a wide pH range of 1-9 with no significant loss of catalytic ability after five cycles, demonstrating its wide pH application range and robust catalytic stability. Xie et al. [46] reported a facile vacuum filtration strategy to prepare a GO/MIL-88A(Fe) membrane for the separation and degradation of MB. The introduction of GO modulated the 2D nanochannels in the membrane, making high flux and efficient separation mutually compatible. The concomitant enhancement of light absorption contributed to the photo-Fenton removal of contaminants clogging the interior of the membrane. The degradation of MB can be observed intuitively in Fig. 6c-h: after the separation (Fig. 6c-e), the residue of MB inside the membrane (Fig. 6f) was completely eliminated by the photo-Fenton process under visible light irradiation within 30 min (Fig. 6g, h). The mechanism conjectured from the ESR spectra is as follows.
First, the GO/MIL-88A(Fe) membrane generated electron-hole pairs when irradiated by visible light; the holes directly oxidized organic pollutants, while the electrons migrating onto the GO surface promoted the decomposition of H2O2 into ·OH. Meanwhile, electrons could also be captured by oxygen to form superoxide radicals (O2·−) on the membrane surface, and the generated ·OH and O2·− completely degraded the organic pollutants. Within the catalyst, Fe-O clusters also facilitated the decomposition of H2O2 through Fenton-like reactions, and the active sites provided by the GO nanosheets enhanced the photo-Fenton catalytic ability. Gong et al. [44] used MIL-100(Fe) to synthesize an Fe3O4@GO@MIL-100(Fe) magnetic catalyst with a core-shell structure for the degradation of 2,4-dichlorophenol (2,4-DCP). Figure 7a-c displays the microstructures of Fe3O4, Fe3O4@GO, and Fe3O4@GO@MIL-100(Fe), respectively. Fe3O4 is spherical with a rough surface and a diameter of 300-350 nm. After loading GO, a wrinkled surface texture appeared on Fe3O4@GO because of the GO shell. Further wrapping with MIL-100(Fe) made the surface rougher, contributing to a remarkable specific surface area of 1048.1 m2/g, much larger than that of Fe3O4@GO (79.4 m2/g). The degradation rates of 2,4-DCP in the photo-Fenton system with Fe3O4@GO@MIL-100(Fe) exceeded 90%, and the total organic carbon (TOC) removal rates reached 50% over four cycles. Coupled with its inherent magnetism, the Fe3O4@GO@MIL-100(Fe) hybrid catalyst displayed great potential for practical recycling. Analysis of the ESR spectra (Fig. 7d, e) revealed strong DMPO-HO· peaks with an intensity ratio of 1:2:2:1 and strong DMPO-O2·− peaks with an intensity ratio of 1:1:1:1, suggesting that abundant ·OH and O2·− radicals were generated in the system.
Combining additional PL spectra and photocurrent response measurements, the authors proposed a possible mechanism. When irradiated by visible light, photo-generated electrons were generated in MIL-100(Fe), captured rapidly by GO, and transferred to Fe3O4 and to Fe3+ in MIL-100(Fe), promoting the reduction of Fe3+ to Fe2+. The increased O2·− from the dissolved oxygen, together with ·OH, promoted the degradation of 2,4-DCP. The aforementioned work on 2D graphene/iron oxides, 2D graphene/spinel ferrites, and 2D graphene/iron-based MOFs suggests the following common points: (1) graphene materials exhibit excellent light absorption, pollutant adsorption, and electron transport capabilities, thus improving the efficiency of photo-Fenton systems; (2) graphene serves as a firm support for photo-Fenton catalysts, effectively inhibiting the agglomeration of the active components; (3) researchers tend to prepare iron-containing catalysts and raise the reaction pH, thus broadening the pH conditions of the reaction and hampering the formation of iron sludge; and (4) magnetic materials are becoming popular in photo-Fenton systems because of their easy recycling. However, even magnetic 2D graphene-based materials are difficult to separate from sewage sludge in actual scenarios. The incidental agglomeration of 2D graphene-based composite catalysts after the reaction and the dissolution of active substances also lead to irreversible deactivation. Consequently, researchers have turned their focus to 3D graphene-based materials.

Three-Dimensional Graphene-Based Photo-Fenton Systems

Although 2D graphene-based materials have many advantages, their lamellar structures easily agglomerate, which reduces the specific surface area and the number of active sites. Small 2D graphene-based catalysts may also enter the aquatic environment, leading to potential pollution [38].
Fortunately, 3D graphene aerogel and hydrogel materials obtained by the self-assembly of 2D graphene materials not only inherit the intriguing properties of 2D graphene-based materials but also are easily separated from aqueous solution. The special 3D porous structure also provides a myriad of channels for the transport and diffusion of reactants. Accordingly, 3D graphene materials have been widely applied in sensors [49], energy storage [50], and pollutant control [21].

Three-Dimensional Graphene-Based Aerogels

As a novel carrier, the graphene aerogel is characterized by a porous 3D framework, large specific surface area, excellent electron mobility, mechanical stability, and great adsorption capacity [51]. After the self-assembly of the 2D graphene materials, the original contact resistance between graphene sheets disappears, accelerating electron transport. In addition, the intricate and conductive graphene network provides a multitude of paths for transport and diffusion [38]. Consequently, graphene aerogels have been widely investigated for sensors, oil absorption, energy storage, and catalysis [52]. As early as 2015, Qiu et al. [21] applied graphene aerogels in photo-Fenton reactions and laid a solid foundation for subsequent research on the synthetic method. In general, metallic oxide nanoparticles, conductive polymers, or other carbon materials are introduced to effectively regulate the nanostructure and function of graphene aerogels [51], as discussed below. The metallic oxide nanoparticles on graphene aerogels function as the active sites in photo-Fenton reactions. In turn, graphene aerogels offer 3D support for the high dispersion of metallic oxide nanoparticles on the graphene surface. Qiu et al. [21] first reported the application of Fe2O3 on graphene aerogels (Fe2O3/GAs) in photo-Fenton reactions in 2015. A Stöber-like method was adopted to grow Fe2O3 nanocrystals in situ on the graphene aerogels.
Figure 8a displays the macroscopic structure of Fe2O3/GAs after the hydrothermal process; its size could be easily modulated by altering the vessel. The Fe2O3/GA material was ultra-light, with a very low density (8 mg/cm3) despite containing 18.3 wt% Fe2O3 nanocrystals. Figure 8b shows that Fe2O3/GAs had a 3D hierarchical macroporous structure, in which granular Fe2O3 nanocrystals were inserted into the graphene skeleton without any observable agglomeration. The TEM image in Fig. 8c demonstrates that the Fe2O3 nanoparticles, mostly around 25 nm in size, were highly dispersed on the graphene surface. The consistent absence of agglomeration suggested that the Stöber-like approach was particularly conducive to the in situ growth of highly dispersed Fe2O3 nanocrystals on graphene aerogels. In addition, the mechanical strength of Fe2O3/GAs proved preeminent, as shown in Fig. 8d: it withstood dozens of continuous compression cycles, after which it almost completely re-expanded, indicating great elasticity and mechanical strength. The stress-strain curve supported the same conclusion, and diagrams of the sandwich-biscuit-like structure are shown in Fig. 8e. The catalytic performance of Fe2O3/GAs in photo-Fenton reactions was also satisfying. A comparison of the photo-Fenton degradation of methyl orange (MO) (Fig. 9a) showed that Fe2O3 and Fe2O3/2D-graphene (Fe2O3/GR) suffered rapid deactivation after two cycles of degradation owing to iron leaching. Although the amounts and particle sizes of Fe2O3 in Fe2O3/GR and Fe2O3/GAs were highly similar, the degradation rate in the Fe2O3/GAs photo-Fenton system did not change appreciably even after 10 cycles because its tough 3D network structure precluded iron leaching.
This explanation was validated by Fe2+ capture experiments, in which 1,10-phenanthroline monohydrate (Phen) was employed to capture the dissolved Fe2+ ions after degradation (Fig. 9b). The dissolution of Fe2+ from Fe2O3 was much higher than that from Fe2O3/GAs; the dissolved iron reacted with OH− to form Fe(OH)3 and impeded the Fe3+/Fe2+ circulation. Moreover, Fe2O3/GAs exhibited excellent photo-Fenton activity over a wide pH range of 3.5-9.0 (Fig. 9c), whereas pristine Fe2O3 was gradually deactivated under neutral conditions with increasing cycle number because of the iron sludge covering its surface (Fig. 9d). Figure 9e illustrates the proposed mechanism, which is similar to that in 2D graphene-based photo-Fenton systems. Polymers, such as polypyrrole, cellulose, and polyvinyl alcohol, have been exploited as cross-linkers for the self-assembly of GO nanosheets, modulating the structures and properties of graphene aerogels. Tong et al. [53] prepared a reduced graphene oxide/Prussian blue/polypyrrole aerogel (rGO/PB/PPy) hybrid catalyst for photo-Fenton reactions, in which rGO, PB, and PPy functioned as the skeleton, the active site, and the cross-linker, respectively. As illustrated in Fig. 10a, the pyrrole monomer was oxidized and polymerized on the surface of the GO nanosheets during the synthesis. The obtained PPy wrapped the surface of the GO nanosheets and bound the adjacent GO layers through π-π and hydrogen-bonding interactions, promoting the self-assembly of the 2D GO nanosheets into aerogels. In addition, some oxygenated groups on the GO surface were reduced by the reductive pyrrole monomer. The structural impact brought by the cross-linker is shown in Fig. 10b. The rGO/PB/PPy aerogel presented a 3D framework with a myriad of interconnected holes, and the plane size of its rGO skeleton could reach hundreds of microns. The red arrows mark the cauliflower-shaped PPy, which acted as the internal cross-linker.
The specific surface area and pore size distribution determined by BET measurements were 70 m²/g and 5.0-50 nm, respectively. Such a large surface area and mesoporous channels promoted the adsorption and diffusion of pollutants, improving the degradation efficiency. Under the optimal photo-Fenton conditions with visible light irradiation, the RhB degradation rate in 30 min with the addition of rGO/PB/PPy was 95.2%, and the calculated pseudo-first-order rate constant was 0.0766 min−1, both far larger than the corresponding values with PB nanoparticles alone (5.5% and 0.00337 min−1, respectively). (Reproduced with permission from Ref. [21]. Copyright 2015 The Royal Society of Chemistry.) This great improvement can be ascribed to the fact that rGO and PPy are highly capable of adsorbing organic dyes, resulting in a high concentration of RhB near the PB nanoparticles. The other reason lies in the outstanding electrical conductivity of rGO and PPy, which accelerated the Fe3+/Fe2+ circulation. Other carbon materials further improve graphene aerogel-based photo-Fenton systems by providing additional channels and averting the agglomeration or dissolution of the active sites. Yao et al. [54] employed a two-step synthesis to sandwich FexOy between reduced GO nanosheets (rGSs) and a nitrogen-doped carbon layer (NCL), preparing the rGS/FexOy/NCL composite. As shown in Fig. 10c, the thin folds with blue marks indicate the rGSs, and the irregular bulges indicated by red arrows represent the NCLs. The rGSs were interconnected into a continuous framework because of the adhesion brought by the insertion of the NCLs. In a study of the photo-Fenton activity of rGS/FexOy/NCL aerogels, RhB disappeared completely within 150 min. According to the pseudo-first-order equation, the calculated rate constant K was 0.0237 min−1. A TOC removal experiment indicated a mineralization efficiency of 76.1% in 150 min.
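The pseudo-first-order rate constants quoted above follow from fitting −ln(C/C0) against time. A minimal sketch of such a fit in pure Python, using a hypothetical decay profile rather than the papers' actual concentration data:

```python
import math

def pseudo_first_order_k(times_min, c_over_c0):
    """Least-squares slope of -ln(C/C0) vs t, i.e. the apparent rate constant k (min^-1)."""
    xs = times_min
    ys = [-math.log(r) for r in c_over_c0]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Hypothetical profile generated with k = 0.08 min^-1; the fit recovers it.
ts = [0, 5, 10, 20, 30]
ratios = [math.exp(-0.08 * t) for t in ts]
print(round(pseudo_first_order_k(ts, ratios), 3))  # prints 0.08
```

The same regression applied to measured C/C0 ratios gives the rate constants reported for the rGO/PB/PPy and rGS/FexOy/NCL systems.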
Coupled with the high degradation rate maintained over the pH range of 2.1-10.1, these results exemplify the overall excellent photo-Fenton catalytic performance of the rGS/FexOy/NCL aerogel. To disclose the reason, the rGS/FexOy aerogel was taken as a control. The degradation rate obtained under the same conditions (66.7%) was lower than that with the rGS/FexOy/NCL aerogel, a difference credited to the π-π interaction and the formation of hydrogen bonds. Moreover, the superior conductivity of the rGSs and NCLs accelerated the electron transfer between Fe3+ and Fe2+, which was supported by the increased reduction current in Fig. 10d. Importantly, the degradation rate of RhB with rGS/FexOy/NCL aerogels remained at 94.6% after five cycles. No aggregation or loss of FexOy was found in TEM images of the recovered sample. ICP atomic spectroscopy also proved that only 3.3 wt% of the iron was dissolved after the five cycles, indicating the protective effect of the NCLs and rGSs on the FexOy nanoparticles. In general, this sandwich structure not only provided multiple channels for the rapid diffusion and adsorption of reactants but also averted the leaching of FexOy, thus achieving preeminent catalytic ability, stability, and reusability. A comparison of the catalytic performances of non-graphene catalysts and 3D graphene-based catalysts is provided in Table 2 to highlight the significance of 3D graphene in photo-Fenton reactions. Three-Dimensional Graphene-Based Hydrogels Composed of a porous framework, hydrogels are popular soft materials in electrochemistry, catalysis, sensors, drug delivery, and water treatment [55]. Unlike hydrophobic aerogels, hydrogels can absorb large volumes of water in the expanded state because of the abundant hydrophilic groups on their polymeric networks.
This extraordinary water-absorbing capability directly results in an adsorptive ability to remove or recycle organic pollutants and heavy metals from aqueous systems; the pollutants are not only adsorbed on the surface but also trapped in the swollen 3D framework [56]. Hitherto, many studies have shown that graphene-based hydrogels are free from the mechanical drawbacks of traditional hydrogels and are adept at adsorbing organic pollutants. However, few researchers have adopted graphene hydrogels in photo-Fenton reactions to degrade the adsorbed pollutants completely [57,58]. To our knowledge, Dong et al. [57] reported the only study on the use of graphene-based hydrogels in photo-Fenton systems, demonstrating the feasibility of this direction. In that report, Fe3O4, rGO, and polyacrylamide (PAM) were employed to prepare Fe3O4/rGO/PAM hydrogels via a two-step chemical synthesis. A series of experiments proved that this 3D graphene-based hydrogel had excellent mechanical strength, photo-Fenton activity, stability, and potential for practical applications. The SEM image of the Fe3O4/rGO/PAM hydrogel is shown in Fig. 11a. Fe3O4 nanoparticles were evenly distributed on the surface, where PAM chains were wrapped by rGO, forming many small protuberances. The Fe3O4/rGO/PAM hydrogel possessed remarkable mechanical strength. As shown in the compressive stress (σ)-compressive strain (ε) plot (Fig. 11b), the Fe3O4/rGO/PAM hydrogel sustained a higher compressive strain than the PAM hydrogel, suggesting the greater anti-compression property of the former. The excellent tensile property of the Fe3O4/rGO/PAM hydrogel is displayed in Fig. 11c. High photo-Fenton activity was maintained in the pH range of 3.5-6.5, suggesting that the Fe3O4/rGO/PAM hydrogel is an effective photo-Fenton catalyst under multiple conditions. Meanwhile, the catalytic stability was ascertained by over 90% removal of RhB after 10 cycles.
The concentration of Fe2+ in aqueous solutions of the ground and unground Fe3O4/rGO/PAM hydrogels was measured using Phen as the chromogenic agent to investigate the stability of the hydrogels. The peak intensity ascribed to the ground Fe3O4/rGO/PAM hydrogel (1.357) was much higher than that of the unground one (0.298), indicating that the intact hybrid hydrogel greatly retarded iron leaching and catalyst deactivation. Raw fine-chemical wastewater was also chosen as a model of actual water pollution to test the practical application potential of the Fe3O4/rGO/PAM hydrogel (Fig. 11d). The initial chemical oxygen demand (COD) of the sewage was 10,400 mg/L, and it dropped to 2840 mg/L after 1 h of visible light irradiation, proving the potential for actual wastewater treatment. Compared with 2D graphene-based materials, the novel 3D graphene-based materials offer more practical advantages. Graphene-based aerogels are characterized by low density and high mechanical strength, and the super adsorption capacity of hydrogels is also alluring. The enhanced adsorption and diffusion of pollutants brought by internal multi-channels, the inhibitory effect on agglomeration or dissolution of the active sites, and the convenience of recycling 3D macroscopic catalysts endow 3D graphene-based materials with great value in practical applications. Thus, researchers tend to focus on the stability of the materials and on trials of degrading actual sewage, laying a solid foundation for possible application in industrial wastewater treatment. In general, many problems remain to be solved before the practical application of 3D graphene-based materials, including but not limited to the high cost of synthesis and difficulties in catalyst regeneration. Nevertheless, 3D graphene-based materials represent the general direction of future graphene-based systems, at least in the photo-Fenton degradation of pollutants.
Conclusions and Outlooks In this review, advances in graphene-based photo-Fenton systems were briefly introduced according to the classification of 2D and 3D graphene-based catalysts and discussed in terms of their special morphological structures, the great enhancements brought by graphene, the efficiency of pollutant treatment, and practical application potential. Iron compounds have been selected as the active sites and anchored to the surface of graphene in 2D graphene-based photo-Fenton systems to mitigate the iron leaching that is inevitable in traditional photo-Fenton reactions. The resulting heterogeneous catalysis hampers the loss of catalytic activity and the secondary pollution caused by iron sludge. The excellent light absorption, pollutant adsorption, and electron mobility of graphene materials are on full display in 2D graphene-based photo-Fenton systems. Fe3+ reduction and Fe3+/Fe2+ circulation can proceed efficiently through rapid electron transfer among the various active components, thus achieving alluring performance in the degradation of organic pollutants. In addition, many magnetic materials, such as Fe3O4 and ZnFe2O4, which can be quickly separated from aqueous solutions by an external magnetic field, have been employed to prepare 2D graphene-based magnetically separable catalysts, overcoming the difficulty of 2D catalyst recovery to some extent. Three-dimensional graphene-based materials, including graphene aerogels and graphene hydrogels, inherit the aforementioned advantages of 2D graphene-based materials and are equipped with additional porous 3D macroscopic structures. These structures not only provide a large number of internal channels for the diffusion and aggregation of pollutants and H2O2 but also effectively encapsulate the active sites, preventing their agglomeration or dissolution into iron sludge. Such 3D macroscopic structures also make recycling convenient.
In addition, aerogels and hydrogels are each characterized by their own properties: the former have extremely low density and high mechanical strength, whereas the latter possess an excellent adsorption capacity for organic pollutants and heavy metals in aqueous solution. In general, 3D graphene-based composite catalysts are the future direction of graphene-based photo-Fenton systems. Loading active components on 2D/3D graphene is challenging, considering its large surface, which is prone to folding. In addition, the traditional synthetic method for GO is complex and not eco-friendly. Thus, researchers should still pursue the synthesis of highly active graphene-based composites, with stably and evenly loaded active substances, by simple, low-cost, and eco-friendly methods. Moreover, 3D graphene-based materials do not necessarily have to take the form of aerogels or hydrogels, because both forms lack robustness and are costly to synthesize and reactivate. For example, recent studies have used sponge as a cheap and robust support for graphene-based Fenton catalysts in the treatment of actual wastewater; the synthesis was simpler and cheaper than traditional methods and provides excellent inspiration for the future preparation of 3D graphene-based materials. In sum, graphene materials still have promising application potential in photo-Fenton reactions and industrial water treatment. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Geochemical Characterization and Mineralogy of Babouri-Figuil Oil Shale, North-Cameroon Organic geochemistry methods such as high temperature combustion, Rock-Eval pyrolysis and gas analysis were used to analyze oil shale from the Babouri-Figuil Basin. The results show that the average content of organic matter is 36.25 %wt, while that of mineral matter is 63.75 %wt. The total organic carbon (TOC) is between 15.93 %wt and 26.82 %wt. The HI vs. Tmax diagram indicates an immature Type I kerogen. The average value of the oil potential (S2b) is 149.95 mg HC/g rock. The gases obtained by the retort process are H2, CO2, CO and the CnH2n and CnH2n+2 hydrocarbons. Finally, it emerges that the organic matter of the Babouri-Figuil shales is immature or has just reached the beginning of the oil window. The mineralogical study of the Babouri-Figuil oil shale was carried out by means of XRD (X-Ray Diffractometry) and XRF (X-Ray Fluorescence spectrometry). The results show that the mineral matrix contains silica, carbonates, sulphates, oxides and clay minerals. Besides, compounds contain metals and metalloids such as Fe, In and Ca. The main oxides are SiO2 (majority), CaO, Fe2O3, Al2O3, SO3 and K2O. A. J. Nyangono Abolo et al. Introduction As the overall situation of conventional oil and gas resources becomes increasingly severe, oil shale resources have begun to receive more and more attention. Since oil shale is characterized by beneficial features, economic value and large resources, it is considered an important substitute resource for the 21st century [1] [2]. In Cameroon, one of the main deposits of oil shale is located in the Cretaceous Basin of Babouri-Figuil, North Region, precisely in the Mayo Figuil and Mayo Tafal series (or outcrops). Due to the economic and scientific interest of this resource, the Laboratory of Petroleum and Sedimentary Geology of the University of Yaounde I, Cameroon, decided to study these oil shale formations.
The aim of this study is to determine the organic and inorganic composition of the Babouri-Figuil oil shale and to characterize its organic matter. All the information was obtained by the high temperature combustion method, Rock-Eval pyrolysis, gas analysis, XRD and XRF. Geological Setting The Babouri-Figuil Basin is one among the numerous lower Cretaceous intra-continental small basins of Northern Cameroon (Figure 1) and belongs, like the Benue trough, to the West and Central African Rift Systems (WCARS), linked to the opening of the Southern Atlantic Ocean [3]-[7]. The structure of this basin is a half-graben, consisting of a synclinal feature with an East-West extension [5] [8]. The total surface is about 251 km². In terms of sedimentary succession, the Neocomian-Barremian series (with a maximum thickness of 1500 m) begins with breccias, conglomerates, micro-conglomerates, sandstones, claystones and arkoses. Above those layers, clays, marls and sandstones occur in alternation. The series lies unconformably on a granitic basement and is cross-cut by volcanic rocks and plutonic intrusions [8]-[10]. The depositional environment in the Babouri-Figuil basin is lacustrine and/or fluvial-lacustrine [5] [7] [8]. In this basin, the oil shales have been discovered in two series, Mayo Figuil and Mayo Tafal [5] [8]. From the sedimentological and environmental viewpoint, the lithologies of the two series are nearly similar and consist of conglomerates, sandstones, limestones, clays, schistose marls (abundant) and oil shales, all resting on a crystalline basement (Figure 2). In the Mayo Figuil series, the oil shale beds occur at the top of the series, while in the Mayo Tafal series they occur from the bottom to the top.
The oil shale deposits consist of numerous black or grey coloured beds, with thicknesses of a few centimeters to about tenths of meters. These layers outcrop at the surface and disappear at depth. They break up either into rocky leaves or into small rock fragments. Sampling, Materials and Methods In the Babouri-Figuil Basin, two representative oil shale samples (F1, F2) were obtained from the Mayo Figuil series (Figure 3). Sample preparation consisted of crushing, grinding and sieving to yield samples with a particle size of 75 µm. In order to study the content and composition of the mineral matter, the organic matter was first removed [1]. In our case, we used the high temperature combustion method, which consisted in burning the rock powder at 550˚C in a furnace. Following this operation, the shale ash obtained was analyzed by XRD, whereas the oil shale powder obtained by crushing and grinding was analyzed by XRF. XRD was carried out by the AGEs Laboratory of the University of Liege, Belgium, whereas XRF was performed at the MIPROMALO and CAPAM Laboratories (Cameroon Mining and Material Laboratories). Concerning the study of the organic matter, the mineral matrix was also first removed. For this operation, the Babouri-Figuil oil shales were treated repeatedly with HCl (6N), HCl (4N) and 40% HF (HCl was used for the removal of mineral carbonates, and HF for the removal of mineral oxides and silicates) [1] [11]. The Rock-Eval instrument was developed at IFPEN (former Institut Français du Pétrole (IFP)) in 1977 [12]-[14]. The method consists in estimating the petroleum potential of rock samples by pyrolysis according to a programmed temperature pattern [15]. Rock-Eval analyses were performed at the geochemistry-petrophysics department of IFPEN, using a Rock-Eval 6 device.
The "reservoir method" was used to analyze these samples. This method consists of performing, first, a pyrolysis cycle from 180˚C (initial temperature held for 10 min) to 650˚C at a heating rate of 25˚C/min and, second, an oxidation cycle from 300˚C to 850˚C at the same rate of 25˚C/min. From this method, the following parameters were obtained, among others: S1r, S2a, S2b, TOC, Tmax, HI and OI. S1r = lightest free or sorbed hydrocarbons; S2a = heavier free or sorbed hydrocarbons; S2b = hydrocarbons potentially generated by the thermal maturation of sedimentary organic matter (kerogen); TOC = total organic carbon; Tmax = temperature at the maximum of the S2b peak; HI = hydrogen index; and OI = oxygen index. The gas analysis was performed at the China University of Petroleum (Beijing). The oil shale was crushed and heated to approximately 520˚C by direct contact with heated ceramic balls. At this temperature, the organic matter in the oil shale rapidly decomposes to produce hydrocarbon vapor. Subsequent cooling of this vapor yields crude shale oil and light hydrocarbon gases [16].
High Temperature Combustion The percentages of mineral and organic matter can easily be obtained by the high temperature combustion method. In fact, during the heating of raw oil shale at 550˚C, the mass loss is attributed mostly to the removal of organic matter, while the remaining material (shale ash) can be considered the mineral matter [17]. The usual formula used for the determination of the organic matter percentage is:

Loi550 (%wt) = (dry mass − 550 mass) / dry mass × 100

where dry mass is the initial weight of the oil shale sample (g), and 550 mass is the sample weight at the end of the combustion of the organic matter (g). Since an oil shale sample is constituted by organic and mineral matter, the total mass balance (%wt) of the raw oil shale material can be written as:

mineral matter (%wt) + organic matter (%wt) = 100 %wt

Note that organic matter (%wt) = Loi550. Table 1 shows the percentages of mineral and organic matter given by the high temperature combustion method for the Babouri-Figuil oil shale.
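The mass balance above translates directly into code. A minimal sketch, with hypothetical sample masses chosen only to illustrate the arithmetic (they happen to reproduce the average values reported in the next paragraph):

```python
def loi550(dry_mass_g, mass_after_550_g):
    """Organic matter (%wt) as the loss on ignition at 550 C."""
    return (dry_mass_g - mass_after_550_g) / dry_mass_g * 100.0

organic_wt = loi550(10.0, 6.375)   # hypothetical masses in grams
mineral_wt = 100.0 - organic_wt    # mineral matter closes the balance
print(organic_wt, mineral_wt)      # 36.25 63.75
```

Any sample with organic_wt greater than 10 %wt is classified in the oil shale group by the criterion used in this paper.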
The mineral matter content of the Babouri-Figuil samples is between 55 %wt and 73 %wt, with an average value of 63.75 %wt, while the organic matter content is between 27 %wt and 45 %wt, with an average value of 36.25 %wt. Concerning the organic matter, all the samples show a percentage greater than 10 %wt, meaning that the analyzed rock samples belong to the oil shale group. In comparison, the results obtained by the high temperature combustion method for the Fushun and Maoming oil shales in China are, respectively, 72.2 %wt of mineral matter and 27.8 %wt of organic matter, and 71.9 %wt of mineral matter and 28.1 %wt of organic matter [1]-[18]. These data show that the organic matter ratio of the Fushun and Maoming oil shales is lower than that of the Babouri-Figuil Basin. The same holds for the German Dotternhausen samples, which show an organic matter ratio of 8.8. With a ratio of 50.5 %wt, the organic matter of the Kukersite oil shale in Estonia is higher than that of the Babouri-Figuil oil shale (see Table 2). Rock-Eval Pyrolysis The organic matter abundance, type, thermal maturity and hydrocarbon potential of rock samples can be investigated by Rock-Eval pyrolysis [2]. The results are summarized in Table 3. The TOC content of the Babouri-Figuil oil shale varies between 15.93 %wt and 26.82 %wt, with an average value of 21.08 %wt. The samples show TOC values greater than 15 %wt, meaning that they are organic matter-rich rocks. The average TOC content of the Mayo Tafal samples is 21.4 %wt, whereas that of the Mayo Figuil samples is 20.8 %wt; thus, the TOC content of the Mayo Tafal samples is slightly higher than that of Mayo Figuil. These high TOC values are assigned to a favorable environment for the production and preservation of organic matter. The free or sorbed hydrocarbons (S1r + S2a) in the samples vary between 3.32 and 14.85 mg HC/g rock, with an average value of 7.51 mg HC/g rock.
The oil potential (S2b) ranges from 121.58 to 199.42 mg HC/g rock in the Mayo Tafal series and from 127.54 to 151.27 mg HC/g rock in the Mayo Figuil series. The average value is 149.95 mg HC/g rock. These S2b values indicate that the oil shale samples from Babouri-Figuil have a strong tendency toward hydrocarbon generation. The HI of all the samples is more than 650 mg HC/g TOC, and the OI values are between 9.47 and 11.99 mg CO2/g TOC. The high HI and low OI values of the Babouri-Figuil oil shale samples reflect a sapropelic organic content. The Rock-Eval Tmax values (422˚C-435˚C) indicate that the organic matter of the Babouri-Figuil shales is immature or has just reached the beginning of the oil window. The Rock-Eval HI and Tmax are important parameters reflecting the type and evolution of organic matter [19] [20]. The HI vs. Tmax diagram classifies the kerogen of the Babouri-Figuil oil shale as Type I (Figure 5), indicating a lacustrine environment associated with anaerobic conditions. This domination of Type I kerogen indicates that the Babouri-Figuil oil shale has a high oil-generating potential. Usually, Type I kerogen is derived essentially from algal material and terrestrial bacteria [21] [22]. Probably, the cyanophyceae (blue-green algae) identified by [7] in the basin are the main algal type constituting the organic matter of the Babouri-Figuil oil shale. Gas Analysis The composition of the retort gas of the Babouri-Figuil oil shale produced by Chinese retort processing is shown in Table 4.
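Before moving to the gas analysis, the Rock-Eval interpretation above can be condensed into a simple decision rule. The thresholds below (Tmax below 435 ˚C for immaturity, HI above 650 mg HC/g TOC for Type I) are the ones used in this section; the function is only an illustrative simplification of kerogen typing, not a substitute for the HI vs. Tmax diagram:

```python
def interpret_rock_eval(tmax_c, hi_mg_hc_per_g_toc):
    """Crude maturity/kerogen-type rule of thumb (illustrative only).

    Thresholds taken from this section: Tmax = 435 C marks the onset of the
    oil window, and HI > 650 mg HC/g TOC indicates sapropelic Type I kerogen.
    """
    maturity = "immature" if tmax_c < 435 else "oil window or beyond"
    kerogen = "Type I (sapropelic)" if hi_mg_hc_per_g_toc > 650 else "Type II/III"
    return maturity, kerogen

# e.g. a Babouri-Figuil-like sample: Tmax = 428 C, HI = 700 mg HC/g TOC
print(interpret_rock_eval(428, 700))  # ('immature', 'Type I (sapropelic)')
```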
From this table, the gas composition is as follows: 34.70% H2, 19.74% CO2, and 5.74% CO. The CnH2n gaseous hydrocarbons are between 1.51% and 3.07%, for a total of 9.68%. The CnH2n+2 gaseous hydrocarbons vary between 0.63% and 18.53%, for a total of 30.24%, with the highest value (18.53%) belonging to methane (CH4). In the oil shale retort gas, hydrocarbons represent about 40% of the total gaseous compounds. The oil shale retort gas includes many interesting and potentially valuable components, ranging from simple fuel compounds like methane to more exotic compounds such as butylene, propylene and ethylene [23]. The mineral matter in oil shale can be divided into three types according to its source. The first type is the mineral matter originally existing in plankton and plants, the remnants of which became the mineral matter of the oil shale, such as silicon oxide from bacillariophyceae and calcium oxide from shells. The second type is the mineral matter derived from the dismantling of the surrounding relief, which infiltrated into the oil shale during its formation or was carried by rivers and underground water into the lakes where the oil shale was formed. The third type is the mineral matter formed during chemical reactions occurring inside the sediment. The first and third types are regarded as intrinsic mineral matter, while the second type is called extrinsic mineral matter [1]. The mineral matter of the Babouri-Figuil oil shale probably derives from these three sources. In this way, some minerals are derived from rock alteration and were deposited in the basin after transport, whereas others formed directly within the sedimentary basin. This last group is called authigenic minerals, which may be the case for the carbonate minerals in the Babouri-Figuil Basin.
X-Ray Fluorescence Spectrometry The XRF analysis provides the chemical composition of the Babouri-Figuil oil shale. It emerges from Table 5 that the main oxides are SiO2 (57.95%), CaO (11.51%), Fe2O3 (7.25%), Al2O3 (6.4%), SO3 (4.15%), K2O (2.45%), Na2O (0.5%) and TiO2 (0.3%). Besides, the same Table 5 [1] (modified) shows that the composition of the Babouri-Figuil oil shale is close to those of Fushun and Maoming regarding the high silicon oxide (SiO2) content and the low calcium and magnesium oxide contents (CaO and MgO, respectively). This is due to the oil shale formation conditions in ancient times and can also be related to the current climate conditions that cause the loss of Ca and Mg and the concentration of Al and Fe [1]. Spectrometry was also used to determine the element contents of the Babouri-Figuil oil shale, and the results are summarized in Table 6. The main elements found in the basin are iron (Fe), with contents varying between 2% and 6.85%; calcium (Ca), with contents varying between 0.75% and 3.1%; and indium (In), with contents varying between 0.12% and 45.01%. Potassium (K), titanium (Ti), cobalt (Co), strontium (Sr), copper (Cu) and zinc (Zn) are also present, but in small amounts, with percentages between 0.01% and 0.12%. These elements can be divided into two groups. The first group includes the only macro-nutritional element, potassium (K). The second group comprises the heavy metals and metalloids and consists of iron (Fe), calcium (Ca), indium (In), titanium (Ti), cobalt (Co), strontium (Sr), copper (Cu) and zinc (Zn). The presence of indium (In), at a rate of up to 45% in some oil shale samples, can be a great opportunity for the mining sector in the Babouri-Figuil Basin.
Conclusions The Babouri-Figuil Cretaceous Basin in North-Cameroon includes organic matter-rich sediments, named "oil shale". The host rocks are mainly claystones, in which the organic matter is heterogeneously and finely dispersed. The current oil shale deposits are localized in the Mayo Figuil and Mayo Tafal series. The organic geochemical and mineralogical methods used during this study made their characterization possible. The high temperature combustion method showed that the organic matter content varies from 27 %wt to 45 %wt, while the mineral matter content is between 55 %wt and 73 %wt. The Rock-Eval analysis indicates that the TOC content is higher than 15 %wt for all the samples of the Basin, with an average value of 21.08 %wt. The average value of the hydrocarbon potential is 149.95 mg HC/g rock, and the average value of the total free hydrocarbons is 7.51 mg HC/g rock. The Tmax value of all the samples is lower than 435˚C. The main kerogen is Type I. The organic matter of these shales is derived from planktonic biomass (lacustrine origin), probably associated with bacterial and terrestrial material. The mineralogical study shows that the mineral matter of the Babouri-Figuil oil shale consists of a variety of minerals, such as silica, carbonates, sulphates, oxides and clay minerals. Besides, compounds containing Fe, In, Ca, etc. are present; K, Ti, Co, Sr, Zn and Cu also sometimes occur in small amounts. The main oxides are SiO2, CaO, Fe2O3, Al2O3, SO3 and K2O; Na2O and TiO2 are also present. Finally, the mineral matter of the Babouri-Figuil oil shale is derived from mineral matter originally existing in plankton and plants and, mainly, from the dismantling of the surrounding relief and the chemical transformations that occurred during sedimentation.
Figure 5. Relationship between HI and Tmax showing the position of the four Babouri-Figuil oil shales.
Table 1. Characteristics of the initial samples, %.
Table 4.
Composition of the Babouri-Figuil oil shale retort gas. These gases can also be classified into two groups according to their numbers of carbon and hydrogen atoms. The first group is the light gas, which consists of CO, CO2, H2, CH4 and C2H4. The second group is the heavy gas, made up of the C2 (10.18%), C3 (5.76%), C4 (3.21%) and C5 (2.14%) hydrocarbons, namely C2H6, C3H6, C3H8, C4H8, C4H10, C5H10 and C5H12.
Table 6. Contents of some metals and metalloids in the Babouri-Figuil oil shale. All those parameters indicate that the Babouri-Figuil oil shale is a good source rock but has not reached the temperature required to generate hydrocarbons. The composition of the Babouri-Figuil oil shale retort gas consists of 34.70% H2, 19.74% CO2, and 5.74% CO. The content of CnH2n gaseous hydrocarbons is 9.68%, whereas the content of CnH2n+2 gaseous hydrocarbons is 30.24%.
PPCAS: implementation of a Probabilistic Pairwise model for Consistency-based multiple alignment in Apache Spark. Large-scale data processing techniques, currently known as Big Data, are used to manage the huge amount of data generated by sequencers. Although these techniques have significant advantages, few biological applications have adopted them. In the Bioinformatics scientific area, Multiple Sequence Alignment (MSA) tools are widely applied for evolution and phylogenetic analysis, homology and domain structure prediction. Highly rated MSA tools, such as MAFFT, ProbCons and T-Coffee (TC), use probabilistic consistency as a step prior to the progressive alignment stage in order to improve the final accuracy. In this paper, a novel approach named PPCAS (Probabilistic Pairwise model for Consistency-based multiple alignment in Apache Spark) is presented. PPCAS is based on the MapReduce processing paradigm in order to enable large datasets to be processed, with the aim of improving the performance and scalability of the original algorithm.
Introduction The probabilistic pairwise model [10] is an important step in all consistency-based MSA tools. A probabilistic model can simulate a whole class of objects, assigning an associated probability to each one. In the multiple alignment field, the objects are defined as pairs of residues from the input set of sequences, and the associated weight is the probability of their being aligned [14]. For any two sequences, there are many possible residue matches: Length(sequence1) × Length(sequence2). The probabilistic model assigns each residue match a score; the higher it is, the better. For a complete dataset of sequences, the collection of all the residue matches, which implies evaluating all pairs of sequences, is known as the consistency library. This library is used to guide the progressive alignment and thus improve the final pairwise accuracy. A well-known MSA tool that uses consistency is T-Coffee [3]. The computation of the consistency library evaluates N*(N−1)/2 combinations, N being the number of sequences, and may be cataloged as embarrassingly parallel [3]. With the advent of Next-Generation Sequencing, the number of sequences to align and their lengths have grown exponentially, with a corresponding negative impact on execution time and memory requirements. The use of massive data processing techniques can provide a solution to these limitations.
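The all-against-all evaluation described above is easy to make concrete: for N sequences there are N*(N−1)/2 independent pair tasks, and each task scores Length(seq1) × Length(seq2) candidate residue matches. A minimal sketch over a hypothetical sequence set:

```python
from itertools import combinations

# Hypothetical input set; real runs would use protein sequences from a FASTA file.
seqs = {"s1": "MKV", "s2": "MKVL", "s3": "MKA", "s4": "MR"}

pairs = list(combinations(seqs, 2))    # the N*(N-1)/2 independent pair tasks
n = len(seqs)
assert len(pairs) == n * (n - 1) // 2  # 6 tasks for N = 4

# Candidate residue-match evaluations per pair: Length(seq1) * Length(seq2)
work = {p: len(seqs[p[0]]) * len(seqs[p[1]]) for p in pairs}
print(work[("s1", "s2")])  # 3 * 4 = 12
```

Because no pair task depends on any other, the whole collection can be processed in parallel, which is what makes the problem embarrassingly parallel.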
High Performance Computing (HPC) is the way to aggregate computing resources to provide parallel processing features for advanced applications. However, the fixed memory resources on each computational node, and the fact that data are distributed through the interconnection network, make it difficult to apply directly to the Multiple Sequence Alignment problem. Currently, new computing technologies have been designed to manage and store huge amounts of data. These technologies, such as Hadoop [23] or Spark [17], are commonly applied to Big Data processing and can be used to deal with this challenge. The main advantage is the ability to partition the whole dataset between all the nodes. However, an increase in the number of sequences in the dataset to be treated could eventually exceed the global distributed memory. The solution is the use of specialized distributed databases, such as HBase or Cassandra [1], which provide enough storage capacity to allocate any consistency library. Thus, in the present paper, the authors present a new tool, the Probabilistic Pairwise model for Consistency-based multiple alignment in Apache Spark (PPCAS). It is able to generate the probabilistic pairwise model in parallel for large protein datasets and can also store it on a distributed platform using the T-Coffee format. The paper is organized as follows: Section 2 presents a brief state of the art of consistency-based MSA tools. In Section 3, we outline the development of PPCAS. In Section 4, the performance and accuracy evaluation are shown and, finally, the main conclusions are presented in Section 5. State of the Art Traditional aligners, like ClustalW [7], MAFFT [6] and T-Coffee [3], are based on Gotoh's [5] or Myers & Miller's [11] dynamic programming techniques, using scores from two different sources (a consistency library or substitution matrices such as PAM and BLOSUM [9]) to perform the optimal alignment of two sequences.
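The partitioning idea behind such frameworks can be illustrated without any Hadoop/Spark machinery. Because each pair task is independent, a simple round-robin split over workers is enough to sketch how the library computation would be distributed; this is a toy sketch of the map-stage partitioning, not the actual PPCAS code:

```python
from itertools import combinations

def partition_pairs(seq_ids, n_workers):
    """Round-robin assignment of independent pair tasks to workers,
    mimicking what a Spark-style parallelize(...).map(...) would do."""
    buckets = [[] for _ in range(n_workers)]
    for i, pair in enumerate(combinations(seq_ids, 2)):
        buckets[i % n_workers].append(pair)
    return buckets

# 6 hypothetical sequences -> 15 pair tasks, split evenly over 3 workers
buckets = partition_pairs([f"s{i}" for i in range(6)], n_workers=3)
print([len(b) for b in buckets])  # [5, 5, 5]
```

In a real cluster the buckets would be RDD partitions and each worker would compute its pairwise probability matrices locally, writing results to the distributed store.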
Unfortunately, the application of dynamic programming is inefficient for alignments consisting of many (10-100) sequences. Instead, a variety of heuristic strategies have been proposed. The most popular, progressive alignment [12], builds up a final alignment by combining pairwise alignments following a guide tree (beginning with the most similar sequences and proceeding to the most distantly related). However, errors in the early stages not only propagate to the final alignment but may also increase the likelihood of misalignment due to incorrect conservation signals.

To lessen these early errors, consistency-based methods, such as T-Coffee [3], MAFFT [6], ProbCons [4] or DIALIGN [21], introduce consistency as a collection of pairwise alignments obtained from computing all-against-all pairwise alignments. T-Coffee uses this via a process called library extension. MAFFT uses a new objective function combining the WSP score from Gotoh and the COFFEE-like score [14] that evaluates the consistency between multiple and pairwise alignments. ProbCons improves the traditional sum-of-pairs scoring system by incorporating Hidden Markov Models to specify the probability distribution over all alignments between a pair of sequences. Furthermore, DIALIGN-T reformulates consistency by finding ungapped local alignments via segment-to-segment comparisons that determine new weights using consistency.

The main drawback of consistency-based aligners is the high computational resources (CPU and memory) required to calculate and store the consistency information. For example, the consistency library in T-Coffee has a complexity of O(N²L²), N being the number of sequences and L their average length. These requirements mean the method is not scalable, being limited to aligning a few hundred sequences on a typical desktop computer. Therefore, these aligners are not feasible for large-scale alignments with thousands of sequences.
This problem of scalability is common to other tools and algorithms. Nowadays, bioinformatics is challenged by the fact that traditional analysis tools have difficulties in processing large-scale data from high-throughput sequencing [24]. The utilization of HPC and Big-Data infrastructures has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and cloud computing services. The open-source Apache Hadoop project [23], which adopts the MapReduce framework [2] and a distributed file system, is able to store and process petabytes of information efficiently. Moreover, Hadoop has a complete stack of services and frameworks (Spark, Cassandra, Mahout, Pig, etc.) that provides a wide range of machine-learning and data-analysis tools to process any type of workflow.

Over recent years, new tools have been developed in the bioinformatics field to improve the performance and scalability of massive data processing in current applications. In [16], a novel approach is proposed that combines the dynamic programming algorithm with the computational parallelism of Hadoop data grids to improve accuracy and accelerate Multiple Sequence Alignment. In [25], the authors developed a DNA MSA tool based on trie trees to accelerate the centre star MSA strategy. It was implemented using the MapReduce distributed framework. The use of the MapReduce paradigm and Hadoop infrastructures enabled the scalability and the alignment time to be improved.

There are more MapReduce solutions in the area of mapping short reads against a reference genome. These applications, CloudBurst [18], SEAL [15] and CloudAligner [13], implement traditional algorithms like RMAP [20] and BWA [8] using the MapReduce paradigm.
PPCAS Method

The programming language selected was Python with the Ctypes extension, which provides C-compatible data types and the ability to call external shared libraries. Thus, it is possible to obtain performance similar to native compiled code in CPU-intensive applications.

The main step in the development was to adapt the probabilistic pairwise algorithm to the MapReduce paradigm used in big data frameworks [2]. The MapReduce paradigm enables the parallel/distributed computational resources (processors, memory and disks) to be exploited in a simple and scalable way. It breaks the problem down into multiple Map tasks that can be executed in parallel on multiple computers/processors. After this initial Map stage, all the partial results obtained are merged and then processed by several Reduce tasks, in order to finally aggregate them.

Spark is a fast engine for large-scale, real-time data processing that can be executed over Hadoop. Spark has a master/slave architecture: one central coordinator (the Driver) communicates with many distributed workers (the Executors). The driver is the process where the main method runs, and the executors are those that process the data received.
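The Ctypes bridge mentioned above can be illustrated with the system C math library; loading PPCAS.so and declaring its pairwise() entry point follows the same pattern (this snippet is an illustration, not PPCAS code):

```python
import ctypes
import ctypes.util

# Load a shared library and declare the signature of one of its functions,
# exactly as PPCAS does with CDLL("./PPCAS.so") and its pairwise() routine.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

assert libm.sqrt(9.0) == 3.0
```

Declaring `argtypes` and `restype` makes the foreign call type-safe and avoids the default int-based argument marshalling.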
PPCAS implementation

In the implementation of PPCAS, the map stage is responsible for defining all the tasks in charge of computing the probability score for a set of pairs of sequences. In Algorithm 1, the driver generates these tasks for all the N * (N − 1)/2 pair combinations (line 1) and distributes them in a balanced way among all the Map tasks using a Resilient Distributed Dataset (RDD) (line 2). Then, in line 3, the map tasks are launched and scheduled for processing on the executors. As a result, each map generates a portion of the library in parallel, which persists in the HDFS file system.

1: tasks = generate the N * (N − 1)/2 pair combinations
2: rdd = distribute the tasks evenly among the Map tasks (RDD)
3: launch the map tasks on the executors
4: for each sequence Si ∈ taskj do
5:   for each sequence Sj ∈ taskj do
6:     libraryC = ctypes.CDLL("./PPCAS.so")
7:     libraryC.pairwise(Si, Sj)
8:   end for
9: end for
Algorithm 1: Spark parallel pairwise probability calculation

The executor, lines 4-9, performs a subset of the pairwise combinations. This is done in the double-nested loop in lines 4-5, which obtains the different combinations of sequences assigned to the task. It calculates the library for each of these combinations by calling the pairwise(S_i, S_j) function of the shared library (PPCAS.so). This function calculates the probabilistic pairwise model for the two sequences and writes this portion of the library to disk (HDFS).

Results and discussion

In this section we evaluate PPCAS. The experimental study focuses on (1) the use of PPCAS as the main consistency library of T-Coffee by comparing the accuracy achieved and the corresponding execution time, (2) the scalability of PPCAS when the number of nodes increases and, finally, (3) the performance behavior when the number of sequences grows.
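The dataflow of Algorithm 1 can be mimicked with plain Python, replacing the Spark executors with a local worker pool and the C routine with a trivial scorer (a Spark-free sketch, not PPCAS code):

```python
from itertools import combinations
from concurrent.futures import ThreadPoolExecutor

def pairwise(pair):
    """Stand-in for libraryC.pairwise(Si, Sj): here we just count identical
    positions; PPCAS calls its compiled C routine and writes to HDFS instead."""
    si, sj = pair
    return (si, sj, sum(a == b for a, b in zip(si, sj)))

def build_library(sequences, workers=4):
    # Lines 1-2 of Algorithm 1: generate the N*(N-1)/2 pair tasks.
    tasks = list(combinations(sequences, 2))
    # Line 3 onwards: each worker processes its subset of pairs in parallel,
    # playing the role of a Spark executor running the map tasks of an RDD.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(pairwise, tasks))

library = build_library(["MKT", "MKV", "MRT"])
assert len(library) == 3        # N*(N-1)/2 = 3 pairs for N = 3
```

Because every pair is scored independently, the only coordination needed is distributing the task list, which is why the problem parallelizes so well on Spark.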
To perform the tests, we used two different multiple alignment benchmarking suites:

- BAliBASE [22] is a database of high-quality, documented and manually refined reference alignments based on 3D structural superpositions. The accuracy of the alignments is measured using two metrics: the Sum-of-Pairs (SP) and the Total Column Score (TCS), which are obtained by comparing the user alignment against a reference alignment.
- HomFam [19]: The existing benchmark datasets are very small (150 and 50 sequences in BAliBASE and Prefab, respectively). HomFam provides large datasets using Pfam families with thousands of sequences. In order to validate the results of aligning a Pfam family, the Homstrad site contains some reference alignments and the corresponding Pfam family. These references are previously de-aligned and shuffled into the dataset. After the alignment process, the reference sequences are extracted and compared with the originals in Homstrad.

HomFam contains almost one hundred sets. We manually selected the top five, sorted by size (Acetyltransf, rrm, rvp, sdr and zf-CCHH), to evaluate the method. The execution times presented in this section represent the average results obtained after evaluating the corresponding family. Furthermore, each experiment was run for five iterations in order to show the robustness of the results. The execution environment is a distributed-memory cluster made up of 20 nodes, each one characterized in Table 1.

Evaluating the PPCAS consistency library

To assess the correctness of PPCAS, a final alignment must be done. To this end, an MSA tool is needed. T-Coffee (TC) allows the input of an externally-generated consistency library using its −lib flag, so a library was built for each set with PPCAS and introduced into TC via this parameter, which generates the alignment.
This study compares the results obtained from executing T-Coffee using its own consistency library with the same T-Coffee using the library generated with PPCAS from the same dataset. The experimentation focused on the differences in accuracy and the possible execution time penalties. The BAliBASE benchmark was used for the accuracy test. The results obtained are shown in Table 2. The first column indicates the library algorithm used, and the Sum-of-Pairs (SP) produced using the Bali score appears in columns 2-7. The average score over all the families is given in the last column. The results demonstrate that, using the PPCAS library, T-Coffee is able to obtain an equivalent accuracy. The slight differences in accuracy are due to the fact that, unlike PPCAS, T-Coffee removes the lowest-weighted entries from the library. This validates using the new library instead of the original one from T-Coffee.

Next, the execution time required to calculate the consistency library in T-Coffee (using the −lib only flag) was compared with the time obtained with PPCAS, using a single node with a quad-core processor in both cases and increasing the number of sequences. The results obtained are shown in Figure 1. As can be observed, PPCAS always outperforms T-Coffee in execution time. However, when the number of sequences is low (100-200), the improvement is not very large, because there is not enough parallel work to obtain the maximum infrastructure performance. Nevertheless, with a large number of sequences (over 200), the PPCAS execution time improvement increases, meaning that the code is more efficient in PPCAS than in T-Coffee.
Moreover, we verified that, with 8 GB of memory, it is only possible to calculate the consistency library for a dataset with a maximum of 1,000 sequences using T-Coffee, and this takes more than 5,000 seconds. Meanwhile, PPCAS takes only 3,338 seconds to calculate the same library, which implies a 1.62x improvement. Attempts to evaluate more sequences in TC failed because the library size did not fit into the local memory.

Both the accuracy and execution time tests demonstrate that PPCAS can be used as a new method to provide the consistency library required by TC without any penalty while, furthermore, simultaneously increasing its performance.

Scalability study of PPCAS

To demonstrate the real benefits of using a Big-Data infrastructure, the scalability of the method when more nodes are added must be measured. We also compare the results with the original T-Coffee to have a reference point. Thus, in this test, a fixed size of 1,000 sequences (HomFam) was used, this being the maximum number of sequences TC can handle. Figure 2 depicts the results obtained. The left axis shows the execution time, and the right one the speedup obtained. It can be seen that the PPCAS speedup tends to be almost linear, taking 3,338 seconds with a single node, which is reduced to 183 seconds when using 20 nodes. This represents an 18.18x speedup over the single-node execution time and 29.45x over the TC version presented in the previous section (5,409 seconds). These speedups are close to linear, denoting good scalability, as the theoretical maximum is 20x.
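The speedup and efficiency metrics used throughout this section (speedup = T1/Tp, efficiency = speedup/nodes) can be checked against the figures just quoted:

```python
def speedup_and_efficiency(t_serial, t_parallel, nodes):
    """Speedup = T1 / Tp; efficiency = speedup / number of nodes."""
    s = t_serial / t_parallel
    return s, s / nodes

# Figures reported above: 3,338 s on one node vs. 183 s on 20 nodes.
s, e = speedup_and_efficiency(3338, 183, 20)
assert 18 < s < 19          # ~18x, in line with the reported 18.18x
assert 0.9 < e < 0.95       # close to the ideal efficiency of 1.0
```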
PPCAS scalability when increasing the number of sequences

This final experiment evaluates the behavior of PPCAS with the same computational resources when the number of sequences increases. Table 3 compares the execution time required to calculate the library in T-Coffee using a single node with a quad-core processor (using the −lib only parameter) with PPCAS using the complete cluster infrastructure with 20 quad-core nodes. We also analyzed the speedup and efficiency (speedup/nodes, which rates the improvement against cost) as the number of sequences increases.

It is important to note that it is possible to calculate bigger libraries with PPCAS because it is not limited by the main memory of a single node. The last column shows that the library size does not fit into the memory of a single traditional computer. Thus, it was possible to calculate the library with up to 20,000 sequences, which took 64,012 seconds. When the number of sequences is low (100-200), the speedup and efficiency are not good, because the lack of parallel work prevents the infrastructure from being fully exploited. However, with a large number of sequences (more than 500), both achieve good values, improving as more sequences are added.

Figure 3 shows the scalability of PPCAS on a logarithmic scale for the number of sequences to be aligned. We can observe the correlation between the size of the resulting consistency library and the time required to calculate it as the number of sequences increases. It can also be seen that the growth in execution time is proportionally smaller than the increase in size, which demonstrates the efficiency of PPCAS for calculating the library.

Conclusions

In this paper, the authors present a scalable method to compute the probabilistic pairwise model for consistency-based multiple alignment.
We show that PPCAS is able to produce a quality library relying on a Hadoop infrastructure with Spark. In terms of execution time, the method behaves better in the same environment (single node) and benefits from almost linear speedups when more nodes are added to the ecosystem. It is also capable of computing more sequences with the same memory requirements.

In the future, we will integrate PPCAS with an aligner, using a distributed database such as Apache Cassandra as the interface. Storing the constraints in a high-performance database will completely eliminate the memory problems, while supplying the progressive stage with the required data. Our other aim is to reduce the execution time of the progressive stage itself, this being the other problematic half of an MSA with consistency.

Fig. 1. Comparison of library building under a single node with HomFam sets.
Fig. 3. Scalability of PPCAS regarding the execution time and output size.
Table 1. Hardware and software used in the experimentation.
Table 2. Comparison between T-Coffee and PPCAS library with BAliBASE.
Table 3. Library building comparison between a single TC node and PPCAS multi-node with HomFam sets.
Ejection dynamics of a ring polymer out of a nanochannel

We investigate the ejection dynamics of a ring polymer out of a cylindrical nanochannel using both theoretical analysis and three-dimensional Langevin dynamics simulations. The ejection dynamics for ring polymers shows two regimes, as for linear polymers, depending on the relative length of the chain compared with the channel. For long chains with length $N$ larger than the critical chain length $N_{c}$, at which the chain just fully occupies the nanochannel, the ejection of ring polymers is faster compared with linear chains of identical length due to a larger entropic pulling force; for short chains ($N<N_c$), it takes longer for ring polymers to eject out of the channel because they must diffuse a longer distance to reach the exit of the channel before experiencing the entropic pulling force. These results can help in understanding many biological processes, such as bacterial chromosome segregation.

I. INTRODUCTION

The properties of a polymer confined in a nanochannel have attracted broad interest [1][2][3][4][5][6][7] because they are of fundamental relevance in polymer physics and are also related to many biological processes, such as the packaging of double-stranded DNA genomes inside the phage capsid [8], polymer transport through nanopores [9,10] and viruses injecting their DNA into a host cell [11]. The importance of cyclic structures in biological macromolecular science is strikingly demonstrated by the existence of circular DNA, cyclic peptides and cyclic oligosaccharides and polysaccharides [12]. Ring closure of a polymer is one of the important factors influencing its statistical mechanical properties. Understanding the static and dynamic properties of ring polymers is a challenging problem due to the difficulties inherent to a systematic theoretical analysis of such objects constrained to a unique topology. The scaling behavior of isolated, highly diluted ring polymers has been studied.
des Cloizeaux [13], Deutsch [14] and Grosberg [15] discussed the effect of topological constraints on the properties of ring polymers, and found that the topological constraint and the excluded volume have similar effects. The radius of gyration of large single ring polymers obeys the same scaling relationship as that of linear chains [14,15], although this is not true for ring polymers in a melt or ring polymer brushes [16][17][18]. Ring closure plays an important role in a wide range of biophysical contexts where DNA is constrained: segregation of the compacted circular genome of some bacteria [19], formation of chromosomal territories in cell nuclei [20], compaction and ejection of the knotted DNA of a virus [21,22], and migration of a circular DNA in an electrophoresis gel [23] or in a nanochannel [24]. After three decades of intensive research, the conformational properties of a self-avoiding polymer chain confined in a slit or in a cylindrical nanochannel are relatively well understood [25][26][27][28][29]. However, a deeper understanding of the basic properties of ring polymers in confined environments is a field in its infancy [30,31]. Only a few studies have addressed semiflexible ring polymers. Ostermeir et al. [32] investigated the internal structure of semiflexible ring polymers in weak spherical confinement and found buckling and a conformational transition to a figure-eight form. Fritsche and Heermann [33] examined the conformational properties of a semiflexible ring polymer confined to different geometrical constraints and found that the geometry of confinement plays an important role in shaping the spatial organization of polymers. Most recently, we found a helical chain conformation of flexible ring polymers confined to a cylindrical nanochannel, and demonstrated that the longitudinal size along the channel for a ring polymer scales as $N\sigma(\sigma/D)^{2/3}$, the same as that for a linear chain but with a different prefactor.
Here $D$ is the diameter of the channel, $N$ the chain length and $\sigma$ the Kuhn length of the chain [34]. We further gave the theoretical value 0.561 for the ratio of the longitudinal sizes of a ring polymer and a linear chain of the same $N$. As to the dynamics of polymers under confinement, Milchev et al. [27] investigated the ejection of a linear chain out of a nanopore using Monte Carlo simulations and found that the ejection dynamics depends on the chain length. Unlike for its linear counterpart, an understanding of the dynamics of confined ring polymers is still lacking, although many biomolecules are circular. To this end, in this work we study the ejection dynamics of a ring polymer confined in a nanochannel by means of analytical techniques and Langevin dynamics simulations. The basic questions associated with this process are the following: (a) What is the effect of the chain length and the channel length on the ejection dynamics? (b) What is the difference in the ejection dynamics of ring polymers compared with linear ones? For a fixed channel, which is faster, a ring polymer or a linear chain of identical length? We believe that this work is interesting and important for understanding biological systems with more complexity, such as viruses injecting their DNA into a host cell, the behavior of DNA inside phages or the spatial organization of the bacterial nucleoid in E. coli.

II. MODEL AND METHODS

In our numerical simulations, the polymer chains are modeled as bead-spring chains of Lennard-Jones (LJ) particles with the Finite Extension Nonlinear Elastic (FENE) potential. The excluded volume interaction between beads is modeled by a short-range repulsive LJ potential: $U_{LJ}(r) = 4\varepsilon[(\sigma/r)^{12} - (\sigma/r)^{6}] + \varepsilon$ for $r \le 2^{1/6}\sigma$ and $0$ for $r > 2^{1/6}\sigma$. Here, $\sigma$ is the diameter of a bead, and $\varepsilon$ is the depth of the potential.
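The truncated-and-shifted repulsive LJ potential above can be sketched directly (an illustration in reduced units, not the simulation code):

```python
def u_wca(r, eps=1.0, sigma=1.0):
    """Purely repulsive (truncated and shifted) LJ potential used for the
    excluded volume: 4*eps*[(sigma/r)**12 - (sigma/r)**6] + eps for
    r <= 2**(1/6)*sigma, and 0 beyond the cutoff."""
    if r > 2 ** (1 / 6) * sigma:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 * sr6 - sr6) + eps

assert u_wca(1.0) == 1.0                     # U(sigma) = eps
assert abs(u_wca(2 ** (1 / 6))) < 1e-12      # continuous at the cutoff
assert u_wca(2.0) == 0.0                     # zero beyond the cutoff
```

The $+\varepsilon$ shift makes the potential vanish continuously at the minimum $r = 2^{1/6}\sigma$, so the interaction is purely repulsive.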
The connectivity between neighboring beads is modeled as a FENE spring with $U_{FENE}(r) = -\frac{1}{2}kR_0^2\ln(1 - r^2/R_0^2)$, where $r$ is the distance between consecutive beads, $k$ is the spring constant and $R_0$ is the maximum allowed separation between connected beads. We consider a schematic representation as shown in Fig. 1, where a ring polymer is confined in a cylindrical channel with one end sealed. The nanochannel and the sealed surface are described by stationary particles within distance $\sigma$ from one another, which interact with the beads through the repulsive Lennard-Jones potential. The particle positions of the nanochannel and the sealed surface are not changed in the simulations. In the Langevin dynamics simulation, each bead is subjected to conservative, frictional and random forces, respectively [35]: $m\ddot{\mathbf{r}}_i = -\nabla(U_{LJ} + U_{FENE}) - \xi\mathbf{v}_i + \mathbf{F}^R_i$. Here $m$ is the bead's mass, $\xi$ is the friction coefficient, $\mathbf{v}_i$ is the bead's velocity, and $\mathbf{F}^R_i$ is the random force, which satisfies the fluctuation-dissipation theorem. In the present work, the LJ parameters $\varepsilon$, $\sigma$ and $m$ fix the system energy, length and mass units, respectively, leading to the corresponding time scale $t_{LJ} = (m\sigma^2/\varepsilon)^{1/2}$ and force scale $\varepsilon/\sigma$, which are of the order of ps and pN, respectively. The dimensionless parameters in the model are then chosen to be $R_0 = 1.5$, $k = 15$, $\xi = 0.7$. In our model, each bead corresponds to a Kuhn length (twice the persistence length) of the polymer. For single-stranded DNA (ssDNA), the persistence length is sequence and solvent dependent and varies over a wide range, to our knowledge usually from about 1 to 4 nm. We assume the value $\sigma \sim 2.8$ nm for a ssDNA segment containing approximately four nucleotide bases. The average mass of a base in DNA is about 312 amu, so the bead mass is $m \approx 1248$ amu. We set $k_BT = 1.2\varepsilon$, which means that the interaction strength $\varepsilon$ is $3.39 \times 10^{-21}$ J at an actual temperature of 295 K. This leads to a time scale of 69.2 ps and a force scale of 1.2 pN.
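A minimal one-dimensional sketch of such a Langevin integration (a simple Euler-Maruyama step, not the integrator used in the paper; friction and temperature as in the model above) checks that the fluctuation-dissipation balance reproduces equipartition in a harmonic trap:

```python
import math
import random

def langevin_step(x, v, force, m=1.0, xi=0.7, kT=1.2, dt=0.01):
    """One Euler-Maruyama step of m*dv/dt = F(x) - xi*v + F_R, where the
    random force F_R has variance 2*xi*kT/dt per step, as required by the
    fluctuation-dissipation theorem."""
    f_random = math.sqrt(2 * xi * kT / dt) * random.gauss(0.0, 1.0)
    v = v + dt * (force(x) - xi * v + f_random) / m
    x = x + dt * v
    return x, v

# Sanity check in a harmonic trap F = -k*x: equipartition gives <x^2> = kT/k.
random.seed(0)
k_spring = 1.0
x, v, acc, steps = 0.0, 0.0, 0.0, 200_000
for _ in range(steps):
    x, v = langevin_step(x, v, lambda y: -k_spring * y)
    acc += x * x
assert 0.9 < acc / steps < 1.5      # close to kT/k = 1.2
```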
The Langevin equation is then integrated in time by a method described by Ermak and Buckholz [36]. We initially fix the last monomer of the linear chain (or an arbitrary monomer of the ring polymer) at the sealed bottom of the nanochannel, while the remaining monomers are subjected to thermal collisions described by the Langevin thermostat to obtain an equilibrium configuration. In order to study the mechanism of chain ejection out of the nanochannel, the link of the monomer to the bottom of the channel is removed, and the chain is then released to diffuse along the channel. The residence time $\tau$ is measured once all monomers have passed the opening at $x = h$ and left the channel. Typically, we average our data over 700 independent runs.

III. RESULTS AND DISCUSSION

A. Scaling arguments

Longitudinal size of a polymer in an infinitely long nanochannel

According to the blob picture, for a linear polymer confined in an infinitely long three-dimensional nanochannel of diameter $D$, the chain will extend along the channel axis, forming a string of blobs of size $D$. The center of each blob is on the axis of the nanochannel. For each blob, $D = Ag^{\nu}\sigma$ due to the dominant excluded volume effects, where $g$ is the number of monomers in a blob, $\sigma$ is the Kuhn length of the chain, $\nu$ is the Flory exponent in three dimensions, and $A$ is a constant. Thus, each blob contains $g = (D/(A\sigma))^{1/\nu}$ monomers, and the number of blobs is $N/g = N(A\sigma/D)^{1/\nu}$. The free energy cost of the chain confinement is proportional to the number of blobs; thus the free energy in units of $k_BT$ is $F = B_l N(A\sigma/D)^{1/\nu}$, with $B_l$ being a constant. The blob picture then predicts the longitudinal size of the linear chain to be $R_l = (N/g)D = N(A\sigma)^{1/\nu}D^{1-1/\nu} \sim N\sigma(\sigma/D)^{2/3}$ for $\nu = 3/5$.

In order to model the chain conformation of a ring polymer confined in a nanochannel, we have extended the blob picture [34]. For a ring polymer, the chain will extend along the channel axis forming two strings of blobs of size $D/2$; the two strings of blobs show a helix structure.
For each blob of size $D/2$, $D/2 = Ag_r^{\nu}\sigma$, with $g_r$ being the number of monomers in a blob. Here, the prefactor $A$ is the same for ring polymers and linear chains because the solution environment is the same. Thus, each blob contains $g_r = (D/(2A\sigma))^{1/\nu}$ monomers, and the number of blobs is $N/g_r = N(2A\sigma/D)^{1/\nu}$; the free energy in units of $k_BT$ is $F = B_r N(2A\sigma/D)^{1/\nu}$, with $B_r$ being a constant. By geometrical analysis, the distance between two successive layers of blobs is $\frac{\sqrt{2}}{4}D$, and since each layer holds two blobs (one from each strand), the total length occupied by blobs in the channel is $R_r = \frac{\sqrt{2}}{8}DN(2A\sigma/D)^{1/\nu}$. Therefore, the longitudinal size along the channel for a ring polymer scales as $R \sim N\sigma(\sigma/D)^{2/3}$, the same as that for a linear chain but with a different prefactor. The ratio of the longitudinal sizes along the nanochannel (or of the prefactors) for a ring polymer and a linear chain is $R_r/R_l = \frac{\sqrt{2}}{8}\,2^{1/\nu} = 0.561$ for $\nu = 3/5$; using the more accurate value $\nu = 0.588$, the ratio becomes $0.575$. The simulation results [34] confirm the above predictions and give $(A\sigma)^{1/\nu} = 1.367 \pm 0.009$ for the parameters used in the model.

Ejection dynamics of a polymer confined in a nanochannel

Intuitively, for the ejection of a polymer out of a nanochannel, the dynamics is controlled by the relative length of the polymer compared with the channel height $h$. There exists a critical polymer length $N_c$ at which the polymer just fully occupies the channel, namely $R(N_c) = h$. Thus, for linear chains the critical length is $N_{c,l} = h(A\sigma)^{-1/\nu}D^{1/\nu-1}$, while for ring polymers it reads $N_{c,r} = \frac{8}{\sqrt{2}}h(2A\sigma)^{-1/\nu}D^{1/\nu-1}$. The ratio of the critical lengths for ring and linear polymers is $N_{c,r}/N_{c,l} = \frac{8}{\sqrt{2}}\,2^{-1/\nu} = 1/0.561 \approx 1.78$. Short chains with chain length $N < N_c$ are initially fully confined in the nanochannel, while long polymers with $N > N_c$ initially occupy the whole channel with several segments outside the channel exit. For long chains with $N > N_c$, the ejection is a driven process where the pulling force $f$ arises from entropy and is induced by the already ejected monomers [6]. For short chains with $N < N_c$, the polymer needs to move to the channel exit by a diffusive process, and then experiences a pulling force as for long chains.
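The 0.561 ratio quoted above follows numerically from the two-strand blob geometry (a quick check, assuming that geometry):

```python
import math

def size_ratio(nu):
    """Ratio R_ring / R_linear from the two-strand blob picture:
    (sqrt(2)/8) * 2**(1/nu)."""
    return math.sqrt(2) / 8 * 2 ** (1 / nu)

assert round(size_ratio(3 / 5), 3) == 0.561     # Flory exponent nu = 3/5
assert round(size_ratio(0.588), 3) == 0.575     # more accurate nu
```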
We assume the ejection process to be quasi-equilibrium. For long chains, the pulling force can be estimated from the free energy $F$ of a chain partially confined in the nanochannel, with the innermost monomer at distance $x$ from the channel exit. For long linear chains, $x = n(t)(A\sigma)^{1/\nu}D^{1-1/\nu}$ and the free energy is $F = B_l n(t)(A\sigma/D)^{1/\nu}k_BT = B_l\frac{k_BT}{D}x$, with $n(t)$ being the number of monomers inside the channel at time $t$. The differential of the free energy allows an estimate of the pulling force, $f_l = B_l k_BT/D$. It is worth noting that $f$ is independent of the tail length as well as of $h$, but inversely proportional to $D$. For long ring chains, $x = n(t)\frac{\sqrt{2}}{8}(2A\sigma)^{1/\nu}D^{1-1/\nu}$, which gives $f_r = \frac{8}{\sqrt{2}}B_r k_BT/D$. We further have the ratio of the pulling forces for long ring polymers and linear chains, $f_r/f_l = \frac{8}{\sqrt{2}}\frac{B_r}{B_l}$, which is determined only by the universal prefactors for ring polymers and linear chains.

During the ejection process, the pulling force induced by the tail is balanced by the total friction; namely, for long chains we have $f = n(t)\xi\,\dot{x}(t)$, where $\xi$ is the friction coefficient per monomer. Taking into account the relationship between $x(t)$ and $n(t)$, we obtain the ejection time $\tau_l = \xi h^2 D^{1/\nu}/[2B_l k_BT(A\sigma)^{1/\nu}]$ for long linear chains, and $\tau_r = \xi h^2 D^{1/\nu}/[2B_r k_BT(2A\sigma)^{1/\nu}]$ for long ring polymers. Therefore, the ratio of the ejection times for long ring polymers and linear chains is $\tau_r/\tau_l = (B_l/B_r)\,2^{-5/3}$, where $\nu = 3/5$ is used.

As noted above, a short polymer ($N < N_c$) undergoes a diffusive process before the first segment exits the channel, followed by the ejection process driven by the pulling force. Accordingly, we divide the total ejection time $\tau$ into two parts: $\tau_1$ for the diffusive process and $\tau_2$ for the driven process. For the diffusive process of short linear chains, $\tau_{1,l} = (h - R_l)^2/(2D_{diff})$, with $D_{diff} = k_BT/(N\xi)$ being the diffusion constant. In addition, for the driven process, $\tau_{2,l}$ is obtained as for long chains with $h$ replaced by $R_l$. Here, $\tau_{2,l}$ is negligible compared to $\tau_{1,l}$ for quite short chains, and then the ejection time $\tau_l \approx \tau_{1,l}$.
Based on the differential of the ejection time with respect to $N$, $\partial\tau_l/\partial N = 0$ gives $N^*_l = h/[3(A\sigma)^{1/\nu}D^{1-1/\nu}]$, where the residence time $\tau_l$ reaches its maximum value $\tau_{max,l} = 2\xi h^3/[27(A\sigma)^{1/\nu}D^{1-1/\nu}k_BT]$.

For the diffusive process of short ring chains, $\tau_{1,r} = (h - R_r)^2/(2D_{diff})$. In addition, for the driven process, $\tau_{2,r}$ is obtained as for long ring chains with $h$ replaced by $R_r$. Again, $\tau_{2,r}$ is negligible compared to $\tau_{1,r}$ for quite short chains, and then $\tau_r \approx \tau_{1,r}$. Based on the differential of the ejection time with respect to $N$, $\partial\tau_r/\partial N = 0$ gives $N^*_r = h/[3\cdot\frac{\sqrt{2}}{8}(2A\sigma)^{1/\nu}D^{1-1/\nu}]$, where the ejection time reaches its maximum value $\tau_{max,r} = 2\xi h^3/[27\cdot\frac{\sqrt{2}}{8}(2A\sigma)^{1/\nu}D^{1-1/\nu}k_BT]$. Thus, we have $\tau_{max,r}/\tau_{max,l} = \frac{8}{\sqrt{2}}\,2^{-1/\nu} = 1/0.561 \approx 1.78$.

B. Simulation results

The average ejection time $\tau$ as a function of the ring polymer length $N$ for different channel diameters ($D$ = 5, 7 and 9) at fixed channel height $h$ = 20.5, and for different channel heights ($h$ = 20.5, 30.5, and 40.5) at channel diameter $D$ = 7, is shown in Fig. 2a and Fig. 2b, respectively. The two figures show that the ejection time increases with the channel diameter and channel height. Moreover, we identify a special polymer length $N^*$ at which the ejection time reaches its maximum. Fig. 3 shows the plot of $N^*_r$ against $hD^{2/3}$ for different $D$ and $h$. All the data points collapse onto the same line, in agreement with the prediction in Eq. (20). The line plotted in Fig. 4 confirms the prediction in Eq. (21). As noted before, there exists a critical polymer length $N_c$ at which the polymer just fully occupies the channel. Short chains ($N < N_c$) are initially fully confined in the nanochannel, while long polymers ($N > N_c$) initially occupy the whole channel with several segments outside the channel exit. From the plateaus in Fig. 2, we obtain the ejection time $\tau_{long}$ for long polymers ($N > N_c$). Fig. 5a and Fig. 5b show the scaling plot of $\tau_{long}$ with $h^2D^{5/3}$ for ring polymers and linear chains, respectively. For different polymer lengths, channel heights and channel diameters, all data points collapse onto the same line in Fig. 5a and Fig. 5b, respectively. These results confirm the predictions in Eqs. (11) and (12).
In addition, the slopes are 0.042 and 0.053 for the ring polymer and linear chain, respectively. This indicates $\tau_{long,r}/\tau_{long,l} = 0.042/0.053 = 0.792$. Based on Eqs. (11) and (12), we have $B_r = 1.60$, $B_l = 4.02$ and thus $B_r/B_l = 0.398$ using the parameters $\xi = 0.7$, $T = 1.2$ and $(A\sigma)^{1/\nu} = 1.367$. Moreover, we further obtain $f_r/f_l = 2.250$ through Eq. (9), which demonstrates that the driving force induced by confinement for long ring polymers is larger than that for linear chains. Using Eqs. (18) and (19) to fit the curves in Fig. 2, we find that the numerical results are qualitatively described by the theoretical findings.

To compare the ejection dynamics of ring polymers with that of linear chains, we show the ejection time as a function of the chain length $N$ for $D$ = 7 and $h$ = 20.5 in Fig. 6. One does see characteristic differences: for short chains ($N < N_c$), it takes longer for ring polymers to eject out of the channel than for linear chains, while for long chains ($N > N_c$), linear chains need longer. These findings are in agreement with the predictions in Eqs. (13) and (22). Ring polymers have a smaller $R$ than linear chains of the same $N$, and thus ring polymers must diffuse a longer distance to reach the exit of the channel. When the chain length is larger than the critical chain length ($N > N_c$), the force exerted on the residual segments of a ring polymer is larger than that of a linear chain due to the smaller blob size in the channel for ring polymers, as predicted by $f_r/f_l = 2.250$. The plateau of the force at small time $t$ shown in Fig. 7 for both ring polymer and linear chain confirms this prediction.

The mean-squared distance $x^2(t)$ of the last monomer with respect to the channel exit against the elapsed time after the release of the last monomer is shown in Fig. 8 for both ring polymers and linear chains.
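These reported ratios can be cross-checked numerically. The relation $f_r/f_l = \frac{8}{\sqrt{2}}(B_r/B_l)$ is our reading of the (not reproduced) Eq. (9); with the fitted prefactors it recovers the quoted value:

```python
import math

B_r, B_l = 1.60, 4.02        # prefactors fitted from the slopes in Fig. 5

# Assumed blob-picture form of Eq. (9): f_r / f_l = (8 / sqrt(2)) * (B_r / B_l)
force_ratio = 8 / math.sqrt(2) * B_r / B_l
assert round(force_ratio, 2) == 2.25          # matches the reported 2.250

# Ratio of long-chain ejection times from the measured slopes
assert round(0.042 / 0.053, 3) == 0.792
```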
The lines plotted through the data are based on Eq. (10) for a ring polymer, which indicates that Eq. (10) correctly describes the ejection dynamics. To examine the details of the ejection process, we record the number of residual monomers inside the channel over the whole process, n(t)/n(0) (normalized by its value at t = 0). We see that a short ring polymer experiences a diffusion process before it starts to eject out of the channel, corresponding to the plateau in the plot for N = 30 shown in Fig. 9a. When the chain length N > N_c, the ejection process is faster for a ring polymer than for a linear one, which can also be inferred from the fraction of residual monomers at time t, as presented in Fig. 9b. Fig. 10a shows the histograms of the ejection time for ring polymers with different chain lengths. The ejection time distribution for a polymer of length N = 30 has a long tail and is much wider than that for N = 50. The ejection time distributions for ring polymers and linear chains at both short and long chain lengths are given in Fig. 10b and Fig. 10c, respectively. For the short chain N = 20, it takes a longer time for the ring polymer to leave the channel than for the linear chain, and the ejection time distribution for the ring polymer is wider and has a long tail. For the long chains N = 300, however, the result is the opposite, reflecting the larger driving force for the ring polymer than for the linear chain. Nature not only imposes geometrical constraints on biopolymers by confinement through the cell membrane, the cell nucleus, or the viral capsid, but also exploits the advantages of certain underlying chain topologies, such as the ring structure. In fact, E. coli has a rod-shaped geometry, and its chromosome is not a linear polymer but a circular one. Based on Monte Carlo simulations, Jun and Mulder [19] addressed a basic physical issue associated with bacterial chromosome segregation in rod-shaped cell-like geometry.
By simulating two ring polymers in the same setting as the linear ones, they found that two ring polymers segregate more readily than linear ones in confinement. According to our theoretical analysis and simulation results above, for ring polymers confined in a cylindrical nanochannel the blob size is smaller than that for linear polymers, which indicates that during chromosome segregation the driving force for ring polymers is larger than that for linear ones, leading to faster segregation.

IV. CONCLUSIONS

We investigate the ejection dynamics of a ring polymer out of a cylindrical nanochannel using both theoretical analysis and three-dimensional Langevin dynamics simulations. The ejection dynamics for ring polymers shows two regimes, as for linear polymers, depending on the length of the chain relative to the channel. For long chains with length N larger than the critical chain length N_c, at which the chain just fully occupies the nanochannel, the ejection of ring polymers is faster than that of linear chains of identical length due to a larger entropic pulling force; for short chains (N < N_c), it takes a longer time for ring polymers to eject out of the channel because they must diffuse a longer distance to reach the channel exit before experiencing the entropic pulling force. These results can help in understanding many biological processes and should enable a new understanding of the conformational statistics and dynamics of confined ring biopolymers such as DNA. A more detailed picture of a ring polymer confined in a nanochannel requires further study in order to address many complex problems in both biochemistry and theory. Our findings are of interest for molecular biology and biochemistry, for technology, as well as for physics.
5,243
2011-09-18T00:00:00.000
[ "Materials Science", "Physics" ]
Robotic-assisted partial nephrectomy for a neuroendocrine tumor in a horseshoe kidney: a case report Abstract Neuroendocrine tumors of the kidney are exceedingly rare. We report the first case of robotic-assisted partial nephrectomy for such tumors in horseshoe kidneys. A 65-year-old woman was incidentally found to have a 27 mm renal mass in the isthmus of her horseshoe kidney during computed tomography. Based on contrast-enhanced computed tomography results, we initially suspected renal cell carcinoma originating from the horseshoe kidney. Subsequently, robotic-assisted partial nephrectomy with isthmus transection was performed. Intraoperatively, we adjusted the port position for camera insertion and the patient’s positioning to facilitate better visualization for dissection of the dorsal isthmus and vessels. Pathological examination and immunohistochemical analysis revealed a well-differentiated neuroendocrine tumor. Therefore, robotic-assisted partial nephrectomy is a safe and effective approach for managing neuroendocrine tumors in the isthmus of horseshoe kidneys. Given the nonspecific clinical presentation of renal neuroendocrine tumors and their rarity, the optimal management of these tumors remains controversial. Introduction Neuroendocrine tumors (NETs) are rarely observed in the kidney, although they can occur in multiple organs [1]. Most renal NETs arise in a normal kidney, but they may also manifest in a horseshoe kidney [2]. A horseshoe kidney is a congenital fusion anomaly, often accompanied by a complex blood supply [3]. Renal neoplasms occurring in horseshoe kidneys are infrequent, and performing minimally invasive surgery for renal tumors in these cases is technically challenging due to the variability of renal vessels [4]. To our knowledge, we present the first case of a patient with a NET in the isthmus of a horseshoe kidney, managed through robotic-assisted partial nephrectomy (RAPN).
Case report A 65-year-old woman presented to our department with an incidental 27 mm renal tumor in the isthmus of a horseshoe kidney. Contrast-enhanced computed tomography (CT) imaging revealed that this tumor displayed slight enhancement with a solid component, accompanied by cystic and calcified elements (Fig. 1A and B). Based on these findings, we made a clinical diagnosis of renal cell carcinoma, cT1aN0M0. Further assessment via three-dimensional (3D) CT revealed a single artery arising from the aorta to the right kidney, along with two arteries supplying the cephalic and caudal sides of the isthmus from the aorta and iliac artery, respectively (Fig. 1C). Before RAPN, bilateral ureteral stents were placed. Five robotic ports and two assistant ports were inserted with the patient in the lateral position (Fig. 2A). Due to a history of abdominal surgery for uterine fibroids, appendicitis, and pelvic abscess, the intestine was extensively adherent to the lower abdominal wall. After securing the operating cavity through laparoscopic adhesiolysis, RAPN was initiated using the da Vinci Xi surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Upon exposing the right renal artery, the caudal isthmus was not visible. Therefore, the camera was switched to the port in the midline of the lower abdomen. Furthermore, by changing the patient's position to semi-lateral and head-down, the caudal side of the isthmus became visible (Fig. 2B). After exposing the tumor and the isthmus, the two arteries supplying the cephalic and caudal sides of the isthmus were cut. Once the patient's position was returned to the flank position and the camera was switched back to the outer port, the right renal artery was clamped, and the isthmus was transected using a vessel sealer (Fig. 2C). Subsequently, the tumor was resected with a sufficient margin.
The operation lasted 355 minutes, with 196 minutes spent on the console. The warm ischemia time was 19 minutes. Histopathology revealed that the tumor consisted of small, round cells arranged in ribbon- and cord-like structures (Fig. 3A). Immunohistochemical analysis revealed positivity for synaptophysin (Fig. 3B), chromogranin A, CD56 (Fig. 3C), and vimentin, while CD10 and cytokeratin 7 were negative. Additionally, the Ki67 index was 4% (Fig. 3D). Based on these findings, the renal tumor was diagnosed as a well-differentiated NET. At the 6-month follow-up, no local recurrence or metastasis was detected on CT. Discussion Renal NETs are rare, constituting only 0.18% of all primary renal neoplasms in a normal kidney [5]. However, the risk of developing a NET in a horseshoe kidney is 62-fold higher than in a normal kidney [2]. While common symptoms include abdominal pain, weight loss, and hematuria, ∼25% of cases are diagnosed incidentally [6]. Conventional examinations do not reliably distinguish NETs from other renal tumors. NETs typically present as well-circumscribed and slightly enhanced masses on CT scans, often with a solid component occasionally accompanied by cystic and calcified components [7]. Histopathologically, the tumor cells are arranged into cord and ribbon structures [8]. However, NETs lack a typical organoid architecture, necessitating immunohistochemical analyses for diagnosis. NETs usually express at least one neuroendocrine marker, such as synaptophysin, chromogranin A, or CD56 [9], while CD10 and cytokeratin 7 are negative markers. Therefore, a combination of investigations of these markers is essential for an accurate diagnosis of renal NET. Furthermore, Ki67 serves as a reliable pathological grading marker according to the World Health Organization classification [6]. In our case, the Ki67 proliferation index was 4%, indicating a grade G2 NET. NETs are characterized by a low degree of malignancy and slow growth, making surgical radical resection the preferred
approach. However, some cases may develop systemic metastases several years after resection, underscoring the necessity of long-term follow-up [9]. Horseshoe kidney is a congenital renal fusion anomaly associated with multiple vascular variations, occurring in 0.15% to 0.25% of the population [3]. Preoperative imaging of the blood vessels is necessary for surgery on renal tumors in horseshoe kidneys. Additionally, 3D CT is more useful for identifying the variant vasculature [10]. Although there have been several reports of RAPN for renal tumors in horseshoe kidneys, there has been only one report of RAPN for a renal tumor in the isthmus of a horseshoe kidney; in particular, isthmus transection was simultaneously performed via pure robotic surgery [11]. RAPN is suitable for small renal masses, particularly in cases involving horseshoe kidneys, owing to the advanced instrumentation of robotic systems for dissecting complex vessels and performing tumor resection compared to conventional laparoscopic approaches. In our case, although the flank position and the usual camera position were optimal for accessing the renal artery, dissecting the dorsal side of the isthmus was challenging due to poor visibility. Consequently, we relocated the camera to the lower abdominal port, following the method described by Sawada et al. [11]. Additionally, adjusting the patient's position to a semi-lateral, head-down orientation facilitated the safe transection of the dorsal vessels and the isthmus. Therefore, by adapting the camera and body positions, RAPN can be safely performed for intricate renal tumors near the isthmus.
Renal NET is a rare neoplasm, and a precise diagnosis often hinges on immunohistochemical analysis. Performing minimally invasive surgery for renal tumors in horseshoe kidneys presents unique challenges due to anatomical anomalies. Establishing a preoperative surgical strategy, including a comprehensive evaluation of tumor localization and vessel anatomy through 3D-CT imaging, is necessary to perform RAPN safely.

Figure 1. Preoperative CT scans. (A) Contrast-enhanced CT showed heterogeneous enhancement of the tumor in the arterial phase (arrow). (B) The cystic area of the tumor showed no enhancement in the venous phase (arrow). (C) 3D-CT demonstrated one artery arising from the aorta to the right kidney (arrow) and two arteries supplying the isthmus and tumor arising from the ventral side of the aorta and the left common iliac artery (arrowhead).

Figure 2. Position of the ports for RAPN (A) and intraoperative findings of RAPN (B-D). (B) The dorsal side of the isthmus and the artery were dissected with the camera in the number 4 port. (C) The isthmus was transected with the camera in the number 3 port.
1,716.4
2024-05-01T00:00:00.000
[ "Medicine", "Engineering" ]
The Impact of Teachers’ Verbal and Non-Verbal Communication on Students’ Motivation in Learning English

Abstract

The objectives of this study are to explain the use of teachers’ verbal communication and teachers’ non-verbal communication, and the impact of teachers’ verbal and non-verbal communication on students’ motivation in learning English. This study used the framework of teachers’ talk proposed by Sinclair and Brazil (1985) and Wang and Loewen (2015) to analyze the teachers’ verbal communication and non-verbal communication in the English classroom. Data were taken from lesson transcripts and teachers’ behaviors in the video recordings. Using the teachers’ utterances and teachers’ behaviors as the unit of analysis, the procedures of analyzing data included organizing and preparing the data, coding, describing, and interpreting. To validate the findings, the initial results of the analysis were examined through data source triangulation and a focus group discussion. The findings showed that the most-used kind of teachers’ verbal communication is questioning, while the most-used kinds of teachers’ non-verbal communication are hand movements and facial expressions. The most-motivating

INTRODUCTION

Motivation is the main problem for many students in Indonesia. It is caused by many factors: for example, limited vocabulary in English, the complexity of English sentences, and the way teachers explain the materials all strongly influence the students' motivation. In particular, under Curriculum 2013 the teacher is a facilitator whose main function is to facilitate students' mastery of English. Teachers should find ways to make students more active in class. To become more active, students of course need strong motivation to follow the learning process. Some research has shown that teachers' immediacy influences their students' motivation in the classroom.
According to Armstrong and Hope (2016), there is a positive correlation between teacher communication and student motivation for four communication dimensions (challenging, non-verbal support, understanding and friendly, encouragement and praise). It appears that the verbal and nonverbal behaviours of a course instructor or teacher may be related to certain aspects of students' motivation. Teacher educators specifically need to be aware of, communicate, and model elements of immediacy to teacher candidates. Velez and Cano (2008) clarified that by praising students' effort, using humour in the classroom, encouraging students to talk, and being open and willing to interact with students outside class, teacher educators can begin to model behaviours to candidates which will help them develop the closeness-inducing skills of verbal and nonverbal immediacy. There is a strong relationship among the quality, amount, and method of teachers' use of nonverbal communication while teaching. Especially in English class, where students have limited vocabulary, non-verbal communication is helpful for students in understanding teachers' verbal communication. In addition, Bambaeero and Shokrpour (2017) state that the more teachers use verbal and non-verbal communication, the more efficacious their education and their students' academic progress. Through this study, it is hoped that English teachers can improve their communication skills by using verbal and nonverbal communication effectively in order to motivate students in the English classroom. Students' communication also affects their motivation, beyond the verbal responses and verbal politeness in some of the students' utterances (Fitriati et al., 2017; Mujiyanto, 2017). However, based on our literature review on teachers' communication in the classroom, the writers recognized that teachers' talk can affect students' motivation in learning English.
Studies about the impact of teachers' verbal and non-verbal communication on students' motivation in learning English have not been found, so further research needs to be conducted. Teachers' verbal communication means teacher talk. There are three phases in most interactions: an initiating move, a responding move, and a follow-up move. These phases form the basic structure of the exchanges that take place in every process of imparting knowledge, especially in language classes. The categories of teachers' nonverbal behaviors in the classroom are hand gestures, head movements, affect displays, and emblems. Nonverbal messages include facial expressions, eye contact or the lack of it, proximity and closeness, hand gestures, and body language. Chaudhry and Arif (2012) state that nonverbal communication is the unspoken communication that goes on in every face-to-face encounter with another human being; it tells us their feelings towards us and how well our words are being received. Besides verbal and nonverbal communication, this study also focuses on students' motivation in learning English. According to Dörnyei (2001), motivation is one of the major individual-difference variables that has proved to have a significant impact on language learning success. Basically, the main goal of communication in the teaching and learning process, whether verbal or nonverbal, is to motivate students in order to get good results. To achieve this, the communication used by the teacher has many effects. By using appropriate communication, students will gain high motivation, and the achievement of the study will be good. In this regard, to see the students' motivation in learning English, the researchers examined the communication used by teachers in the English classroom. The writers hope that the adoption of appropriate communication will help students improve their motivation so that the learning outcomes will be good.
METHODS

This study aimed to explain the use of teachers' verbal communication and teachers' nonverbal behaviors, to analyze their use in English classes, and to examine their impacts on students' motivation. To fulfill the objectives of the study, the researchers employed a qualitative case study. According to Creswell (2012), in a qualitative case study we identify the participants and site through purposeful sampling. This helps us understand the central phenomenon, gain access to these individuals and sites by obtaining permissions, consider what types of information will best answer the research question, design instruments for collecting and recording the information, and administer the data collection with special attention to issues that may arise. For access, this study was conducted in a junior high school in Semarang, and the participants were three English teachers and three classes of eighth-grade students. The information was collected from observations and from teachers' and students' answers in interviews, held as focus group interviews after showing some of the recordings. This study used the framework of teachers' talk proposed by Sinclair and Brazil (1985) and Wang and Loewen (2015) to analyze the teachers' verbal communication and non-verbal communication in the English classroom. The data, in the form of transcripts and the teachers' behaviors from the video recordings, were analyzed using both frameworks. Using the teachers' utterances and teachers' behaviors as the unit of analysis, several steps of data analysis were carried out: organizing and preparing the data, coding, describing, and interpreting. To avoid bias, the researchers used triangulation as a tool to test the validity of the study (Cohen et al., 2007, p. 142).
According to Symonds and Gorard (2008), triangulation is seen to increase validity when multiple findings either confirm or confound each other, thus reducing the chances of inappropriate generalisations. In this study, the writers used methodological triangulation: more than one method was used to gather the data. The data were collected with the same method at two points in time, in the first semester through one-on-one interviews and in the second semester through a focus group discussion with the same interview list and the same questionnaire.

RESULTS AND DISCUSSION

This part follows the statement of the research objectives and discusses the findings of the data analysis to answer the research questions. The focus of the data analysis is on the use of teachers' verbal communication, the use of teachers' nonverbal communication, and the impact of teachers' verbal and non-verbal communication on students' motivation in learning English.

The Use of Teachers' Verbal Communication in Learning English

Based on the framework of teachers' talk proposed by Sinclair, there are several kinds of verbal communication used by the teachers in the English classes: questioning, invitation, direction, inform, prompt, encouragement, criticizing, ignoring, acknowledgement, and comment. An instance of the questioning used by Mrs. Dwi in the teaching and learning process can be seen in Extract 1.

Extract 1
What have you learned on last meeting? Have you learned about "will"?

Extract 1 shows that the teacher utters some questions to review and summarize the previous lesson. Besides, to start the teaching and learning process, a teacher needs to check the students' knowledge by asking some questions. An instance of the invitations used by Mrs. Dwi in the teaching and learning process is displayed in Extract 2.

Extract 2
Please lead the class to pray

To start the teaching and learning process, the teacher needs to invite students, in this case the captain of the classroom, to lead his/her friends in prayer. An instance of the directions used by Mrs.
Dwi in the teaching and learning process is displayed in Extract 3.

Extract 3
Ok, before we start our material, so we have to review or recall your memories about the previous meeting

Before continuing to the next material, the teacher needs to tell the students to recall the previous meeting's lesson, so the teacher used this utterance. An instance of the informs used by Mrs. Dwi is displayed in Extract 4.

Extract 4
Read this now, and this one know

One of the additional skills in English is pronunciation, so it is important for the teacher to teach that skill. By using this utterance, the teacher tries to correct the students' pronunciation. An instance of the prompts used by Mrs. Dwi is displayed in Extract 5.

Extract 5
Depend on the subject, what subject? and should not

In Extract 5, the teacher gives a clue toward the students' answer, and a clue for the students to identify on their own whether their answer is right or wrong. An instance of the encouragement used by Mrs. Dwi is shown in Extract 6.

Extract 6
Ok, let's try

Extract 6 belongs to encouragement. It is meant to make the students confident enough to try to make a sentence related to the material. An instance of the criticizing used by Mrs. Dwi is displayed in Extract 7.

Extract 7
please listen and repeat after me

This sentence is used in order to get the students' attention. There was no ignoring or acknowledgement used by Mrs. Dwi in the English classes. An instance of the comments used by Mrs. Dwi is displayed in Extract 8.

Extract 8
You should clean your classroom

The purpose of Extract 8 is to make the students care for the environment.

The Teachers' Verbal Communication in English Class by Mr. Yulis

An instance of the questioning used by Mr. Yulis in the teaching and learning process is displayed in Extract 9.

Extract 9
How is your life?
In the teaching and learning process, the teacher needs to open the class by asking about the students' condition or the class atmosphere. An instance of the invitations used by Mr. Yulis is displayed in Extract 10.

Extract 10
Come on everyone in the behind, come on another listen

In the teaching and learning process, the teacher needs to calm the students down. Besides, the teacher used some utterances to get the students' attention so that they could focus on every step in the classroom. An instance of the directions used by Mr. Yulis is displayed in Extract 11.

Extract 11
I will read the text and then you will repeat after me

The teacher needs to engage the students in every step of learning. In Extract 11, the teacher asks the students to repeat what the teacher said. An instance of the informs used by Mr. Yulis is displayed in Extract 12.

Extract 12
beside has and can not, another else?

In the teaching and learning process, the teacher should give the same chance to all students. In Extract 12, the teacher wants to get other students' opinions. An instance of the prompts used by Mr. Yulis is displayed in Extract 13.

Extract 13
Swim or swimming

In this case, the teacher gives a clue in correcting the students' answer concerning the form of a vocabulary item. There was no encouragement used by Mr. Yulis during his class. Mr. Yulis used the Indonesian language in criticizing the students during the class, as shown in Extract 14.

Extract 14
Yang belakang perhatikan ("Those at the back, pay attention")

Mr. Yulis criticizes the students at the back so that they will be quiet and will not disturb the class. There were no ignoring utterances by Mr. Yulis in the English classes. An instance of the acknowledgement used by Mr. Yulis during the English class is displayed in Extract 15.

Extract 15
OK, good.

The function of this utterance is to appreciate the students and to motivate them in the class. There was no comment used by Mr. Yulis in the teaching and learning process.
The Teachers' Verbal Communication in English Classes by Mr. Kusumo

An instance of the questioning used by Mr. Kusumo in the teaching and learning process is displayed in Extract 16.

Extract 16
Have you found it? Are you ready?

To start the teaching and learning process, the teacher needs to check the students' preparation in order to get a good result. An instance of the invitations used by Mr. Kusumo is displayed in Extract 17.

Extract 17
Let's read the greeting card together!

One way to make students active in the classroom is by inviting them to do an activity, such as asking them to read a text. In this case, the teacher used this utterance to ask the students to read a text. An instance of the directions used by Mr. Kusumo is displayed in Extract 18.

Extract 18
You can also find this greeting card on your book on page sixty seven. Page sixty seven

In Extract 18, the teacher directs the students to the lesson material. An instance of the informs used by Mr. Kusumo is displayed in Extract 19.

Extract 19
Today, we are going to learn about greeting card

In Extract 19, the teacher informs the students about the material for that day. There was no prompt used by Mr. Kusumo in the teaching and learning process. An instance of the encouragement used by Mr. Kusumo is displayed in Extract 20.

Extract 20
then you can do it start from today when we go home after school so let us say it

The purpose of Mr. Kusumo's encouragement is to give the students a real example of applying the material in daily activities. There was no criticizing, ignoring, acknowledgement, or comment used by Mr. Kusumo in the English classes. The interviews show that the most-used kind of teachers' verbal communication in the English classes is questioning. This means that in the teaching and learning process, the teacher should involve the students to be active and communicative.
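Identifying the most-used category like this amounts to a frequency count over the coded utterance labels. A minimal sketch with a hypothetical coded sample (the labels follow the ten categories listed above; the sample itself is invented for illustration, not taken from the study's transcripts):

```python
from collections import Counter

# Hypothetical coded utterances: one category label per teacher turn.
coded = ["questioning", "direction", "questioning", "inform", "invitation",
         "questioning", "prompt", "encouragement", "questioning", "direction"]

counts = Counter(coded)
most_used, n = counts.most_common(1)[0]
print(most_used, n)  # questioning 4 in this toy sample
```

With the real transcripts, one such tally per teacher (and one overall) yields the ranking that puts questioning first and comment last.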
Teachers give feedback or comments when the students face a problem or something confusing; this shows that comment is the least-used kind of teachers' verbal communication.

The Use of Teachers' Non-Verbal Communication in Learning English

The use of non-verbal communication in the English classes by the teachers was identified by observing the English teaching and learning process. The results on teachers' non-verbal communication also draw on interviews with the students as participants, conducted after showing them the videos of the lessons they had followed with their English teachers. Based on the interviews, the teachers' hand movements include pointing at a student, pointing at the whiteboard, clapping hands, pointing at an answer, pointing at a picture, lifting the shoulders, opening and closing the book, touching a student's shoulder, shaking hands, and looking at the watch. The teachers' hand gestures include opening and closing the book to encourage the students to open their books immediately; pointing at a student to give that student a chance to participate during the class; pointing at a picture to highlight the picture related to the material; pointing at an answer to highlight the subject being discussed; putting a finger in front of the lips to ask the students to keep silent; crossing the hands to tell the students that something is wrong; clapping the hands to ask for the students' attention; employing the palm of the hand to support the teacher's explanation; raising the hand to show the students the good habit of raising a hand when they want to ask or answer; crossing the arms to show attention when a student talks to the teacher; and clicking the fingers to respond to a student's answer. The head movements made by the teachers include nodding the head to show that a student's answer is good or right, shaking the head to show that a student's answer is wrong, and turning the head around to monitor the class.
However, there were only a few teachers' head movements during the learning process: teacher B and teacher C did not make any head movements; only teacher A did. Facial cues are the primary way we reveal our feelings nonverbally. Affect displays can be used to influence others: a speaker, for example, displays enthusiasm and hopes it spreads to the audience. Affect displays may also be emotional expressions and not necessarily symbolic. Based on the observations and the interviews, the affect displays made by the teachers include smiling to show that the teacher enjoys the teaching and learning process, showing an angry expression to make the students be quiet, showing a happy expression to convey an enjoyable moment of learning, frowning to address the students' misbehaviour, lifting an eyebrow to signal that the teacher is giving the students a chance to ask a question, showing a serious expression to bring focus to the material being explained, smiling to respond to students, and laughing to respond to the students' enjoyable moments. Emblems are movements that have a direct verbal translation, generally a word or phrase. They are often culture-specific: an emblem conveys a message without verbal words as long as there is the same understanding among the members of a certain culture.
The emblems used by the English teachers during the English classes include touching the head as a joke to signal that the students still do not understand correctly; counting with the fingers to emphasize the number of kinds of something being explained; giving a thumbs-up to appreciate the students' responses when they express their ideas, answer a question, or do something good; putting a finger in front of the lips to ask the students to keep silent; crossing the hands to show that a student's answer or behavior is wrong or improper; clapping the hands as a signal to ask the students to be silent; and crossing the fingers to show that an answer is wrong.

Impacts of Teachers' Verbal Communication on Students' Motivation in Learning English

The impacts of teachers' verbal communication on students' motivation in learning English were identified by interviewing the students as participants in the English classes. Based on the questionnaire administered to thirty students, 40% of the students agree and 54% strongly agree that questions from the teachers motivate them in learning English, while 6% of them admit that teachers' questions only burden them. The results showed that 50% of the students agree and 10% strongly agree that teachers' invitations motivate them, while 30% of the students could not decide whether teachers' invitations motivate them or not, and 10% admit that teachers' invitations do not motivate them in learning English. The results of the questionnaire showed that 94% of the students believe teachers' directions motivate them in learning English, as directions help them understand the steps to finish a task. It is suggested that the teacher give directions in both English and Indonesian. Based on the questionnaire, 46% of the students agree that teachers' informs motivate them in learning English.
Meanwhile, 37% of them feel unsure whether the teachers' information motivates them in learning English, and 10% of the students think that teachers' information is useless for them and cannot motivate them. The questionnaire also showed that 53% of the students agree that teachers' prompts give them more understanding and 40% strongly agree that the teachers' prompts help them in learning English, while 7% feel unsure whether teachers' prompts motivate them. The study showed that most of the students need the teachers' encouragement to understand the material, but a few of them feel unsure whether the encouragement can help them in learning English. Some of them believe that criticism given as good advice can motivate them; if the criticism is given as bad advice, it will bring them down. Some of the students also think that criticism only makes them uncomfortable or annoyed, because teachers' criticism makes them feel that they have made serious mistakes. Some of the students feel annoyed when the teachers ignore them because it seems that the teacher does not respect them, and some of them are disappointed if the teachers give no response to their answers or activities. The questionnaire results showed that teachers' acknowledgment means appreciation from the teacher; the students feel very proud of it, and it strongly motivates them in learning English. On the other hand, some students think that the teachers' acknowledgment does not influence them in the learning process. The questionnaire results showed that 40% of the students agree and 17% strongly agree that teachers' comments motivate them, especially positive comments, while 27% are unsure whether teachers' comments can motivate them in learning; of the rest, 10% disagree and 6% strongly disagree that teachers' comments motivate them.
The Impacts of Teachers' Non-Verbal Communication on Students' Motivation in Learning English Teachers' non-verbal communication consists of four kinds: hand movements, head movements, affect displays, and emblems. Some examples of hand gestures made by the teachers that motivate the students in learning English are pointing to the students to give them a chance to express an idea, using the hands to support an explanation, and pointing to the projector to keep the students focused on learning. From the interviews, it is concluded that 64% of the students agree and 23% strongly agree that teachers' hand gestures motivate them in learning English, while 23% of them are unsure whether teachers' hand gestures motivate them in learning. Some teachers' head movements were seen in the learning process, such as nodding and shaking the head; these two head movements were the ones most often seen by the students in the class. Some of the students rarely see the teachers' head movements, so 40% of the students are unsure whether the teachers' head movements can motivate them in learning, 30% agree, 17% strongly agree, 10% disagree, and 3% strongly disagree. The teachers' affect display most often seen by the students is facial expression: 53% of the students agree and 7% strongly agree that teachers' facial expressions motivate them in learning English, especially the teachers' happy facial expressions, but 37% of the students are unsure whether teachers' facial expressions motivate them in the learning process and 3% disagree. An emblem is a nonverbal sign known by people in the same group. An emblem that is familiar to the students is the teacher putting a forefinger in front of the lips, which means they have to be quiet. Another example is when the teacher moves the palm of the hand up and down to ask a student to come closer.
The results of the study showed that 54% of students agree or strongly agree that teachers' emblems can motivate them in learning English, but 3% of them disagree and 3% are unsure whether teachers' emblems can motivate them in learning English, because they do not understand the teachers' emblems. CONCLUSIONS Based on the analysis and discussion of this study, the following conclusions can be drawn. The most used form of teachers' verbal communication in the learning process is questioning, which is used to stimulate the students' attention in the learning process. The second most used form of verbal communication by the teachers in class is giving directions; directions from the teachers are used to give feedback and help the students better understand the learning process. The non-verbal communication used by the teachers to support their verbal communication comprises hand gestures, head movements, affect displays, and emblems. An example of a hand gesture is snapping the fingers. An example of a head movement is nodding the head to show agreement with the students. The affect displays used by the teachers in class are identified with the facial expressions used to show agreement with a student's response. Finally, emblems are used by the teachers, for example employing the fingers to communicate with the students. The form of teachers' verbal communication that most motivates students is questioning, because the students pay more attention when they try to answer the teachers' questions; if the teacher asks no questions, the students become passive because they get no stimulation to solve any problems. The form of teachers' non-verbal communication that most influences the students is facial expression, especially the teachers' happy expression: it makes them enthusiastic to learn more and helps them enjoy the learning process, so they gain more understanding.
5,959.8
2020-12-23T00:00:00.000
[ "Education", "Linguistics" ]
A Multiscale Clustering Approach for Non-IID Nominal Data Multiscale brings great benefits for people to observe objects or problems from different perspectives. Multiscale clustering has been widely studied in various disciplines. However, most of the research studies address only numerical datasets, and there is a lack of research on the clustering of nominal datasets, especially when the data are non-independent and identically distributed (Non-IID). Aiming at this research situation, this paper proposes a multiscale clustering framework for Non-IID nominal data. Firstly, the benchmark-scale dataset is clustered based on the coupled metric similarity measure. Secondly, two algorithms are proposed to transform the clustering results from the benchmark scale to the target scale, named upscaling based on a single chain and downscaling based on the Lanczos kernel, respectively. Finally, experiments are performed using five public datasets and one real dataset from the Hebei province of China. The results show that the method not only provides competitive performance but also reduces the computational cost. Introduction Clustering is one of the vital data mining and machine learning techniques, which aims to group similar objects into the same cluster and separate dissimilar objects into different clusters [1]. It is prominent and has recently attracted significant attention from researchers and practitioners in different domains of science and engineering [2]. Thousands of papers have been published [3][4][5][6]. However, these investigations concentrated only on clustering from a single perspective. The scale can be equated with the following concepts: a generic concept, a level of abstraction, or a perspective of observation; the same problem or system can be perceived at different scales based on particular needs [2]. This is called the multiscale phenomenon, and it has been widely applied in academic fields such as geoscience [3,4] and mathematics [5].
Based on clustering the distribution of Ixodes scapularis nymphs at different spatial scales in Lyme disease occurrence areas in southern Quebec, Canada, reference [6] helps people understand the change in risk and take corresponding measures. In [7], an average-linkage hierarchical clustering algorithm was proposed, using a regionalization algorithm to identify uniform rainfall areas in nonstationary precipitation time series based on multiscale self-lifting sampling. The authors in [8] proposed a multiscale Gaussian kernel-induced fuzzy C-means algorithm to segment lesions and determine their edges. From the current research situation, multiscale clustering has been widely studied in various disciplines. However, from the perspective of attribute types, most of the research studies address only numerical datasets, with quantitative analysis and prediction of the data, and there is very little qualitative analysis of nominal datasets. Many datasets use characters to represent attribute values, which do not have the properties of numbers; even if the values are represented by numbers (integers), they should be treated as symbols and cannot be analyzed quantitatively. To study nominal datasets, not only do the complex data characteristics need to be captured, but the proposed method also needs some flexibility.
The main contributions of this paper are as follows: (1) a multiscale clustering approach for Non-IID nominal data is proposed by introducing an unsupervised coupled metric similarity; (2) combining the scale-transformation theory with the idea of agglomerative hierarchical clustering, a scale-transformation method based on a single chain is proposed to transform the clustering results from the benchmark scale to the target scale; and (3) combining the scale-transformation theory with the Lanczos interpolation idea, and based on the idea of divisive hierarchical clustering, a Lanczos kernel-based downscaling algorithm is proposed to carry out multiscale clustering on Non-IID nominal datasets. The rest of this paper is organized as follows. Section 2 discusses the related work. Some definitions are reviewed briefly in Section 3. The framework of multiscale clustering is designed in Section 4. Section 5 details the comparison experiments. The conclusion and some future research directions are given in Section 6. Related Work Clustering has attracted more and more attention from researchers and can be applied to many fields, such as time series analysis [9,10], brain-computer interfaces [11][12][13][14][15], epilepsy [16,17], and sleep staging [18,19]. Clustering usually requires the number of "classes" to be set in advance, and then the dataset is divided into the "classes" according to a specific partitioning algorithm. The partitioning method assigns a dataset into k clusters such that each cluster must contain at least one element. The k-means algorithm proposed by MacQueen in 1967 is the most classical representative of the partitioning methods [20], and it is one of the best-known and simplest clustering algorithms [21]. Frey and Dueck in 2007 proposed the "affinity propagation (AP)" algorithm [22].
Different from previous clustering algorithms, this algorithm does not need to determine the cluster centers in advance; instead, it uses an N-order square matrix to store the relationships between data points and performs iterative clustering on this matrix, with clear results. In 2013, Kawano et al. proposed a greedy clustering algorithm based on k members and applied it to collaborative filtering tasks [23]. In 2015, Agarwal et al. proposed the improved k-means algorithm k-means++, which spreads the initial centroids as far apart as possible, significantly improving the clustering effect [24]. Spectral clustering [25] originates from graph theory: data points are regarded as vertices in a graph, and the relationships between them as edges. The graph is divided into several subgraphs by "graph cutting" techniques, and the subgraphs correspond to the clusters. The common feature of these algorithms is that they can only handle numerical data. For the clustering of nominal data, Huang, inspired by the k-means algorithm, proposed the k-modes algorithm [26] for the first time in 1998. This algorithm adopted a new way of measuring object similarity to partition data objects. In 2018, Nguyen et al. improved the k-modes algorithm [27] and used a privacy protection mechanism to solve the problem of transparent data input. In 2014, Saffarzadeh et al. used a multiscale linear algorithm to analyze retinal images to determine whether eye lesions had occurred [28]. In 2015, Lim et al. applied a multiscale spectral clustering algorithm in the field of geosciences to improve the reliability of earthquake prediction [29]. In 2016, Parisot et al. applied multispectral clustering to the medical field to improve the efficiency and accuracy of magnetic resonance imaging [30]. In 2018, Ripoche et al.
investigated the distribution of Lyme disease at three different spatial scales in southern Quebec, Canada, and the density of nymphs in different woodlands and in different plots and sections of the same woodlands, to provide guidance on the understanding and prevention of Lyme disease [6]. Vu et al. developed a new multithreaded tool, fMLC, to address the problem of clustering large-scale DNA sequences [31]. A multilevel clustering for star/galaxy separation was designed in 2016, consisting of three phases: coarsening clustering, representative data clustering, and merging [32]. In 2019, Zunic et al. proposed a multilevel clustering algorithm that is used in the internal banking payment system of a bank in Bosnia and Herzegovina and explained how the parameters affect the results and execution time of the algorithm [33]. All of these algorithms target a specific application and solve the corresponding problems. On the premise that the clustering results of small-scale datasets have already been obtained, Chen et al. [34] proposed a method named SUCC to solve clustering for large-scale data. We will propose a multiscale clustering approach for Non-IID nominal data. In clustering, we need to evaluate the dissimilarity among objects by using a distance measure [35]. The Minkowski distance is the most commonly used measure for numerical data. The most popular distance measure is the Euclidean distance, another well-known measure is the Manhattan distance, and both are special cases of the Minkowski distance. The dissimilarity between two binary attributes is obtained by computing a dissimilarity matrix from the given binary data. The above measurement methods are mainly for numerical data and carry out quantitative processing and analysis. However, there are also data with non-numerical attribute values, known as nominal data. At present, there are few studies on the qualitative analysis of nominal data, especially when the data are Non-IID.
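As a quick illustration of the numerical measures mentioned above, a minimal sketch showing that the Euclidean and Manhattan distances are the p = 2 and p = 1 special cases of the Minkowski distance:

```python
def minkowski(x, y, p):
    """Minkowski distance of order p between two numerical vectors."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

x, y = [0.0, 3.0], [4.0, 0.0]
manhattan = minkowski(x, y, 1)  # |0-4| + |3-0| = 7
euclidean = minkowski(x, y, 2)  # sqrt(16 + 9) = 5
```

As the text notes, such measures presuppose numerical attributes; they are not directly applicable to nominal values.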
Coupled metric similarity (CMS) [36] is well suited to measuring the distance between Non-IID nominal data. Preliminaries To facilitate the discussion in the remainder of this paper, CMS is reviewed briefly in this section. CMS measures the similarity of two objects by capturing both the intra- and inter-attribute coupling relations of objects, where the former characterizes the coupling similarity between the frequency distribution and the value of an attribute, and the latter aggregates attribute dependencies between different attribute values by considering the intersection of the co-occurrence probabilities of the condition attribute values over the different attributes [36]. Definition 1 (intra-attribute similarity). The intra-attribute similarity between two objects A and B on attribute j is S_Ia(A_j, B_j); it is computed from the frequencies |g(A_j)| and |g(B_j)|, where g(A_j) is the set of objects whose value on attribute j is A_j and |.| denotes the cardinality of a set. Definition 2 (inter-attribute similarity). The inter-attribute similarity between two values A_j and B_j of attribute j with respect to the other attributes is S_Ie(A_j, B_j) = sum_{k != j} r_{k|j} * S_{k|j}(A_j, B_j), where d is the number of attributes in the dataset, r_{k|j} is the weight of attribute k with respect to attribute j, and S_{k|j}(A_j, B_j) is the inter-attribute similarity candidate with attribute k, computed over W_k, the set of values of attribute k (taken from the values of attribute k for all objects in N(A_j)) that co-occur with both A_j and B_j; W_k^i is the ith element of W_k. Definition 3 (coupled metric similarity). The coupled metric similarity (CMS) between two objects A and B is S(A, B) = sum_{j=1}^{d} beta_j * S_j(A_j, B_j), where beta_j is the weight of the coupled metric attribute-value similarity of attribute j, and S_j(A_j, B_j) is the weighted harmonic mean of the intra-attribute and inter-attribute similarities, with parameter alpha controlling their relative proportions.
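To make the definitions above concrete, the following sketch combines an intra-attribute and an inter-attribute part with a weighted harmonic mean. The exact functional forms of S_Ia (here a frequency-based formula) and S_{k|j} (here the sum of minimum conditional co-occurrence probabilities over shared values) are assumptions borrowed from the common coupled-similarity formulations; the authoritative formulas are those of [36]:

```python
from collections import Counter

def intra_sim(col, a, b):
    """Assumed frequency-based intra-attribute similarity:
    S_Ia = (|g(a)|*|g(b)|) / (|g(a)| + |g(b)| + |g(a)|*|g(b)|)."""
    freq = Counter(col)
    fa, fb = freq[a], freq[b]
    return (fa * fb) / (fa + fb + fa * fb)

def inter_sim_k(data, j, k, a, b):
    """Assumed intersection form of S_{k|j}: sum over values w of
    attribute k co-occurring with both a and b of min(P(w|a), P(w|b))."""
    rows_a = [r for r in data if r[j] == a]
    rows_b = [r for r in data if r[j] == b]
    pa = Counter(r[k] for r in rows_a)
    pb = Counter(r[k] for r in rows_b)
    shared = set(pa) & set(pb)  # the set W_k of Definition 2
    return sum(min(pa[w] / len(rows_a), pb[w] / len(rows_b)) for w in shared)

def coupled_sim(data, A, B, alpha=0.5):
    """S(A, B) = sum_j beta_j * S_j with equal beta_j = 1/d; S_j is the
    weighted harmonic mean of intra- and inter-attribute similarity."""
    d = len(A)
    total = 0.0
    for j in range(d):
        s_ia = intra_sim([r[j] for r in data], A[j], B[j])
        others = [k for k in range(d) if k != j]
        s_ie = sum(inter_sim_k(data, j, k, A[j], B[j]) for k in others) / len(others)
        if s_ia > 0 and s_ie > 0:
            total += (1.0 / (alpha / s_ia + (1 - alpha) / s_ie)) / d
    return total

data = [("x", "p"), ("x", "p"), ("y", "q"), ("y", "p")]
sim = coupled_sim(data, data[0], data[2])  # similarity of two distinct objects
```

The equal attribute weights beta_j = 1/d and the averaged r_{k|j} are simplifications for illustration only.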
Different values of α reflect different proportions of the intra-attribute similarity and inter-attribute similarity in forming the overall object similarity. Throughout this paper, we use CMS to measure the similarity of two objects. Proposed Framework The multiscale clustering framework proposed in this paper is shown in Figure 1. Instead of directly clustering all scale datasets, this method first selects the best-scale dataset, which is named the benchmark-scale dataset, then calls a classical mining algorithm on the benchmark-scale dataset to get the clustering results, and finally decides whether to push the clustering results up or down according to the relationship between the target scale and the benchmark scale. From this framework, it can be seen that the core of multiscale clustering is the clustering of the benchmark-scale dataset and the pushing up and pushing down of its clustering results. We design three algorithms to implement the framework. Firstly, according to the probability density discretization method [37], the attributes of the representation scale are divided into multiple scales by probability density. Secondly, the optimal scale is determined according to the attenuation of the information entropy of each scale [38], and the benchmark-scale dataset is clustered using the spectral method. The details of Algorithm 1 are as follows. We calculate the distance between every pair of samples in the benchmark-scale dataset using CMS and construct the similarity matrix, whose (i, j) entry is the CMS similarity between the ith and jth samples. After the clustering of the benchmark-scale dataset is completed, the cluster centers of the larger-scale dataset can be deduced from the cluster centers of the benchmark-scale dataset. In this paper, inspired by the idea of agglomerative hierarchical clustering, an upscaling algorithm based on CMS (UACMS) is proposed (lines 2-4).
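The benchmark-scale step of Algorithm 1 can be sketched with scikit-learn's spectral clustering on a precomputed affinity matrix. Here `pairwise_sim` is only a stand-in for CMS (a simple matching similarity, an assumption for illustration), and the small constant offset keeping the similarity graph connected is likewise an implementation convenience, not part of the paper's algorithm:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def pairwise_sim(data):
    """Build the n x n similarity matrix; simple matching similarity
    (fraction of attributes with equal values) stands in for CMS here."""
    n, d = len(data), len(data[0])
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            W[i, j] = sum(a == b for a, b in zip(data[i], data[j])) / d
    return W

data = [("a", "x"), ("a", "x"), ("b", "y"), ("b", "y")]
W = pairwise_sim(data) + 0.05  # small offset keeps the similarity graph connected
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
```

Passing `affinity="precomputed"` lets any object-level similarity, CMS included, drive the spectral step without vectorizing the nominal data.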
The basic idea of UACMS is as follows: each cluster center of the benchmark scale is taken as a cluster, CMS is used as the distance measure, and the two nearest clusters are merged into one until the termination condition is reached (lines 5-9). The specific process is given in Algorithm 2. The downscaling algorithm based on Lanczos (DSAL) obtains the cluster centers of the small-scale dataset from the cluster centers of the benchmark-scale dataset, and the process is exactly opposite to that of UACMS, as given in Algorithm 3. That is, its principle is to adopt top-down thinking. Firstly, all the benchmark-scale cluster centers are regarded as one cluster, and the Lanczos kernel function is used to calculate the weight of each cluster to generate new cluster centers (line 1). Then, more and smaller clusters are obtained according to the coupling similarity between them until the termination condition is met (lines 2-5). Performance Evaluations In this section, we compare our method with classical methods: k-modes and spectral clustering based on 5 measures (CMS, HM [39], OF, IOF, and Eskin [40]) on 6 datasets. The clustering evaluation indexes include Normalized Mutual Information (NMI) [41] and F-score [42], which are external indexes, and Mean Squared Error (MSE) [43,44], which is an internal index. This section uses these three indicators to evaluate the accuracy of the proposed algorithms, and it also demonstrates their runtime advantage. Data and Experimental Settings. In order to verify the validity and feasibility of the framework and algorithms proposed in this paper, Kaggle and UCI public datasets (Zoo, Soybeanlarge, Dermatology, BreastCancer, and Titanic) and a real dataset (renkou for short) were used for experimental verification, as shown in Table 1.
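The Lanczos kernel that DSAL uses to weight cluster centers is the windowed sinc function. A minimal sketch follows; the window size a = 2 is a common default and an assumption here, since the paper does not state the value it uses:

```python
import math

def lanczos_kernel(x, a=2):
    """Lanczos window: L(x) = sinc(x) * sinc(x/a) for |x| < a, else 0,
    with sinc(x) = sin(pi*x) / (pi*x) and L(0) = 1."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# Normalized weights for clusters at distances d_i from a candidate center
dists = [0.0, 0.5, 1.0, 1.5]
raw = [lanczos_kernel(d) for d in dists]
weights = [w / sum(raw) for w in raw]
```

Because the kernel vanishes outside |x| < a, only nearby benchmark-scale centers contribute to each newly generated center.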
To facilitate description, the datasets Soybeanlarge, Dermatology, and BreastCancer are denoted Sol, Der, and BrC, respectively, in this section. Our program was implemented in Python and run on a computer with an Intel(R) Core(TM) i7-3770 4-core 3.4 GHz CPU, 8 GB RAM, and the Windows 10 x64 Home operating system. Upscaling. The NMI values of the UACMS algorithm and the six comparison algorithms on each dataset are shown in Figure 2. It can be seen from the figure that OF's NMI value is basically the smallest on each dataset, and the NMI value of UACMS is the highest, except on BrC and Titanic. The main reason is that the relationships between the element attributes of these two datasets are complex; it is not easy to reflect such complex relationships by adjusting the parameters that weight the intra-attribute and inter-attribute relationships of objects, which is a challenge faced by the algorithm. Of course, UACMS performs well on Der, renkou, and the other datasets. In general, the NMI value of UACMS is higher by 13% on average compared with the other algorithms. To facilitate comparison, the MSE values of the seven different algorithms on the dataset BrC were reduced to 40% of their original values, as shown in Figure 3. It can be seen from the figure that the proposed algorithm has the dominant MSE value on four datasets. In general, compared with the other algorithms, the MSE value of the proposed algorithm is reduced by 0.83 on average, which shows certain advantages of UACMS. It is worth noting that Figure 3 shows that the MSE value of the method OF on the Sol and renkou datasets is small, and its mean MSE over the 6 datasets is second only to UACMS. Since the MSE value reflects the tightness of objects within a cluster, the clusters generated by the method OF are relatively tight. Figure 4 shows the F-score values of UACMS and the six comparison algorithms.
Although CMS had the highest F-score on the dataset BrC and Eskin had the highest F-score on the dataset Sol, UACMS performed best on the other four datasets and had the highest mean F-score over all datasets, about 13% higher than the mean of all comparison algorithms. Conversely, k-modes performs poorly on all datasets. This reflects k-modes' dependence on randomly initialized centers and its lack of consideration of the interrelationships between the attributes of objects. Table 2 shows the runtime of the UACMS algorithm and the 6 comparison algorithms on the 6 datasets. UACMS has significant advantages on all datasets, and the average running time is improved by 11.32 minutes. The other six algorithms need more runtime as the size of the dataset increases, but the runtime of UACMS is basically unaffected by the size of the dataset; this is because UACMS does not process the original data but only the cluster centers of the benchmark-scale dataset, and the number of benchmark-scale cluster centers is far smaller than the raw dataset. As CMS measures the similarity between objects by considering both the intra-attribute similarity and the inter-attribute similarity, it requires a relatively large amount of computation, so the algorithm needs much more time, as shown in Table 2. The other five comparison algorithms are mature and efficient, especially k-modes with its short running time, but they share a common characteristic: as the data volume increases, the execution time increases accordingly. In particular, the CMS and Eskin methods in the experiment were taken from the literature and were not optimized, so their operating efficiency was low.
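Of the three evaluation indexes used in these experiments, NMI is available directly in scikit-learn; a minimal sketch of NMI together with an internal MSE index is given below. Taking MSE as the mean squared distance of points to their cluster centroid is an assumed formulation here, since clustering papers vary in how they define it:

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def mse_index(X, labels):
    """Mean squared distance of each point to its cluster centroid:
    lower values indicate tighter clusters."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    total = 0.0
    for lab in np.unique(labels):
        members = X[labels == lab]
        centroid = members.mean(axis=0)
        total += ((members - centroid) ** 2).sum()
    return total / len(X)

true_labels = [0, 0, 1, 1]
pred_labels = [1, 1, 0, 0]  # same partition, merely relabeled
nmi = normalized_mutual_info_score(true_labels, pred_labels)
X = [[0, 0], [0, 2], [10, 0], [10, 2]]
mse = mse_index(X, pred_labels)
```

NMI is invariant to relabeling of the clusters, which is why identical partitions under different label names still score 1, while MSE ignores the ground truth entirely and only measures within-cluster compactness.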
In conclusion, the experiments in this section verify that the proposed algorithm (UACMS) is superior to the other six algorithms in the clustering result indexes (NMI, MSE, and F-score) on most datasets. In addition, the biggest advantage of UACMS is that its runtime is significantly shorter than that of the comparison algorithms, and it does not change much with increasing data volume. This is because UACMS deals with the knowledge on the benchmark-scale dataset rather than the original data. As a result, UACMS is available and efficient. Downscaling. Figure 5 shows the NMI values of DSAL and the 6 comparison algorithms on the 6 datasets. Except for the dataset BrC, DSAL has the highest NMI value on the other five datasets, and the mean NMI value of DSAL over all datasets is about 19% higher than that of the six comparison algorithms. In contrast, the k-modes algorithm performs poorly in the experiment because, on the one hand, this method is built on the assumption that the attributes of the objects are independent, while the attributes of the objects in the experimental datasets are dependent; on the other hand, the k-modes algorithm randomly selects the cluster centers during execution, which makes the clustering results random. As the DSAL algorithm takes into account the interactions between different attributes, its clustering results have obvious advantages. To facilitate comparison, the MSE values of the six different algorithms on the dataset BrC were reduced to 40% of their original values, and the final MSE values of all algorithms are shown in Figure 6. It can be seen from Figure 6 that the MSE value of the DSAL algorithm is slightly unsatisfactory on the two datasets BrC and Titanic and is dominant on the three remaining datasets except for renkou. However, the MSE values of the two algorithms HM and OF are slightly lower on the dataset renkou.
The reason may be that there are fewer distinct attribute values within one attribute, which affects the performance of the relevant algorithms. Overall, the MSE value of DSAL on 3 of the 6 datasets was smaller than that of the comparison algorithms, with an average decrease of about 0.03. This shows that the compactness of the clusters formed by the DSAL algorithm has a slight advantage over the other comparison algorithms. The F-score values of DSAL and the comparison algorithms are shown in Figure 7. It can be seen from Figure 7 that DSAL has the highest F-score values on all five datasets other than BrC, especially on the dataset renkou, where its F-score is about 46% higher than that of the other methods. The average F-score of the algorithm OF is the lowest. The reason for the poor performance of the DSAL algorithm on the dataset BrC may be that the relationships between the attributes of the data objects are complex, and the designed function cannot fully reflect these relationships. However, overall, DSAL's F-score improved by about 16% over the comparison algorithms. The F-score takes both precision and recall into consideration: the larger the F-score, the better the clustering effect. Therefore, this algorithm has significant advantages on the real dataset renkou. The runtimes of DSAL and the 6 comparison algorithms are shown in Table 3. Obviously, the CMS algorithm has the longest runtime on all datasets and needs further optimization. The DSAL algorithm is based on CMS, but its runtime is much shorter than that of the other six comparison algorithms, basically by one order of magnitude. This is mainly because DSAL depends on the number of cluster centers of the benchmark-scale dataset, not the amount of original data.
Therefore, its running time is affected by the clustering results of the benchmark-scale dataset, while the other six algorithms directly process the original data (after preprocessing), and their running time naturally increases gradually with the data volume on the whole. On the dataset Titanic, the DSAL algorithm has a less obvious advantage over k-modes, using only 0.27 seconds less, because it takes more time to solve the weights of the cluster centers using the Lanczos kernel function on these data. In particular, the CMS and Eskin methods in the experiment were taken from the literature without any optimization, so their running time was relatively long. Since the running time is affected by the computer hardware configuration and the level of code optimization, and since the running time in the comparison experiment was measured in a specific environment, it is for reference only. This section verifies that the proposed algorithm (DSAL) has obvious advantages in the external indexes (NMI and F-score) of the clustering results on most datasets. Compared with the other algorithms, DSAL's internal evaluation index MSE has a slight advantage. In addition, the biggest advantage of DSAL is that its runtime is significantly shorter than that of the other algorithms, and it does not change much with increasing data volume. This is because DSAL deals with the knowledge on the benchmark-scale dataset rather than the original data. Therefore, DSAL is available and efficient. Conclusions In this paper, a multiscale clustering approach based on coupled metric similarity is proposed, multiscale data mining is carried out for multiscale nominal datasets that are non-independent and identically distributed, and two scale-conversion methods based on the benchmark-scale clustering results are proposed: the upscaling method based on a single chain (UACMS) and the downscaling method based on the Lanczos kernel (DSAL).
The experimental results show that the proposed framework is efficient and effective on datasets whose attributes have obvious multiscale properties. In future work, we will mainly focus on two aspects: (1) applying multiscale theory to frequent itemset mining; and (2) the practical application of our study is worthy of attention, and we will consider applying multiscale clustering to collision detection and rule detection based on our previous research. Data Availability The data underlying the results presented in the study are included within the manuscript. Conflicts of Interest The authors declare that they have no conflicts of interest.
5,425
2021-10-11T00:00:00.000
[ "Computer Science", "Mathematics" ]
Effects of the SLC38A2–mTOR Pathway Involved in Regulating the Different Compositions of Dietary Essential Amino Acids–Lysine and Methionine on Growth and Muscle Quality in Rabbits Simple Summary China is not only a huge meat rabbit consumer but also the largest meat rabbit producer in the world, contributing a large amount of rabbit meat products to the domestic and foreign markets every year. Therefore, it is important for the domestic and international rabbit meat market to improve rabbit breeding production efficiency and rabbit meat quality based on the good use of domestic feed resources in China. It is well known that dietary amino acid nutrition is of great importance to animal growth. Lysine and methionine are limited in the common domestic rabbit feed sources in China, but they play an important role in rabbit growth. Moreover, different lysine and methionine compositions of the diets respond differently to rabbit growth. Consequently, the search for a better composition of dietary lysine and methionine is the main objective of this study. Abstract In recent years, ensuring food security has been an important challenge for the world. It is important to make good use of China’s domestic local feed resources to provide safe, stable, efficient, and high-quality rabbit meat products for China and the world. Lysine and methionine are the two most limiting essential amino acids in the rabbit diet. However, little is known about the rational composition of lysine and methionine in rabbit diets and the mechanisms that affect growth and development. Accordingly, in this study, we sought to address this knowledge gap by examining the effects of different compositions of lysine and methionine in rabbit diets. Subsequently, the growth status, nitrogen metabolism, blood biochemical indexes, muscle development, muscle quality, and the growth of satellite cells were evaluated in the animals. 
The results showed that diets containing 0.80% Lys and 0.40% Met improved average daily weight gain, feed conversion, nitrogen use efficiency, and muscle quality in the rabbits (p < 0.05). Additionally, this composition altered the amino acid transport potential in muscle by upregulating the expression of the SLC7A10 gene (p < 0.05). Meanwhile, the cell viability and the rates of division and migration of SCs in the 0.80% Lys/0.40% Met composition group were increased (p < 0.05). SLC38A2 and P-mTOR protein expression was upregulated in the 0.80% Lys/0.40% Met composition group (p < 0.05). In conclusion, 0.80% Lys/0.40% Met was the most suitable lysine and methionine composition of all the tested diets. SLC38A2 acted as an amino acid sensor upstream of mTOR and was involved in the 0.80% Lys/0.40% Met regulation of muscle growth and development, thus implicating the mTOR signaling pathway in these processes. Introduction Amino acids are of great physiological importance, serving as the building blocks for proteins as well as substrates for the synthesis of low-molecular-weight substances [1]. These biomolecules have traditionally been classified as nutritionally "essential" or "nonessential" based on the growth or nitrogen balance of animals [2]. The carbon skeleton of essential amino acids cannot be synthesized de novo by animal cells, and these amino acids must be obtained from the diet to sustain life. In contrast, nutritionally non-essential amino acids can be synthesized de novo in sufficient amounts within cells and are normally considered dispensable in the diet [3]. However, the nitrogen balance is not a sensitive indicator of optimal dietary amino acid requirements [4]. Most amino acids also function as signaling molecules in the regulation of animal metabolism, and thus their levels must be fine-tuned to meet a variety of important needs, such as energy balance, protein synthesis, and cell and tissue development [5].
Lysine (Lys) is the most limiting essential amino acid in mammalian grain diets and is believed to promote the growth of muscle fibers in vertebrate skeletal muscle through the stimulation of protein synthesis [6,7]. Lys deficiency can result in significant physical growth restriction and weight loss [8]. The important role of Lys in promoting skeletal muscle growth has been demonstrated in animal husbandry and is attributable to increased protein synthesis [9]. Methionine (Met) is an essential amino acid in mammals. In addition to being a component of proteins, Met also plays a role in many important metabolic and non-metabolic pathways, including epigenetics (S-adenosylmethionine synthesis), nuclear activity (polyamine production), detoxification (as a constituent of glutathione), and the methylation of cell membrane phospholipids (regulation of cell metabolism) [10]. Moreover, the Met cycle is closely related to folic acid metabolism, thereby indirectly regulating nucleotide biosynthesis [11]. The supplementation of limiting essential amino acids for protein synthesis has long been thought to increase weight gain and muscle mass via an unknown molecular pathway [12]. In addition to being substrates for protein synthesis, amino acids are also nutritional signals and regulators of protein metabolism, for example, by regulating the functions of translation initiation factors and elongation factors [13]. Neutral aliphatic amino acids, including Met and branched-chain amino acids, reportedly stimulate the phosphorylation of ribosomal protein S6 kinase, a downstream target of the mammalian target of rapamycin (mTOR) signaling pathway, thus promoting protein synthesis [14]. Meanwhile, Lys regulates skeletal muscle growth and inhibits myotube protein degradation by activating the mTOR pathway in skeletal muscle [15]. The mTOR pathway has been shown to play an important role in the activity of satellite cells (SCs), especially their division and proliferation [16]. 
SCs are skeletal muscle stem cells important for the maintenance of the morphological and functional stability of muscle fibers. Their capacity for self-renewal and proliferation not only helps maintain the muscle stem cell pool but also represents a source of abundant muscle-derived cells. The proliferation, differentiation, and fusion of SCs lead to the formation of new muscle fibers and the reconstruction of functional contractile devices [17]. Lys and Met are the two most limiting essential amino acids in the rabbit diet. However, little is known about the effects of dietary lysine and methionine composition on rabbit growth and development. This study was undertaken to evaluate the effects of different dietary Lys and Met compositions on muscle growth and development in rabbits. For this, the growth status (feed intake, body weight, and survival rate), nitrogen metabolism, blood biochemistry, and muscle quality were evaluated. The effects of Lys and Met composition on the expression levels of relevant target genes and proteins in tissues and SCs were also assessed. Our findings provide not only novel insights into the formulation of rabbit diets for the improvement of meat quality and the exploration of the underlying mechanisms but also an important reference for future dietary amino acid utilization in the diets of rabbits and other animals. Animal Housing and Diets The rabbit house was naturally ventilated and illuminated, with a temperature of approximately 28 °C at noon and 20 °C at night (May in Tai'an, China). Five rabbits were kept inside each cage (200 cm × 200 cm × 100 cm) and shared feed and water (free feeding and watering). Cages had open tops and food-grade rigid plastic floors. The basic feed was formulated according to the NRC (National Research Council) (1977) Nutritional Requirement of Rabbits guidelines and Nutrition of the Rabbit [18].
The test diets containing different compositions of lysine and methionine were formed by supplementing the lysine and methionine already present in the basal diet with different added levels of the two amino acids. The composition and nutrient levels of the basal diet are shown in Supplementary Table S1, while the amounts of Lys and Met added to the experimental diets are shown in Supplementary Table S2. Experiment 1: 240 male Hyla rabbits (35 days old) with similar body weights (1100 ± 10 g) were divided into eight groups (6 replicates per group, with 5 rabbits per replicate). Eight Lys and Met compositions (0.75% Lys/0.10% Met, 0.75% Lys/0.25% Met, 0.75% Lys/0.50% Met, 0.75% Lys/0.75% Met, 0.60% Lys/0.40% Met, 0.80% Lys/0.40% Met, 1.00% Lys/0.40% Met, and 1.20% Lys/0.40% Met) were selected for testing. Based on the feed intake, daily weight gain, and health status of the rabbits after 10 days of the test (Figure S1), the 0.75% Lys/0.25% Met, 0.75% Lys/0.50% Met, 0.80% Lys/0.40% Met, and 1.00% Lys/0.40% Met composition levels were selected for use in Experiment 2. Experiment 2: A total of 120 male Hyla rabbits (35 days old) with similar body weights (1100 ± 10 g) were divided into four groups (6 replicates per group, with 5 rabbits per replicate) and fed experimental diets containing the above-mentioned Lys and Met compositions. At the beginning of the experiment, all the rabbits were weighed, and the feed intake of each group was determined once every 5 days. After 40 days (75 days old), the weight, feed intake, and health status of the rabbits were assessed. At the end of the experiment, six rabbits per group were selected for blood collection (one from each replicate, choosing the rabbit whose weight was closest to the replicate average; the same below). Blood was collected with a syringe from the ear veins of the animals and transferred to vacuum blood collection tubes containing an anticoagulant.
After centrifugation at 3000 rpm for 10 min, the supernatant was collected and stored at −80 °C. The 24 rabbits were then euthanized by cervical dislocation, and samples (liver, kidney, muscle) were collected, weighed, snap-frozen in liquid nitrogen, and stored at −80 °C. Detection of Nitrogen Metabolism and Feed Conversion Ratio According to previous studies by Chen et al. [19], during the last 3 days of the experiment, six rabbits in each group were randomly selected for the once-daily collection of feces and urine. After weighing, the collected samples were fixed in 10% sulfuric acid and stored at −80 °C for subsequent testing. The nitrogen content of the samples was determined using a Kjeldahl Nitrogen Analyzer (FOSS, Hilleroed, Denmark). Nitrogen-related parameters were calculated using the following formulae: Digestible nitrogen (g/day) = ingested nitrogen − fecal nitrogen; Deposition of nitrogen (g/day) = ingested nitrogen − fecal nitrogen − urinary nitrogen; Apparent digestibility of nitrogen (%) = digestible nitrogen/ingested nitrogen × 100%; Nitrogen utilization rate (%) = nitrogen deposition/ingested nitrogen × 100%; Nitrogen biological titer (%) = nitrogen deposition/digestible nitrogen × 100%; Feed conversion ratio (%) = average daily gain/average daily feed intake × 100%. Quantitative Real-Time PCR (RT-qPCR) Total RNA extraction was performed as previously described [20]. The quality and quantity of the extracted RNA were determined using agarose gel electrophoresis and a biophotometer (Eppendorf, Hamburg, Germany), respectively. Primers targeting exon-intron junctions were designed using Primer 6.0 software (Primer-E Ltd., Plymouth, UK). The primer sequences are shown in Table S3. RT-qPCR was performed according to the method described in the Accurate Biology SYBR® Green Premix Pro Taq HS qPCR Kit (AG11718, Accurate Biology, Hunan, China).
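The nitrogen-balance formulae listed in the "Detection of Nitrogen Metabolism" section above translate directly into a small calculation; a minimal Python sketch (function names and example values are illustrative, not from the study):

```python
# Sketch of the nitrogen-balance formulae from the text.
# All names and the example values are hypothetical.

def nitrogen_balance(ingested_n, fecal_n, urinary_n):
    """Nitrogen parameters as defined in the text; inputs in g/day."""
    digestible_n = ingested_n - fecal_n
    deposited_n = ingested_n - fecal_n - urinary_n
    return {
        "digestible_n_g_day": digestible_n,
        "deposited_n_g_day": deposited_n,
        "apparent_digestibility_pct": digestible_n / ingested_n * 100,
        "utilization_rate_pct": deposited_n / ingested_n * 100,
        "biological_value_pct": deposited_n / digestible_n * 100,
    }

def feed_conversion_ratio_pct(avg_daily_gain, avg_daily_feed_intake):
    """Feed conversion ratio (%) = ADG / ADFI x 100%."""
    return avg_daily_gain / avg_daily_feed_intake * 100

# Hypothetical daily values for one rabbit:
params = nitrogen_balance(ingested_n=3.0, fecal_n=0.6, urinary_n=1.2)
```

With these hypothetical inputs, apparent digestibility is 80% and the biological value is 50%, illustrating how the ratios relate.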
Relative gene expression levels were calculated using the 2^−ΔΔCT method after normalization to the levels of the glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and β-actin genes. Based on the cycle threshold (CT) values, GAPDH and β-actin mRNA expression was stable across treatments in this study (p > 0.1). SLC38A2 Knockout To determine the puromycin screening concentration, cells were tested for sensitivity to puromycin at concentrations of 0, 0.2, 0.5, 1, 1.5, 2, 3, 4, and 5 µg/mL. The lowest concentration at which all cells died within two days was taken as the puromycin screening concentration for that cell line. SCs were seeded into 6-well plates at a density chosen so that they reached approximately 50% confluence the next day. After incubation at 37 °C overnight, 3 µg/mL of puromycin was added to the culture medium. The cells were infected with Beyotime's SLC38A2 knockout lentivirus product (L23166, Beyotime, Shanghai, China). After two days, the virus-containing medium was aspirated, and medium containing puromycin was added. After two further days of incubation at 37 °C, live cells were collected and assayed for SLC38A2 protein expression. Western Blotting Total protein was extracted from skeletal muscle SCs using RIPA (radioimmunoprecipitation assay) lysis buffer containing the protease inhibitor PMSF. Protein concentration was measured using the BCA (bicinchoninic acid) Protein Assay Kit (Thermo Fisher, Waltham, MA, USA) after centrifugation at 12,000 rpm for 15 min at 4 °C. A total of 10 µg of protein was separated by 8-10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), transferred to polyvinylidene fluoride membranes (Millipore, Darmstadt, Germany), blocked for 1 h, and then incubated with primary antibody overnight at 4 °C. After four 10 min washes, the membrane was incubated with the secondary antibody for 1 h and then washed again four times, 10 min each wash.
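The 2^−ΔΔCT calculation described at the start of this section, with the reference CT taken as the mean of the two housekeeping genes (GAPDH and β-actin), can be sketched as follows; the function name and all CT values are hypothetical illustrations, not the study's data:

```python
# Sketch of the 2^(-ΔΔCT) relative-expression calculation, normalizing
# the target gene to the mean CT of several reference genes.

def relative_expression(ct_target, ct_refs, ct_target_calib, ct_refs_calib):
    """Fold change of a target gene vs. a calibrator sample via 2^(-ΔΔCT)."""
    delta_ct = ct_target - sum(ct_refs) / len(ct_refs)            # sample ΔCT
    delta_ct_cal = ct_target_calib - sum(ct_refs_calib) / len(ct_refs_calib)
    return 2 ** -(delta_ct - delta_ct_cal)                        # fold change

# Hypothetical CTs: target gene in a treated sample vs. a control (calibrator):
fold = relative_expression(ct_target=24.0, ct_refs=[18.0, 18.5],
                           ct_target_calib=25.0, ct_refs_calib=[18.2, 18.4])
```

A fold change above 1 indicates higher relative expression in the sample than in the calibrator.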
Immunoreactivity was detected using an enhanced chemiluminescence (ECL) kit (P2300, NCM Biotech, Suzhou, China) and visualized using the Fluor Chem M system. ImageJ v2 software was used for quantitative analysis. Supplementary Materials and Methods The methods used for the detection of muscle quality, plasma biochemistry, SC isolation and culture, the cell migration assay, SC identification (Figure S2), immunofluorescence, the cell cycle and apoptosis assays, and the mTOR pathway activation and inhibition assays are described in the Supplementary Materials. Statistical Analysis Data were analyzed by a one-factor general linear model (GLM) using the SAS v9.2 software package (SAS Inst. Inc., Cary, NC, USA). Duncan's multiple range test was used to assess the significance of differences at p < 0.05. Data are expressed as means ± SEM. Means were considered significantly different when p < 0.05, and a tendency was noted when 0.05 ≤ p ≤ 0.10. Effects of Different Lys and Met Composition in Diets on the Growth and Body Metabolism of Rabbits As shown in Table 1, the average daily feed intake of rabbits was highest when the diet contained 0.75% Lys and 0.25% Met; the average daily feed intake was lowest when the diets contained 0.10% and 0.40% Met (p < 0.05). However, the average daily gain was highest with the 0.80% Lys/0.40% Met composition (p < 0.05). Among the four experimental groups, the highest feed conversion was observed in the 0.80% Lys/0.40% Met group, and the lowest in the 0.75% Lys/0.25% Met group (p < 0.05). No significant differences in liver weight were detected among the four diets (p > 0.05). However, the 0.75% Lys/0.25% Met composition group had the lowest kidney weight, differing significantly from the 0.75% Lys/0.50% Met composition group (p < 0.05). Table 1. Effects of different lysine and methionine composition in the diet on the production performance of rabbits (n = 30).
Abbreviations: ADFI = average daily feed intake; ADG = average daily gain. Means without a common lowercase superscript letter in a row differ at p < 0.05. Examining the effect of the different lysine and methionine compositions on nitrogen metabolism in rabbits, we found that none of the four test diets exerted significant effects on nitrogen intake, digestible nitrogen, or nitrogen retention (p > 0.05, Table 2). Among the four test groups, the 0.80% Lys/0.40% Met composition group had the lowest fecal and urinary nitrogen content (p < 0.05, Table 2). Nitrogen apparent digestibility, nitrogen utilization, and nitrogen biological value were significantly higher in rabbits provided with the 0.80% Lys/0.40% Met diet than in those provided with the 0.75% Lys/0.25% Met composition (p < 0.05, Table 2). Meanwhile, blood biochemical tests showed that plasma uric acid and urea were lowest in the 0.80% Lys/0.40% Met composition group, but differed significantly only from the 1.00% Lys/0.40% Met composition group (p < 0.05, Table 3); however, albumin, glucose, total cholesterol, triglyceride, and total protein levels did not differ significantly among the four test groups (p > 0.05, Table 3). Effects of Diets with Different Lys and Met Composition on Muscle Traits and Gene Expression in Rabbits In the four experimental groups, we examined changes in muscle fiber types by immunofluorescence and found that MYH1 protein expression was highest in the 0.75% Lys/0.25% Met and 0.75% Lys/0.50% Met composition groups, followed by the 0.80% Lys/0.40% Met composition group, and lowest in the 1.00% Lys/0.40% Met composition group (p < 0.05, Figure 1A,B). However, MYH7 protein expression was significantly downregulated in the 0.75% Lys/0.25% Met and 0.75% Lys/0.50% Met composition groups compared with that in the other two groups (p < 0.05, Figure 1A,C).
Through testing of other muscle quality indicators, we found that muscle shear force was greatest in the 0.75% Lys/0.25% Met composition group, followed by the 0.80% Lys/0.40% Met composition group; the smallest muscle shear force was seen in the group provided with the 1.00% Lys/0.40% Met composition (p < 0.05, Table 4). The greatest drip loss was observed in the group administered the 0.75% Lys/0.50% Met composition, with the lowest being recorded for the 1.00% Lys/0.40% Met composition (p < 0.05, Table 4). At 45 min post-euthanasia, the 0.80% Lys/0.40% Met composition group exhibited the lowest muscle pH values and the 0.75% Lys/0.25% Met composition group the highest (p < 0.05, Table 4). However, 24 h after euthanasia, muscle pH values were not significantly different among the four groups of rabbits (p > 0.05, Table 4). Similarly, no significant changes in flesh color (a*, b*, L*) were observed among the groups (p > 0.05, Table 4). Further, RT-qPCR assays of target genes related to muscle tissue development showed that the SLC7A10 gene expression level was highest in the 0.80% Lys/0.40% Met composition group, with significant upregulation relative to the 0.75% Lys/0.25% Met and 1.00% Lys/0.40% Met composition groups (p < 0.05, Figure 1G). Similarly, SLC38A2 gene expression was also increased in the group administered the 0.80% Lys/0.40% Met composition relative to that in the other three groups, reaching significance compared with the 0.75% Lys/0.25% Met composition group (p < 0.05, Figure 1H). The expression of the Myf5 gene was significantly higher in the 0.80% Lys/0.40% Met composition group than in the other three groups (p < 0.05, Figure 1I). Meanwhile, MYOG gene expression was also highest in the 0.80% Lys/0.40% Met composition group, and differed significantly from that seen in the group administered the 0.75% Lys/0.25% Met diet (p < 0.05, Figure 1K).
However, the transcript levels of SLC7A2, SLC7A5, SLC7A8, MYOD, and MSTN did not differ significantly among the four experimental groups (p > 0.05, Figure 1D-F,J,L). Effects of Different Lysine and Methionine Composition in Diets on the Growth of Rabbit SCs To further determine the mechanism by which the different lysine and methionine compositions of the diets affect muscle growth and development, we conducted an in vitro experiment with rabbit muscle satellite cells. As shown in Figure 2, there was no significant difference in the SC migration rate among the four experimental groups from 0 to 8 h (p > 0.05, Figure 2A,B). From 8 to 16 h, the 0.80% Lys/0.40% Met composition group exhibited the highest cell migration rate, reaching significance compared with the 0.75% Lys/0.25% Met composition group (p < 0.05, Figure 2A,C). Additionally, we found that the 0.80% Lys/0.40% Met composition group displayed the lowest proportion of apoptotic cells among the four groups, with a significant difference being noted relative to the 0.75% Lys/0.25% Met composition group (p < 0.05, Figure 2D-H). Similarly, the 0.80% Lys/0.40% Met composition group exhibited the smallest percentage of cells in the G2 phase of the cell cycle, differing significantly from the group treated with the 0.75% Lys/0.25% Met composition level (p < 0.05, Figure 2I,J). The numbers of cells in the G1 and S phases were not significantly different among the four test groups (p > 0.05, Figure 2I,J). Effects of Different Lys and Met Composition on the mTOR Signaling Pathway in Muscle Tissue and SCs Examining the mTOR pathway in SCs, we found no significant difference in mTOR protein expression among the four groups (p > 0.05, Figure 3A). The level of mTOR phosphorylation (P-mTOR) was significantly higher in the 0.80% Lys/0.40% Met composition group than in the other three groups (p < 0.05, Figure 3B).
Similarly, the 0.80% Lys/0.40% Met composition group displayed the largest P-mTOR/mTOR ratio, which differed significantly from that of the group receiving the 1.00% Lys/0.40% Met composition level (p < 0.05, Figure 3C). Figure 3. Effects of lysine and methionine composition in the diet on the mTOR signaling pathway in rabbit muscle. (A-C) Detection of the mTOR signaling pathway in rabbit muscle. Relative mTOR protein expression in muscle (A), relative levels of phosphorylated mTOR protein (P-mTOR) in muscle (B), and the P-mTOR/mTOR ratio (C). Data are expressed as means ± SEM (n = 3). Comparisons between groups that contain only different lowercase letters indicate significant differences (p < 0.05). Western blots for each set of reference protein and target protein were from one blot, and the black line was the cropped edge of the blot. Further, mTOR protein expression in cells treated with −/rapamycin was significantly lower than that in cells treated with −/− or MHY1485/− (p < 0.05, Figure 4A). In the −/− treated cells, mTOR protein expression was significantly higher in the 0.80% Lys/0.40% Met composition group than in the groups containing 0.75% Lys/0.25% Met and 1.00% Lys/0.40% Met (p < 0.05, Figure 4B). No significant difference in mTOR protein expression was observed among the four groups of cells treated with −/rapamycin or MHY1485/− (p > 0.05, Figure 4C,D). The levels of P-mTOR were significantly downregulated in cells treated with −/rapamycin compared with those in cells treated with −/− or MHY1485/− (p < 0.05, Figure 4E). In cells treated with −/−, P-mTOR levels were significantly higher in the 0.80% Lys/0.40% Met composition group than in the group administered 1.00% Lys and 0.40% Met (p < 0.05, Figure 4F). No significant difference in P-mTOR protein expression was observed among the four groups of cells treated with −/rapamycin or MHY1485/− (p > 0.05, Figure 4G,H). We further found that the P-mTOR/mTOR ratio was significantly smaller in −/rapamycin-treated cells than in cells treated with −/− or MHY1485/− (p < 0.05, Figure 4I). There was no significant difference in the P-mTOR/mTOR ratio among the respective four test groups treated with −/−, −/rapamycin, or MHY1485/− (p > 0.05, Figure 4J-L). Viability was significantly reduced in cells treated with −/rapamycin compared with that in cells treated with −/− or MHY1485/− (p < 0.05, Figure 4M). Among the four groups treated with −/−, cell viability was significantly higher in the 0.80% Lys/0.40% Met composition group than in the other three groups (p < 0.05, Figure 4N). No significant difference in cell viability was recorded among the respective four experimental groups treated with −/rapamycin or MHY1485/− (p > 0.05, Figure 4O,P). Figure 4. Comparisons between groups that contain only different lowercase letters indicate significant differences (p < 0.05). Western blots for each set of reference protein and target protein were from one blot, and the black line was the cropped edge of the blot. Effect of SLC38A2 on the mTOR Signaling Pathway To verify the upstream signaling role of SLC38A2, we performed SLC38A2 knockout assays on SCs and found that SLC38A2 protein expression was significantly reduced in the knockout group (p < 0.05, Figure 5A). Meanwhile, P-mTOR protein expression was significantly downregulated in the SLC38A2 knockout group compared with the non-knockout group (p < 0.05, Figure 5D). In SCs without SLC38A2 knockout, both SLC38A2 and P-mTOR protein expression were highest in the 0.80% Lys/0.40% Met composition group (p < 0.05, Figure 5B,E). However, in SCs after SLC38A2 knockout, SLC38A2 and P-mTOR protein expression were not significantly different in any of the four experimental groups (p > 0.05, Figure 5C,F). Expression of P-mTOR was detected in the four experimental groups without (E) or after knockout (F) of SLC38A2, respectively. Data are expressed as means ± SEM (n = 3). Comparisons between groups that contain only different lowercase letters indicate significant differences (p < 0.05). Western blots for each set of reference protein and target protein were from one blot, and the black line was the cropped edge of the blot. Discussion Lys and Met are essential amino acids for the nutritional needs of monogastric animals and play many important metabolic roles.
Appropriate Lys and Met intake is important to ensure healthy growth, development, and reproduction [21,22]. Early studies in pigs found that diets containing 1.8% Lys and 0.50% Met could greatly improve performance in terms of average daily gain and average daily feed intake [23]. Limited increases in Lys and Met concentrations in broiler diets can improve feed conversion, body weight, carcass yield, and breast muscle production [24]. In rabbits, meanwhile, dietary Lys and Met supplementation was reported not to be effective at lowering the incidence of enteritis, but it led to a significant increase in the weaning weight of young animals [25]. Similarly, in this study, we found that diets containing the 0.80% Lys/0.40% Met composition promoted the greatest average daily gain in rabbits and also significantly improved feed conversion. During and after nutritional intake, amino acid homeostasis is primarily controlled via autoregulatory processes. Amino acid transporters play a crucial role in the distribution and circulation of amino acids in cells and organs [26]. In our study, we found that the expression of genes encoding the amino acid transporter proteins SLC7A10 (ASC-1) and SLC38A2 was significantly upregulated in the muscle of rabbits in the 0.80% Lys/0.40% Met composition group. SLC7A10 is mainly involved in mediating the Na+-independent transport of glycine, L-alanine, L-cysteine, and other amino acids [27]. SLC38A2 is a system A transporter that accumulates small neutral amino acids directly or indirectly through the activation of the ASCT1 and LAT1/2 transporter proteins [28]. This indicated that the different compositions of lysine and methionine in the diet could affect the absorption of other amino acids. Our experimental results also showed that the 0.80% Lys/0.40% Met diet reduced nitrogen excretion and increased nitrogen utilization efficiency in rabbits.
Plasma urea, a product of hepatic nitrogen metabolism, is negatively correlated with protein utilization. Amino acid balance is essential for improving protein utilization and reducing plasma urea levels [29]. In this study, we found that rabbits fed diets containing 0.80% Lys and 0.40% Met had the lowest plasma urea and uric acid contents and the highest nitrogen utilization efficiency. This suggests that a reasonable Lys and Met composition (0.80% Lys and 0.40% Met) in the diet can promote amino acid balance, in agreement with previous reports [30][31][32]. The supplementation of limiting amino acids in the diet, either alone or in combination with other nutrients, is a feasible approach for improving animal production performance [33]. Increasing the content of limiting amino acids in the diet can reportedly improve chicken breast tenderness and carcass yield, and, to some extent, also muscle quality [34]. In addition, Lys supplementation was shown to increase sarcoplasmic protein concentrations, an effect that was positively correlated with muscle tenderness [35]. The characteristics of muscle fibers are important determinants of meat quality and are closely related to traits such as color, tenderness, pH, and water retention properties [36]. Based on contractile and metabolic properties, muscle fibers are usually classified as oxidative/slow (type I) or glycolytic/fast (type II) [37]. Type I muscle fibers have been positively associated with good meat quality, while a greater percentage of type II fibers has been negatively correlated with a higher incidence of pale, soft, and exudative (PSE) meat [38]. In general, the initial pH of rabbit meat at 45 min post-mortem is approximately between 6.1 and 6.9, and the shear force of rabbit muscle is approximately 11.45 N [39,40]. After 24 h, the pH of rabbit meat varies between approximately 5.66 and 5.80 [40].
Consistent with previous reports, we found that as the Lys content in the diet increased, the proportion of type I muscle fibers in skeletal muscle increased, muscle tenderness increased, and muscle water retention capacity was enhanced, thereby improving muscle quality to some extent. Growth in the number and size of muscle fibers is the main process underlying muscle growth [41]. In developing muscle, SCs undergo extensive proliferation and most of them fuse with muscle fibers; in contrast, SCs may undergo apoptosis during muscle atrophy [42]. During periods of high developmental growth, new fibers are formed on the surface of existing fibers through the fusion of satellite cells into multinucleated myotubes [43]. In the present study, the 0.80% Lys/0.40% Met group had the lowest apoptosis rate and increased cell division and migration ability. This indicated that the presence of 0.80% Lys and 0.40% Met in the diet could, to some extent, prevent muscle atrophy, promote satellite cell division and proliferation, and maintain normal muscle condition. Muscle hyperplasia and hypertrophy involve populations of myogenic precursor cells, also called satellite cells, which are regulated by a number of both positive and negative factors. MYOD, Myf5, and MYOG are involved in the positive regulation of muscle development, while MSTN has a negative effect on muscle development [44]. Further, our combined results showed that Myf5 and MYOG are involved in the positive regulation by which the 0.80% Lys/0.40% Met composition promotes muscle development. It has also been reported that SLC38A2 is extensively regulated by cellular stress, nutritional availability, and hormonal signaling, and acts as an amino acid sensor upstream of mTOR in the regulation of processes such as protein synthesis and cell proliferation [45].
Interestingly, we have now also found a direct involvement of SLC38A2 in mediating the effect of 0.80% Lys and 0.40% Met on the mTOR signaling pathway. Additionally, the combined results showed that 0.80% Lys and 0.40% Met increased SC viability via the SLC38A2-mTOR signaling pathway, which in turn promoted the growth and development of muscle tissue. Conclusions Among all test diets, 0.80% Lys/0.40% Met was the most suitable lysine and methionine composition. Furthermore, 0.80% Lys/0.40% Met promoted the absorption and utilization of dietary nitrogen by rabbits and improved their growth status and production performance, especially promoting muscle growth, development, and muscle quality. SLC38A2 acted as an amino acid sensor upstream of mTOR and was involved in the 0.80% Lys/0.40% Met regulation of muscle growth and development, thus implicating the mTOR signaling pathway in these processes. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ani12233406/s1, Figure S1: Effects of different lysine and methionine compositions of diets on the growth of rabbits in Exp. 1; Figure S2: Identification of isolated primary muscle satellite cells; Table S1: Composition and nutrient levels of the basal diet (air-dry basis). Institutional Review Board Statement: All study procedures were approved by the Shandong Agriculture University Animal Care and Use Committee and were in accordance with the Guidelines for Experimental Animals established by the Ministry of Science and Technology (Beijing, China). Data Availability Statement: None of the data were deposited in an official repository. The data presented in this study are available on request from the corresponding author.
Development and Implementation of a Low-Cost µC-Based Brushless DC Motor Sensorless Controller: A Practical Analysis of Hardware and Software Aspects: The ongoing technological advancement of brushless DC motors (BLDCMs) has found a wide range of applications. For instance, ground-based electric vehicles, aerial drones and underwater scooters have already adopted high-performance BLDCMs. Nevertheless, their adoption demands control systems to monitor torque, speed and other performance characteristics. A precise design structure and the particular motor's functional characteristics are essential for the suitable configuration and implementation of an appropriate controller to suit a wide range of applications. Techniques which do not use Hall sensors should then be used. This paper deals with the analysis of hardware and software aspects during the development of such a microcontroller-based, low-cost speed controller for motors up to 500 W, along with its practical implementation. The sensorless method employed is based on the zero-crossing point (ZCP) detection of the back-electromotive force (back-EMF) differences, as the ZCPs of these quantities match the time points at which the commutation sequence changes. Additionally, the study presents hardware and software details through calculations, figures, flowcharts and code, providing insight into the practical issues that may arise in such a low-cost prototype. Finally, results obtained by experiments validate the presented hardware/software architecture of the controller. Introduction The availability and the low cost of high-energy-density permanent magnets made of rare-earth materials have resulted, during the last decades, in the development of electrical machines which rely on magnets for the creation of the excitation field instead of excitation windings that carry currents.
Thus, permanent magnet synchronous motors (PMSMs) were adopted, along with their subclass, namely the brushless direct current motors (BLDCMs). Additionally, in recent years there has been a rapid increase in the demand for electrification in modern drive-trains. Electric motors have been utilized so far in almost all forms of vehicle applications, either in purely electrical or in hybrid setups. Until recently, conventional direct current motors (CDCMs), e.g., permanent magnet DC motors, were the only type of motor used, mostly in car applications. On the other hand, BLDCMs seem to have become the dominant type of motor nowadays in any type of electrified vehicle. This can easily be justified, since they exhibit higher efficiency levels, higher power density, a higher torque/weight ratio and inherently better operational flexibility compared to CDCMs. The contributions of this work include the presentation of the interconnections between the different stages of the controller and (d) the proposal of a simple but effective control strategy accompanied by the corresponding software source code. It is to be noted that similar efforts providing, as a single document, an extended degree of technical completeness as the one presented here are not found in the scientific literature. In the above context, the paper presents a straightforward, practical implementation approach of a low-cost BLDCM sensorless controller system. Having simplicity and low cost as priorities, several issues regarding the hardware selection are discussed and the selection of discrete components is commented on. Detailed schematics of the proposed architecture clarify the overall scheme. Moreover, the corresponding control strategy adopted is given thoroughly, based on simplicity and portability. Additionally, the relevant microcontroller source code is presented. Based on suitable components and microcontroller architectures, another potential feature is the applicability of the proposed controller in harsh environments.
The operating environments where this scheme can be applied may cover a wide range of conditions, such as ambient temperature. The datasheet of the microcontroller selected here reveals that these limits can range from −40 °C to +85 °C/+125 °C for industrial/extended high-temperature environments (storage temperature may vary from −65 °C to +150 °C), which promises satisfactory operation in harsh environments. Finally, the prototype's experimental results reveal the effectiveness and validity of the procedure followed. The paper is organized as follows: the problem statement and the corresponding theory are given in Section 2. The controller design from a hardware perspective is analyzed in Section 3. The control philosophy is presented in Section 4, where software characteristics are also highlighted. Indicative experimental results are shown in Section 5. Finally, the work concludes in Section 6. Brief Theoretical Background of BLDCM Operation BLDCMs develop torque with the aid of two magnetic fields. The induced torque tends to align the fields. Specifically, the torque rotates the rotor in order to align its magnetic field with that of the stator. The torque becomes maximum when the two field vectors are perpendicular to each other and tends to zero when they become parallel. Since the motion of the rotor is continuous, the direction of the field created by the stator windings should be changed as the rotor moves, so that the produced torque does not diminish. Therefore, power supply switching is required [1,2]. At least six switching elements are required to implement this operation for three phases (a four-switch version has been examined in [26]), which comprise the so-called "inverter bridge" operated in a "six-step commutation scheme". Therefore, the problem focuses on the proper (in terms of sequence) and precise (in terms of timing) pulsing of the switching elements.
In Figure 1 the generic topology of a microcontroller-based BLDCM controller scheme is shown, where, among other details, the equivalent circuit of a star-connected BLDCM, as well as the configuration of the inverter, are shown. The voltage equations in such a case are given by:

$$v_x = R\,i_x + L\,\frac{d i_x}{dt} + e_x, \qquad x \in \{a, b, c\} \tag{1}$$

where v_a, v_b and v_c are the stator phase voltages; R is the stator resistance per phase (R_a = R_b = R_c = R); i_a, i_b and i_c are the stator phase (or line) currents; L is the self-inductance per phase (L_a = L_b = L_c = L); and e_a, e_b and e_c are the corresponding back-EMFs. The latter can be written as:

$$e_a = K_e\,\omega\,f(\theta_e), \qquad e_b = K_e\,\omega\,f(\theta_e - 2\pi/3), \qquad e_c = K_e\,\omega\,f(\theta_e + 2\pi/3) \tag{2}$$

where K_e is the back-EMF constant in V/rad/s; θ_e is the electrical rotor angle (depending on the number of BLDCM poles); and ω is the mechanical rotor speed in rad/s. Therefore, the back-EMF for each phase is a function of the rotor position, f(θ_e), and can be expressed as:

$$f(\theta_e) = \begin{cases} 6\theta_e/\pi & 0 \le \theta_e \le \pi/6 \\ +1 & \pi/6 \le \theta_e \le 5\pi/6 \\ 6(\pi - \theta_e)/\pi & 5\pi/6 \le \theta_e \le 7\pi/6 \\ -1 & 7\pi/6 \le \theta_e \le 11\pi/6 \\ 6(\theta_e - 2\pi)/\pi & 11\pi/6 \le \theta_e \le 2\pi \end{cases} \tag{3}$$

Zero-Crossing Points (ZCP) Detection Technique This is one of the simplest back-EMF detection techniques. It is based on the detection of the time instant when the back-EMF of the non-excited phase becomes zero. Because the back-EMF cannot be directly measured, a ZCP is detected through the supply phase voltage, which is identical to the back-EMF during the idle period. By detecting a zero point and providing a 30° phase shift, it is possible to correctly estimate the start of the next switching step and consequently drive the motor correctly [27]. This can be done by activating a counter such that the inverter is led to the next switching phase after the measured time has elapsed. In addition, it is necessary to use low-pass filters before detecting the zero point, due to the harmonics introduced by the inverter's power components. Otherwise, incorrect ZCP detection may occur, which leads to incorrect switching and improper motor operation.
To reduce the noise from the inverter power components, PWM is applied to the upper or lower switching devices and the zero points are detected during the interruption period. This way, the interference noise does not affect the measurement, and the motor performance is thus increased [8]. The ZCP of the back-EMF for each phase could be an attractive feature for sensing, since these points do not depend on the speed and the phase winding is not excited at the rotor positions at which they occur. Yet the ZCPs do not correspond to the commutation points (CPs). Thus, the signals must be shifted in phase by 90° electrical before they can be used for commutation. In order to overcome the phase-shifting problem, different methods have been presented in the literature, including the detection of the third harmonic component of the back-EMF, direct current control algorithms and phase-locked loops [8,12,28,29]. Nevertheless, as stated in [14], BLDC motor systems are typically applied to low-cost systems of small capacity. Therefore, the implementation of the ZCP compensation method should be relatively simple. Sensorless Method with Back-EMF Difference Calculation The method which has been implemented in this paper utilizes the difference of the back-EMFs of two phases instead of using the phase back-EMF. Accordingly, f_b(θ_e) and f_c(θ_e) can be calculated [9]. From Equation (1), for each pair of phases we have:

$$v_a - v_b = R\,(i_a - i_b) + L\,\frac{d(i_a - i_b)}{dt} + (e_a - e_b) \tag{4}$$

However, in practice the line voltages are used. Thus, by performing suitable calculations, Equation (4) is transformed to the corresponding appropriate forms for the remaining phase pairs. Also, it is known that, with respect to the neutral point, the following expression for the currents is valid:

$$i_a + i_b + i_c = 0 \tag{6}$$

By expanding Equation (5) and substituting Equation (6) we obtain expressions in terms of measurable line quantities. Finally, solving for the current derivatives leads to Equation (8), which is the expression that can be used when the ZCP technique is applied, because it can "detect" the time instants when a voltage zero crossing occurs.
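In the classic per-phase ZCP technique described above, the zero crossings occur every 60° electrical and the commutation instant follows the detected ZCP by a fixed phase offset (the 30° shift mentioned earlier). A hypothetical timer-based sketch of scheduling the next commutation from the measured ZCP-to-ZCP interval (names are illustrative, not from the paper):

```c
#include <stdint.h>

/* ZCPs are spaced 60 electrical degrees apart in six-step operation, so
   half of the last ZCP-to-ZCP interval corresponds to the 30-degree delay
   between a ZCP and the next commutation point.  Timestamps are in timer
   ticks from a free-running counter (an assumed setup). */
uint32_t next_commutation_tick(uint32_t prev_zcp_tick, uint32_t last_zcp_tick)
{
    uint32_t sixty_deg_ticks = last_zcp_tick - prev_zcp_tick; /* 60 deg */
    return last_zcp_tick + sixty_deg_ticks / 2u;              /* +30 deg */
}
```

This assumes the speed changes little over one 60° interval, which is the usual premise of counter-based ZCP commutation.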
It is to be noted that only two equations are needed for the voltage differences, since the third one (if needed) can easily be derived. The waveform of the back-EMF difference is shown in Figure 2a. Its zero points coincide with the time points at which the switching stage changes. Analysis of the equivalent circuit of the motor easily reveals that the v_ba voltage (v_b - v_a difference) gives a waveform which crosses zero at the same instant as the e_ba voltage (e_b - e_a difference) waveform. Therefore, the v_ba ZCP coincides with the e_ba ZCP. Thus, the ZCPs of this waveform can be used to generate the virtual Hall signal for phase B switching. This signal coincides with the actual Hall phase-B signal coming from the sensor (if one existed). Thus, no phase shifting is needed to detect the switching points, as in other relevant methods. Accordingly, the v_ac and v_cb ZCPs are used to detect the e_ac and e_cb ZCPs. By this method, the zero points can be detected after first rotating the motor by 60 electrical degrees, which means that it can theoretically switch to sensorless mode after the first ZCP. In Figure 2b the circuit which is used in this paper for the detection of the ZCPs is shown. The circuit consists of two stages. The phase-voltage differences are generated at the first stage, whereas the line voltage is compared with respect to ground at the second stage for the ZCP detection.
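The three comparator outputs form a 3-bit virtual-Hall code that selects the commutation sector. A sketch of the decoding step follows; the particular code ordering in the table is an assumption, since the actual mapping depends on the comparator wiring and winding orientation:

```c
#include <stdint.h>

/* Map a 3-bit virtual-Hall code (bit2..bit0 = comparator outputs derived
   from the v_ba, v_ac and v_cb zero crossings) to a commutation sector
   0..5.  Codes 0 and 7 never occur with valid signals and return -1.
   The ordering below is one common convention, assumed for illustration. */
int hall_to_sector(uint8_t hall)
{
    static const int8_t lut[8] = { -1, 5, 3, 4, 1, 0, 2, -1 };
    return (hall < 8u) ? lut[hall] : -1;
}
```

Rejecting codes 0 and 7 is a cheap sanity check against comparator glitches before commutating.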
The circuit output can then be used for the switching and sensorless operation of the BLDCM without phase shifting, since it can be proved that it is analogous to the Hall sensor signals [11,27]. For demonstrative purposes, a simulation of a system based on this specific technique has been performed. In Figure 2c the waveforms of the voltage differences (v_b - v_a, v_b - v_c, and v_c - v_a) and the back-EMF differences (e_b - e_a, e_b - e_c and e_c - e_a) are depicted, where it can be seen clearly that the ZCPs are common between the former and the latter differences, respectively. BLDC Motor Controller Theoretical Analysis and Design The proposed topology of the BLDCM controller is shown in Figure 3 in block-diagram form. It can be seen that it is governed by a modular "philosophy" rather than a single layout (the latter is mostly met in commercial products). The reason for this modular arrangement (as shown in later paragraphs) is primarily the ease of the manufacturing process, as it is a prototype, and secondly the easy maintenance, when this is a requirement (i.e., the repair of a module without affecting the rest of them). Based on this feature, the next paragraphs analyze each module separately. Power Inverter Stage Module This section focuses on the analysis and design of the control circuit, namely the inverter and the hardware components adopted. The three-phase power inverter used here is a full-bridge type in 120° electrical conduction mode. This operation is achieved by the simultaneous conduction of two phases while the third one is out. The stator rotating magnetic field is stepwise and not continuous. The inverter "moves" to the next switching state each time the rotor rotates by 60° electrical, thereby changing the stator magnetic field. In a full rotation, there are six successive states corresponding to six magnetic states of the stator field. In each state only two phases are in conduction mode, whereby a current flows into each winding during 120° electrical of each rotation of the stator field. In the general case, since the switching elements are not ideally switched "on" and "off", some transition time (dead time) is required to avoid any short circuits in the branches (see Figure 1).
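The six-step operation just described (two phases conducting, one floating, advancing every 60° electrical) can be captured in a small lookup table. The bit assignments below are an assumed mapping, not the paper's actual port layout:

```c
#include <stdint.h>

/* Gate-signal bit positions for the six inverter switches: high (H) and
   low (L) side of legs A, B, C.  Assumed mapping for illustration. */
enum { AH = 1 << 0, AL = 1 << 1, BH = 1 << 2,
       BL = 1 << 3, CH = 1 << 4, CL = 1 << 5 };

/* Six-step (120-degree conduction) table: in every sector exactly one
   high-side and one low-side switch of *different* legs conduct, so the
   third phase floats and its back-EMF difference can be observed. */
static const uint8_t six_step[6] = {
    AH | BL,  /* sector I   : current A -> B */
    AH | CL,  /* sector II  : current A -> C */
    BH | CL,  /* sector III : current B -> C */
    BH | AL,  /* sector IV  : current B -> A */
    CH | AL,  /* sector V   : current C -> A */
    CH | BL,  /* sector VI  : current C -> B */
};

uint8_t gates_for_sector(int sector) /* sector 0..5, wraps modulo 6 */
{
    return six_step[sector % 6];
}
```

Because consecutive entries never enable both switches of the same leg, this particular sequence needs no dead-time insertion, matching the observation above about sectors I-VI.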
However, for the topology of the inverter used here it is not necessary to introduce dead time because, as shown in Figure 2a, the switching elements of the same sector (sectors I-VI) are turned "on" with a 60° electrical difference. Selection of MOSFETs The selection of the semiconductor switching elements, along with the corresponding heat sinks, is of primary importance in order to maximize the overall efficiency. These choices depend on the application. The most critical factors that influence the final choice are summarized below:
• The voltage drop during conduction and the conduction resistance determine the conduction losses of the element.
• The switching times (on, off and transition times) determine the switching losses and set the switching frequency limits.
• The current and voltage ratings determine the power required to be handled by the switching elements.
• The cooling requirements, that is, the temperature coefficient of the conduction resistance of the element.
• Cost is also an important factor when selecting components.
The following four types of switching elements are used in high-power applications: thyristors, GTOs, IGBTs and MOSFETs. Each one has a number of advantages and disadvantages that make it suitable for a particular application. The inverter manufactured in the context of this paper operates at a fairly high frequency of 10 kHz-20 kHz (although 5 kHz-8 kHz could be sufficient), and can power a motor with a nominal voltage of 24 V at 22 A on average (i.e., ~0.5 kW). This implies that the use of a thyristor or a GTO thyristor is rejected for the present application, because they cannot operate at such switching frequency levels. In addition, IGBTs can operate at high frequencies with a controlled voltage on their gate, but the rated operating voltage of the circuit is too small to allow the use of IGBTs. Thus, for the specific application, the MOSFET elements were selected. The AUIRF3205Z (from International Rectifier, El Segundo, CA, USA) was chosen based on the calculations shown below, with emphasis given to the lowest possible switching losses [30,31]. Its main characteristics are shown in Table 1. These features combined make this device extremely efficient and reliable for automotive applications as well as a wide range of other applications. It should be noted, however, that for the sake of safety of the overall controller, i.e., in the event of over-voltages or high currents that may be caused by transient situations, many ratings are beyond the actual requirements of the application. MOSFETs, as is well known, have an embedded parallel "free-wheeling" (or "fly-back") diode that allows the handling of reverse currents from the load to ground. The characteristics of the selected MOSFET's diode are sufficient for the application requirements. The diode has a maximum reverse recovery time of 42 ns and a maximum forward voltage drop equal to V_SD = 1.3 V.
The authors propose the addition of an extra diode between the gate and the source (G-S) of each semiconductor. An 18 V zener diode was employed here for MOSFET protection in case of an over-voltage fault. For example, if for any reason the voltage between the gate and the source becomes greater than the maximum allowable, the diode will conduct and the voltage will remain clamped at 18 V, preventing damage to the element. Finally, the turn-"on" and turn-"off" speed of each MOSFET can be controlled by the value of the resistance connected in series with its gate, which at the same time regulates the corresponding gate current. In order to determine this resistance it is necessary to take into account the gate-to-drain charge (Q_gd), the gate-to-source charge (Q_gs), the total gate charge (Q_G), the gate threshold voltage (V_GSth), the voltage applied to the MOSFET gate (V_DD) and the driver internal resistance (R_DRV(on)). The required values are available from the manufacturer's datasheet. Then, the gate resistance can be calculated as in [32]. For our case, Equation (9) gives a gate resistance of 38.5 Ω, thus a 39 Ω value was selected (R1, R2, and R3 in Figure 4). Figure 4 shows the analytical schematic diagram of the inverter. The aforementioned MOSFETs, zener diodes and gate resistances are clearly shown. The connectors shown lead to corresponding points of the remaining schematics, which will be shown in the next paragraphs. Cooling Considerations and Heatsink Calculations The high switching frequency of the MOSFETs and the high current flowing through them result in both conduction losses and switching losses. This, in turn, results in an increase in their temperature which, if not reduced, may lead to their destruction. Therefore, the temperature management of the semiconductor components is critical for the correct design of the system and for semiconductor longevity.
Heat dissipation is achieved by the use of appropriate heat sinks, which release the generated heat to the environment. Heat sinks consist of heat-conductive metal plates with many folds (fins) that maximize their surface area, thus transferring large amounts of heat to the environment. In order to reduce the thermal resistance between the semiconductor elements and the heat sinks, a thermally conductive but electrically insulating paste is inserted between the switch and the heat sink, which isolates the heat sink from the power circuit. Total losses (Q) are the sum of the conduction losses (Q_c), the switching losses (Q_sw) and the leakage losses (Q_L). The latter are quite small and are not taken into account. Conduction and switching losses are due to the operation of the semiconductor switches and the free-wheeling diodes. The corresponding calculations proposed here are based on [30]. Thus we have, for the conduction losses:

$$Q_c = I_D^2\,R_{DS(on)}\,D + \Delta V_{SD}\,I_{SD}\,(1 - D) \tag{11}$$

It is to be noted that in Equation (11), ∆V_SD is the source-to-drain diode forward voltage and I_SD is the reverse drain current between the ambient and operating temperature. Moreover, D is the duty cycle (assumed as 50%) and the R_DS(on) value has to be considered at 100 °C. Continuing, for the switching losses we have:

$$Q_{sw} = \tfrac{1}{2}\,V_{bus}\,I_D\,(t_r + t_f)\,f_{sw} + Q_{rr}\,V_{bus}\,f_{sw}$$

where t_r and t_f are the rise and fall times respectively (in ns), Q_rr is the reverse recovery charge (in nC) and f_sw the operating switching frequency. Finally, the thermal resistance (upper limit) value can be calculated by:

$$R_{s\text{-}amb} = \frac{T_j - T_{amb}}{Q} - R_{jc} - R_{cs}$$

where T_j, T_amb, R_jc and R_cs are the operating junction temperature, the ambient temperature, the junction-to-case thermal resistance and the case-to-sink thermal resistance respectively. Substituting the values found in the MOSFET datasheet into the above equations, the total losses were found to be Q = 7.23 W (the lowest compared to the values found for the six other candidate MOSFETs examined) and the thermal resistance of the required heat sink was found to be R_s-amb = 8.28 °C/W. Therefore, a heat sink with a resistance value lower than that must be selected. The heat sink selected here exhibits a thermal resistance of 6.5 °C/W, which is sufficient for continuous operation. DC Bus Capacitor (Inverter Input) Electrolytic capacitors should be used at the input of the three-phase inverter to act as filters, cutting off any over-currents during the opening/closing of the switching elements. The following relationship is used [33,34]:

$$C = \frac{V_{bus}\,D\,(1 - D)}{8\,L\,f_{sw}^2\,\Delta V} \tag{15}$$

For a dc-bus voltage V_bus = 24 V, a lowest phase inductance of L = 20 µH, a minimum switching frequency f_sw = 10 kHz and an acceptable voltage ripple (∆V) of 5%, Equation (15) gives C = 312.5 µF. Thus, a 470 µF/35 V capacitor was selected for the prototype. Driver Stage Module Full control of the three-phase inverter requires a control circuit which generates the signals necessary to provide the appropriate pulses to the MOSFETs. The control signals are generated by the microcontroller and follow a "route" as shown in Figure 5a. Initially, the signals are directed to a hex inverter chip which consists of 6 NOT gates and inverts them.
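The capacitor value quoted above (312.5 µF) is consistent with the standard buck-type ripple relation C = V_bus·D·(1−D)/(8·L·f_sw²·ΔV) at worst-case D = 0.5 — an assumption about the exact form of the paper's Equation (15). The heat-sink bound follows the thermal-resistance chain described in the cooling section. A sketch with illustrative names:

```c
#include <math.h>

/* DC-bus capacitor from the buck-type ripple relation (assumed form of
   Eq. (15)); dV is the allowed ripple in volts, D the duty cycle. */
double bus_capacitance(double Vbus, double D, double L, double fsw, double dV)
{
    return Vbus * D * (1.0 - D) / (8.0 * L * fsw * fsw * dV);
}

/* Upper limit on sink-to-ambient thermal resistance: the junction must
   stay at Tj while dissipating Q through the jc and cs resistances. */
double heatsink_rth(double Tj, double Tamb, double Q, double Rjc, double Rcs)
{
    return (Tj - Tamb) / Q - Rjc - Rcs;
}
```

With the paper's numbers (24 V, D = 0.5, 20 µH, 10 kHz, ΔV = 5% of 24 V = 1.2 V) the first function returns 312.5 µF; the thermal values passed to the second function would come from the MOSFET datasheet.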
The inverted pulses are then driven to opto-couplers, which invert them again and at the same time provide galvanic isolation between the microcontroller and the power stage. Finally, the signals are fed to appropriate MOSFET driver chips, which amplify them again and produce the final MOSFET driving pulse sequences. Pulse Amplification and Galvanic Isolation The microcontroller output signals are voltage pulses of either 0 V or 5 V; however, to be able to drive the MOSFETs they must also be accompanied by a sufficient current. The output current of any microcontroller is not sufficient, and for this reason the output signals must be amplified before any further processing; hence the need for a hex inverter chip. The chip selected here to amplify and invert the signals coming out of the microcontroller is the SN74LS06 (Texas Instruments, Dallas, TX, USA), which consists of six NOT gates and is powered by 5 V. Afterwards, the inverted and amplified pulse signals are directed to an opto-coupler, which achieves galvanic isolation between the control circuit and the power circuit. To achieve this isolation between the two circuits, there is a light-emitting diode at the input of the opto-coupler, which emits light when a voltage is applied. At its output there is an open-collector photo-transistor, with an isolation gap between them. No current can pass through this gap, only light pulses, resulting in the separation of the ground potentials of the two circuits and the protection of the control circuit from leakage currents. The opto-coupler used in this application is the HCPL2631 (Fairchild Semiconductors, San Jose, CA, USA), which fully covers the application requirements. It is powered by a 5 V supply and outputs an inverted signal that now has the form it had when it exited the microcontroller, but amplified.
MOSFET Gate Driving and Bootstrap Circuit
This is the second part of the driver stage module (right half of Figure 5a). An appropriate chip must also be selected here, responsible for boosting the pulses to the desired power level so that the MOSFETs are properly driven. The IR2113 (International Rectifier) is one of the most robust, reliable and cheap choices; it can operate at high voltage and switching frequency and exhibits two independent (high- and low-side) output channels. Moreover, the latter cannot be activated simultaneously (which is desirable), as the chip has been designed for bootstrap mode operation (Figure 5b). The only disadvantage is that it needs two different supply voltages, 5 V (between V_cc and V_ss) and another one in the range 10 V to 20 V (between V_cc and Com). In our application, a 15 V voltage was utilized. Thus, the incoming 5 V control signals are fed to the inputs H_IN and L_IN, and the MOSFET gate signals come out from the outputs H_O and L_O at a 15 V level. The source pin of the low-side MOSFETs is permanently connected to ground, so that whenever a pulse comes to the L_IN input, the L_O output is connected to the 15 V supply. In contrast, the bootstrap methodology described in detail below is used to power the high-side MOSFETs.
Specifically, when the low-side MOSFET is in conduction mode, the V_S of the upper one acquires the earth potential. The bootstrap capacitor (C_BOOT) is then charged through the bootstrap diode (D_BOOT) from the 15 V supply. When the lower MOSFET stops conducting, the bootstrap capacitor discharges through the upper MOSFET's gate, so that the latter can turn on. The bootstrap diode in this case prevents the discharge current from flowing back to the V_CC source. The bootstrap connection has the advantage of being simple and economical, but it also has some limitations. The conduction start time and the duty cycle are limited by the charging requirements of the bootstrap capacitor. The major disadvantage of this method, however, is that the negative voltage occurring at the source of the MOSFET during the turn-off state results in the load current being sharply directed to the freewheeling diode of the low-side MOSFET. Negative voltage can be a problem for the output of the gate driver because it directly affects the V_S terminal of the driver. Another problem arising from this negative voltage is the possibility of overvoltage at the bootstrap capacitor [32]. The bootstrap diode must have a small voltage drop when conducting and a short reverse recovery time to minimize the charge returning from the capacitor to the power supply. For this reason, an ultra-fast recovery diode must be used; therefore, the UF4007 diode was selected and placed very close to the bootstrap capacitor. Moreover, the capacitor must have sufficient capacity to supply the necessary charge to turn on the upper MOSFET and maintain its gate voltage during conduction. For this purpose, a 0.22 µF/35 V tantalum capacitor was used. Tantalum capacitors were chosen because they have high capacitance, low internal resistance (ESR) and minimal leakage current compared to electrolytic, polypropylene or ceramic counterparts.
The calculation of the bootstrap capacitor value is given below. Firstly, it is necessary to calculate the minimum ΔV_BS voltage that must be maintained by the capacitor while the upper MOSFET is conducting, i.e.,: where V_cc is the supply voltage of the IR2113 (15 V), V_f is the diode voltage drop (1.7 V) and V_GSmin is the minimum voltage needed for maintaining the conduction of the MOSFET (4 V). Afterwards, the total charge stored in the capacitor should be calculated: where I_LKcap is the bootstrap capacitor leakage current, I_LKGS is the MOSFET gate-source leakage current, I_QBS is the bootstrap circuit quiescent current, I_LK is the bootstrap circuit leakage current, I_LKDiode is the bootstrap diode leakage current, t_on is the MOSFET conduction delay time and Q_LS is the charge required by the internal level shifter (which is set to 3 nC for all HV gate drivers). Finally, Equation (18) gives the capacitor value, which in our case resulted in 13.65 nF. For this application, tantalum capacitors with a capacity of 0.22 µF, i.e., 16 times greater than the calculated value, were selected to eliminate the possibility of rapid capacitor discharge during MOSFET conduction. Specifically, according to [35,36], the bootstrap capacitor is the most important component because it provides a low-impedance path to source the high peak currents that charge the high-side switch gate. As a general rule of thumb, this bootstrap capacitor should be sized to have enough energy to drive the gate of the high-side MOSFET without being depleted by more than 10%, and it should be at least 10 times greater than the gate capacitance of the high-side FET. The reason for that is to allow for capacitance shift from DC bias and temperature, and also for skipped cycles that occur during load transients.
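The three-step calculation above can be condensed as follows. The gate-charge and leakage figures in the check are illustrative placeholders, and the paper's Equation (17) may group the charge terms slightly differently (the variant below follows the classic International Rectifier bootstrap application notes):

```c
#include <assert.h>

/* Bootstrap capacitor sizing sketch, after Equations (16)-(18):
   dV_BS = Vcc - Vf - VGS(min);  C_boot >= Q_total / dV_BS.
   q_gate: high-side gate charge (assumed); i_leak_sum: sum of the listed
   leakage/quiescent currents (assumed); t_on: conduction delay time;
   q_ls: level-shifter charge (3 nC for HV gate drivers, per the text). */
double boot_cap_min(double vcc, double vf, double vgs_min,
                    double q_gate, double i_leak_sum, double t_on,
                    double q_ls)
{
    double dv_bs = vcc - vf - vgs_min;                        /* Eq. (16) */
    double q_total = 2.0 * q_gate + i_leak_sum * t_on + q_ls; /* Eq. (17) */
    return q_total / dv_bs;                                   /* Eq. (18) */
}
```

With V_cc = 15 V, V_f = 1.7 V and V_GSmin = 4 V, the headroom is 9.3 V; plausible charge terms then place the minimum capacitance in the tens-of-nanofarads range, consistent with the 13.65 nF result.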
Here, we followed a more accurate approach for the total charge calculation (Equation (17)), taking into account analytically all the relevant component leakage currents, and thus the "16x" factor is justified. In order to demonstrate the charging/discharging waveforms of the bootstrap capacitor for both the calculated and the selected values, the circuit of Figure 5a was implemented in a suitable transient analysis simulation package (Proteus v.8.9 by Labcenter Electronics©, Grassington, North Yorkshire, UK). In Figure 6, the relevant results are shown, from which it can be observed that the 0.22 µF capacitor used presents a much more "stable" operating pattern, being depleted by only 0.2%, compared to the 13.65 nF one, which, when discharging, is depleted by approximately 5% in every cycle.
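The depletion percentages observed in Figure 6 can be sanity-checked with a one-line estimate: the fraction of the stored charge delivered per switching cycle. The 9 nC per-cycle gate charge and the 13.3 V capacitor voltage used below are illustrative assumptions, not simulated values:

```c
#include <assert.h>

/* Per-cycle bootstrap depletion estimate: the fraction of the stored
   charge (C * V) that is delivered to the high-side gate each cycle. */
double depletion_fraction(double q_cycle, double c_boot, double v_boot)
{
    return q_cycle / (c_boot * v_boot);
}
```

For 13.65 nF this yields a few percent per cycle, while for 0.22 µF it drops well below one percent, in line with the 5% vs. 0.2% patterns reported above.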
Back-EMF Measurement
Since the operating voltage of the microcontroller is 5 V, it is necessary to condition (i.e., transform to appropriate voltage levels) the back-EMF voltages acquired from the BLDCM terminals. There are several methods to do this; the most cost-effective way is to use an operational amplifier (op-amp) in differential mode. The circuit shown in Figure 7 was implemented for that purpose (three identical circuits, one for each phase). The goal of the circuit is to receive an incoming voltage between −24 V and 24 V and convert it to the [0, 5] V range. The LM258-N integrated circuit from Texas Instruments was used, which requires a positive power supply of up to 32 V (pin 8); in this application it was supplied with a 12 V DC voltage from the power supply module. The differential amplifier's positive input is connected to the terminal of each phase of the motor and the negative input is grounded. According to the manufacturer's datasheet, the current at the input should not exceed 50 mA. For this reason, resistors were used at the op-amp inputs to limit the input current; since the maximum voltage in this application can reach 22 V, two 22 kΩ resistors were selected and placed in series. Continuing, the gain of the op-amp can be determined. By selecting a 10 kΩ feedback resistance, the gain is calculated as 0.227 and thus the output voltage of the op-amp is V_out = gain × V_in = 0.227 × (±22 V) = ±5 V. A 5 V voltage is added to the op-amp output voltage, giving 0-10 V, and finally a voltage divider (using a 10 kΩ resistor along with a variable potentiometer for calibration) is applied to convert the range to 0-5 V. To reduce the noise in the power supply lines, 100 nF capacitors were used at the power terminals of each integrated circuit. With the use of these capacitors, high-frequency currents flow through them and do not enter the power lines.
Also, at the output of each op-amp, a 5.1 V zener diode was connected to protect the microcontroller if the voltage rises above the permissible limits.
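The conditioning chain just described (differential gain of 10 kΩ/44 kΩ ≈ 0.227, a +5 V level shift, then a divide-by-two) can be sketched end to end; the function name is hypothetical:

```c
#include <assert.h>
#include <math.h>

/* Back-EMF conditioning sketch: maps a phase voltage in [-22 V, +22 V]
   onto the microcontroller's [0 V, 5 V] ADC range. */
double adc_input_from_phase(double v_phase)
{
    const double gain = 10.0 / (22.0 + 22.0);  /* feedback / input Rs = 0.227 */
    double v_opamp = gain * v_phase;           /* roughly +/-5 V */
    double v_shifted = v_opamp + 5.0;          /* 0..10 V */
    return v_shifted / 2.0;                    /* divider -> 0..5 V */
}
```

With this mapping, +22 V lands at 5 V, −22 V at 0 V, and a phase voltage of 0 V at the 2.5 V midpoint.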
Low Pass Filter
In such BLDCM controller systems, high-frequency voltage spikes and/or electromagnetic interference noise may be generated by the motor during operation and also by the switching elements (due to their high operating frequencies). Thus, in order to eliminate the high-frequency voltage components, it is necessary to design and implement a simple low-pass filter, one for each motor phase. This is crucial for reliable ZCP detection, because the noise would lead to inaccurate zero "readings" and, as a result, incorrect phase switching would occur. The filter should be placed between the motor terminals (its input) and the op-amp described in the previous paragraph (its output), as shown in the left part of Figure 7. The following example is provided for easier understanding of the selection of the filter components (R_f, C_f). Let us suppose that the motor has eight pole pairs (p) and a rated speed (n) of 900 rpm. Thus, the motor's voltage frequency at 900 rpm is 120 Hz (since n = 60f/p), as can be seen in Figure 8a. If we want to eliminate the higher harmonics (3rd, 5th, etc.), then the cut-off frequency must be no higher than 360 Hz. By setting the capacitor value at C_f = 4.7 µF and applying the well-known formula, the resistance R_f of the filter must be approximately 94 Ω (which in practice can be realized by using a potentiometer). Correspondingly, the cut-off frequency can also be derived by means of a graph similar to Figure 8b:
f_c = 1/(2πR_fC_f) (19)
For our application, the goal was to cut frequencies higher than 250 Hz. A constant-value resistance of 39 Ω was used in series with a 100 Ω potentiometer, along with a polyester-type (MKT) capacitor of 4.7 µF/100 V, which is suitable for the desired frequency range. Using the potentiometer, it is possible to vary R_f in the range 39 Ω to 139 Ω. This allows the cut-off frequency to range from 868.3 Hz down to 243.6 Hz, respectively (shown in Figure 8b). This adopted technique allows the controller to operate over a wide range of filter responses and makes the application more generic, targeting motors with different numbers of pole pairs.
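Equation (19) and the quoted 243.6-868.3 Hz tuning range can be checked directly:

```c
#include <assert.h>

#define PI 3.141592653589793

/* First-order RC low-pass cut-off frequency, Equation (19):
   fc = 1 / (2 * pi * Rf * Cf). */
double lpf_cutoff_hz(double r_ohm, double c_farad)
{
    return 1.0 / (2.0 * PI * r_ohm * c_farad);
}
```

With C_f = 4.7 µF, the 39 Ω and 139 Ω extremes of the potentiometer give approximately 868.3 Hz and 243.6 Hz respectively, matching the figures above.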
Microcontroller Module
Nowadays, the family of microcontrollers provided by the manufacturers (e.g., Microchip, Chandler, AZ, USA) is so large that their selection for an application is rather a "personal affair" than a technical prescript. However, there are quite a few general guidelines which may be followed, such as those listed below. Based on these, the Microchip© dsPIC30F4011 µC was selected. The main reasons are:
• It is well suited for motor control applications, since it provides three independent PWM output pairs, allowing the control of three-phase and single-phase inverters.
• It can process and execute complex digital signal calculations very quickly, as required for BLDCM control.
• It is manufactured in a DIP package and is therefore suitable for prototyping and breadboard use.
• It can operate over a wide power supply range (2.5-5.5 V).
• It can withstand wrong voltage levels that may be applied to its terminals by mistake without being damaged.
• It can be found in the market easily and is generally cheap.
• Its development environment (MPLAB and compiler) is available free of charge from the manufacturer.
• It is equipped with enough I/O ports for interconnecting peripherals.
• It has nine analogue input channels available, of 10-bit resolution.
The basic operating principles are common for all 30F-family chips, so the software code developed can be easily transferred (software portability) if another model later needs to be employed. The schematic diagram of the µC module developed is shown in Figure 9. A 20 MHz oscillator was used.
Two push buttons (start/stop) were also utilized for the controller's operation.
Figure 9. Schematic diagram of the microcontroller module.
Power Supply Module
The overall controller requires different DC supply voltages for the different modules. Most of them require 5 V; however, the gate drivers require both 5 V and 15 V, and the integrated circuits that condition the motor terminal voltages to the microcontroller voltage level require 12 V. Since the developed controller is to be used in a stand-alone application, the only power source will be a 22 V/5000 mAh, 6-cell, lithium polymer battery. Therefore, the battery voltage should be stepped down to 15 V, 12 V and 5 V with the least power losses. Although three buck converters could be used, the cost would be high, so the solution was to use cheap voltage regulators. Specifically, Fairchild's LM7805, LM7812 and LM7815 regulators were used in cascade connection. Their maximum output current is 1 A, which is more than enough for the low-power modules. For each voltage level, the output of each regulator is driven to three-terminal bus-bars, through which the other modules are then fed. This power supply module is "hosted" on the same board as the microcontroller module.
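One consequence of the cascade connection is that each regulator carries the downstream currents as well as its own load, so dissipation concentrates in the first (LM7815) stage. A sketch, with purely illustrative load currents (not measured values from the prototype):

```c
#include <assert.h>
#include <math.h>

/* Dissipation of the 22 V -> 15 V -> 12 V -> 5 V linear cascade:
   each regulator drops (Vin - Vout) at the current flowing through it.
   i5, i12, i15 are the loads drawn from each rail (amps, assumed). */
double cascade_dissipation_w(double i5, double i12, double i15)
{
    double p7805 = (12.0 - 5.0) * i5;                /* fed from the 12 V rail */
    double p7812 = (15.0 - 12.0) * (i12 + i5);       /* carries the 5 V load too */
    double p7815 = (22.0 - 15.0) * (i15 + i12 + i5); /* carries everything */
    return p7805 + p7812 + p7815;
}
```

For example, 100 mA on the 5 V rail, 50 mA on 12 V and 100 mA on 15 V put roughly 1.75 W on the LM7815 alone, well within the 1 A device rating but worth checking thermally.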
The schematic diagram of the power supply module is shown in Figure 10.
Supplementary Passive Components Used
Various supplementary passive elements are also proposed and used here to protect the rest of the components (mainly in the control and the power modules). With respect to Figures 4, 5, 7, 9 and 10, these are:
• Pull-up resistors (330 Ω) were installed between the power supply of the hex inverter and its outputs, in order to limit the input current below 20 mA and to enable the circuit to supply the input of the next connected element from the hex inverter output as long as the latter's transistor is in cut-off mode. This is because the specific chip's (SN74LS06) gates operate through an embedded open-collector transistor.
• Pull-down resistors (10 kΩ) were added between the inputs of the hex inverter and ground. This is necessary because, in case any of the terminals is found unconnected for any reason, its output would then be in the zero state; otherwise we might face unwanted MOSFET firing, or simultaneous firing of the MOSFETs of the same leg, which would lead to controller failure.
• Between the outputs of the opto-couplers and the 5 V power supply, 330 Ω (pull-up) resistors were also installed to limit their embedded photo-transistor's collector current and to enable the circuit to properly supply the next module stage.
• In every module, in order to reduce noise in the power lines, ceramic capacitors of 100 nF were connected so that high-frequency currents would flow through them and not through the power lines.
• In order to set the desired motor speed, i.e., to change the PWM duty cycle, a 5 V/10 kΩ slider potentiometer was used.
Control Philosophy
The overall circuit control strategy is essentially implemented in software source code by six routines in total.
The first three of them are (a) the motor starting and rotor alignment routine, (b) the open-loop routine and (c) the closed-loop routine, and actually correspond to the three possible BLDCM operating conditions. Another two important routines (implemented as interrupts) are also included: a routine that converts the signals from analog to digital (ADC interrupt) and a second one for a necessary precision timer (interrupt of the microcontroller's "Timer1"). Additionally, the control process initialization was done by coding a necessary initialization routine. The code was designed in the C language and the Microchip© XC16 compiler was used to compile it. Of great help to the software development was the fact that the dsPIC30F4011 is specifically engineered for motor control applications and therefore offers various features to the developer, including fully customizable PWM pulse generation. The software implementation is represented by the basic flowchart depicted in Figure 11a.
Figure 11. Flowcharts of (a) the overall control algorithm (power-on, align-up and infinite loop); (b) the rotor alignment routine (see Appendix B, "Routine A" and "Routine B" respectively).
Main Routine
This is the main routine of the program. It initializes the variables used and calls the initialization routines of the microcontroller units (PWM, A/D, Timer). In addition, it controls the operation of the start/stop switch by changing the status variable and invokes the corresponding routines.
The flowchart of this routine is shown in Figure 11a and the corresponding code is given in Appendix B, under the name "Routine A".
Rotor Alignment Routine
This routine is used to start and align the BLDCM rotor. The technique used here is simple and is described next. Initially, the duty cycle is set to 50% and the 3rd sector is activated (S_3, S_2), according to Figure 1 and Table 2. The PWM signal is applied for a period of time (1-2 s is more than enough) to ensure the rotor alignment, and then the 5th sector is activated (S_5, S_4) for another period of time, which assures the start of the rotor rotation. The flowchart of this routine is shown in Figure 11b and the corresponding code is given in Appendix B, under the name "Routine B".
Control Loops Routines
As mentioned in Section 4.1, two loop control modes were implemented for this application: a manual open-loop control mode and a closed-loop control mode governed by a simple PI controller. The corresponding software developed is presented next.
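Both modes step the inverter through the six sectors of Table 2 at timed intervals. Since n = 60f/p (as used in the filter section), a motor with p pole pairs at n rpm has electrical frequency f = np/60, and each of the six commutation steps lasts 1/(6f) seconds. The helper below is an illustrative expression of this relation, not the Appendix B Timer1 code:

```c
#include <assert.h>
#include <math.h>

/* Duration of one of the six commutation steps at a given speed:
   electrical frequency f = n * p / 60, step length = 1 / (6 * f). */
double commutation_step_s(double rpm, int pole_pairs)
{
    double f_elec = rpm * (double)pole_pairs / 60.0;
    return 1.0 / (6.0 * f_elec);
}
```

At the 900 rpm, eight-pole-pair example used earlier (f = 120 Hz), a step lasts about 1.39 ms.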
Open Loop
Open-loop starting is a practical control procedure to run the BLDC motor without position sensors, accomplished by providing a rotating stator field with a certain frequency profile [16], either in manual or automatic mode. In our case, the manual mode was chosen, where the PWM duty cycle ratio, and therefore the motor speed, is varied manually according to the value of the control potentiometer. The sector commutating sequence is initiated for a time period set by the dsPIC's "Timer1", which varies according to the desired speed each time. If the motor speed exceeds 100 rpm, the process moves on to the closed-loop mode. The flowchart of this routine is shown in Figure 12a and the corresponding code is given in Appendix B, under the name "Routine C".
Closed Loop
In terms of stable operation, this routine programmatically handles the two interrupt service routines (ISR) and also hosts the PI control operation.
In terms of stable controller performance, this routine is very crucial and can be fully parameterized. Initially, the actual motor speed is compared to the reference speed and an error is generated, which is used by the PI controller to generate the new PWM duty cycle value. For safety reasons, this value is limited between 0.2 and 1.0, and then the motor windings are excited for a certain period of time. The duration of this interval depends on the detection of the zero points in the line voltages, as will be explained below in the A/D ISR operation. Finally, sampling of phases A, B and C is performed and the cycle continues. If the speed drops below a threshold value (e.g., 100 rpm) due to a loading condition, the program flow jumps to open-loop operation. The flowchart of this routine is shown in Figure 12b and the corresponding code is given in Appendix B, under the name "Routine D".
Interrupt Services Routines
This class of routines serves two program interrupts: the ADC ISR and the Timer1 ISR.
Specifically, the ADC routine samples the line voltage signals and searches for zero crossing points. When a ZCP is found, the commutation sequence is performed according to Table 2. In particular, if a ZCP is found for phase A, Timer2 is also activated to calculate the actual motor speed. On the other hand, the Timer1 routine calculates the phase transition time of the motor during its alignment operation and when operating in open loop. The flowchart of the ADC ISR is shown in Figure 13a and the corresponding code is given in Appendix B, under the name "Routine F". The flowchart of the Timer1 ISR is shown in Figure 13b and the corresponding code is given in Appendix B, under the name "Routine E". Figure 13. Flowchart of software code routines for (a) Timer1 ISR; (b) A/D conversion ISR (see Appendix B, "Routine E" and "Routine F" respectively).
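The ZCP search and commutation advance handled by the ADC ISR can be sketched as below. The six-step sequence shown is a generic placeholder, not the paper's Table 2, and the line-voltage samples are synthetic.

```python
# Illustrative sketch of the ZCP-driven commutation logic of the A/D ISR.
# The sector sequence is a generic six-step table (hypothetical, NOT the
# paper's Table 2); samples are synthetic values for demonstration.

COMMUTATION = ["AB", "AC", "BC", "BA", "CA", "CB"]  # hypothetical order

def find_zcp(samples):
    """Return indices where consecutive samples change sign (zero crossings)."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] * samples[i] < 0]

def next_sector(sector):
    """Advance to the next commutation step on each detected ZCP."""
    return (sector + 1) % len(COMMUTATION)

line_voltage = [3.0, 1.5, 0.2, -1.1, -2.4, -0.9, 0.8, 2.0]
zcps = find_zcp(line_voltage)
```

Each detected crossing would, in the real firmware, trigger the table lookup for the next pair of energized windings (and start Timer2 when the crossing belongs to phase A).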
Experimental Prototype's Indicative Results The presented controller module schematics were first developed with the aid of Novarm © Diptrace software (Kedrina, Dniepropetrovsk, Ukraine), and the corresponding printed circuit boards (PCBs) were then designed manually so that the modules would have the required connection alignment in their final form. Special care was taken with specific trace routes where large currents may flow (from the power source through the MOSFETs to the motor). Afterwards, the PCB designs were transferred onto photo-resistive copper clad boards, which were drilled and used for assembling the components. Single-sided copper boards were selected for simplicity of manufacturing. The assembled modules and the connections between them composed an experimental test bed on which test runs were performed. A view of the test bed is depicted in Figure A2a in Appendix A. The final prototype is depicted in Figure A2b in the same Appendix.
A small hobby BLDCM, like those used in remote control models (i.e., airplanes, drones), was utilized as a test motor. Initially, some experiments were carried out regarding the generated PWM pulses. Figure 14 verifies the correct feeding of the PWM signals, based on the methodology of Section 3.2 for the circuit shown in Figure 5a. Specifically, for a 20 kHz switching frequency, the pulses generated at the microcontroller's output are fed to the SN74LS06, which inverts them and, in turn, feeds the HCPL2631 opto-coupler. As can be seen in Figure 14a, the pulses at the output of the HCPL2631 are identical to the pulses originally generated by the microcontroller. Continuing, Figure 14b verifies the correct operation of the IR2113 MOSFET driver: compared to the pulse amplitude at the input of the SN74LS06 (3.16 V), the output pulse of the IR2113 has been appropriately amplified (10.48 V) in order to drive the MOSFET gates.
For all cases, the gate-to-source voltage of the inverter's upper leg MOSFET was obtained by differential measurement, taken between the gate and ground (orange waveform) and the source and ground (green waveform). For each case, the motor is initially aligned and, just after, enters an open loop mode where it can be driven (through user command) up to a speed of 100 rpm. When this speed is exceeded, a transition to closed loop mode occurs. Because of space limitations, only indicative results are shown in this paper. The essential element of these controller types is the proper operation of the bootstrap circuit which drives the upper part of the inverter's bridge, and therefore the MOSFET voltages are examined. Figure 15a refers to Case 1 and depicts the VGS voltage (along with the gate voltage VG and the source voltage VS) at 140 rpm and a duty cycle equal to 50%. For the same duty cycle, Figure 15b refers to Case 2 and shows the VGS voltage at approximately the same speed. Figure 16a corresponds to Case 3 and shows the same voltage for a duty cycle of approximately 30% and a speed close to 100 rpm. Figure 16b depicts the VGS voltage at approximately 89 rpm and a duty cycle of 43.8%. In all cases, it can be deduced from the waveforms that the bootstrap capacitor charges and discharges smoothly, and that the attained switching frequency exceeds 25 kHz (period < 40 μs). Finally, a simple cost comparison with commercial BLDCM sensorless controllers is presented in Table 3, which clearly shows the cost advantage of the controller developed here, especially for higher motor ratings. However, it should be noted that this table is for relative information only and not for an actual comparison, since (a) the cost of the implemented controller represents only material costs, and (b) the commercial controllers offer many features not examined in this study.
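As a side note on the bootstrap circuit whose proper operation is examined above, a common first-order sizing check can be sketched as follows. This is a generic gate-driver rule of thumb, not a calculation from this paper, and all component values below are hypothetical.

```python
# First-order bootstrap capacitor sizing check (a standard rule of thumb
# from gate-driver design practice, NOT from this paper; all values are
# hypothetical placeholders).
Q_GATE = 63e-9     # total gate charge of the high-side MOSFET (C)
I_LEAK = 250e-6    # worst-case current drawn from the cap while on (A)
T_ON_MAX = 40e-6   # longest high-side on time near 25 kHz switching (s)
DV_MAX = 0.5       # allowed droop of the bootstrap voltage (V)

charge_needed = Q_GATE + I_LEAK * T_ON_MAX  # charge removed per cycle
c_boot_min = charge_needed / DV_MAX         # minimum capacitance (F)
# Common practice adds a generous margin (e.g. 10x) over the minimum.
c_boot = 10 * c_boot_min
```

With these placeholder numbers the minimum works out to roughly 150 nF, so a capacitor in the low-microfarad range would keep the droop well inside the allowed window.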
Conclusions The purpose of this paper was to present a descriptive, informative and straightforward practical implementation of a low cost BLDCM sensorless controller. With simplicity and low cost as priorities, several issues regarding hardware component selection were discussed and commented upon. A modular design was adopted for comprehensibility and ease of manufacturing. Based on a microcontroller suitable for digital signal processing, the generation of the PWM pulses and their driving up to the power inverter MOSFETs were discussed in detail, and simple techniques were proposed. A bootstrap circuit was implemented as the main driving hardware. Regarding the power inverter, analytical calculations were shown for the proper selection and cooling of the switching elements. For the control strategy, a variant of the well-known zero crossing point detection method was utilized, based on the back-EMF difference signals, and a suitable conditioning circuit was also presented. Detailed schematics of the proposed architecture clarified the overall controller scheme. Moreover, the adopted control strategy was analyzed thoroughly, with emphasis on simplicity and portability. The relevant software source code developed was also given as a reference. Finally, indicative experimental results from the controller operation demonstrated its reliable and proper functionality. There are many points that future work efforts can target. One includes the careful replacement of the majority of the components with surface mounted devices (SMD), towards lower size and weight.
Redesigning the module structure "from scratch" is another thought, towards maintenance flexibility (easy replacement or service). Another, more technical, issue would be the addition of transient voltage suppression (TVS) diodes at the inverter stage input, which would suppress the major voltage peaks observed on the oscilloscope during commutation sequence transitions. At the software level, there are also some potential improvements as a future work framework. For example, it would be useful during the start-up and alignment process to take into account some motor quantities (by storing them in static memory), such as its idle torque, so that the start-up time would be adjusted appropriately each time a different motor is used (i.e., a specific motor profile operation). It would also be useful to develop a graphical interface (either stand-alone or through a USB connection to a PC) that would inform the user about, e.g., motor speed, phase currents and line voltages. Last but not least, optimal tuning of the PI controller parameters for enhanced operation in closed loop would also be of interest, as would the investigation of using two different sets of parameters, one for the open loop mode and one for the closed loop mode (changing automatically). Author Contributions: All authors were involved in developing the concept, the simulation and the experimental validation, and in making the article an error-free technical outcome for the set investigation work. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
Appendix A Prototype Modules Photos and Experimental Set-Up Description
18,337.2
2019-12-01T00:00:00.000
[ "Engineering", "Computer Science" ]
A Comparative analysis of OFDM based Communication using different Mapping Schemes for Signal Modulation OFDM is a multiplexing technique that divides a spectrum into sub-carriers that are orthogonal to each other and capable of carrying high-rate data transmission. Orthogonality is maintained because each sub-carrier is null at the centre of every other sub-carrier. A cyclic prefix is inserted between sub-carriers, and the bandwidth wasted on guard bands, as in FDMA, is reduced by up to 50% in most cases. The spectral efficiency of OFDM is far better than that of other frequency-division techniques, owing to the orthogonality and the CP design, which also help achieve better diversity gain. The data is dispensed as sub-streams among the sub-carriers, and each sub-carrier is modulated according to its coded (PSK, BPSK, QAM) data sub-stream. Thus, rather than modulating the whole data stream onto a single frequency, the sub-carriers are each allocated a portion of the data, which enables a large amount of data to be processed. To ensure orthogonality, the serial data is converted to parallel streams and fed to an IFFT module, after which P/S conversion is performed and a CP is inserted between two sub-carriers. The reverse of these steps happens in the receiving module. Important aspects of OFDM systems are synchronization, pilot allocation and channel state information (channel estimation). This paper takes up channel estimation techniques (channel state information) in OFDM systems and puts forward a comparative analysis of them, which helps in understanding the merits and demerits of the estimation techniques in use. INTRODUCTION A signal reaches the receiver through various paths from the transmitter, and this multipath propagation impairs each signal in varying ways. The many impairments that cause a signal to lose SNR and its intactness include ISI, ICI, small-scale fading and large-scale fading. Noise addition is another important aspect of signal distortion.
This distortion, along with the restriction on high data rate transmission due to limited frequency usage, has hampered new-generation data services. It is therefore easy to comprehend that if the data is dispensed among different subcarriers, high data rates can be achieved and the impairments affecting a single carrier can be mitigated. For the purpose of distinguishing among the subcarriers that carry the divided data, two multiplexing techniques have gained great importance [2]: frequency division and code division (CDM, Code Division Multiplexing). In the first methodology, the frequency band is divided into subcarriers and the distinction is achieved through frequency differences; in the second, separate codes are allocated to distinguish among the carriers. Another way, which is more efficient and saves bandwidth, is OFDM [4]. OFDMA is an access technology that makes use of OFDM, which divides a given spectrum into a number of sub-carriers that are orthogonal to each other, producing higher spectral efficiency and saving a great deal of bandwidth, as shown in Fig. 1. Fig. 1. Comparison of OFDM and FDM. Fig. 2 shows a spectrum divided into N subcarriers for the purpose of OFDM. Data is dispensed to each subcarrier, and each carrier is modulated using a scheme such as PSK or QAM; usually N is not less than 256. The signal is converted into the time domain (as per the OFDM modem design, to retain orthogonality) and sent to the upconverter for transmission. At the receiver, the reverse of everything that happened at the transmitter takes place, and the signal is converted back into the frequency domain. After complete reception, each carrier is demodulated for data retrieval [6]. Before the up-conversion and transmission, as noted above, a Cyclic Prefix is inserted between two subsequent carriers. After this simplified explanation of OFDM modulation and demodulation, a mathematical representation of the system is important in order to understand the OFDM system in detail.
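As a minimal sketch of the chain described above (S/P conversion, IFFT, CP insertion, channel, CP removal, FFT, equalization), the following pure-Python example uses an O(N²) DFT and illustrative parameters (N = 8, CP = 3, a 3-tap channel) rather than the paper's simulation settings.

```python
# Minimal end-to-end OFDM sketch: QPSK mapping -> IDFT -> cyclic prefix
# -> multipath channel -> CP removal -> DFT -> one-tap equalization.
# Pure-Python O(N^2) DFT for clarity; all parameters are illustrative.
import cmath

N, CP = 8, 3                       # subcarriers and cyclic-prefix length
QPSK = [1+1j, -1+1j, -1-1j, 1-1j]  # QPSK constellation points

def dft(x, inverse=False):
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / len(x))
               for n in range(len(x))) for k in range(len(x))]
    return [v / len(x) for v in out] if inverse else out

def ofdm_tx(symbols):
    t = dft(symbols, inverse=True)  # IFFT: frequency -> time domain
    return t[-CP:] + t              # prepend cyclic prefix

def channel(x, h):
    # linear convolution with the channel impulse response
    return [sum(h[l] * x[n - l] for l in range(len(h)) if n - l >= 0)
            for n in range(len(x))]

def ofdm_rx(y, h):
    t = y[CP:CP + N]                     # remove cyclic prefix
    Y = dft(t)                           # back to frequency domain
    H = dft(h + [0] * (N - len(h)))      # channel frequency response
    return [Y[k] / H[k] for k in range(N)]  # one-tap equalizer

tx_syms = [QPSK[k % 4] for k in range(N)]
h = [0.9, 0.3, 0.1]                 # 3-tap channel, shorter than the CP
rx_syms = ofdm_rx(channel(ofdm_tx(tx_syms), h), h)
```

Because the CP is longer than the channel memory, the block seen after CP removal is the circular convolution of the time-domain symbol with the channel, so a single complex division per subcarrier recovers the transmitted constellation points exactly.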
When a symbol is mapped onto a carrier frequency, it is grouped into a column vector so that it can be fed to the IFFT module (S/P conversion, shown in Fig. 2). A cyclic prefix is appended after the carrier and also pre-pended, in order to avoid crosstalk between two sub-carriers. The CP length Ng should be greater than the channel order L in order to eliminate the delay spread; with the insertion of the Cyclic Prefix the delay spread can be absorbed, thus mitigating ICI and ISI in transmission. The value of the CP is taken as 16, as per the IEEE 802.11a standard [4]. The cumulative signal (symbol plus cyclic prefix) is sent to the DAC, which then transmits the signal through the RF arrangement. This signal, acquiring amplitude and frequency distortion as well as noise, is received at the other end (as shown in Fig. 2). In our case, to keep things simple and for comparison purposes, a quasi-static frequency-selective Rayleigh fading channel has been used [7]. The channel has also been considered constant, so that parameters related to channel estimation/channel state information do not complicate this research. The channel is considered an (L-1)-th order FIR filter, with the filter coefficients shown in Fig. 4.2. The channel impulse response for the m-th symbol can be stated in matrix form, where the subscript m denotes a different value of the channel impulse response for each OFDM symbol. The overall baseband mathematical model for the received signal is obtained by adding AWGN to the convolution of the "U" and "h" vectors, which represent the CP-extended symbol and the channel impulse response respectively; the n-th received sample during the m-th OFDM symbol can then be represented accordingly.
Since the above expression represents the last sample of the OFDM subcarrier, it can be used to present the whole signal's transfer to the receiver end as given below [3], using the Toeplitz matrix of the channel impulse response vector for the m-th symbol. The discrete-time convolution of the "h" and "U" vectors gives the received signal at the m-th position in the OFDM symbol, where v(m) is the AWGN and the remaining term represents the ISI, which affects L-1 terms. Let y(m) be the output after removing the CP; the ISI is mitigated provided the delay spread is smaller than the CP length Ng. Here C is the channel impulse response (CIR), and its eigen-decomposition yields the representation of the OFDM signal. This mathematical representation is complete in explaining the OFDM transmitting and receiving process and helps in understanding equalization, channel estimation, etc. These expressions have been used in MATLAB for the simulation, and a comparison obtained by changing the modulation technique is shown. Simulation System Parameters The parameters and values fixed in this simulation are given in Table 1. In the simulation no realistic channel has been used; only AWGN has been incorporated, rather than any other impairment, to keep the results simple. Table 1. OFDMA Parameters for Simulation. Plots of BER versus Eb/No (SNR) have been obtained with the use of different mapping schemes. The Cyclic Prefix is large enough to absorb any delay spread (the cause of ISI). The number of bits per OFDM symbol and the number of carriers are the same in all cases. Conclusion The simulation has been performed in order to check the degradation in transmission, i.e., the bit error rate, for the same set of conditions under different modulation techniques.
The channel conditions have been taken as ideal; however, simple AWGN impairments have been added. The simulation has shown that the QPSK technique performs better than any of the other techniques used. It can also be concluded that a communication system using OFDM has higher bit error rates when more bandwidth-efficient modulation schemes are used. This is because a smaller portion of spectrum is used to transfer a high data rate, which significantly wastes the advantage provided by OFDM.
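This conclusion can be cross-checked against the textbook AWGN bit-error-rate formulas for Gray-coded constellations. These are standard closed-form approximations, not the paper's simulated curves:

```python
# Textbook AWGN BER formulas (standard Gray-coding approximations, NOT
# the paper's simulation): QPSK carries 2 bits/symbol yet matches BPSK's
# per-bit error rate, while denser 16-QAM pays a BER penalty at the
# same Eb/N0.
import math

def ber_qpsk(ebno_db):
    # QPSK/BPSK over AWGN: Pb = 0.5 * erfc(sqrt(Eb/N0))
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

def ber_16qam(ebno_db):
    # Gray-coded 16-QAM approximation: Pb ~ (3/8) * erfc(sqrt(0.4 Eb/N0))
    ebno = 10 ** (ebno_db / 10)
    return 0.375 * math.erfc(math.sqrt(0.4 * ebno))

curves = {db: (ber_qpsk(db), ber_16qam(db)) for db in range(0, 12, 2)}
```

Evaluating both curves over 0 to 10 dB shows the QPSK bit error rate below the 16-QAM one at every point, consistent with the trade-off stated above between bandwidth efficiency and error rate.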
1,933.4
2014-03-18T00:00:00.000
[ "Computer Science", "Engineering" ]
Direct growth of large-area graphene and boron nitride heterostructures by a co-segregation method Graphene/hexagonal boron nitride (h-BN) vertical heterostructures have recently revealed unusual physical properties and new phenomena, such as commensurate–incommensurate transition and fractional quantum Hall states featured with Hofstadter's butterfly. Graphene-based devices on h-BN substrate also exhibit high performance owing to the atomically flat surface of h-BN and its lack of charged impurities. To have a clean interface between the graphene and h-BN for better device performance, direct growth of large-area graphene/h-BN heterostructures is of great importance. Here we report the direct growth of large-area graphene/h-BN vertical heterostructures by a co-segregation method. By one-step annealing sandwiched growth substrates (Ni(C)/(B, N)-source/Ni) in vacuum, wafer-scale graphene/h-BN films can be directly formed on the metal surface. The as-grown vertically stacked graphene/h-BN structures are demonstrated by various morphology and spectroscopic characterizations. This co-segregation approach opens up a new pathway for large-batch production of graphene/h-BN heterostructures and would also be extended to the synthesis of other van der Waals heterostructures. Direct growth of graphene on h-BN is desired to improve device performance. Here, the authors demonstrate the direct growth of large-area and continuous graphene/h-BN vertical heterostructures via a co-segregation approach. Vertically stacked heterostructures of graphene and hexagonal boron nitride (h-BN) have recently revealed various novel properties and new phenomena [1][2][3][4][5] , which show great potential for enhancing the performance of graphene-based electronic devices 6,7 .
Thus far, heterostructures of graphene and h-BN have been fabricated mainly by the typical mechanical cleavage and layer-by-layer transfer technique 6,8 , which is also used for obtaining other van der Waals heterostructures through stacking different two-dimensional crystals on top of each other 9,10 . However, the cleavage-and-transfer method is not suitable for industrial applications that require scalable low-cost approaches for the production of large-area heterostructures. To date, the growth of large-area graphene 11,12 and h-BN 13 on metal surfaces have been achieved by chemical vapour deposition (CVD) method. The graphene-on-h-BN (graphene/h-BN) or h-BN-on-graphene (h-BN/graphene) heterostructrues were obtained by stacking the CVD-grown graphene and h-BN on top of each other through the transfer process 14,15 . But the interfacial contamination is a big problem for the transfer approach, since the air, water 16 and hydrocarbon 8 can be easily trapped on surface. To obtain clean interface in the van der Waals heterostructures for better performance, direct growth of graphene and h-BN on top of each other is of great importance for their practical applications. Although CVD growth of h-BN/ graphene 17 and graphene/h-BN [18][19][20][21] heterostructures have been reported recently, it is still a big challenge to obtain continuous, large-area and uniform heterostructures in wafer scale. CVD process and segregation are the two dominant methods for the large-scale growth of graphene or h-BN on metal surfaces 22,23 . During the CVD growth of graphene or h-BN, the surface of metal catalyst plays a key role in the decomposition of gas molecules and the nucleation of absorbed atoms. But once the metal surface is deactivated by fully covered graphene or monolayer h-BN, the decomposition and nucleation rates are greatly suppressed, which depresses the formation rate of another epitaxial layer on top of the first epitaxial layer 22 . 
On the other hand, segregation methods have been recently developed for the growth of large-area and high-quality graphene 24,25 and h-BN 26,27 . Different from the gas precursors used in the CVD process, solid-state dissolved constituents (like B, C and N atoms) in the bulk of metals are used as precursors in the segregation method, which follows an underneath growth mode. For this reason, after the first layer covers the surface of the metal, dissolved foreign atoms can continue to segregate from the bulk of the metal to the interface between the first layer and the metal, forming the second layer under the first layer. Thus, the segregation method may open up a new efficient way for the growth of various van der Waals heterostructures. Here we report the direct growth of graphene/h-BN vertical heterostructures by a co-segregation method. By one-step vacuum annealing of the designed growth substrates with embedded solid-state C and (B, N) sources, dissolved C and (B, N) sources sequentially segregate on the surface of metals, directly forming large-area and continuous graphene/h-BN vertical heterostructures on the growth substrates. This co-segregation method may open up a new efficient way for the industrial preparation of large-area graphene/h-BN heterostructures. Results Synthesis of graphene/h-BN heterostructures. On the basis of our previous experiments on the segregation growth of individual large-area graphene 25 and h-BN 27 thin films, we specifically designed a sandwiched growth substrate for the preparation of graphene/h-BN heterostructures, as illustrated in Fig. 1a. The sandwiched growth substrate is composed of a solid-state (B, N)-source embedded between a C-doped nickel (Ni) top layer and a Ni bottom layer, which are sequentially deposited on a 300-nm-SiO 2 /Si wafer substrate by electron-beam evaporation of commercial Ni, BN and C-doped Ni targets.
About 2.6 at% C is included in the bulk of the C-doped Ni layer 25 , and the N:B ratio is about 11% in the (B, N)-source 27 . During the vacuum annealing of the Ni(C)/(B, N)/Ni sandwiched substrates, dissolved C atoms in the bulk of the C-doped Ni top layer first segregate from the bulk to the surface, nucleating and forming the graphene layer on top of the metal surface. The B-Ni binary phase diagram shows that B atoms can react with Ni atoms, producing Ni x B (x = 1, 2, 3) compounds 28 . Thus, during the annealing, B atoms in the (B, N) source and Ni atoms dissolve into each other through the reaction-diffusion process. N atoms are also brought into the bulk of Ni along with the dissolution of B atoms. When both B and N atoms diffuse to the surface of the top Ni(C) layer, B and N atoms start to nucleate and grow in the form of h-BN between the graphene layer and the Ni(C) surface, directly producing vertically stacked graphene/h-BN heterostructures. B and N atoms may also interdiffuse with the top graphene layer at high temperature, forming an in-plane hybrid graphene and BN region. As shown in the inset of Fig. 1b, a four-inch as-grown sample on the growth substrate can be prepared at one time, and the quantity of growth wafers is only restricted by the space of the furnace, which shows great potential for mass production of large-area graphene/h-BN heterostructures. Morphology and structure characterizations. Figure 1b shows the optical image of as-grown samples. The dark region marked with a dashed circle is quite similar to the morphology of CVD-grown multilayer graphene islands on metals 12 . To confirm the stacked heterostructures, the as-grown samples on the wafer-scale substrates were cut into small pieces and transferred to different substrates using the poly(methyl methacrylate) (PMMA)-mediated transfer-printing technique 12 .
Because the h-BN layers in the heterostructures do not directly adhere to the supporting PMMA layer during the transfer process, the h-BN layers underneath the top graphene layers can be folded and scratched away with the assistance of low-power sonication or other mechanical force, producing a separated graphene layer and graphene/h-BN heterostructures (see Supplementary Fig. 1). As shown in Fig. 1c,d, the large-area graphene/h-BN heterostructures transferred onto 285-nm-thick SiO2/Si substrates can be clearly distinguished from the separated graphene layer by the colour contrast in the optical images. Figure 1d shows an enlarged image of the region marked with a dashed line in Fig. 1c, which shows that graphene is continuous across the edge between the graphene/folded-h-BN and the separated graphene. By controlling the transfer process, the h-BN layers can be retained or fully removed, leaving the complete graphene/h-BN heterostructures or an isolated graphene layer for other characterizations. As shown in Fig. 1e-g, high-resolution transmission electron microscopy (TEM, FEI F30, operated at 300 kV) was used to examine the microstructure of the graphene/h-BN heterostructures. Free-standing graphene/h-BN heterostructures were transferred onto TEM grids. As shown in Fig. 1e,f, clear Moiré patterns are observed in the high-resolution TEM images of the complete graphene/h-BN heterostructures. As shown in Fig. 1f, the Moiré pattern wavelength is measured to be about 1.28 nm, and the fast Fourier transform pattern of Fig. 1f displays two sets of hexagonal spots with a rotation angle of about 11° (inset of Fig. 1f). As the lattice mismatch between graphene and h-BN (~1.8%) is very small, the Moiré pattern can be ascribed to the rotation between the graphene lattice and the h-BN lattice 3, or to the rotation between two kinds of h-BN or graphene lattices (NATURE COMMUNICATIONS | DOI: 10.1038/ncomms7519). As shown in the inset of Fig.
1g, only one set of hexagonal spots is distinguished, which suggests that the rotation angle between graphene and h-BN is close to 0° in the examined region (Fig. 1g). Figure 1g demonstrates that the hexagonal lattice spacing is about 0.25 nm, which is close to the h-BN lattice constant (0.250 nm) and the graphite lattice constant (0.246 nm) 23. After removing the bottom h-BN layers of the vertical heterostructures, the separated graphene was transferred onto TEM grids for TEM characterization, as shown in Fig. 1h,i and Supplementary Fig. 2. Spherical aberration-corrected TEM (FEI 80-300 environmental Titan (S) TEM, operated at 80 kV) was performed on the separated graphene to obtain high-resolution TEM images. Figure 1h shows the aberration-corrected HRTEM image of a double-layer region of the separated graphene, which reveals Moiré patterns with a wavelength of about 1.62 nm (inset of Fig. 1h). By observing folded edges of the separated graphene in TEM images, most of the observed regions are found to be single layer (Fig. 1i). Some double-layer and multilayer graphene regions are also observed (Fig. 1i), consistent with the multilayer regions marked with dotted circles in Fig. 1b,d. The corresponding scanning electron microscopy images (see Supplementary Fig. 3) and atomic force microscopy images (see Supplementary Fig. 4) also reveal that most regions of the separated graphene are single- to few-layer graphene, with multilayer graphene regions only sporadically distributed.

X-ray photoelectron spectroscopy characterization. X-ray photoelectron spectroscopy (XPS) was used to examine the vertically stacked graphene/h-BN heterostructures grown on the surface of the growth substrates, since XPS is a surface-sensitive technique that collects the signals of electrons escaping from the top 0 to about 10 nm of the test materials. As shown in Fig.
2a,b, the B1s peak and N1s peak are located at ~190.1 eV and ~397.7 eV, respectively, in good agreement with the standard XPS values for h-BN 13,27. The N:B atomic ratio calculated from the XPS data is about 1.01:1, which is close to the 1:1 ratio in h-BN. As shown in Fig. 2c, the C1s peak at 284.8 eV corresponds to the C-C bonding in graphene or graphite. A small N1s peak at 400.1 eV and a C1s peak at 288.6 eV are also observed in Fig. 2b,c, respectively, reflecting the presence of bonding between N and C atoms 29,30. The N-C bonds indicate that in-plane hybridization of graphene and h-BN may also exist in the heterostructures 30. In addition, individual C atoms can be dissolved in Ni 31. Thus, the observed intensity of C1s comes from two parts: C atoms in graphite and individual C atoms dissolved in Ni. After etching the top 0.3 nm of the samples, the intensity of C1s decreases sharply (Fig. 2f); however, the intensity of N1s is almost unchanged (Fig. 2e), which agrees well with the construction of vertically stacked graphene-on-h-BN heterostructures. From 0 to 0.6 nm depth, the integrated intensity (area) of the N1s peak is almost unchanged, while that of C1s decreases sharply (see Supplementary Fig. 5), which supports the vertically stacked graphene-on-h-BN structure. The subsequent decrease of the C1s peak is ascribed to the etching of sporadic multilayer graphene islands.

Other spectroscopic characterizations. To further demonstrate the presence of graphene and h-BN in the heterostructures, Raman and ultraviolet-visible absorption spectroscopy were used to characterize the transferred samples. Figure 3a shows the Raman spectra recorded on the regions of graphene/h-BN and separated graphene (as indicated in Fig. 1c). The typical peaks of graphene, such as D (~1,363 cm⁻¹), G (~1,593 cm⁻¹), 2D (~2,714 cm⁻¹) and D+G (~2,958 cm⁻¹), are displayed in both the graphene/h-BN and separated graphene regions, indicating the presence of graphene in the heterostructures 32.
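The Moiré wavelengths reported above can be cross-checked against the standard Moiré superlattice relation for two hexagonal lattices with lattice mismatch δ and twist angle θ, λ = (1+δ)a / √(2(1+δ)(1−cos θ) + δ²). A minimal sketch (the formula and lattice constants are standard literature values, not taken from this work's methods; the 8.7° twist for the double-layer case is our own back-calculated value, not reported in the text):

```python
import math

def moire_wavelength(a_nm, delta, theta_deg):
    """Moire superlattice wavelength for two hexagonal lattices
    with lattice mismatch delta and relative twist theta."""
    theta = math.radians(theta_deg)
    return (1 + delta) * a_nm / math.sqrt(
        2 * (1 + delta) * (1 - math.cos(theta)) + delta ** 2)

a_graphene = 0.246   # graphene lattice constant, nm
delta_hbn = 0.018    # ~1.8% graphene/h-BN lattice mismatch

# Graphene on h-BN at the ~11 deg twist seen in the FFT of Fig. 1f:
lam = moire_wavelength(a_graphene, delta_hbn, 11.0)
print(f"{lam:.2f} nm")   # ~1.29 nm, consistent with the measured ~1.28 nm

# Twist-only Moire between two graphene layers (delta = 0): a twist of
# roughly 8.7 deg would reproduce the ~1.62 nm pattern of Fig. 1h.
lam2 = moire_wavelength(a_graphene, 0.0, 8.7)
print(f"{lam2:.2f} nm")  # ~1.62 nm
```

Note that for small mismatch and small twist the formula reduces to the familiar limits λ ≈ a/δ at θ = 0 and λ ≈ a/(2 sin(θ/2)) at δ = 0.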
Raman mapping of the separated graphene was also used to identify the multilayer graphene islands (see Supplementary Fig. 6). Because of the overlap with the D peak of the top graphene and the strong photoluminescence background, the Raman peak of h-BN at ~1,370 cm⁻¹ is not clearly identified in the graphene/h-BN region (Fig. 3a). Optical images of the transferred samples before and after plasma etching (see Supplementary Fig. 7) reveal the isolated h-BN in the heterostructures (see Supplementary Fig. 8). The photoluminescence background displayed in the graphene/h-BN region can be ascribed to defect states of h-BN, such as line defects along the grain boundaries 27,34. For the as-grown samples transferred to quartz substrates, the ultraviolet-visible absorption spectrum shows two peaks at ~200 and ~270 nm (Fig. 3b), which correspond to the optical bandgap of h-BN 27 and the excitonic effects in graphene 35, respectively. After plasma etching (inset of Fig. 3b), the graphene excitonic peak at ~270 nm disappears, indicating the removal of the top graphene layer in the vertical graphene-on-h-BN heterostructures. Auger electron spectroscopy (AES) was also used to investigate the graphene/h-BN and separated graphene regions of the samples transferred onto a SiO2/Si substrate. In comparison with XPS, AES is more surface-sensitive, and its signals come mainly from the top layers. As shown in Fig. 4a, C signals are detected in both the graphene/h-BN and separated graphene regions. As sputtering time increases, the C concentrations in the two regions both decrease, indicating that the top graphene layer in the heterostructure is etched first. The B and N atoms doped in the graphene are also etched away as sputtering time increases (Fig. 4c), and the increase of the Si and O concentrations also agrees with the etching of the top graphene layer. However, the B and N concentrations in the graphene/h-BN region remain the same as sputtering time increases (Fig. 4b), indicating that the h-BN underneath the graphene layer is not etched first.
These AES results agree well with the construction of graphene-on-h-BN heterostructures. To visualize the structure of the graphene/h-BN heterostructures displayed in Fig. 1c, AES elemental mapping (Fig. 4d-i) was taken on the transferred samples near the boundary between the graphene/h-BN region (upper left regions of Fig. 4d-i) and the separated graphene region (lower right regions of Fig. 4d-i). As shown in Fig. 4d, the intensity distribution of C before plasma etching is nearly the same in the graphene/h-BN and separated graphene regions, indicating that graphene is the topmost layer in the vertical heterostructure. For the B and N maps before plasma etching, however, the intensities in the graphene/h-BN region are stronger than those in the separated graphene region (Fig. 4e,f). After the top 2 nm is etched away by Ar ions (same process as in Fig. 4b,c), the C (Fig. 4g), B (Fig. 4h) and N (Fig. 4i) intensities in the previously separated graphene region are close to zero. The white dots shown in Fig. 4g are residual multilayer graphene regions. Some residual C still exists in the previous graphene/h-BN region (Fig. 4g), which may be ascribed to carbon doped into the h-BN layer or to the difference in etching rate on SiO2 and h-BN substrates (see Supplementary Fig. 9). The B and N elements only exist in the previous graphene/h-BN region after plasma etching (Fig. 4h,i), indicating that the h-BN layers lie under the top graphene layer in the vertical heterostructure.

Discussion

By removing the bottom h-BN layers in the transfer process, the top graphene layer of the heterostructures can be completely separated for further characterizations. Figure 5a shows the optical image of the large-area separated graphene transferred onto a SiO2/Si substrate. The ultraviolet-visible absorption spectrum of the separated graphene transferred onto quartz also displays two characteristic peaks at ~200 and ~270 nm (Fig. 5b), which indicates the presence of h-BN and graphene domains 30.
The bonding between N and C atoms shown in the XPS (Fig. 2b,c), the D peak in the Raman spectra (Fig. 3a) and the B and N signals in the AES of the separated graphene (Fig. 4a) also reveal the in-plane hybridization of h-BN and graphene domains in our separated samples, as illustrated in the inset of Fig. 5b. For our co-segregation method, the hybridization of h-BN and graphene domains may be ascribed to the interdiffusion between B, N and C atoms in the high-temperature growth process, which is also a general phenomenon in the CVD growth of stacked heterostructures of graphene and h-BN 17,20. Recent theoretical and experimental studies on the in-plane hybridization of h-BN and graphene have revealed various novel tunable magnetic and electronic properties of this hybrid material 36,37. To investigate the influence of the in-plane hybridization on the electrical transport properties, back-gated field-effect transistors of the separated graphene were fabricated on 285-nm-thick SiO2/Si substrates by optical lithography and subsequent electrode deposition (inset of Fig. 5c). The linear current-voltage (I-V) curve of the separated graphene reveals its conducting nature (Fig. 5c). Because of the BN domains hybridized into the graphene lattice, the sheet resistance of the separated graphene (R_s = R·W/L ≈ 200 kΩ per square) is approximately three orders of magnitude higher than that of pristine CVD-grown graphene 12. Figure 5d shows the source-drain current (I_ds) as a function of the back-gate voltage (V_bg) at a source-drain voltage (V_ds) of 0.1 V. The I_ds-V_bg curve of the separated graphene shows ambipolar semiconducting behaviour, similar to that of pristine CVD-grown graphene 12. The carrier mobility is calculated to be about 1.5 cm² V⁻¹ s⁻¹, which is on the same level as CVD-grown in-plane-hybridized graphene-BN films 30.
Because the carrier mobility of the separated graphene is still low at present, no enhanced electrical performance is observed in the graphene-on-h-BN heterostructures (see Supplementary Fig. 10). However, a recent study shows that good control over the domain size and relative concentration of graphene and h-BN domains can improve the carrier mobility and on/off ratio 37. Thus, further control over the in-plane hybridization of graphene and h-BN domains in our samples is still needed to achieve better performance. In our co-segregation method, the annealing temperature and the structure of the Ni(C)/(B, N)/Ni sandwiched growth substrate are the two key controlling parameters. The diffusion rates of dissolved B, N and C atoms increase with temperature. Thus, for the same given time, the amount of B and N atoms that diffuse to the metal surface increases with temperature, which can be used to control the thickness of the underlying h-BN layers in the heterostructures (see Supplementary Fig. 11). By increasing the heating temperature from 950 to 1,000 and 1,050 °C, the thickness of the h-BN increases from 10 to 16 and 25 nm (see Supplementary Fig. 12). The coverage and size of the multilayer graphene islands decrease as temperature increases (see Supplementary Fig. 13), because the carbon solubility in the Ni bulk increases with temperature 28. Because the C atoms in the segregated graphene come from the Ni(C) layer, the graphene growth depends strongly on the thickness of the Ni(C) layer (see Supplementary Fig. 14). A thicker Ni(C) layer leads more carbon to form multilayer graphene islands, while graphene cannot be formed when the Ni(C) is too thin to supply the carbon needed for the growth of monolayer graphene (see Supplementary Fig. 14). Our previous work on the segregation growth of h-BN has shown that the thickness of the (B, N) source is also an important parameter for controlling the thickness of the h-BN 27.
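The temperature series above (10, 16 and 25 nm of h-BN at 950, 1,000 and 1,050 °C for the same annealing time) is consistent with thermally activated diffusion. As a rough illustration, fitting ln(thickness) against 1/T yields an effective activation energy; this Arrhenius treatment is our own back-of-the-envelope estimate under the stated assumption of a thermally activated, diffusion-limited supply, not an analysis performed in the paper:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

# h-BN thickness vs annealing temperature (Supplementary Fig. 12)
temps_c = [950, 1000, 1050]
thick_nm = [10, 16, 25]

# Least-squares fit of ln(t) = ln(t0) - Ea / (kB * T)
x = [1.0 / (tc + 273.15) for tc in temps_c]
y = [math.log(t) for t in thick_nm]
xm, ym = sum(x) / len(x), sum(y) / len(y)
slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))
ea_ev = -slope * K_B
print(f"effective activation energy ~ {ea_ev:.2f} eV")  # ~1.3 eV
```

An effective barrier of roughly 1.3 eV is plausible for solute diffusion and segregation in Ni, but with only three data points the number should be read as an order-of-magnitude check, not a measurement.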
Our controllable experimental design for the co-segregation growth of graphene/h-BN heterostructures may be extended to other van der Waals heterostructures by designing new sandwiched growth substrates with other dissolved foreign atoms, such as Mo(S)/(B, N)/Ni growth substrates to obtain vertical heterostructures of MoS2 and h-BN. In summary, we have developed a novel co-segregation method to realize the direct growth of wafer-scale vertically stacked graphene/h-BN heterostructures. The C and (B, N) sources dissolved in the Ni(C)/(B, N)/Ni sandwiched growth substrates can be sequentially segregated onto the metal surface by vacuum annealing, directly forming the graphene top layer and the h-BN underlayers. The vertically stacked van der Waals heterostructures were confirmed by various morphological and spectroscopic characterizations. The graphene top layer of the heterostructure can be separated for characterizations, which reveal its actual in-plane hybridization structure. This work opens up a new way for the mass production of large-area vertically stacked graphene/h-BN heterostructures and other van der Waals heterostructures.

Methods

Preparation of growth substrates. The Ni film (400 nm), (B, N) source (60 nm) and carbon-doped Ni film (150 nm) are sequentially deposited on four-inch 300-nm-thick SiO2/Si wafer substrates using an electron-beam evaporator (ULS400, Balzers, pressure 1 × 10⁻⁵ Pa before evaporation), forming the sandwiched growth substrate Ni(C) 150 nm/(B, N) 60 nm/Ni 400 nm/SiO2/Si. The weight purities of the commercial Ni and BN targets are 99.99%. The carbon content of the carbon-doped Ni target is ~1 wt%. The working pressure for Ni and carbon-doped Ni evaporation is 1-3 × 10⁻⁵ Pa, while the working pressure for the (B, N) source is 10⁻²-10⁻¹ Pa because of the decomposition of BN during the evaporation process.

Growth procedures. The sandwiched Ni(C)/(B, N)/Ni samples were loaded into a vacuum annealing furnace (VTHK-350, Beijing Technol Science Co.)
for annealing. After the furnace was evacuated to ~5 × 10⁻⁵ Pa, the samples were heated to the desired temperatures (950, 1,000 and 1,050 °C) at a rate of 20 °C min⁻¹ and maintained at those temperatures for 10 min at a working pressure of 10⁻³-10⁻⁴ Pa. The samples were then cooled down to room temperature naturally, at a rate of 2-50 °C min⁻¹, after the heater power was shut down. The graphene/h-BN heterostructure films can be detached from the growth substrates and transferred to target substrates using the typical PMMA-mediated transfer-printing technique 12. All data were collected on samples grown at the three desired heating temperatures of 950, 1,000 and 1,050 °C: Fig. 1 and the Raman data in Fig. 3a were taken on the 950 °C sample; the XPS data in Fig. 2 and the ultraviolet-visible data in Fig. 3b were taken on the 1,000 °C sample; Figs 4 and 5 were taken on the 1,050 °C sample.
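The growth parameters stated in Methods can be collected into a single machine-readable recipe; this is simply a restatement of the numbers above (the key names are our own, and the 25 °C room-temperature starting point in the ramp estimate is an assumption):

```python
# Co-segregation growth recipe, restated from the Methods section.
# Key names are illustrative; all values are taken from the text.
growth_recipe = {
    "substrate_stack": [                      # bottom to top
        ("SiO2/Si wafer", "300 nm SiO2"),
        ("Ni", "400 nm"),
        ("(B, N) source", "60 nm"),
        ("Ni(C), ~1 wt% C target", "150 nm"),
    ],
    "base_pressure_pa": 5e-5,                 # before heating
    "anneal_temperatures_c": [950, 1000, 1050],
    "ramp_rate_c_per_min": 20,
    "hold_time_min": 10,
    "working_pressure_pa": (1e-4, 1e-3),
    "cooling": "natural, 2-50 C/min after heater shutdown",
}

# Example: ramp duration to the lowest setpoint, assuming a 25 C start.
ramp_minutes = ((growth_recipe["anneal_temperatures_c"][0] - 25)
                / growth_recipe["ramp_rate_c_per_min"])
print(f"ramp to 950 C takes ~{ramp_minutes:.0f} min")   # ~46 min at 20 C/min
```

Such a structure makes it straightforward to sweep the two key controlling parameters identified in the Discussion, the annealing temperature and the layer thicknesses of the sandwiched substrate.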
EMPLOYEE PERFORMANCE IN IMPLEMENTING COMPLETE SYSTEMATIC LAND REGISTRATION: A STUDY ON THE OFFICE OF AGRARIAN AFFAIRS AND SPATIAL PLANNING/NATIONAL LAND AGENCY OF KUPANG REGENCY, INDONESIA

This study aimed to describe and analyze employee performance in implementing Complete Systematic Land Registration (Program Pendaftaran Tanah Sistem Lengkap, PTSL) in the Office of Agrarian Affairs and Spatial Planning/National Land Agency of Kupang Regency. In detail, the present study aimed to describe and analyze the quality, quantity, effectiveness, and timeliness of the land registration using a descriptive qualitative approach. The subject of the study was the Office of Agrarian Affairs and Spatial Planning/National Land Agency of Kupang Regency because it was in charge of the land registration program. Respondents were chosen purposively. The focus of the study was employee performance, with (1) quality, (2) quantity, (3) effectiveness, and (4) timeliness as the sub-foci. The findings suggest the following. First, employees' skills and ability in understanding the objectives and guidelines of the program, as well as information and technology, significantly affected their performance. Second, employee quantity significantly affected performance: the higher the quantity, the better the performance, and vice versa. Third, timeliness significantly affected performance. Fourth, effectiveness significantly affected performance: the higher the effectiveness, the better the performance, and vice versa. Kupang Regency is one of the supporting regencies for the provincial capital. It has the largest administrative area in East Nusa Tenggara, with a total area of 5,298.13 km² consisting of 3,278.25 km² of ocean, 288,397 ha of forest, and 95,856.19 ha of land (Central Bureau of Statistics Kupang Regency, 2019).
The detailed information is presented in the following table. The Office of Agrarian Affairs and Spatial Planning/National Land Agency of Kupang Regency (Kantor Kementerian Agraria dan Tata Ruang/Badan Pertanahan Nasional, ATR/BPN) is in charge of collecting data and issuing land certificates. It has made many achievements throughout the years, including achieving its targets for land registration. The office has issued 100,813 land certificates covering 626,025.979 m² (65.31%) of the total land area of Kupang Regency. ATR/BPN Kupang Regency keeps improving its services. Every inch of land must have legal certainty in order to minimize potential conflicts. Unfortunately, not all people understand how to register land or how their applications for land certificates are processed. The community has the right and opportunity to obtain legal guarantees for their land through a complete systematic registration process. The data on the achievement of the Complete Systematic Land Registration (Pendaftaran Tanah Sistematis Lengkap, PTSL) in Kupang Regency are as follows: Table 2 shows that the national target increases by two million per year, and it applies to all regions in Indonesia, including Kupang Regency. Besides the achievements of PTSL, there are possibilities for failure in the process, one of which is related to the employees or officials of ATR/BPN. ATR/BPN of Kupang Regency can only appoint 17 officials to do the certification as mandated by the central government. These officials or employees are appointed based on a Letter of Statement issued by the head of ATR/BPN. The low number of employees and the high target of certificates to issue make employees work very hard. The 2017 target could be achieved only because the local officials of ATR/BPN Kupang Regency were assisted by officials from the provincial office. As such, we were interested in studying this situation.
This study aimed to describe and analyze employee performance in implementing Complete Systematic Land Registration (Program Pendaftaran Tanah Sistem Lengkap, PTSL) in the Office of Agrarian Affairs and Spatial Planning/National Land Agency of Kupang Regency. In detail, the present study aimed to describe and analyze the quality, quantity, effectiveness, and timeliness of the land registration using a descriptive qualitative approach. It is expected that the findings will be beneficial for the government as a reference in policymaking and for program implementers in improving the implementation of programs and policies. The study should also become a source of knowledge and information for further studies and interested parties.

LITERATURE REVIEW

Work performance refers to the results or outputs of a particular job function or certain activity in a certain period of time (Gomes, 1995). Performance measurement is a way to measure individual contributions to the organization (Gomes, 1995). Employee performance is positioned as the dependent variable in empirical studies because it is seen as a result or impact of organizational behavior or human resource practices, not as a cause or determinant. Gomes (1995) further explained two criteria for measuring employee performance: (1) result-based performance evaluation and (2) behavior-based performance evaluation. Result-based performance evaluation means measuring performance based on the organizational goals achieved, that is, measuring final results only. Organizational goals are set by management or workgroups, employees are then encouraged to achieve the goals, and their performance is assessed based on how far the employees have achieved them. This measurement criterion refers to Management by Objectives (MBO). The advantage of this method is that it helps to set clear and quantitatively measurable criteria and performance targets.
However, the main weakness is that many jobs cannot be measured quantitatively in the practice of organizational life, so this approach is considered to ignore non-quantitative performance dimensions (Gomes, 1995). Behavior-based performance evaluation emphasizes the means of achieving goals rather than the final results. It tends to measure qualitative rather than quantitative aspects. It is generally subjective and assumes that employees can accurately describe effective performance for themselves and their co-workers (Gomes, 1995). Its main weakness is that it is prone to measurement bias because performance is measured by perception. To overcome this, the use of instruments that measure many aspects of specific behavior, such as innovative behavior, taking the initiative, level of self-potential, time management, achievement of quantity and quality of work, self-ability to achieve goals, relationships with colleagues and customers, and knowledge of the company's products and competitors' products, can help accommodate a very wide range of performance measures and provide a comprehensive picture of job performance (Babin and Boles, 1998; Bono and Judge, 2003). The quality of human resources determines an organization's performance. Performance has a broad meaning: it covers both the results of work and the ongoing work process. According to Armstrong and Baron (1998), performance results from work that has a strong relationship with the organization's strategic objectives, customer satisfaction, and contribution to the economy. An organization must monitor, assess, and review the performance of its human resources. These three activities help to determine whether employee performance is in line with the targets. If the targets are not achieved, it is necessary to evaluate employee performance.
Companies will usually use assessment indicators in evaluating employee performance. Robbins (2006), in "Organizational Behavior: Measurement of Individual Employee Performance," mentions five indicators to evaluate employee performance: quality, quantity, timeliness, effectiveness, and independence. Quality is one of the main criteria that determine product selection for customers. Customer satisfaction will be achieved if the product quality meets their needs. Deming (1982) defines quality as continuous improvement based on statistical tools with a bottom-up process. Deming (1982) does not include the cost of customer dissatisfaction because he thinks that such costs cannot be measured. Deming's strategy is to look at processes to reduce variation, since improving quality will reduce costs. He has a strong belief in empowering workers to solve problems, providing management with the right tools. Meanwhile, according to Taguchi (1987), quality is a loss to society: a deviation from the target means a reduced quality function. In turn, the reduced quality will incur costs. Taguchi's (1987) strategy focuses on improving efficiency for repairs and cost considerations, particularly in the service industry. Then, seen from the research problems, we argue that quality is an increase in the ability and skills of employees, developed systematically according to the demands of growing science and technology. Work quantity is a measure of how long an employee can work in one day; in other words, work quantity is any kind of unit of measure related to the amount of work, expressed in numbers or other equivalent figures (Mangkunegara, 2009). Furthermore, Chin & Osborne (2008) argue that quantity is the number of questions. Brotoharsojo and Wungu (2003) define quantity as any form of measurement unit related to the amount of work, expressed in numbers or any units that can be matched with numbers.
Wilson and Heyyel (1987) write that quantity of work is the amount of work carried out by an employee in a certain period. It can be seen from how employees use a certain time and speed to complete their tasks and responsibilities. The amount of work is the number of tasks that can be done, and the use of time is the amount of time used in completing tasks and work. In this study, we define quantity as a number or amount. Thus, based on the research problem, quantity is the number of employees appointed by the Letter of Statement issued by the head of ATR/BPN to take care of the PTSL program. Timeliness is the use of information by decision-makers before the information loses its decision-making capacity (Chairil and Ghozali, 2001, in Ukago, 2005). Timeliness is very important for information users: information must be timely, reaching its main users before it spreads to others. Timely information means information must be submitted as early as possible so that it can be used to make economic decisions and to avoid delays in making these decisions (Baridwan, 1997, in Tjiptono & Anastasia, 2003). We define timeliness as how far, or how well, an activity is completed or a result is produced at the earliest time, in coordination with other outputs and maximizing the time available for other activities. More specifically, it means completing a task on time or earlier than the predetermined target time. Sedarmayanti (2001) defines effectiveness as a measure that explains how far a target can be achieved. This understanding of effectiveness is more output-oriented, while the use of inputs is less of a concern. If efficiency is associated with effectiveness, an increase in effectiveness does not necessarily mean an increase in efficiency.
Siagian (2001) defines effectiveness as utilizing resources, facilities, and infrastructure in a certain amount, consciously determined beforehand, to produce a number of goods and services through the activities carried out. Effectiveness, in this case, indicates success in terms of whether or not the targets have been achieved: the closer the results of the activity are to the target, the higher the effectiveness. Land registration comes from the word cadastre (kadaster in Dutch), a technical term for a record indicating the area, value, and ownership of a plot of land (Parlindungan, 1999). This word comes from the Latin word capistratum, a register or capita or unit for the Roman land tax (Capotatio Terrens). In a strict sense, a cadastre is a record of land, the value of the land, and the holder of its rights for tax purposes. Thus, the cadastre is an appropriate tool for providing this description and a continuous record of land rights. Land registration, according to Article 1 of Government Regulation Number 24 of 1997 on Land Registration, refers to a series of activities done on an ongoing and regular basis by the government, including collecting, managing, recording, presenting, and maintaining physical and judicial data in the forms of maps and lists of land parcels and flats, including the issuance of certificates as proof of rights for land parcels, as ownership rights for flat units, and for certain rights attached to them. Thus, the elements of land registration based on the above description are as follows:
1. A series of activities: the activities include collecting physical and judicial land data;
2. A certain office: land registration is managed by a special office known as Badan Pertanahan Nasional (BPN), or the National Land Agency;
3. On an ongoing and regular basis: the process is based on legal regulations and is carried out until citizens get the land certificate they need;
4.
Land data: the first data from land registration are the physical and judicial data. The physical data include land location, borders, building areas, and plants living on the land; the judicial data explain the rights of ownership, including the names of owners;
5. An area: it covers specific areas within an administrative unit of the Republic of Indonesia;
6. Certain lands: related to the object of land registration;
7. Evidence: there is proof of ownership in the form of a certificate.
Thus, land registration is a series of activities carried out by the government continuously and regularly regarding certain lands in certain areas, by collecting certain information and processing, recording, and presenting the data for the benefit of the people, to guarantee legal certainty over the land parcel, including the issuance of evidence and its maintenance. The government has a target to serve and provide 126 million certificates for uncertified land. However, as of 2015, only 46 million land certificates had been issued, or roughly one-third of the target. BPN can only issue 500 thousand certificates annually; at this capacity, the government's target would take 160 years to complete. One of the causes is the lack of land surveyors in all regions. As such, BPN hired more land surveyors according to its needs. BPN also conducted a land registration acceleration program through the Complete Systematic Land Registration (Program Pendaftaran Tanah Sistem Lengkap, PTSL). The result of the 2017 PTSL was a tenfold increase in land certificate issuance capacity: BPN could issue 5 million certificates in 2017. The 2018 PTSL succeeded in distributing 9 million certificates, and it was targeted that another 9 million certificates would be distributed in 2019. Kupang Regency targeted distributing 10,000 land certificates through PTSL in 2017 and 7,000 certificates in 2018. Since the regional BPN of Kupang only had eight (8) employees as land surveyors, the target was challenging.
RESEARCH FRAMEWORK

Companies that want to develop will always conduct performance appraisals of their employees, aiming to measure the capacity and ability of the employees. In evaluating the performance of employees, companies usually use an assessment indicator. We employed Robbins' (2006) theory in this study. Robbins proposed five indicators of individual employee performance measurement: quality, quantity, timeliness, effectiveness, and independence. Thus, in compiling a conceptual framework, we first looked at the previous descriptions presented in the following framework. This study employed four out of the five indicators proposed by Robbins (2006): quality, quantity, timeliness, and effectiveness. We believed that these four indicators would be enough to answer the research problems.

METHODS OF RESEARCH

This study was descriptive qualitative. The study site was the regional ATR/BPN office of Kupang Regency because the office is in charge of land registration and PTSL. Informants were chosen purposively. The focus was on employee performance, with four sub-foci: (1) quality, (2) quantity, (3) timeliness, and (4) effectiveness. Data came from primary and secondary sources and were collected through interviews, documentation, and observations. Data analysis adopted the method of Miles and Huberman: data reduction, display, and verification. Triangulation was employed to check data validity.

RESULTS AND DISCUSSION

Government institutions must adapt to the environment and its developments and continue to make changes. Every organization generally expects employees to carry out tasks with quality results, in good quantity, and completed in a timely and effective manner. Human resources must be competitive in order to produce good public services that meet the needs of the community.
The supporting factors in the implementation of the task can be described as follows: ATR/BPN officials shall identify the location, measure the area, ascertain boundaries, and do mapping as outlined in a map of land parcels. Therefore, employees capable of doing the given task are needed. Findings confirmed that the officials had been given materials on legislation and technical guidelines before PTSL was implemented. Land is vital for most Indonesian people, whose society and economy have an agrarian structure. In this case, the ATR/BPN officials had met the community's expectations related to land ownership rights. Findings showed that there was an acceleration in the process of land registration, especially the free registrations. In other words, PTSL accelerated the land registration process in Indonesia. Findings also confirmed that the people wanted legal certainty over their land. Many lands have unclear ownership status because they are inherited lands that have not yet been passed down to the next generation or customary lands whose distribution is still unclear. As such, various parties often claim such lands, so certificates cannot be issued. In processing judicial and/or physical data, the officials or employees used data processing facilities, such as computers, printers, and the internet. All of these facilities were used to support the National Land Agency Computerization (Komputerisasi Kantor Pertanahan -KKP). In the implementation of PTSL, employees had to master the applications used in their work. Measurement is important, but a lack of mastery of the instruments used can also be fatal. However, the officials of ATR/BPN Kupang Regency had mastered the measuring instruments to do their work well. The achievement of organizational goals refers to realizing organizational programs or executing routine, general, and development tasks. Achievement means that humans essentially can excel above others. 
This ability can be reached if employees have high education, sufficient experience, a good mentality, and good morals. In this case, various regulations as the legal umbrella for PTSL have been issued, coordination between agencies has been built, and various breakthroughs have been made. However, many obstacles were found during PTSL implementation because it focused on quantity and ignored quality. Findings confirmed that PTSL implementation had followed the target number of measured land parcels. Each village had a target number of measured land parcels and certificates. If a village could not meet the target, an evaluation would be conducted directly and assistance would be offered from other villages that had finished their targets. Not all villages could meet the initial targets, but all parties worked hard to achieve them. As such, the ATR/BPN office could meet its organizational targets. However, some PTSL programs had not been completed because of a lack of officials. Every organization expects to achieve all its targets, including ATR/BPN with the PTSL targets. If ATR/BPN fails to achieve its targets, it is necessary to collaborate with local governments, village governments, the armed forces, agencies, and communities. Findings confirmed that collaboration had been done in many areas, yet some areas did not seem to practice collaboration. The government has been implementing a policy to improve services related to land registration, with massive land certification in all regions across Indonesia through PTSL, under Undang-Undang Pokok Agraria (UUPA), or the Basic Agrarian Law, and all its derivative regulations. PTSL aims to give all citizens a chance to obtain their land certificates at a low price, especially for low-income groups. This program aims at providing legal certainty to land rights holders. Some challenges, however, arose in its implementation. Work quantity is a measure of how long an employee can work in one day (Mangkunegara, 2009). 
For PTSL, it means how many land areas could be physically measured by one officer a day and how many judicial reports of land were collected a day. This indicator is related to third or external parties. There had been miscommunication with these external parties, or the community, due to their low understanding of PTSL, so the officials of ATR/BPN Kupang Regency had to explain the objectives of PTSL to the community. PTSL objectives must be delivered in simple words or in the community's native language, because many community members cannot speak Indonesian fluently, so that the community can understand better. PTSL is the first complete systematic land registration implemented throughout Indonesia. PTSL is done on a village basis, or an administrative unit equal to a village, covering the collection and issuance of physical and judicial data of one or more land parcels for certification. PTSL acceleration is regulated in Article 3 Paragraph (3) of the Regulation of the Minister of Agrarian Affairs and Spatial Planning/National Land Agency Number 1 of 2017. PTSL shall follow the target given by the government. Findings showed that each ATR/BPN office is given a deadline to complete the target by the end of September each year. ATR/BPN of Kupang Regency completed its target in early September 2018. However, the short time for target completion sometimes reduced the quality of physical and judicial data. Land officials found it hard to complete the target because they had to divide their time between working on the judicial data for routine collection and the judicial data for PTSL. The officials had to stay overnight in the village where they did the land measurement. They also had to spend their weekends and holidays to finish the task. The lack of officials was one of the main causes of delayed PTSL land certification. PTSL is an acceleration of land registration and certification. 
The National Land Agency is in charge of PTSL, and it assigns each Regional Land Agency to implement PTSL. PTSL regulations aim to provide legal certainty for the process. PTSL requires government support that includes the National Land Agency and all Regional Land Agencies. There have been innovations in PTSL implementation. First, maps, documents, and drawings have been digitized. Second, the "Sentuh Tanahku" application helps landowners plot their land. Third, KKP has been beneficial for record entries, including documents, maps, certificate printing, recording, and reporting. Fourth, using the latest and most sophisticated measuring tools has been a great way to overcome the lack of officials and accelerate the measurement process. Fifth, there has been the auto-entry application that speeds up document entry. Sixth, the announcement period has been shortened from 60 days to 14 days. Seventh, document entries are done using ID numbers (ATR/BPN cooperates with the Ministry of Home Affairs). Eighth, ATR/BPN works with the Ministry of Forestry and Environment on forest area boundaries, rivers, beaches, etc. Ninth, ATR/BPN recruits more employees to work as ASK, SKB, and KJSKB to take and record physical data for PTSL all across Indonesia. PTSL targets have been seen as too ambitious, and many parties consider the targets as serving the political interests of certain people. PTSL is different from the previous programs because the president supervises, evaluates, and even directly intervenes in the process. The Ministry perfects the written regulations and legal bases, improves human resource quality, and increases the quality and quantity of facilities and infrastructure to ensure legal certainty and legal protection for the PTSL program, reduce disputes, and accelerate the program itself. 
Findings showed there were concerns that Ministerial Regulation Number 35 of 2016 jo. the Regulation of the Minister of Agrarian and Spatial Planning (Head of BPN) Number 1 of 2017 concerning PTSL cannot guarantee legal protection for certificate issuance because it must follow Government Regulation Number 24 of 1997, which is hierarchically higher. One of the problems is the length of time for the announcement and examination of judicial data, which no longer goes through the adjudication process (inspection trial in the field), increasing the time needed for PTSL. PTSL is an acceleration of land registration. Effectiveness is a benchmark for comparing the process with the goals and objectives achieved. Employees in the organization are required to work earnestly and diligently, following the procedures and work plans. Employees also have to make the best use of working time so that objectives can be achieved well and mistakes, shall there be any, can be reduced to a minimum. Findings suggested that the number of PTSL program implementers was sufficient to complete the work according to the targets. The officials could finish their work well, although they had to finish measuring 8,000 land parcels and 7,000 certificates and were responsible for completing 1,000 land parcels in the land redistribution program. They were also responsible for routine activities of issuing land certificates, with around 20,000 applications per year. This confirmed that official shortage was not one of the inhibiting factors of PTSL. Kupang Regency is hilly with slopes of up to 45°, yet part of it is lowland and coastal areas. Therefore, PTSL took a longer time to finish. However, if the officials worked extra, including working on weekends, staying overnight in the village to do land measurement, and using suitable applications, PTSL targets could be met. 
Thus, it can be stated that the effectiveness of employee work is a series of activities carried out by employees to work as planned accurately in terms of quality, quantity, and timeliness. In this case, the Kupang Regency ATR/BPN has been able to implement it quite effectively. CONCLUSION Based on the findings and discussion, the following conclusions are presented: The ability and quality of human resources in understanding programs, technical guidelines, technology, communication, and information significantly affected work quality. The better the work quality, the better the performance will be, and vice versa. Findings suggested good performance of the ATR/BPN officials. However, improvement is needed related to educational background (the educational background does not match the job), late work completion because it involved the third party (the community), and the inability to catch up with technological advancements. Employee quantity significantly affected performance-the higher the work quantity, the higher the level of employee performance, and vice versa. Findings showed good performance. However, some improvements need to be made, such as the ability of officials to communicate with the third party (the community), the ability to cooperate with law enforcement officials (the police, prosecutors, and courts), the ability to meet daily measurement targets, the ability in recording and keeping records to avoid losing judicial documents, etc. Timeliness significantly and positively affected employee performance. Findings suggested that employees were timely. Nevertheless, some improvements shall be made. Respondents asked for more time to complete the judicial documents, so it took a long time for officials to complete their job. Effectiveness significantly affected performance-the higher the effectiveness, the better the performance, and vice versa. SUGGESTIONS Based on the conclusions, the following suggestions are given: 1. 
The ATR/BPN officials must do their best in implementing PTSL so that the community can get legal certainty; 2. The ATR/BPN officials must put quality before quantity in implementing PTSL so that the community can get legal certainty. If there are mistakes in the procedure and stages of PTSL, the officials are responsible for those mistakes, not the office; 3. There must be good legal protection for the ATR/BPN officials so they can do their work well without fearing that they will be charged with criminal acts when they make mistakes in following the SOP; 4. The findings can be used to increase knowledge, insight, and additional information related to land registration.
6,200.6
2021-08-21T00:00:00.000
[ "Economics" ]
Human Protein Cluster Analysis Using Amino Acid Frequencies The paper focuses on the development of a software tool for clustering proteins according to their amino acid content. All known human proteins were clustered according to the relative frequencies of their amino acids, starting from the UniProtKB/Swiss-Prot reference database and making use of hierarchical cluster analysis. Results were compared to those based on sequence similarities. Results: Proteins display different clustering patterns according to type. Many extracellular proteins with highly specific and repetitive sequences (keratins, collagens, etc.) cluster clearly, confirming the accuracy of the clustering method. In this case, clustering by sequence and by amino acid content overlap. Proteins with a more complex structure with multiple domains (catalytic, extracellular, transmembrane, etc.), even if classified as very similar according to sequence similarity and function (aquaporins, cadherins, steroid 5-alpha reductase, etc.), showed different clustering according to amino acid content. Availability of essential amino acids according to local conditions (starvation, low or high oxygen, cell cycle phase, etc.) may be a limiting factor in protein synthesis, whatever the mRNA level. This type of protein clustering may therefore prove a valuable tool in identifying so far unknown metabolic connections and constraints. Introduction "Epigenetics" can be broadly used to describe any aspect other than a DNA sequence able to alter a phenotype without changing its genotype. The science of epigenetics - the study of reversible changes in gene function that occur without a change in the DNA sequence - is transforming the nature-nurture debate. It has been speculated that dynamic epigenetic processes, operating at the interface between the genome (nature) and the environment (nurture), strongly influence the complexity of living organisms in health and illness [1]. 
Cell chemical processes used to regulate gene expression and specific mRNA synthesis (transcription) include methylation, phosphorylation, and acetylation, and are usually regarded as canonical tools of this regulation. However, the step from mRNA to protein (translation) also has absolute requirements, including the ribosomal machinery, tRNA, ATP supply, and local amino acid (AA) availability. To date, the effect of AA availability as a regulating factor of protein synthesis has not been extensively investigated. The glutamine requirement for purine base synthesis [2] and the leucine effect on mTOR expression [3] are well known, but protein synthesis rate is usually correlated with mRNA amount and not with local essential AA concentration. In our research we assume that local AA availability is a limiting factor for the synthesis of any given protein. It is well known that mRNA has a limited life span and that factors affecting its expression and stability are powerful modulators of protein synthesis. In the case of AA scarcity, the supply rate of some tRNA-amino acid complexes may become the prevailing limiting factor, provided the mRNA lifespan is shorter than the time required to collect all the required AA. We therefore hypothesize that the relative abundance of proteins in different cellular setups may also depend on the local availability of AA. The AA percentage of a protein should mirror local AA availability. Our clustering tool is intended for the identification of homogeneous groups of proteins whose synthesis can be regulated by the relative abundance of selected AA in proper experimental settings. The protein sequences of all human genes were extracted from the reference database UniProtKB/Swiss-Prot and clustered using agglomerative, or bottom-up, hierarchical cluster analysis. Every protein initially corresponds to a one-point cluster and, in each subsequent step, the two 'closest' clusters were merged until only one remained. 
The agglomerative approach offered advantages such as more flexible clustering, as well as often producing higher quality trees. Data The source of data was the protein sequences in FLAT file format from the UniProtKB/Swiss-Prot protein database, which provides protein sequences with extensive annotation and cross-references. The database is regularly updated and is a section of UniProtKB [4]. UniProtKB is organized in two sections: 1) UniProtKB/Swiss-Prot, which is the main database, manually curated, which means that the information in each entry is annotated and reviewed by a curator; 2) UniProtKB/TrEMBL, which is the supplement database of Swiss-Prot containing computer-annotated entries that undergo a number of checks before their publication in UniProtKB/Swiss-Prot. The data, stored in one single file containing the FLAT format records of 20,244 human proteins, were obtained from the Expasy portal, which is an extensible and integrative portal to access many scientific resources, databases, and software tools in different areas of life sciences [5]. We adopted the 1-letter and 3-letter standard amino acid abbreviation codes used in UniProtKB/Swiss-Prot, which is the standard adopted by the Commission on Biochemical Nomenclature of the IUPAC-IUB [6]. Proteins were labelled according to the UniProtKB [4] nomenclature. Methods In UniProtKB/Swiss-Prot, each entry in the FLAT file contains an ID (Identification) line and a SQ (SeQuence header) line with the length of the sequence and the sequence in amino acids. A Perl program was implemented in order to process the data concerning the human proteins contained in the FLAT file format. The output was a table with protein IDs and the amino acid relative frequencies, which is available at http://hdl.handle.net/2318/836 (or http://aperto.unito.it/handle/2318/836). The program read each entry in the FLAT file and analyzed the SQ line in order to compute the relative frequencies of each amino acid type in each protein. 
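As an illustration of the per-protein frequency computation described above (sketched here in Python rather than the original Perl, with a made-up toy sequence), the core step amounts to counting residues and dividing by the sequence length:

```python
from collections import Counter

# The 20 standard amino acids, 1-letter IUPAC-IUB codes
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def relative_frequencies(sequence):
    """Return the relative frequency of each standard amino acid
    in a protein sequence, rounded to three decimal places as in
    the output table described in the text."""
    counts = Counter(sequence)
    total = len(sequence)
    return {aa: round(counts.get(aa, 0) / total, 3) for aa in AMINO_ACIDS}

# Toy 33-residue sequence (hypothetical, not from UniProtKB)
freqs = relative_frequencies("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```

One such 20-component frequency vector per protein forms a row of the table that is then fed to the clustering step.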
The relative frequency was a five-digit floating-point number with three digits after the decimal point. A hierarchical cluster protein analysis was performed using this table as input for the Fastcluster R package [7]. Clustering is the process of partitioning a set of objects into subsets, called clusters, so that each subset contains similar objects, and the objects in separate subsets are dissimilar [8]. The power of cluster analysis arises from the fact that it can group similar data without any a priori knowledge. Clustering methods can be divided into two basic types: partitional and hierarchical. A commonly used partitional clustering method is K-means, a process to partition an N-dimensional population into k sets on the basis of a sample [9]. K-means requires that the choice of the number of clusters is made in advance. Given a set of points, hierarchical clustering creates a binary tree of the data by successively merging groups of similar points. Hierarchical clustering only requires a measure of similarity between groups of data points, and then it can gradually build clusters. There are two main categories of hierarchical clustering: agglomerative and divisive. An agglomerative clustering starts with one-point clusters and recursively merges the two most appropriate clusters. A divisive clustering starts with one cluster of all data points and recursively splits the most appropriate cluster. The popularity of agglomerative clustering is largely due to its ability to use arbitrary clustering dissimilarity or distance functions and the conventional wisdom that it produces higher quality trees than divisive or incremental approaches [10]. We chose to run the hierarchical cluster analysis for its independence from the choice of the number of clusters. 
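The bottom-up merging described above can be sketched in a few lines of pure Python (a deliberately naive illustration using average linkage on Euclidean distances, not the Fastcluster implementation the paper actually used):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def agglomerative(points):
    """Naive bottom-up hierarchical clustering with average linkage:
    every point starts as its own cluster, and the two closest
    clusters are repeatedly merged until one remains.
    Returns the sequence of merges."""
    clusters = [[i] for i in range(len(points))]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # average-linkage distance between clusters i and j
                d = sum(euclidean(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merges.append((clusters[i], clusters[j]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges

# Toy 2-D "frequency vectors": points 0 and 1 are close, point 2 is far
pts = [(0.1, 0.2), (0.12, 0.21), (0.9, 0.8)]
order = agglomerative(pts)
```

The merge order directly encodes the binary tree (dendrogram) structure; production tools such as Fastcluster compute the same hierarchy far more efficiently.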
Hierarchical cluster analysis was performed using the R package Fastcluster, which implements fast hierarchical, agglomerative (bottom-up) clustering based on the seven most widely used schemes: single, complete, average, weighted, Ward, centroid, and median linkage [7]. Similarity Measure Protein sequence clustering is a process which aims to identify sets of homologous proteins in a protein database [11][12][13]. There are many ways to compute similarity between two protein sequences. Generally, the target sequences are aligned depending on the position of the amino acids, and the resulting scores are used to calculate a measure of similarity [14]. In our case, the relative frequency of the amino acids in protein sequences was taken as the measure of similarity. Ward's method and the Euclidean metric were chosen to compute the distance between the relative frequencies of amino acids in the proteins. The resulting vector of distances was transformed into Newick format using the ctc R package [15], in order to be visualized with the graphical editor TreeGraph2 (http://treegraph.bioinfweb.info/) and to extract meaningful subtrees that visualize the distances between clusters. Results and Discussion Representation of clusters was given by means of a cladogram. The distribution of proteins along the cladogram was analysed for different groups of proteins belonging to the same group according to sequence similarities. In Figure 1, the cladogram highlights a portion of the group of keratin-associated proteins [16]. TreeGraph2 automatically sets line widths or colours according to the value of variables that can be assigned to each node or branch [17]. Keratins are extracellular structural proteins with a very repetitive structure. In this case, more than 90% of our clustering overlapped with the Swiss-Prot classification. Cadherins, a group of partially extracellular proteins, are another group that was also distinct from the keratin group in the cladogram (Figure 2). 
Figure 2 shows a portion of a cadherin subtree, in which cadherins are mixed with other apparently unrelated proteins. Aquaporins, a set of membrane proteins involved in water transport in almost all tissues (see Figure 3), are far apart in the cladogram. This difference means that molecules with a relatively small active site are free to evolve according to the local environment in the moiety less strictly related to the function. Apparently, structural proteins have a highly homogeneous amino acid composition, while catalytic proteins combine highly conserved sites with variable regions that allow clustering according to factors so far unexplored. When clustering only the aquaporins, similarities were observed for those aquaporins on the same chromosome. On the contrary, those on different chromosomes were more distant and did not cluster very well. A similar behaviour can be expected for most enzymes that exist in different isoforms. An example of a catalytic protein is the enzyme human steroid 5-alpha reductase, which exists in 3 isoforms: S5A1_HUMAN (SRD5A1 gene), S5A2_HUMAN (SRD5A2 gene) and PORED_HUMAN (SRD5A3 gene). They are located on different chromosomes and in our cluster they are not so close, while they are very close to proteins with different functions but similar tissue expression. We analyzed the cluster members of the three isoforms. S5A1_HUMAN, Figure 4, is very close to GP173_HUMAN, CAHM1_HUMAN, FZD9_HUMAN in the cluster. The S5A1_HUMAN gene is expressed in foetal brain and ovary, GP173_HUMAN is a super-conserved receptor expressed in brain, CAHM1_HUMAN is predominantly expressed in adult brain, FZD9_HUMAN is expressed predominantly in adult and foetal brain. This confirms the closeness in the cluster from the point of view of the tissue. S5A2_HUMAN, Figure 5, is very close to TM212_HUMAN. S5A2_HUMAN is expressed at high levels in the prostate and many other androgen-sensitive tissues, while TM212_HUMAN is a multi-pass membrane protein expressed in the lung. PORED_HUMAN, Figure 6, is very close to CCBP2_HUMAN, DOPP1_HUMAN, CLN6_HUMAN. PORED_HUMAN is expressed in eye, CCBP2_HUMAN in placenta, foetal liver and lung, DOPP1_HUMAN in lung, cerebellum and brain, CLN6_HUMAN in lung and urinary bladder. The apparently inconsistent expression of DOLPP1 in cerebellum and brain may depend on its expression in the glial cells, which have a metabolic behaviour more similar to a foetal liver or a lung than to a neuron. This could be a possible explanation for its closeness to CCBP2 in the cluster. 
[Figure 7 caption: Circadian locomotor output cycles protein kaput (CLOCK) and NAD-dependent protein deacetylase sirtuin-1 (SIRT1) amino acid relative frequencies. The graph compares the relative frequencies of CLOCK and SIRT1, highlighting the different levels of glutamate and glutamine. CLOCK means high glutamine and base synthesis, switching on DNA synthesis; SIRT1 means low glutamine and high glutamate and acetyl-CoA, switching off DNA synthesis. doi:10.1371/journal.pone.0060220.g007]
Conclusions Proteins with a repetitive structure and with highly specific AA patterns, such as keratins and collagens, cluster quite well, demonstrating the correctness of the mathematical approach, but their clustering added no information to existing knowledge. Proteins that cluster on the basis of AA percentage but perform quite different functions, or similar functions in different tissues or microenvironments (glial cells and neurons in the same area have a completely different glutamate/glutamine ratio), disclose new approaches to the description of complex biological systems. Polymorphic proteins performing similar functions in different tissues (e.g. 
highly oxygenated/hypoxic) have different AA percentages which allow more efficient protein synthesis, and so on. The cell cycle is a cyclic process alternating DNA and protein synthesis. DNA synthesis requires a high amount of glutamine, while protein synthesis relies on the presence of all AA. The CLOCK and SIRT1 proteins [18] (Figure 7) are widely accepted as regulatory checkpoints of the cell cycle. Considering the glutamine content, CLOCK should be higher during DNA synthesis and SIRT1 during protein synthesis. This has been well known for years, merely on the basis of experimental data. Now we can understand why: protein AA content depends on local AA content and becomes a "signal" that activates the proper metabolic pathway. The AA percentage becomes a relevant part of the "information content" of the protein. In conclusion, this method makes it possible to gather so far unexplored information on proteins, linking their coordinated expression to chromosome or tissue locations, cell cycle phase, starvation, and other metabolic constraints. It is potentially very useful for predictive analysis before moving on to expensive and time-consuming laboratory tests.
3,029.6
2013-04-04T00:00:00.000
[ "Biology", "Computer Science" ]
Hierarchical distributed framework for optimal dynamic load management of electric vehicles with vehicle-to-grid technology The tendency towards carbon dioxide reduction greatly stimulates the popularity of electric vehicles over conventional vehicles. However, electric vehicle chargers represent a huge electric burden, which affects the performance and stability of the grid. Various optimization methodologies have been proposed in the literature to enhance the performance of distribution grids. However, existing techniques handle the raised issues from individual perspectives and/or with limited scopes. Therefore, this paper aims to develop a distributed controller-based coordination scheme in both medium and low voltage networks to handle the charging impact of electric vehicles on the power grid. The scope of this work covers improving the network voltage profile, reducing the total active and reactive power losses, and reducing load fluctuations and the total charging cost, while taking into consideration the random arrivals/departures of electric vehicles and the vehicle owners' preferred charging time zones with vehicle-to-grid technology. Simulations are carried out to prove the success of the proposed method in improving the performance of the IEEE 31-bus 23 kV system with several 415 V residential feeders. Additionally, the proposed method is validated using Controller Hardware-in-the-Loop. The results show that the proposed method can significantly reduce the issues that appear in the electric power grid during charging, with minor changes to the existing grid. The results prove the successful implementation of different types of charging, namely, ultra-fast, fast, moderate, normal, and vehicle-to-grid charging, with minimum charging cost to enhance the owners' satisfaction level. I. INTRODUCTION The evolution of EV batteries with sizable capacities and the development of renewable energy resources promote the use of electric vehicles (EVs) over internal combustion vehicles. 
Moreover, EVs improve urban air quality [1], have 43 percent better fuel economy compared to conventional vehicles, and act as an enabler for renewable power generation by providing storage through V2G technology [2,3]. Therefore, car companies have begun to invest in the EV market, and distribution grids have evolved through communications infrastructures and smart meters/sensors to support EV charging [4]. The enormous demand for EV charging causes a challenging demand-side management problem with stochastic behavior [5]. Furthermore, it induces detrimental effects on the distribution grids' performance. For instance, frequency oscillations, unacceptable voltage drops, and an increase in total power losses may occur. Therefore, various optimization strategies have been proposed in the literature to restrict these impacts. One of the optimization objectives to be achieved during EV charging could be voltage regulation in the distribution grid. For instance, the method in [6] regulates the voltage by controlling the reactive power through the grid. However, it required communication among EVs and the distribution system operator (DSO), and the total power losses and battery degradation were not considered in that work. On the other hand, the method in [7] utilizes reactive power and model predictive control to regulate the voltage, and additionally considers battery degradation. However, the other challenges in the distribution grid, such as total power losses and frequency fluctuations, have not been studied. The method in [8] moves the charging time from peak time to other times for voltage regulation, but the benefits of V2G technology and the total system losses have not been addressed. Alternatively, the optimization objective may aim to minimize the total active power losses in the distribution grid. For instance, the method in [9] focuses on the optimal integration of distributed generators using butterfly optimization to minimize the daily active power losses. 
However, the EV owners' preferred charging time zones have not been handled. Another method introduced in [10] adopts a smart load management methodology for EVs and charging stations with loss minimization, but the presence of renewable energy sources with V2G technology has not been covered. Another method introduced in [11] minimizes the real power losses over the distribution feeders and further considers V2G technology and the charging cost. However, the random arrivals/departures of EVs were not considered. One more issue that may be addressed during EV charging is frequency oscillation. This issue is addressed in [12], which provides frequency regulation for the power system by utilizing V2G technology, with no concern for the total losses in the system and the charging cost. The method in [13] regulates the frequency deviations while reducing the charging cost; however, the cost related to the distribution power losses was not considered. The methods in [14] and [15] use quadratic programming and bidding constraints, respectively, to reduce the peak load and demand variability in distribution grids. However, charging stations and battery degradation have not been addressed in these methods. A different direction of research focused on the minimization of the energy cost. For instance, the methods in [16] and [17] use model predictive control and stochastic mixed integer linear programming approaches, respectively, to find optimal EV charging strategies for maximizing the aggregator's profits. However, these studies have been simulated with no consideration for the distribution system constraints, including the peak load and voltage drops. From another perspective, the objective may be to minimize the total charging cost for the owners. The method in [18] uses sequential quadratic programming and genetic algorithms to minimize the energy cost and load fluctuations. 
However, this method did not handle the total losses and voltage drops in the distribution grids. In [19], ant colony optimization is used to reduce the waiting time and charging cost without taking the distribution grid performance into consideration. One key observation from the reviews in [20][21][22] is that most EV charging strategies focus on one or two of the following power grid issues (total power losses, excessive voltage drops, load/frequency fluctuations, or peak shaving), regardless of the others. For instance, the authors of [23] addressed only active power loss minimization and voltage regulation, especially in MV networks; in [24] only peak shaving and valley filling were addressed; and in [25] only the reduction of the energy cost was considered. Thus, the main shortcoming of the previous methods is that no technique has been designed to address all of these power grid issues at the medium voltage (MV) and low voltage (LV) levels while taking into consideration the owners' satisfaction, battery degradation, charging cost, computational time, and the stochastic behavior of the system. The stochastic behavior stems from: (i) the random arrivals/departures of EVs, (ii) the owners' preferred charging time zones, and (iii) batteries with a wide range of capacities and ratings, considering ultra-fast and fast charging requirements. Previous research has handled the EV system's stochastic behavior, but only within a narrow range limited to studying the impacts on the power grid. For example, references [7,13,15,[26][27][28] consider only one, or at most five, different EVs, which is not realistic, as the power grid normally deals with a large number of EVs with different capacities and charging rates. This paper proposes a multi-objective hierarchical formulation of the EV charging problem implemented at the LV/MV distribution grids and charging stations.
This formulation aims to reduce undesirable impacts such as unacceptable voltage drops, total power losses, and peak loads, which have recently appeared due to EV charging [29]. Besides, the proposed formulation considers minimizing the total energy cost for the EV owners in the charging stations. The cost is addressed through recent pricing strategies, namely the time-of-use price and the real-time price. Furthermore, the formulation takes into account the random arrival/departure pattern of the EVs, with predetermined preferred charging time zones based on a priority selection scheme. V2G technology is applied in the proposed method to motivate the use of renewable energy resources. The contributions of the paper are summarized as follows: • Development of a two-level hierarchical controller-based coordination scheme implemented in both MV and LV networks to handle most (not only one) of the crucial EV charging impacts on the power grid. • Formulation of three improved objective functions for the hierarchical controllers, solved by the distributed controllers with rapid convergence, achieving real-time optimal charging decision making. • An optimal scheduling policy for charging and discharging EVs designed to handle practical real-life circumstances, such as all the previously stated random and stochastic behaviors, large penetration of EVs with different capacities and charging rates, bi-directional V2G, distribution grid restrictions, and battery degradation. • Finally, validation of the optimization algorithms using Hardware-in-the-Loop via the OPAL-RT real-time simulator and a Digital Signal Processor (DSP). To demonstrate the enhancement in distribution system performance using the proposed methodology, a standard IEEE test system is simulated and practically validated.
This standard system consists of 31 MV feeders (23 kV) with several charging stations and integrated 53-node LV (415 V) residential networks populated with EV chargers of various sizes. This paper is organized as follows. Section II covers the optimization problem formulation. The overall test system structure is described in Section III. Section IV presents the algorithmic details. The results are presented in Section V using MATLAB and Hardware-in-the-Loop via the OPAL-RT real-time simulator. Section VI concludes the paper. II. PROBLEM FORMULATION This section introduces the formulation of the optimization problems associated with the different controllers that can be implemented in the distribution grid. Moreover, it includes the constraints required to balance high performance levels for the grid against the EV owners' satisfaction levels. Fig. 1 shows the different locations of the controllers in the distribution grid. The objective of each optimization problem differs according to its location in the distribution grid. For instance, the controller implemented at the MV feeders (high-level control) aims to reduce the total power losses and enforce load shaving and voltage regulation throughout the MV network. On the other hand, the controller located at the LV feeders (low-level control) is responsible for charging the EVs at the residential charging points in a specific period, considering minimum losses and acceptable voltage drops in the LV network. Moreover, minimizing the total energy cost for EV owners, or maximizing the aggregator profits, is the objective of the controller installed in large charging stations (low-level control). On a large scale, these three controllers (MV feeder, LV feeder, and charging station controllers) improve the performance of the distribution grids through integration and coordination among them. Fig.
1 illustrates the coordination between the low-level controllers and the MV feeder controller. This coordination can be described as follows: (i) Each low-level controller calculates the energy required to charge every EV connected to it. For every EV, the low-level controller optimally allocates different power levels to different time slots (based on its optimization objective) to deliver this required energy. The low-level controller then sums up the powers delivered to the EVs in each time slot and generates the charging profile shown in Fig. 2(a). (ii) The MV controller receives this charging profile from each low-level controller and runs a load flow analysis over the medium voltage buses. If the power exceeds the permissible generation level or causes an excessive voltage drop in the MV network, the MV controller sets a limit for the low-level controller and suggests shifting the portion of power that causes the issue to a different time slot, as shown in Fig. 2(b) by the red portion that is shifted from slot 5 to slot 6. Moreover, the MV feeder controller suggests shifting portions of power to different time slots to minimize extra losses over the MV network, as shown by the green portion in Fig. 2(b), which is shifted from slot 4 to slot 6. (iii) The low-level controller receives the modified charging profile limits and coordinates the EVs based on the modified profile to achieve a better owner satisfaction level. The proposed model handles more than one objective, namely the active and reactive power losses, load fluctuations, energy cost, and excessive voltage drops. Using one controller to handle all these objectives in the LV and MV networks together is computationally complex. Therefore, a two-level hierarchical controller-based coordination scheme is proposed, shown in Fig. 1, to reduce the computational time and maintain system reliability, as the failure of any one controller will not affect the whole system. A.
Charging stations (low-level control) The coordination problem of EVs in charging stations is to determine the allocated power for each EV in different time slots, considering the random arrival of EVs, the owners' selected charging priorities (with extra fees), and the grid topology. 1) OBJECTIVE FUNCTION FOR MINIMIZING THE LOAD FLUCTUATIONS AND TOTAL ENERGY COST Charging stations are commonly integrated with MV buses and aim to minimize the load fluctuations and the total energy cost for EV owners. A sudden load increase causes undesirable frequency oscillations and can be addressed by minimizing the following objective function:

$F_{S,1} = \sum_{t \in \mathcal{T}} \big( P_T(t) - P_{avg} \big)^2$    (1)

where $F_{S,1}$ is the first objective function for the station, describing the load variance over the day, $T$ is the number of time intervals, $\mathcal{T}$ is the set of time intervals, $P_T(t)$ is the total power delivered to the EVs in time interval $t$, and $P_{avg}$ is the daily average load [18]. The daily average load can be calculated as

$P_{avg} = \frac{1}{T} \sum_{t \in \mathcal{T}} P_T(t)$    (2)

In addition, the charging station aims to minimize the total energy cost for the EV owners using the following cost function:

$C = \sum_{t \in \mathcal{T}} \rho(t)\, P_T(t)$    (3)

where $C$ is the total cost required to charge the EVs and $\rho(t)$ is a linear model for the tariff in time interval $t$. This linear model is determined based on the two common pricing strategies, namely the time-of-use (TOU) price and the dynamic real-time price. TOU is designed to encourage customers to use more energy in off-peak periods, while the dynamic price helps to set fair tariffs for different time slots based on the time and load demand, and can be expressed as

$\rho(t) = m\, P_T(t) + b(t)$    (4)

where $m$ and $b(t)$ are the slope and intercept, respectively, of the linear price model. TOU pricing can be achieved by assigning different values to $b(t)$ over the day. The second objective function for the station can then be formulated as

$F_{S,2} = C$    (5)

where $F_{S,2}$ describes the total charging cost.
Due to the quadratic nature of the problem, the proposed method merges these two objective functions into a single objective function so that it can be addressed using quadratic programming, which rapidly finds the optimal solution and thus helps significantly during the communication with the other controllers. Such a merging can be achieved through the following steps. Equation (1) can be reformulated as

$F_{S,1} = \mathbf{p}_t^{\top} D \mathbf{p}_t - 2 P_{avg} \mathbf{1}^{\top} \mathbf{p}_t + T P_{avg}^2$    (6)

where $D$ is a diagonal matrix with diagonal elements equal to 1 and of size $(T \times T)$, and $\mathbf{p}_t$ is a vector containing the total power consumed by the EVs in each time interval, of size $(T \times 1)$. $\mathbf{p}_t$ can be written in terms of the decision variables by the matrix transformation

$\mathbf{p}_t = M \mathbf{x}$    (7)

where $N$ is the total number of EVs in the charging station, $\mathbf{x}$ is the decision-variable vector, which determines the power allocated to a specific EV in a specific time slot and has size $(NT \times 1)$, and $M$ is the transformation matrix, which sums all the allocated EV powers in each time slot and has size $(T \times NT)$, as shown in Appendix A. Therefore, (6) can be reformulated as

$F_{S,1} = \mathbf{x}^{\top} M^{\top} D M \mathbf{x} - 2 P_{avg} \mathbf{1}^{\top} M \mathbf{x} + T P_{avg}^2$    (8)

The Hessian matrix for the first objective function is then

$H_1 = 2 M^{\top} D M$    (9)

The second objective function in (5) can be reformulated as

$F_{S,2} = m\, \mathbf{x}^{\top} M^{\top} D M \mathbf{x} + \mathbf{b} M \mathbf{x}$    (10)

where $\mathbf{b}$ is a vector consisting of the constant terms of the linear tariff model, of size $(1 \times T)$. The Hessian matrix $H_2$ and gradient vector $\mathbf{g}$ for the second objective function are

$H_2 = 2 m M^{\top} D M, \qquad \mathbf{g} = (\mathbf{b} M)^{\top}$    (11)

Consequently, the final objective function for the station is obtained by merging (8) and (10) (up to a constant independent of $\mathbf{x}$):

$F_S = \tfrac{1}{2} \mathbf{x}^{\top} (H_1 + H_2) \mathbf{x} + \big( \mathbf{g} - 2 P_{avg} M^{\top} \mathbf{1} \big)^{\top} \mathbf{x}$    (12)

This objective function, together with the linear constraints described below, can be addressed using quadratic programming methods, namely the interior point method, active set, sequential quadratic programming, or any other relevant method.
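As a concrete illustration of this merged quadratic program, the sketch below builds the summing matrix $M$, adds a per-EV energy requirement as a soft penalty, and solves the box-constrained problem by projected gradient descent. The sizes, penalty weight, and energy requirements are assumptions; the variance term is written with a centering matrix, a slight variation on the diagonal-matrix form above, and a dedicated QP solver (e.g., an interior-point method, as in the paper) would replace the gradient loop in practice.

```python
import numpy as np

# Illustrative sizes: N EVs, T slots; x stacks each EV's per-slot powers,
# so x has length N*T with EV i occupying x[i*T:(i+1)*T].
N, T = 3, 4
M = np.tile(np.eye(T), (1, N))            # p_t = M @ x sums EV powers per slot
A = np.kron(np.eye(N), np.ones((1, T)))   # A @ x gives each EV's total energy

m_slope = 2e-4                            # tariff slope from the text, $/(kWh)^2
b_int = 0.247                             # tariff intercept from the text, $/kWh
lam = 10.0                                # penalty weight (assumption)
E_req = np.array([8.0, 6.0, 4.0])         # required energy per EV (assumption)
p_max = 5.0                               # per-slot charging limit, kW

# Variance term via a centering matrix: C @ p = p - mean(p).
C = np.eye(T) - np.ones((T, T)) / T
H = 2 * M.T @ C @ M + 2 * m_slope * M.T @ M + 2 * lam * A.T @ A
g = b_int * M.T @ np.ones(T) - 2 * lam * A.T @ E_req

# Projected gradient descent: the clip enforces the box constraints (13).
x = A.T @ (E_req / T)                     # start from a uniform allocation
for _ in range(500):
    x = np.clip(x - 0.01 * (H @ x + g), 0.0, p_max)
```

At the optimum the allocation is nearly flat across slots (the variance term dominates) while each EV receives almost exactly its required energy, slightly discounted by the cost term.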
2) CHARGING STATION CONSTRAINTS The optimization problem considers sufficient constraints to achieve the best level of owner satisfaction while abiding by the restrictions dictated by the distribution grid. Moreover, the stations are designed to deal with sizable batteries. Each EV is treated with a nominal charging rate during normal charging periods and a maximum charging rate under fast and ultra-fast charging conditions. This is achieved using the following constraint, which describes the charging range that must not be exceeded during charging in order to protect the battery:

$-P_i^{max}(t) \le P_i(t) \le P_i^{max}(t)$    (13)

where $P_i^{max}(t)$ is the maximum charging rate for EV $i$, and the lower bound $-P_i^{max}(t)$ is included for V2G purposes. Furthermore, to prolong the lifetime of EV batteries, the state of charge (SOC) should be limited to the recommended band set by the manufacturers [30]:

$DoD_{min} \le SOC_{i,f} \le DoD_{max}$    (14)

where $SOC_{i,f}$ is the state of charge at the end of the charging process, and $DoD_{min}$, $DoD_{max}$ are the minimum and maximum depths of discharge set by the manufacturers. In the lithium-ion batteries currently used in some EVs, the optimum DoD typically ranges from 45% to 90%, which is a trade-off between battery life and sufficient range for the required journeys [10]. The upper limit applies when the EV is charging, while the lower limit applies during discharging for V2G purposes. One more constraint is that the total charging power of the EVs must not exceed the maximum capacity dictated by the grid, which ensures stable and normal operation of the power grid:

$\sum_{i=1}^{N} P_i(t) \le P_{cap}(t)$    (15)

where $P_{cap}(t)$ is the maximum allowable charging capacity during period $t$. This dynamic value is obtained from the upper-level controller, which ensures stable MV operation and is studied in detail in the next section.
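A minimal sketch of how the rate constraint (13) and the SOC band (14) interact when clipping a single requested power value; the 45-90% DoD band follows the figures quoted above, and the function name and signature are illustrative.

```python
def feasible_power(p_req, p_max, soc, dod_min=0.45, dod_max=0.90, v2g=False):
    """Clip a requested charging power to the rate and SOC bands (sketch)."""
    lo = -p_max if v2g else 0.0           # negative power = discharging for V2G
    p = min(max(p_req, lo), p_max)        # rate constraint (13)
    if p > 0 and soc >= dod_max:          # cannot charge past the upper band
        p = 0.0
    if p < 0 and soc <= dod_min:          # cannot discharge below the lower band
        p = 0.0
    return p
```

For example, a 7 kW request on a 5 kW charger is clipped to 5 kW, while a discharge request from a battery already at the lower band is rejected.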
To ensure owner satisfaction and fulfill the journey requirements, the final state of the battery $SOC_{i,f}$ should reach the required value within the owner's selected charging period, such that

$SOC_{i,f} = SOC_{i,0} + \frac{1}{E_i} \sum_{t = t_{start}}^{t_{end}} P_i(t)\,\Delta t$    (16)

where $SOC_{i,0}$ is the state of charge before the charging process, $E_i$ is the nominal capacity of EV $i$, $t_{start}$ and $t_{end}$ are the owner's selected charging periods, $\Delta t$ is the slot length, and $SOC_{i,f}$ is the expected final value for EV $i$. B. LV feeder controller (Residential compounds - low-level control) The optimization problem of EVs in residential compounds determines the scheduling of the individual residential charging points in the LV distribution system in such a way that the LV network power losses are minimized, peak demand shaving is achieved, and the voltages at all LV nodes are regulated within allowable tolerances. Furthermore, the optimization algorithm considers the residential load variations over a 24-hour period, the EV owners' selected charging priorities, the grid topology, and the random arrival of the EVs. This subsection describes the problem formulation for the residential compound controller, considering the necessary constraints. 1) OBJECTIVE FUNCTION FOR THE RESIDENTIAL COMPOUND This objective function seeks to minimize the total power losses and to regulate the voltage in the LV distribution grid, and can be described as follows:

$F_R = \sum_{t \in \mathcal{T}} \sum_{n=1}^{N_{LV}} \Big( P_{loss}^{(n,n+1)}(t) + Q_{loss}^{(n,n+1)}(t) + \phi\big(V(n,t)\big) \Big)$    (17)

where $N_{LV}$ is the total number of LV nodes, $P_{loss}^{(n,n+1)}(t)$ and $Q_{loss}^{(n,n+1)}(t)$ are the active and reactive power losses in period $t$, calculated for the cable connecting nodes $n$ and $n+1$, $V(n,t)$ is the voltage at node $n$ in period $t$, and $\phi(V(n,t))$ is a barrier function that restricts the voltage to the permissible range, defined as

$\phi\big(V(n,t)\big) = -\mu \Big( \ln\big(V(n,t) - V_{min}\big) + \ln\big(V_{max} - V(n,t)\big) \Big)$    (18)

where $\mu$ is a barrier weight, and $V_{min}$ and $V_{max}$ are the voltage limits, set to $\pm 10\%$ ($V_{min} = 0.9$ pu, $V_{max} = 1.1$ pu), which is the case in many distribution systems [10].
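The barrier term can be sketched as a logarithmic barrier, the method the text adopts for the voltage constraints; the weight `mu` is an assumed tuning parameter.

```python
import math

def voltage_barrier(v, v_min=0.9, v_max=1.1, mu=1e-3):
    """Logarithmic barrier on a per-node voltage (pu): near zero well inside
    [v_min, v_max], growing without bound as either limit is approached."""
    if not (v_min < v < v_max):
        return float("inf")               # infeasible voltage
    return -mu * (math.log(v - v_min) + math.log(v_max - v))
```

Added to the loss objective, this penalty replaces the hard voltage constraints, so an unconstrained search method can steer away from the limits on its own.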
The previously stated objective function requires accurate information about the power losses and the voltage at each LV bus; therefore, a modified Newton-based load flow analysis is implemented to obtain the information required to compute the objective function. Newton-based load flow has quadratic, rapid convergence sufficient for real-time analysis [31]. The residential controller deals with 53 buses in the power flow analysis step, which represents a computational burden to the controller. Therefore, the proposed model suggests an improved objective function (17) that uses the logarithmic-barrier method for the voltage constraints to reduce the computational time. The idea behind using the logarithmic-barrier method for computational time reduction is illustrated in [32]. 2) RESIDENTIAL COMPOUND CONSTRAINTS Similarly, the residential compound constraints seek a compromise between high performance levels for the grid and the owners' satisfaction levels. These constraints are similar to the constraints stated in the previous subsection, with one exception due to the limitations of the wiring at residential charging points. According to [10], there are 15 A and 20 A outlets (single-phase and three-phase) that can supply a maximum power of 4 kW and 14.4 kW, respectively. Therefore, a maximum charging rate of 5 kW at unity power factor is considered in the following analysis, which covers the normal residential infrastructure without reinforced wiring. Accordingly, the constraint in (13) is modified as follows:

$-5\ \text{kW} \le P_i(t) \le 5\ \text{kW}$    (19)

C. MV controller (high-level control) The optimization problem for the MV feeders is to decide the amount of power allocated to each charging station and to each feeder supplying the residential compounds, such that the total power losses in the MV network are minimized, peak load shaving is achieved, and the voltage magnitudes at the MV nodes are regulated.
Therefore, this controller continuously receives the energy requirements from the charging stations and from the MV feeders that feed the residential compounds. It then reallocates this energy over the day if the MV voltage drops or the total power losses exceed the permissible limits. This subsection describes the problem formulation for the MV controller, identifying the necessary constraints. 1) OBJECTIVE FUNCTION FOR THE MV CONTROLLER This objective function seeks to minimize the total MV network losses while respecting the permissible voltage drop tolerances and load shaving, and can be described as follows:

$F_{MV} = \sum_{t \in \mathcal{T}} \sum_{n=1}^{N_{MV}} \Big( P_{loss}^{(n,n+1)}(t) + Q_{loss}^{(n,n+1)}(t) \Big)$    (20)

where $N_{MV}$ is the total number of MV nodes, and $P_f(t)$ represents the power allocated to medium voltage feeder $f$ in period $t$ (the decision variables, which determine the losses through the load flow). The optimal value of $P_f(t)$ continuously updates the constraint in (15) for the charging station controllers and its counterpart for the residential compound controllers. 2) MV CONTROLLER CONSTRAINTS The MV controller operation is constrained by the following voltage limits:

$V_{min}^{MV} \le V(n,t) \le V_{max}^{MV}$    (21)

where $V_{min}^{MV}$ and $V_{max}^{MV}$ are the MV limits, set to $\pm 5\%$ ($V_{min}^{MV} = 0.95$ pu and $V_{max}^{MV} = 1.05$ pu), which is typical for many medium voltage distribution systems. The controller is also constrained by a maximum demand level, which shaves the load if it exceeds the generation levels. The objective function in (20) likewise requires power flow analysis. The MV and residential compound optimization problems in (17) and (20) are nonlinear, so they can be addressed using any heuristic optimization algorithm; for instance, a genetic algorithm, particle swarm optimization, or pattern search methods can be used. III. THE OVERALL TEST SYSTEM STRUCTURE The performance of the previously stated controllers with their different objective functions is validated using the IEEE test system described in Fig. 3. The system consists of 31 buses at 23 kV with 415 V residential compound feeders. This section describes the grid topology, the EV/charging point specifications, and the load assumptions required for the analysis.
As shown in Fig. 3, the smart distribution grid test system consists of 31 buses at 23 kV with the line data described in [33], and is associated with a 415 V residential network. There are 22 residential compound controllers arranged from feeder 10 to feeder 31, and 4 charging station controllers on feeders 2, 4, 7, and 9. Furthermore, there is a single MV controller located at the substation, which communicates with the previously stated controllers. The detailed diagram of the 415 V residential network is shown in Fig. 4, where each low-voltage residential network has 53 nodes representing customer households, populated with EV charging points. These residential networks are powered by 23 kV/415 V, 300 kVA distribution transformers and are based on real system data from an Australian distribution network, with line impedances given in [10]. B. EV specifications This study involves a wide range of recently manufactured EVs (plug-in hybrid or all-electric vehicles) [34]. Each EV battery has a specific capacity and nominal/maximum charging rates. The validation handles the random arrival, departure, and SOC of the EVs. The arrival time is modeled as a Gaussian distribution with mean 6 and standard deviation 2 h, and the departure time under normal operation is likewise modeled as a Gaussian distribution with mean 6 and standard deviation 2 h; both distributions are truncated at 4 h around the mean. However, the departure time may depend on the owner's selected charging priority, i.e., whether the owner prefers ultra-fast (30 minutes), fast (1 hour), or moderate (2 hours) charging. The initial SOC of the EVs is modeled as a Gaussian distribution based on the DoD range stated in (14), namely (μ = 50%, σ = 5%) for the vehicles to be charged and (μ = 85%, σ = 5%) for the vehicles to be discharged for V2G purposes. The EV specifications are listed in TABLE A1 in Appendix A. C.
Charging station specifications Charging stations are located at the MV feeders (2, 4, 7, and 9) and are considered a proper option for long-distance traveling, as with conventional fuel-based vehicles. Each station has a maximum capacity of 50 vehicles and is supplied by reinforced wiring to support ultra-fast charging (30 minutes) and DC charging. The charging demand at the stations commonly increases from 7 a.m. to 10 a.m. for morning journeys and from 3 p.m. to 6 p.m. for evening journeys. The optimization technique for the charging station handles the random arrival/departure and the SOC of the EVs. D. Charging point specifications and residential load The proposed method assumes a variation in the residential load over the day, based on actual data recorded from a distribution transformer in Australia [10], with an average loading of 1.5 kW at 0.95 power factor and the peak value occurring at 6 p.m., as shown in Fig. 5. Moreover, the dynamic real-time and TOU prices over the day are set according to the delivered power. The system has three different tariff categories, namely the high tariff zone (red), the moderate tariff zone (blue), and the low tariff zone (green) in Fig. 5. Charging points are located on the LV buses or in the householders' garages. The case study handles two levels of penetration (31% and 62%), which cover near-future and far-future scenarios. The penetration level refers to the portion of the LV nodes that have charging points. As previously mentioned, the charging points are fitted with 15 A and 20 A outlets (single-phase and three-phase) that can supply approximately 4 kW and 14.4 kW, respectively [10]. However, as single-phase outlets are common on the residential scale, the maximum charging power supplied by the charging points is assumed to be 5 kW, which requires no reinforced wiring. The charging point penetration scenarios in the LV network are listed in TABLE A2 in Appendix A.
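The random arrival, departure, and initial-SOC draws described in the EV specifications above can be reproduced with truncated Gaussian sampling; the rejection sampler, the fixed seed, and the [0, 1] SOC truncation are assumptions for illustration.

```python
import random

def truncated_gauss(mu, sigma, lo, hi, rng=random.Random(42)):
    """Rejection-sample a Gaussian truncated to [lo, hi] (the text limits the
    arrival/departure distributions to +/- 4 h around the mean)."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

# Illustrative draws: arrival hour with mean 6 and sigma 2 h, truncated to
# +/- 4 h, and initial SOC for charging vehicles with mean 50% and sigma 5%.
arrivals = [truncated_gauss(6, 2, 2, 10) for _ in range(1000)]
socs = [truncated_gauss(0.50, 0.05, 0.0, 1.0) for _ in range(1000)]
```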
The study assumes that 3 vehicles out of 32 (at 62% penetration) and 2 out of 16 (at 31% penetration) require fast charging (marked in red in TABLE A2), while those that require moderate charging are marked in blue and those to be discharged for V2G purposes are marked in yellow. E. Controllers' flowcharts and their coordination This subsection describes the coordination between the controllers, aided by flowcharts. Initially, as shown in Fig. 6 (residential controller flowchart), the residential controller gathers data about the EVs connected to it: the battery capacity, the maximum charging rate, the state of charge, the arrival time, the departure time, and a flag that describes the charging direction (V2G or G2V). V2G technology is used to improve the grid performance during peak load hours, as EVs feed electricity back to the grid and the users earn high revenue based on the energy supplied to the grid during those hours. This is advantageous for both the grid performance and the user, thanks to the dynamic and TOU pricing: users can charge during off-peak hours and sell stored energy during peak load. Every user is provided with a day-ahead hourly load forecast and real-time charging prices, shown in Fig. 5. In the implemented code, the arrival time controls the preferred discharging time and can be set by the user. Moreover, the departure time controls the charging rate: as the departure time approaches, the charging rate increases to recharge the battery in the required time, which is how the ultra-fast and fast charging rates are achieved.
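The relationship between departure time and charging rate described above can be sketched as follows; the function and its arguments are illustrative, not part of the paper's formulation.

```python
def required_rate(soc_now, soc_target, capacity_kwh, hours_left, p_max):
    """Charging rate implied by the departure time: the less time remains,
    the higher the rate, capped at the EV's maximum (ultra-fast/fast regime)."""
    energy_needed = (soc_target - soc_now) * capacity_kwh
    return min(energy_needed / hours_left, p_max)
```

For a 50 kWh battery going from 44% to 90% SOC, 30 minutes before departure implies roughly 46 kW (ultra-fast), whereas 4 hours implies only about 5 kW.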
The residential controller produces an initial charging pattern for every EV; this pattern specifies the power that will charge the EV in each time slot. The controller then solves the optimization problem in (17) (using the pattern search algorithm), which minimizes the total losses in the low voltage network; to do so, it inherently runs a load flow function over the low voltage network based on the LV line and bus data and the daily load curve of the linear loads connected to the LV nodes. The daily load curve can be formed from real-time data thanks to the smart meters used in modern smart grids [35]; if the load changes over the same session are significant, the daily load curve can instead be formed from historical data. Two stopping criteria are used: the function tolerance and the time tolerance. The function tolerance is a lower bound on the change in the value of the objective function from one iteration to the next, $|F_k(\mathbf{x}) - F_{k+1}(\mathbf{x})| <$ function tolerance $(10^{-5})$. The time tolerance is one third of the time slot used in the algorithm; for instance, the time tolerance is 10 minutes for a 30-minute time slot. The controller then sums the optimal powers achieved for all EVs in each time slot and sends the result as an initial charging pattern to the main controller. Simultaneously, as shown in Fig. 6 (charging station controller flowchart), the charging station controller collects the EV information and uses the interior-point method from the quadratic programming family to solve (12) with its constraints until one of the two stopping criteria (function tolerance or time tolerance) is achieved, as in the residential controller. This controller likewise sums the optimal powers achieved in each time slot and sends the initial charging pattern to the main controller. As shown in Fig.
6 (main controller flowchart), the main controller receives the initial charging patterns from the residential and charging station controllers, so it knows the required power profile at each medium voltage node. It therefore runs a medium voltage load flow over the 24 hours and checks the voltage and power limits. If a limit is exceeded in any time slot, the main controller moves the portion of power that causes the issue to another time slot in order to keep the voltage level within limits and increase system stability. The initial charging patterns after this modification are called the restricted charging patterns, since the residential and station controllers must follow them when charging, for the sake of grid stability. These restricted patterns are used as initial populations for the minimization problem in (20), which reduces the power losses in the medium voltage network while satisfying the constraints. This process iterates until one of the two stopping criteria (function tolerance or time tolerance) is achieved. The charging patterns that respect the constraints and reduce the losses in the medium voltage network are called the suggested charging patterns. The main controller sends the restricted and suggested versions of the charging patterns to the residential and charging station controllers, which await this command; the residential and station controllers then check whether the suggested patterns will produce lower power losses by calculating the net savings between the suggested and restricted patterns, and use the pattern that minimizes the losses. If the suggested pattern is significantly better than the restricted pattern, the low-level controllers check whether it would negatively affect the satisfaction factor. If the satisfaction factor is not affected, the suggested pattern is used.
Otherwise, the restricted pattern is selected. IV. HIERARCHICAL CONTROLLER ALGORITHMS This section presents the optimization algorithms used by the hierarchical controllers to achieve their objectives. The optimization problem at the charging station controllers is solved using the interior point method for quadratic programming, owing to its quadratic form in (12). The residential compound and MV optimization problems are solved by applying the pattern search algorithm, owing to the nonlinearity of their formulations in (17) and (20). Pattern search is preferred over the genetic algorithm, as it finds the descent direction in fewer steps, especially in a small search space [36]. The pattern search algorithm uses the mesh adaptive direct search algorithm [37]. At each step, this algorithm searches a set of points, called a mesh, around the current point (the point computed at the previous step of the algorithm). The mesh is formed by adding the current point to a scalar multiple of a set of vectors called a pattern. If the algorithm finds a point in the mesh that improves the objective function over the current point, the step is a successful poll, and the new point (the best mesh point) becomes the current point for the next step. Moreover, after a successful poll, the algorithm enlarges the search space by increasing the mesh size around the new current point: the current mesh size is multiplied by an expansion factor and the process is repeated. If none of the mesh points yields a better objective function value, the step is an unsuccessful poll: the current point does not change, the algorithm shrinks the search space by multiplying the current mesh size by a contraction factor, and the search continues. These steps are repeated until convergence is achieved.
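A compact coordinate-based rendition of the poll/expand/contract loop described above, simplified from mesh adaptive direct search, with the expansion and contraction factors 2 and 0.5 used in the paper's simulations:

```python
def pattern_search(f, x0, step=1.0, expand=2.0, contract=0.5,
                   tol=1e-5, max_iter=1000):
    """Poll the mesh of +/- step moves along each coordinate; move to the best
    improving mesh point and expand on success, contract the mesh on failure."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        best_x, best_f = x, fx
        for i in range(len(x)):           # build and evaluate the mesh
            for d in (step, -step):
                cand = x.copy()
                cand[i] += d
                fc = f(cand)
                if fc < best_f:
                    best_x, best_f = cand, fc
        if best_f < fx:                   # successful poll: move, expand mesh
            x, fx = best_x, best_f
            step *= expand
        else:                             # unsuccessful poll: shrink mesh
            step *= contract
            if step < tol:
                break
    return x, fx
```

On a smooth quadratic such as $(x-3)^2 + (y+1)^2$ starting from the origin, the loop alternates a few expansions with contractions and lands on the minimizer.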
In the following simulations, the expansion and contraction factors are 2 and 0.5, respectively. As the constraints of the charging station and residential compound controllers are linear, an initial feasible solution to each of these control problems can be obtained using linear programming. With a feasible initial solution, the pattern search and quadratic programming algorithms take fewer iterations to converge. V. RESULTS AND DISCUSSION This section describes the effects of the proposed method on the distribution grid performance using MATLAB. Moreover, it tests the feasibility of implementing the proposed algorithms on the available laboratory hardware, based on control Hardware-in-the-Loop (CHiL) using OPAL-RT and a DSP. A. Simulation Results To validate the proposed controllers, the hybrid smart distribution grid shown in Figs. 3 and 4 is simulated using MATLAB at two penetration levels, namely 31% and 62%. Furthermore, the validation considers the stochastic arrival/departure of EVs, ultra-fast/fast charging with extra fees, and V2G technology. The results show the difference between coordinated charging using the proposed method and random charging in terms of distribution grid performance. CASE STUDY 1: 31% PENETRATION LEVEL This case covers the near future, assuming that 16 out of 53 nodes have charging points and each charging station has a total parking capacity of 25 EVs. A random scenario assumes that each vehicle continues to charge without any constraint as long as it is plugged in, with EVs arriving randomly following a normal distribution. Under uncoordinated charging, the system peak dramatically rises above the maximum demand level limit, and an electricity shortage may occur in power plants with limited reserve capacity and generators with limited spinning reserve, as shown in Fig. 7(a). In contrast, with coordinated charging in Fig.
7(b), the power demand is smoothly distributed over the day in a manner that does not surpass the maximum demand limit. Furthermore, coordinated charging increases the system reliability while accommodating ultra-fast/fast charging and the EV owners' preferred time zones. The power demand is further reduced when using V2G technology, as indicated by the shaded green area in Fig. 7(b). The shaded area corresponds to a total reduction of 12% in the energy supplied from the grid, compensated by discharging 2 EVs in V2G or V2V mode. Fig. 8 shows the total system losses in the distribution networks over the day. As shown in Fig. 8(a), a considerable amount of extra energy loss, estimated at 302.12 kWh per day, is observed in the case of uncoordinated charging (relative to the system without EVs). However, it decreases sharply, by 37% and 48%, for coordinated charging without and with V2G technology, respectively, as shown in Fig. 8(b). Coordinated charging also significantly decreases the transformer loading and saves an annual cost estimated at $10,097, or $13,072 when utilizing V2G technology (assuming the Australian household electricity price of 0.247 $/kWh). Fig. 9 shows the voltage profile per day for the 53 LV nodes connected to Feeder-15, which is the worst-affected MV feeder. Fig. 9(a) shows the system voltage profile without EVs, for a maximum acceptable voltage drop of up to 10% in the LV networks. For every bus there are 48 circles, representing the voltage over the day with a 30-minute time slot. As illustrated in Fig. 9(b), extreme voltage deviations of up to 16% may occur under random uncoordinated charging, which would have to be addressed using transformer tap changing or by placing capacitor banks at the affected buses, at extra cost. With the proposed coordinated charging, however, the voltage deviations stay within the acceptable region, with the largest deviation reaching 0.91 pu at node-14.
CASE STUDY 2: 62% PENETRATION LEVEL

This scenario covers the long-term future, which assumes that 32 out of 53 nodes have charging points and each charging station has a total parking capacity of 50 EVs. Fig. 10 illustrates the total system power demand over the day for the 62% penetration level. The peak demand level sharply doubles during uncoordinated charging, which causes undesirable effects on the distribution grid stability, as shown in Fig. 10(a). However, this peak is shaved using our coordinated charging and the loads are smoothly shifted to fill the valley, as shown in Fig. 10(b). Furthermore, the green area describes a total reduction of 10% in the energy supplied from the grid, which is compensated by 3 out of 30 EVs discharging in V2G or V2V mode. Additionally, the peak at 6 p.m. is slightly raised to allow charging with extra fees according to the owners' selected charging priorities. Fig. 11 describes the total system losses in the distribution network over the day. As shown in Fig. 11(a), the extra amount of energy losses is 1170 kWh per day during uncoordinated charging (over the system without EVs). However, the losses are significantly decreased by 43% and 50% during coordinated charging without/with V2G technology, respectively, as shown in Fig. 11(b). Furthermore, the annual cost is reduced by $45,258, and by $53,390 when utilizing the V2G technology. Fig. 12 describes the voltage profile for the residential compound connected to Feeder-22. Severe voltage deviations can be observed during uncoordinated charging, with the maximum voltage drop at bus-10, as shown in Fig. 12(a). However, the voltage deviations are within the acceptable limits during coordinated charging, as shown in Fig. 12(b).

B. Hardware-in-the-Loop Validation

The system is validated based on a DSP and control Hardware-in-the-Loop (CHiL) using OPAL-RT, as shown in Fig. 13. The OPAL-RT platform runs on 4 cores of an Intel Xeon processor at 3 GHz with 2 × 8 GB of RAM.
The system controller is uploaded on a 150 MHz DSP (TMS320F28335ZJZA). The validation process involves two cases.

1) CASE 1: CHARGING STATION CONTROLLER

The optimization algorithm of the charging station is to be validated; therefore, the charging station algorithm is set up on the DSP and the OPAL-RT platform is used to provide the constraint signals from the main controller. First, the main controller constraint signals are sent from the OPAL-RT to the station algorithm on the DSP; then the OPAL-RT receives the optimal station charging schedules generated by the DSP. There are 5 different scenarios: (1) an EV arrives at 6:30 PM with an initial SOC of 44% and is fully charged to 90% within 30 minutes (ultra-fast charging), (2) an EV arrives at 4:30 PM with an initial SOC of 46% and is fully charged to 90% within 1 hour (fast charging), (3) an EV arrives at 7:00 PM with an initial SOC of 47% and is fully charged to 90% within 2 hours (moderate charging), (4) an EV arrives at 4:00 PM with an initial SOC of 45% and is fully charged to 90% without any prioritized periods (normal charging), and (5) an EV arrives at 5:30 PM with an initial SOC of 98% and is discharged to 45% for the purpose of supporting the grid during peak loads and for V2G. Fig. 15 describes the cost analysis for the charging station, confirming that the implemented controller successfully reduced the cost. As previously stated, pricing is addressed using dynamic real-time pricing based on the linear model stated in (4). The two parameters of the linear model are 2 × 10⁻⁴ $/(kWh)² and 0.247 $/kWh [18]. TOU pricing can be shaped by varying the price coefficient over the day. Fig. 15(a) shows the change of this coefficient with loading during the peak and mid-peak hours, where the price increases by 40% and 20%, respectively. Fig. 15(b) shows the total savings in the charging station while using different pricing techniques. When the price is constant, there are no savings.
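The linear real-time price model just described can be sketched as follows. Which of the two coefficients the paper scales over the day for TOU pricing is not fully recoverable from the text, so applying the TOU multiplier to the base price below is an assumption made for illustration.

```python
# Sketch (not the paper's exact eq. (4)) of a dynamic real-time price that
# is linear in the system load, with a time-of-use (TOU) multiplier.
# Assumption: the TOU factor scales the base price, not the slope.

ALPHA = 2e-4    # $/(kWh)^2, load-dependent slope quoted in the text
BETA = 0.247    # $/kWh, base price quoted in the text

def price(load_kwh, period="off-peak"):
    """Electricity price for the current system load and TOU period."""
    tou = {"peak": 1.40, "mid-peak": 1.20, "off-peak": 1.00}[period]
    return tou * BETA + ALPHA * load_kwh

# Cost of drawing 10 kWh while the system load is 500 kWh, during peak hours:
cost = 10 * price(500.0, "peak")
```

With these numbers, the peak-hour price at a 500 kWh load is 1.40 × 0.247 + 2 × 10⁻⁴ × 500 = 0.4458 $/kWh, illustrating how both the TOU factor and the load term raise the price.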
However, the savings are approximately $10 over the day while using dynamic real-time pricing without TOU, and they jump to $55.41 while using dynamic real-time pricing with TOU.

2) CASE 2: RESIDENTIAL CONTROLLER

In the second case, the optimization algorithm for the residential controller is validated. The OPAL-RT platform is set up as in Case 1, but with the residential algorithm as the DSP code. Fig. 16 describes the charging scenarios for the residential compound connected to Feeder-15, clarifying that the implemented controller succeeded in meeting the constraints. There are 4 different scenarios: (1) an EV arrives at 4:00 PM with an initial SOC of 50% and is fully charged to 90% within 1 hour (fast charging), (2) an EV arrives at 5:00 PM with an initial SOC of 40% and is fully charged to 90% within 2 hours (moderate charging), (3) an EV arrives at 3:00 PM with an initial SOC of 49% and is fully charged to 90% without any prioritized periods (normal charging), and (4) an EV arrives at 3:30 PM with an initial SOC of 98% and is discharged to 45%.

3) EXECUTION TIME

Due to the heuristic nature of the optimization algorithm, the execution time cannot be precisely determined. However, the average execution time for the three controllers can be estimated by measuring that of the residential controller, because the residential controller's execution time includes that of the main controller: the residential controller waits for the main controller to finish its task and send back the initial pattern. Therefore, the execution time for 1000 samples from different residential controllers at different time slots has been measured. Fig. 17 shows the histograms generated from these 1000 samples. The average execution time for the residential controller in the case of the 62% penetration level is approximately 15 minutes for a 30-minute time slot, as shown in Fig.
17(a), and the average execution time in the case of the 31% penetration level is approximately 10 minutes for a 30-minute time slot, as shown in Fig. 17(b).

VI. CONCLUSION

This paper addressed the impact of random charging of EVs on the distribution grid performance and proposed an integrated optimization problem formulation, implemented at the LV and MV networks, to enhance the distribution grid performance from various perspectives, including minimization of losses and maximization of the customer satisfaction level. Additionally, the proposed formulation seeks to shave the peak demand while keeping the voltage drops within the acceptable region. The proposed method handles the charging of EVs in stations with different priorities and minimizes the charging cost for EV owners. The method is validated using a hybrid smart IEEE distribution grid with 31 buses at 23 kV and 53 residential LV nodes at 415 V, at two EV penetration levels, namely 31% and 62%. The results from
Global helioseismology (WP4.1): From the Sun to the stars & solar analogs

Sun-as-a-star observations establish our star as a reference for stellar observations. Here, I review the activities on which the SPACEINN global seismology team (Working Package WP4.1) has worked during the past 3 years. In particular, we will explain the new deliverables available on the SPACEINN seismic+ portal. Moreover, special attention will be given to surface dynamics (rotation and magnetic fields). After characterizing the rotation and the magnetic properties of around 300 solar-like stars and defining proper metrics for that, we use their seismic properties to characterize 18 solar analogues, for which we study their surface magnetic and seismic properties. This allows us to put the Sun into context compared to its siblings.

Introduction

The objective of the global seismology team inside the SPACEINN collaboration, namely Working Package 4.1, is to discuss the current problems in global seismology of the Sun and solar-analogue pulsating stars. Naturally, attending to the type of observations, i.e. Sun-as-a-star or imaged ones, the work and the related discussions were divided between low- and high-degree modes. In these proceedings, I will concentrate on the activities of the group concerning the low-degree modes obtained in Sun-as-a-star observations and on the studies of solar analogues. During the past three years, the discussion of high-degree global modes was focused on the use of different fitting codes and the differences obtained when several datasets from different instruments were used. Some of the works done in this direction are summarized in [e.g. 1,2]. Briefly, it has been found that some of the p-mode parameters (for example the mode asymmetries) are inconsistent when different fitting methods are used, while they are consistent when a single methodology is used on different datasets, e.g., with GONG [3] or with MDI and HMI [4,5].
Moreover, inversions of the internal rotation profile showed that solar magnetic cycle 24 is different from cycle 23. The work with different datasets and methods evidenced that the twist appearing at high latitudes with time in several helioseismic variables is very likely to be a numerical artifact [for more details see 2]. Concerning the low-degree modes, the two main axes of the work in WP4.1 were the development of new statistical tools to study these modes and the study of the temporal evolution of their properties during the evolution of the solar magnetic activity cycle. The solar results were placed into a stellar context by comparing them to observations of other solar-analogue stars observed by the NASA Kepler mission [6].

Real and simulated solar time series

A huge effort has been led by WP4.1 in order to provide properly calibrated datasets obtained from the Sun-as-a-star instruments on board the Solar and Heliospheric Observatory (SoHO [7]), as well as from the Mark-I instrument, a solar spectrophotometer located and operated at the Observatorio del Teide (Tenerife, Canary Islands, Spain), which provides precise radial-velocity observations of the Sun as a star. Photometric light curves from the Sun Photometers (SPM) of the Variability of solar IRradiance and Gravity Oscillations instrument (VIRGO [8]) are now available at the SPACEINN portal. These are 60 s cadence time series, starting on April 11, 1996, of the three channels of 5 nm bandwidth centered at 402 nm (blue), 500 nm (green) and 862 nm (red). These light curves are corrected for outliers and then filtered with a running-mean high-pass filter to remove unwanted low-frequency trends. An additional correction is also applied whenever possible in order to correct for the so-called SPM "attractor" as explained in [9].
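The two corrections just described, outlier removal followed by a running-mean high-pass filter, can be sketched as below. This is an assumed minimal implementation for illustration, not the actual VIRGO/SPM calibration pipeline; the 5σ clipping threshold and the 1-day window are illustrative choices.

```python
# Sketch of light-curve cleaning: sigma-clip outliers, then subtract a
# running mean, which acts as a high-pass filter removing slow trends.
# Thresholds and window length are assumptions, not the pipeline's values.
import statistics

def sigma_clip(x, n_sigma=5.0):
    """Replace points deviating by more than n_sigma standard deviations."""
    mu = statistics.mean(x)
    sd = statistics.stdev(x)
    return [v if abs(v - mu) <= n_sigma * sd else mu for v in x]

def high_pass(x, window):
    """Subtract a centered running mean of the given window length."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(x[i] - sum(x[lo:hi]) / (hi - lo))
    return out

# 60 s cadence: a 1-day running mean spans 1440 points.
flux = [1.0 + 0.001 * (i % 50) for i in range(3000)]
clean = high_pass(sigma_clip(flux), 1440)
```

After the filter, the slow baseline is removed and the cleaned series oscillates around zero, leaving only the short-period variability of interest.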
Doppler velocity time series observed by the Global Oscillations at Low Frequency (GOLF [10]) instrument are also available in the SPACEINN portal. Observations obtained from the blue or the red wing of the sodium doublet [11], each one with a different sensitivity to the solar disk [12], have been calibrated following the methods explained in [13], and the two independent channels have been averaged into one single time series. The starting date is at midnight on April 11, 1996, and the sampling rate is 60 s in order to follow the same cadence as the other two helioseismic instruments on board SoHO. Numerical simulations have also been created and are available on demand; all the information can be found in the SPACEINN portal. Three different types of simulations were created.

Fitting low-degree solar modes

In helioseismology, it is common practice to characterize the acoustic modes by fitting single modes using a frequentist approach at low frequencies and pairs of even or odd modes at higher frequencies [e.g. 14-17]. But with the development of asteroseismology, and due to the natural complications in properly identifying the modes, global fittings were established as standard methods [e.g. 18-23]. Moreover, Bayesian fitting techniques were soon adopted in asteroseismology [e.g. 24-29] and also tested on the Sun [30]. Although the difference between the Bayesian and the frequentist approach in the case of the Sun could be neglected [e.g. 31], we decided to adapt the high-DImensional And multi-MOdal NesteD Sampling asteroseismic Bayesian global-fitting tool [DIAMONDS, 28] to the solar case. An example of the application of the code to the GOLF velocity time series can be seen in figure 1. Because of the particularities of the GOLF measurements, the p-mode power excess has been fitted using a double Gaussian following [32].
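Global fits of the kind performed above model each p mode as a Lorentzian profile in the power spectrum. The sketch below shows that standard profile only; it is not DIAMONDS itself, whose likelihood and nested-sampling machinery are far more involved, and the mode parameters used are illustrative.

```python
# Standard power-spectrum model used in p-mode peak-bagging fits:
# a sum of Lorentzian profiles plus a background term (here simplified
# to a flat offset; real fits use a structured noise background).

def lorentzian(nu, nu0, height, width):
    """Power-spectrum profile of a damped, stochastically excited mode.

    nu0: mode frequency, width: linewidth (FWHM), height: peak power.
    """
    return height / (1.0 + ((nu - nu0) / (width / 2.0)) ** 2)

def model(nu, modes, background=0.0):
    """Evaluate the spectrum model at frequency nu for a list of modes."""
    return background + sum(lorentzian(nu, *m) for m in modes)

# Two illustrative modes near 3000 microHz, separated by a large spacing:
power = model(3000.0, [(3000.0, 10.0, 1.0), (3135.0, 8.0, 1.0)], background=0.2)
```

A Bayesian fitter such as DIAMONDS samples the posterior of the parameters (nu0, height, width) of each mode given the observed spectrum and such a model.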
A full analysis of the GOLF and VIRGO/SPM datasets using this tool will soon be available (Corsaro et al. in preparation). Considerable discussion and effort was devoted to the study of low-frequency p modes and g modes in the Sun. Although the signature of the g-mode period spacing was found using GOLF observations [33,34], it has been impossible to unambiguously detect individual g modes with a high confidence level [35-37]. Special effort has been spent on finding better ways to combine contemporary datasets, as was already outlined in [30]. Proper covariance matrices have been developed (Broomhall et al. in preparation), but the results have not allowed us to find solar g modes so far.

Photospheric magnetic activity proxy

The quasi-continuous space observations required to perform helio- and astero-seismic studies can be used to track the evolution of the photospheric magnetic field [e.g. 38-42] as well as of the sub-photospheric layers through seismology [43-47]. In the SPACEINN framework, we have extensively studied the photospheric magnetic activity proxies obtained from photometric observations, S ph [e.g. 48,49], and from Doppler velocity time series, S vel . Briefly, they track the standard deviation of subseries of a length proportional to the rotation period, in order to ensure that the periodicities are related to stellar spots and hence to magnetic activity. An example of the solar S vel measured using the GOLF data is given in figure 2. In particular, we have used the Sun as a reference (Salabert et al. in preparation) and we have compared S ph and S vel to many other solar magnetic proxies [50]. An example of these comparisons is shown in figure 3. Four time series of the S ph corresponding to the VIRGO/SPM blue, green, and red channels and the Kepler-like composite, as well as one time series of the GOLF S vel , are available at the SPACEINN portal.
These files are updated in multiples of 5 × 27 = 135 days because, as has been demonstrated for the Sun and other stars [52], a length of five times the rotation period provides the best compromise to study the evolution of the magnetic activity cycle. These solar datasets have been computed from the raw time series of VIRGO/SPM and GOLF applying the calibration procedures developed for Kepler [53], using a high-pass filter with a cut-off period at 60 days.

Comparison of solar magnetic activity cycles 23-24

A particular effort has been made to study the magnetic activity cycle of the Sun and to better understand the differences between cycles 23 and 24, which we know had a particularly unexpected long magnetic activity minimum [e.g. 54,55]. Thanks to helioseismology, it is now possible to "see" inside the Sun below the photosphere and study the changes during the evolution of the solar cycle [e.g. 56]. An example of the frequency shifts of modes of degrees ℓ = 0, 1, and 2 is shown in figure 4. They have been calculated at three different depths beneath the photosphere (∼2400, ∼1300, and ∼760 km) [57]. The low-frequency modes show nearly unchanged frequency shifts between Cycles 23 and 24, with a time-evolving signature of the quasi-biennial oscillation [58,59], which is particularly visible for the quadrupole component, revealing the presence of a complex magnetic structure. The modes at higher frequencies show frequency shifts 30% smaller during Cycle 24, which is in agreement with the decrease observed in the surface activity between Cycles 23 and 24. The frequency tables associated with the work described in [57] are now available at the SPACEINN portal. A total of 69 non-independent one-year frequency tables of modes ℓ = 0, 1, 2, and 3 are available.
They were computed using 365-day time series (with a four-time overlap of 91.25 days) to avoid any perturbation induced by the 1-year orbital motion of the SoHO spacecraft, covering a total of 18 years starting in April 1996.

Stellar Activity: Solar analogues

One of the most important questions we have to answer in astrophysics is whether or not the Sun is a typical star in terms of its photospheric properties, in particular its activity. Indeed, the amount of magnetic activity in a star is crucial for the development of life. Therefore, it is important to be able to compare the Sun to its closest siblings, as it has been shown that, when a wider comparison is made in terms of the length of activity cycles and surface rotation rates, the Sun seems to be a peculiar star lying between the so-called active and inactive branches [60]. However, we know that stars exhibiting pulsations have low surface magnetism, because the latter inhibits the oscillation modes [39,40,43,61]. Therefore, the first step is to compare the Sun to solar-type pulsating analogs. This has been done by several members of WP4.1 for 18 solar analogs observed by Kepler [62]. It has been found that the photospheric activity levels of 15 of the solar analogs are comparable to the range between the minimum and the maximum of the solar magnetic activity during the solar cycle (see figure 5).

Figure 5. Photospheric magnetic activity index, S ph (in ppm), as a function of the rotational period, P rot (in days, from [49]), of the 18 seismic solar analogs observed with the Kepler satellite. The mean activity level of the Sun, calculated from the VIRGO/SPM observations, is represented for a rotation of 25 days with its astronomical symbol, and its mean activity levels at minimum and maximum of the 11-year cycle are represented by the horizontal dashed lines. Adapted from [62].

The two stars in figure 5 with a higher S ph correspond to the youngest stars in the sample.
Hence, it is expected that they have higher surface magnetic activity compared to the Sun. One star with a rotation period comparable to the Sun's (i.e. with an age similar to the Sun's, as expected from gyrochronology) is observed to have a photospheric activity slightly lower than that of the Sun at its minimum. This could be a consequence of the star being tilted with respect to the line of sight, as the photospheric activity proxy depends on the stellar inclination angle. A similar picture can be drawn when a chromospheric proxy, the Ca K lines, is used to study the magnetism of these stars. Again, within the error bars, all but the two youngest have activity levels inside the solar range. The youngest star in the sample, KIC 10644253 (1.1 ± 0.2 Gyr, [63]), has been extensively analyzed as a possible precursor of our Sun. A magnetic activity modulation of ∼1.6 years has been measured in the S ph as well as in the frequency shifts [45] and in the p-mode amplitudes [47]. This modulation could be analogous to what has been found by [64] in the Mount Wilson star HD 30495, which has very similar stellar properties and falls on the inactive branch reported by [60]. Interestingly, some discrepancies are seen when comparing the seismic results with the photospheric proxy. Because this star seems to have a low inclination angle, with a weighted average value of i = 23° ± 6°, it is conceivable that the regions of high activity are largely confined to the nearly out-of-sight hemisphere of the star, where the discrepancy between the activity proxies is the largest [47]. Follow-up observations have been carried out with the Hermes/Mercator telescope during the last two years for all these targets. An example is shown in figure 6 for KIC 10644253 and KIC 3241581. The chromospheric S index seems to follow the magnetic modulation depicted by the Kepler S ph proxy. However, longer follow-up observations will be required to confirm the periodicity of the magnetic activity cycle of those stars.
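The S ph proxy used throughout the comparisons above, the standard deviation of consecutive subseries of length 5 × P rot, can be sketched as follows. This is an assumed minimal illustration, not the Kepler calibration pipeline of [53] (which also filters and corrects the light curves first).

```python
# Sketch of the S_ph / S_vel magnetic activity proxy: the standard
# deviation of consecutive subseries of length 5 x P_rot, which traces
# spot-induced variability and hence magnetic activity.
import math, statistics

def sph_series(time_days, flux, p_rot_days):
    """One proxy value per chunk of length 5 x P_rot (135 d for the Sun)."""
    seg = 5.0 * p_rot_days
    n_seg = int(math.ceil((time_days[-1] - time_days[0]) / seg))
    out = []
    for k in range(n_seg):
        lo, hi = time_days[0] + k * seg, time_days[0] + (k + 1) * seg
        vals = [f for t, f in zip(time_days, flux) if lo <= t < hi]
        if len(vals) > 1:
            out.append(statistics.stdev(vals))
    return out

# Toy light curve: 270 days sampled every 12 h, rotation period of 27 d.
t = [i * 0.5 for i in range(540)]
f = [math.sin(2 * math.pi * ti / 27.0) for ti in t]
s = sph_series(t, f, 27.0)
```

For this toy sinusoid, each 135-day chunk spans exactly five rotation periods and yields a standard deviation near 1/sqrt(2), as expected for a unit-amplitude sine.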
Conclusions

The global helioseismology working group (4.1) in the SPACEINN project has been a success because of the quality and quantity of the work done, as well as the tools and datasets delivered to the community through the SPACEINN web portal. But the work of this team will not finish at the end of 2016 with the official end of the project. WP4.1 is still working on several scientific papers (e.g. Broomhall et al., Corsaro et al., Salabert et al.) and it will continue working together on other ongoing analyses of helioseismic data. It is also important to mention that the deliverables provided go beyond what was originally proposed in the project. WP4.1 has also been a success in reinforcing the synergies between helio- and asteroseismology, with a continuous transfer from one to the other, in particular in the studies of solar-analogue stars, which are allowing us to better understand the Sun compared to its siblings.

Figure 6. Magnetic activity proxy S ph [45] obtained from the Kepler photometry (black continuous line) and chromospheric S index obtained from the spectroscopic follow-up observations with the Hermes/Mercator telescope (blue dots) for two solar analogues, KIC 10644253 and KIC 3241581. The dashed curve represents the potential magnetic modulation, plotted to guide the eye between the two sets of observations.
Very Degenerate Higgsino Dark Matter

We present a study of the Very Degenerate Higgsino Dark Matter (DM), whose mass splitting between the lightest neutral and charged components is O(1) MeV, much smaller than the radiative splitting of 355 MeV. The scenario is realized in the minimal supersymmetric standard model by small gaugino mixings. In contrast to the pure Higgsino DM with the radiative splitting only, various observable signatures with distinct features are induced. First of all, the very small mass splitting makes (a) sizable Sommerfeld enhancement and Ramsauer-Townsend (RT) suppression relevant to ∼1 TeV Higgsino DM, and (b) the Sommerfeld-Ramsauer-Townsend effect saturate at lower velocities v/c ≲ 10⁻³. As a result, annihilation signals can be large enough to be observed from the galactic center and/or dwarf galaxies, while the relative signal sizes can vary depending on the locations of Sommerfeld peaks and RT dips. In addition, at collider experiments, stable chargino signatures can be searched for to probe the model in the future. DM direct detection signals, however, depend on the Wino mass; no detectable signals may be induced if the Wino is heavier than about 10 TeV.

Introduction

The pure Higgsino (with the electroweak-radiative mass splitting ∆m = 355 MeV between its lightest neutral and charged components) is an attractive candidate for thermal dark matter (DM) for a mass around 1 TeV [1]. As null results at Large Hadron Collider (LHC) experiments push supersymmetry (SUSY) to the TeV scale, such a Higgsino as the lightest supersymmetric particle (LSP) has recently become an important target for future collider [2-7] and DM search experiments [5-11].
A priori, the Higgsino mass µ and the gaugino masses M 1 , M 2 for the Bino and Wino are not related; thus, the pure Higgsino scenario with much heavier gauginos is possible and natural, considering the two distinct Peccei-Quinn and R symmetric limits. It is, however, difficult to test the pure Higgsino LSP up to 1-2 TeV at collider experiments (including future 100 TeV options) and dark matter detections. Standard collider searches for the pure Higgsino LSP based on jets plus missing energy become hard as the final-state visible particles become too soft to be well observed due to the small mass splitting [2-4]; but the splitting is still large enough for the charged Higgsino components to decay promptly at colliders, so that disappearing-track and stable-chargino searches are not able to probe them [2,12]. Furthermore, the purity of the Higgsino states suppresses DM direct detection signals. DM indirect detection signals are also not large enough because of relatively weak interactions and negligible Sommerfeld enhancements [8-11, 13, 14]. In contrast, the pure Wino DM with the radiative mass splitting of 164 MeV, another thermal DM candidate for a mass ∼ 3 TeV, provides several ways to test it: monojet plus missing energy due to more efficient recoil and larger cross-section [2,4,15,16], and disappearing tracks due to the longer-lived chargino.

Very Degenerate Higgsino DM

We discuss the SUSY parameter space of the Very Degenerate Higgsino DM, which involves the Higgsino mass parameter µ, the Bino and Wino masses M 1,2 , the ratio of the Higgs vacuum expectation values t β ≡ tan β = v u /v d , the weak mixing angle given by s W ≡ sin θ W ≈ 0.23, and the W gauge boson mass m W . We assume the limit |M 1 ± M 2 |, |M 2 ± µ|, |µ ± M 1 | ≫ m W and |M 1 |, |M 2 | ≫ |µ|. We keep the signs of the mass eigenvalues and make the eigenvectors real. Later on, we will assume M 2 , µ > 0 and M 1 < 0 for the Very Degenerate Higgsino DM, but we will be agnostic about how such signs can be obtained.
Meanwhile, all sfermions and heavy Higgs bosons are assumed to be very heavy and not relevant to our study. The Higgsino mass eigenvalues at tree level are given in [20,21], where s 2β = sin 2β and so on. The subscripts S, A label the symmetric and antisymmetric neutral mass eigenstates. Which of χ 0 S or χ 0 A is the LSP depends on the relative sign of µ and KM 2 : the χ 0 A is the LSP if the relative sign is positive, and vice versa. Expressing both possibilities, we write the LSP mass accordingly; the Higgsino mass splitting at tree level is then given by eq. (2.4). The physical mass splitting is ∆m = ∆m tree + ∆m loop , where the model-independent electroweak loop corrections give ∆m loop ≈ 355 MeV for the Higgsino [12]. Notably, ∆m tree can be negative, so that the resulting physical mass splitting ∆m can be smaller than ∆m loop . From the above approximations, we find that one way to obtain a negative ∆m tree is to satisfy the following conditions:

• sign(µM 2 ) > 0 is required, because only the first term in eq. (2.4) can be negative. Assuming µ, M 2 > 0 from now on, we rewrite the splitting accordingly; ∆m tree < 0 then holds if the mass parameters satisfy the resulting condition.

• M 1 < 0 is preferred so that K < 1. We assume M 1 < 0.

• Small t β is preferred; t β ≳ 2 does not allow solutions with ∆m tree < 0 for the range of mass parameters considered.

We apply this set of approximate conditions to our full numerical calculation to narrow down the solution-finding procedure. In figure 1, we show one set of numerical solutions for ∆m tree < 0 in the range µ ≤ 2 TeV and −2.5 TeV ≥ M 1 ≥ −5 TeV, with fixed benchmark parameters M 2 = 10 TeV and t β = 1.8. In most of the parameter space shown, ∆m is smaller than the radiative mass splitting of 355 MeV. Although the approximate equations above do not depend on µ, the full numerical solution does slightly. We will consider two benchmark cases with ∆m = 2, 10 MeV in this parameter space throughout. Later, we will also comment on the case with smaller M 2 = 5 TeV.
The solutions for ∆m = 2, 10 MeV and most of our discussion do not strongly depend on the value of M 2 , but the direct detection signals do, as will be discussed. The neutralino mass splitting, δm 0 ≡ |m 0 χ 2 | − |m 0 χ 1 |, is somewhat larger, ∼ O(100) MeV, and it also does not strongly affect our discussion. (The negative ∆m tree has been used in exotic collider phenomenology of Higgsinos [22,23].)

Figure 1 (partial caption): ..., and the spin-independent direct detection rate σ SI (dotted; see section 4.1) are shown. We consider the two benchmark models along the ∆m = 2, 10 MeV contours throughout.

Indirect detection of annihilation signals

Non-perturbative effects in DM pair annihilation can lead to Sommerfeld enhancement [13,14] or Ramsauer-Townsend suppression [9,11]. The pure Higgsino DM with µ ∼ 1 TeV and ∆m ≈ 355 MeV does not experience large Sommerfeld-Ramsauer-Townsend (SRT) effects. Only Higgsinos as heavy as ∼ 7 TeV can experience sizable effects, but they are too heavy to be relevant to collider experiments. On the other hand, the 1-3 TeV pure Wino DM with ∆m ≈ 164 MeV experiences much larger SRT effects, with a resonance appearing at around 2.4 TeV [8-11, 13, 14, 17]. Since the SRT effects on the pure Wino DM saturate at relatively high velocities, v/c ∼ 10⁻², the Wino annihilation cross-sections at various astronomical sites with different velocity dispersions are the same. We will show that the very small splitting of the Higgsino DM can bring the relevant Higgsino mass scale down to ∼ 1 TeV and allow different annihilation cross-sections at various astronomical sites, postponing the saturation to lower velocities. Furthermore, not only Sommerfeld enhancements but also RT suppressions can appear.

SRT effects with very small mass splitting

We focus on today's DM annihilation cross-sections into the W W, ZZ, γγ, Zγ channels; thus, we do not consider co-annihilation channels. Pair annihilations with SRT effects can proceed via various intermediate two-body states with the same charge Q = 0 and spin S = 0, 1 as those of the initial LSP pair, which are exchanged by photons and on/off-shell W, Z gauge bosons. We take into account all two-body states formed among the Higgsino states; in addition, we add heavier gauginos if their masses are within 10 GeV of the Higgsino, in order to accommodate non-zero effects from them, but this rarely happens in our study. We follow a general formalism developed for SUSY in refs. [24-27] to calculate absorptive Wilson coefficients and non-relativistic potentials between the various two-body states, and we numerically solve the resulting Schrödinger equations to obtain the SRT effects. We study the two benchmark models with ∆m = 2, 10 MeV presented in figure 1. For a given µ ∈ [600, 2000] GeV (and the other parameters as described), a unique solution for M 1 is found. As long as the gaugino mixtures are small, the exact value of M 2 (≫ |M 1 |) does not matter much for annihilation signals. This is because the leading contributions to annihilations and SRT effects already exist in the pure Higgsino model with vanishing gaugino mixings: for example, the direct annihilation χ 0 χ 0 → W W and the SRT effect χ 0 χ 0 → χ + χ − can be mediated by the Higgsino-Higgsino-W interaction without the need for any gaugino mixtures. Thus, we set M 2 = 10 TeV (and t β = 1.8) in this section. In figure 2, we show contours of the annihilation cross-section into photon-line signals, σv, for the benchmark models with ∆m = 2, 10 MeV and for the usual pure Higgsino model with ∆m = 355 MeV for comparison. Similar features exist in the photon-continuum signals from σv W W +ZZ ≡ σv W W + σv ZZ , and similar discussions apply. Two types of enhancements are observed, most clearly in the ∆m = 2 MeV result.
First, a series of threshold zero-energy resonances forms just below the excitation threshold of χ 0 χ 0 → χ + χ − at (1/2) µv 2 ≈ ∆m (blue-dashed line) [27-29], depicted as diagonal bands of enhancement. Photon exchanges between chargino pairs are responsible for the series of closely located resonances, but not all of them are captured and shown in the figure; see ref. [27] for a demonstration of many closely located threshold resonances. Well below the threshold, the SRT effects are independent of the DM velocity, as the W-boson exchange in χ 0 χ 0 → χ + χ − becomes governed by the W mass rather than by the DM momentum [13,14,27], depicted as vertical regions of enhancement. The SRT effect saturates at a finite enhancement in the v → 0 limit because of the finite-range W-exchange Yukawa potential. As ∆m increases, the excitation χ 0 χ 0 → χ + χ − becomes harder and the attractive potential becomes effectively shallower [14]. A heavier DM with a smaller Bohr radius can compensate this trend and can form zero-energy bound states. Thus, the larger ∆m, the heavier the Higgsino Sommerfeld peaks. From µ ∼ 1.1 TeV for ∆m = 2 MeV, the Sommerfeld peak moves to a heavier µ ∼ 1.3 TeV for ∆m = 10 MeV and to a much heavier µ ∼ 7 TeV for the pure Higgsino with ∆m = 355 MeV. Moreover, the threshold velocity becomes higher with larger ∆m, making the SRT effects saturate at higher velocities. All these behaviors are clearly shown in figure 2. Another remarkable feature is that RT dips are formed near the Sommerfeld peaks [9,11,18,19], both near the excitation threshold and in the small-velocity saturation regime. The RT dips are located at slightly heavier Higgsino masses and/or larger velocities. As ∆m increases, dips and peaks become more separated in µ and v.

Annihilations at GC and DG

We calculate annihilation cross-sections at the galactic center (GC) and dwarf galaxies (DG), the main candidate sites for DM indirect detection.
The GC is expected to host a huge DM density but also plenty of contamination from baryons, whereas DGs are very clean DM sources despite their smaller DM densities. In addition, velocity dispersions differ by an order of magnitude, often further differentiating annihilation signals at DGs and the GC. We convolute the annihilation cross-section calculated in the previous subsection with the Maxwell-Boltzmann velocity distributions for the GC and DGs [9,[30][31][32]]. The resulting velocity-convoluted annihilation cross-sections at the GC and DGs are shown in figure 3. Sommerfeld enhancements and RT suppressions are both clearly observed near the 1 TeV Higgsino. Near Sommerfeld peaks and RT dips, annihilation cross-sections at the GC and DGs differ in general. The difference is larger for the ∆m = 2 MeV case because SRT effects saturate at lower velocities. Meanwhile, overall enhancements and suppressions are larger for the ∆m = 10 MeV case because peaks and dips are more separated in µ and v, so that they lead to less cancellation in the velocity convolution. We also comment that GC cross-sections are not as sharp as DG ones in the figure because we had to average over very closely-separated peaks and dips appearing just below the excitation threshold (where the GC signal is most sensitive, too), not all of which are well captured in our parameter scanning. Another remarkable feature in figure 3 is that, owing to RT dips, the DG annihilation cross-section can be smaller than that of the GC. This is a counter-example to the typical result that the DG annihilation cross-section is similar or larger because the DM velocity dispersion is smaller. The existence of RT dips is (accidentally) clearer in the photon-line signal than in the photon-continuum signal; as RT dips are produced from cancellations between various contributions (not necessarily related to resonances), their appearances and strengths can depend on annihilation channels.
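The velocity convolution described above, ⟨σv⟩ = ∫ f(v) σv(v) dv with a Maxwell-Boltzmann speed distribution, can be sketched numerically as follows. This is a minimal illustration, not the paper's computation: the dispersion values `V0_GC` and `V0_DG` are hypothetical round numbers, and a real σv(v) from the Schrödinger-equation solution would replace the toy function passed in.

```python
import math

def maxwell_boltzmann(v, v0):
    """Normalized Maxwell-Boltzmann speed distribution with dispersion v0."""
    return (4.0 / math.sqrt(math.pi)) * v**2 / v0**3 * math.exp(-(v / v0) ** 2)

def velocity_averaged(sigma_v, v0, n=20000, vmax_factor=8.0):
    """<sigma v> = integral of f(v) * sigma_v(v) dv, via the trapezoidal rule.

    sigma_v: callable giving the annihilation cross-section times velocity
    as a function of relative speed v (same units as v0).
    """
    vmax = vmax_factor * v0
    h = vmax / n
    total = 0.0
    for i in range(n + 1):
        v = i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * maxwell_boltzmann(v, v0) * sigma_v(v)
    return total * h

# Hypothetical dispersions (in units of c) illustrating the order-of-magnitude
# difference between the Galactic Center and dwarf galaxies noted in the text:
V0_GC, V0_DG = 2e-3, 1e-5
```

A velocity-independent σv is returned unchanged by the convolution (the distribution is normalized), which is a useful sanity check before plugging in a structured σv(v) with peaks and dips.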
However, the photon-line dip depth that will be observed at detectors is subject to the internal bremsstrahlung effect. Within detector resolutions of the photon energy, photons radiated off the WW annihilation process can contribute to photon-line signals, and this extra contribution can smooth out the RT dips in figure 4. As shown in the 5-plet and 7-plet DM cases studied in ref. [19], photon-line dips at some DM masses can disappear due to this extra contribution. For a better estimation of indirect detection, it is worthwhile to carry out a similar study for our doublet case; our conclusions are subject to this uncertainty. The peak heights shown in the figure may also be subject to uncertainties; our parameter scanning resolution very close to peak centers is limited, and perturbative corrections that may become important in this regime are not added. The perturbative corrections are most important when unitarity is broken by unphysically enhanced cross-sections [33]. However, our annihilation cross-sections are well below the unitarity bound σv ≤ 4π/(µ²v) ≈ 10⁻²⁰ × (1 TeV/µ)² × (10⁻²/v) cm³/sec; and indeed, the regularizing velocity v_c ∼ 10⁻⁶ [33] is much smaller than our saturation velocity. Also, our scanning resolution is good enough just away from peak centers. Thus, we do not attempt to further improve the peak height calculation. In figure 4, we finally overlay the latest constraints and some projection limits of indirect detection. Datasets presented include: HESS 2013 [34] and Fermi-LAT 2015 [35] for photon-lines from the GC, MAGIC 2013 [36] for photon-lines from DGs, the Fermi-LAT+MAGIC combination [37] for photon-continuum from DGs, and HESS 254h [38][39][40] for photon-continuum from the GC. Projection studies include: CTA 5h [11] for photon-lines from the GC (see refs. [41,42] for similar results), CTA 500h [43,44] for photon-continuum from the GC, and Fermi-LAT 15 years for photon-continuum from 16 DGs [45] (see ref. [46] for CTA projections).
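The unitarity estimate quoted above can be checked numerically. A minimal sketch, using standard unit-conversion factors and the benchmark point µ = 1 TeV, v = 10⁻² taken from the scaling in the text:

```python
import math

GEV2_TO_CM2 = 3.894e-28   # 1 GeV^-2 expressed in cm^2, from (hbar*c)^2
C_CM_PER_S = 2.998e10     # speed of light in cm/s

def unitarity_bound(mu_gev, v):
    """Partial-wave unitarity bound sigma*v <= 4*pi/(mu^2 * v), in cm^3/sec.

    mu_gev: DM mass in GeV; v: relative velocity in units of c.
    """
    sigma_v_natural = 4.0 * math.pi / (mu_gev**2 * v)   # in GeV^-2
    return sigma_v_natural * GEV2_TO_CM2 * C_CM_PER_S
```

Evaluating at µ = 1000 GeV and v = 10⁻² gives roughly 1.5 × 10⁻²⁰ cm³/sec, consistent with the ∼10⁻²⁰ cm³/sec figure quoted in the text.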
Current and future DES constraints from DG photon-continuum [47] are similar to or weaker than the results shown, so we do not show them. A full DM relic density is assumed for all Higgsino masses to interpret these data as constraints on the annihilation cross-sections. Currently, Sommerfeld peaks in both the ∆m = 2 and 10 MeV models are constrained by DG searches. Also, GC searches constrain Sommerfeld peaks of the ∆m = 10 MeV case, while the smaller peaks of the ∆m = 2 MeV case are not yet constrained by GC searches. In the future, a large part of the Sommerfeld-enhanced parameter space can be probed by CTA GC and Fermi DG searches. On the other hand, RT dips in photon-line signals are below future sensitivities, although potential positive contributions from the internal bremsstrahlung can change this somewhat. RT dips in photon-continuum signals are less significant and close to CTA GC projections. For reference, we also show as green bands the mass range where the thermal Higgsino DM with ∆m = 355 MeV can explain the full DM relic density. Although SRT effects on the Very Degenerate Higgsino model can alter the relic density somewhat, the pure Higgsino result is still a useful guide, as SRT effects on the relic density may not be so significant: not only may nearby Sommerfeld peaks and RT dips cancel each other during a thermal history, but some co-annihilation channels may also have opposite SRT effects (as for the pure Higgsino DM [27]) that can further nullify impacts on the relic density. Without dedicated relic density calculations, we are content with assuming a full DM relic density, which may come, e.g., from a non-thermal origin; in any case, our signals can be scaled in proportion to the true relic density. Direct detection The spin-independent direct detection (SIDD) signal of the nearly degenerate Higgsino DM depends on the mass splitting between the neutral states, δm_0, and the amount of the gaugino mixture.
The neutral mass gap δm_0 should be larger than O(0.1) MeV; otherwise, its inelastic scattering mediated by Z exchange would already have been observed [9]. For sufficiently large δm_0 as in our study (see figure 1), the elastic scattering rate is controlled by gaugino mixtures (via the Higgsino-gaugino-Higgs coupling); that is, the signal vanishes in the pure Higgsino limit. Therefore, we consider two benchmark values of M_2 = 10 and 5 TeV in this subsection, representing cases with relatively small and large gaugino mixings and SIDD signals. For each M_2 benchmark, the value of M_1 is fixed (as a function of other parameters) to obtain the desired ∆m = 2, 10 MeV, and thus SIDD rates are determined. The SIDD cross-section is approximately given by [48] σ_SI ≃ 8 × 10⁻⁴⁷ cm² (g_hχχ/0.01)², where the sign ∓ in the coupling implies sign(−K) and we assume the Higgs alignment limit. We obtain σ_SI = (3-5) × 10⁻⁴⁸ cm² and (4-9) × 10⁻⁴⁷ cm² for M_2 = 10 and 5 TeV, respectively, with the range spanned by µ = 600-1500 GeV (see figure 1 for the M_2 = 10 TeV result). The dependence on ∆m (indirectly via Bino mixtures) is not significant for ∆m ≲ 10 MeV. The former range of σ_SI with M_2 = 10 TeV is close to the coherent neutrino scattering background floor, so that searches will be difficult in the near future, while the latter range with M_2 = 5 TeV is expected to be probed at future experiments such as DarkSide-G2 [49,50] and LZ [49,51]. Although indirect detection signals are sizable for both M_2 benchmark values, the absence or existence of detectable SIDD signals still depends on the Wino mixture (hence, the Wino mass), and neither is a necessary consequence of the Very Degenerate Higgsino DM. Meanwhile, more interesting direct detection signals of our model can be produced by the formation of a DM-nucleus bound state through the inelastic scattering χ⁰ N_Z → χ⁻ N_{Z+1} [52][53][54].
The latest analysis adopting a semi-classical calculation in the Fermi gas model of nuclei [54] showed that neutrinoless double-beta decay experiments like EXO-200 and KamLAND-Zen are able to provide a unique and strong sensitivity to the model parameter space with ∆m smaller than the chargino-nucleus binding energy ∼ 20 MeV. Further progress in understanding the nuclear-model dependences of the nuclear transition element and/or improving experimental sensitivities will be crucial to test our model. Collider searches With the very small mass splitting, the charged Higgsino can be long-lived at LHC experiments. If it decays outside or in the outer part of the LHC detectors, stable chargino searches apply; that is, the characteristic ionization patterns of traversing massive charged particles can be identified. If it decays in the middle of a detector, disappearing charged-track searches apply, as the soft charged decay products are not efficiently reconstructed. For ∆m much smaller than the pion mass, the dominant chargino decay mode is χ⁺ → e⁺ν_e χ⁰ [12,[55][56][57]], with the decay width involving the function P(x) given in ref. [12]. For ∆m ∼ O(1-10) MeV, the decay length is very long, cτ ∼ 10⁷-10¹² m (equivalently τ ∼ 10⁻¹-10⁴ sec), so that almost all charginos traverse the LHC detectors and thus only stable chargino searches apply. By reinterpreting the CMS 8 TeV constraints on the stable charged pure Wino [58], we obtain the constraint µ ≳ 400-600 GeV for ∆m much smaller than the pion mass. The uncertainty range quoted is partly owing to the lack of our knowledge of r_min, the minimum decay length of the chargino for the CMS stable chargino search to be applied; it is needed because CMS considered the range of charged-Wino decay length cτ = O(0.1-10) m, where only a fraction of charged Winos traverse the detectors and appear as stable charginos. From the CMS acceptance curve in ref. [58], we choose to vary r_min = cτ_min ≃ 1.5-6 m (τ_min = 5-20 ns) to obtain the constraint and its uncertainty.
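The quoted decay-length range follows from the strong (∆m)⁵ dependence of the decay width, i.e. cτ ∝ (∆m)⁻⁵. A sketch of this scaling, assuming a hypothetical reference point cτ ≈ 10⁷ m at ∆m = 10 MeV read off the quoted range (not a value computed from the exact width formula):

```python
def ctau_scaled(dm_mev, dm_ref_mev=10.0, ctau_ref_m=1e7):
    """Chargino decay length from the Gamma ~ (dm)^5 scaling, so ctau ~ (dm)^-5.

    The reference point (ctau_ref_m at dm_ref_mev) is an assumed anchor
    taken from the range quoted in the text, not a first-principles result.
    """
    return ctau_ref_m * (dm_ref_mev / dm_mev) ** 5
```

With this anchor, ∆m = 1 MeV gives cτ ≈ 10¹² m, reproducing the other end of the quoted cτ ∼ 10⁷-10¹² m range and illustrating how steeply the lifetime falls as ∆m grows.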
We conclude that the ∼ 1 TeV Very Degenerate Higgsino DM is currently allowed, but future LHC searches for stable charginos will better constrain the model. Cosmological constraints The long-lived charged Higgsino can be cosmologically dangerous. The lifetime quoted above for our model, τ ∼ 10⁻¹-10⁴ sec, could endanger the standard big-bang nucleosynthesis (BBN) prediction. Although the chargino decay releases only soft leptons not directly affecting BBN, its metastable existence can form a bound state with helium and catalyze ⁶Li production. The lifetime limit τ ≲ 5000 sec on such a metastable charged particle [59] constrains the Higgsino mass splitting to be ∆m ≳ 1.2 MeV. The (∆m)⁵ dependence of the decay width in eq. (4.3) makes the BBN constraints quickly irrelevant for the larger ∆m cases that we focus on. As the enhancement saturates at modestly small velocity, early-universe constraints from eras with very small DM velocity, such as recombination and DM protohalo formation, are not strong. For example, σv_WW ≲ 10⁻²⁴ cm³/sec is generally safe from such considerations (see, e.g., refs. [60][61][62]), so that the model is not constrained except possibly for very small parameter regions close to Sommerfeld peaks. JHEP01(2017)009 Summary and discussions We have studied the Very Degenerate Higgsino DM model with O(1) MeV mass splitting, which is realized by small gaugino mixings and leads to dramatic non-perturbative effects. Owing to the very small mass splitting, SRT peaks and dips are present at around the 1 TeV Higgsino mass, and the velocity saturation of SRT effects is postponed to lower velocities v/c ∼ 10⁻³. As a result, indirect detection signals of the ∼ 1 TeV Higgsino DM can be significantly Sommerfeld-enhanced (to be constrained already or observable in the near future) or even RT-suppressed.
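The quoted bound ∆m ≳ 1.2 MeV can be roughly reproduced by inverting the τ ∝ (∆m)⁻⁵ lifetime scaling against the τ ≲ 5000 sec BBN limit. A sketch, assuming a hypothetical reference lifetime τ ≈ 10⁴ sec at ∆m = 1 MeV consistent with the range quoted earlier (the exact width formula of eq. (4.3) is not used):

```python
def dm_min_from_bbn(tau_limit_s=5000.0, tau_ref_s=1e4, dm_ref_mev=1.0):
    """Smallest Delta m (in MeV) satisfying tau < tau_limit, given
    tau(dm) = tau_ref * (dm_ref / dm)^5. Inverting: dm = dm_ref * (tau_ref/tau_limit)^(1/5).

    The reference point is an assumed anchor from the quoted lifetime range.
    """
    return dm_ref_mev * (tau_ref_s / tau_limit_s) ** 0.2
```

This gives ∆m ≳ 2^(1/5) ≈ 1.15 MeV, in good agreement with the ∆m ≳ 1.2 MeV bound cited from ref. [59]; the fifth-power scaling is why the constraint evaporates so quickly for larger splittings.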
Annihilation cross-sections at the GC and DGs differ in general: either can be larger than the other, depending on the locations of Sommerfeld peaks and RT dips. But our conclusions are subject to unaccounted internal bremsstrahlung effects, which can smooth RT dips; further studies are required to check that our results are robust. Meanwhile, another observable signature arises in stable chargino collider searches, which can probe the 1 TeV scale in the future. However, the rates of direct detection signals depend on the M_2 value (the smaller M_2, the larger the signal), so that M_2 ∼ 5 (10) TeV can (cannot) produce detectable signals. The potentially unusual aspects of indirect detection signals discussed in this paper are well featured by the two benchmark models of ∆m = 2 and 10 MeV and should be taken into account in future searches and interpretations in terms of Higgsino DM models. The Very Degenerate Higgsino DM also provides an example where "slight" gaugino mixings can have unexpectedly big impacts on the observation prospects of the Higgsino DM. The mixing is slight in the sense that direct detection, whose leading contribution is induced by gaugino mixings, can still be small (for heavy enough Winos). But the phenomenology is unexpectedly interesting because such small mixings are usually thought not to affect the indirect detection signal, as the signal is already sizable in the zero-mixing limit. All in all, nearly pure Higgsino DM can have vastly different phenomena and discovery prospects from the pure Higgsino DM, and we hope that more complete studies will follow.
5,916.4
2017-01-01T00:00:00.000
[ "Physics" ]
Microglial physiological properties and interactions with synapses are altered at presymptomatic stages in a mouse model of Huntington’s disease pathology Background Huntington’s disease (HD) is a dominantly inherited neurodegenerative disorder that affects cognitive and motor abilities by primarily targeting the striatum and cerebral cortex. HD is caused by a mutation elongating the CAG repeats within the Huntingtin gene, resulting in HTT protein misfolding. Although the genetic cause of HD has been established, the specific susceptibility of neurons within various brain structures has remained elusive. Microglia, which are the brain’s resident macrophages, have emerged as important players in neurodegeneration. Nevertheless, few studies have examined their implication in HD. Methods To provide novel insights, we investigated the maturation and dysfunction of striatal microglia using the R6/2 mouse model of HD. This transgenic model, which presents with 120+/-5 CAG repeats, displays progressive motor deficits beginning at 6 weeks of age, with full incapacitation by 13 weeks. We studied microglial morphology, phagocytic capacity, and synaptic contacts in the striatum of R6/2 versus wild-type (WT) littermates at 3, 10, and 13 weeks of age, using a combination of light and transmission electron microscopy. We also reconstructed dendrites and determined synaptic density within the striatum of R6/2 and WT littermates, at nanoscale resolution using focused ion beam scanning electron microscopy. Results At 3 weeks of age, prior to any known motor deficits, microglia in R6/2 animals displayed a more mature morphological phenotype than WT animals. Microglia from R6/2 mice across all ages also demonstrated increased phagocytosis, as revealed by light microscopy and transmission electron microscopy. 
Furthermore, microglial processes from 10-week-old R6/2 mice made fewer contacts with synaptic structures than microglial processes in 3-week-old R6/2 mice and age-matched WT littermates. Synaptic density was not affected by genotype at 3 weeks of age but increased with maturation in WT mice. Lastly, the location of synapses was modified in R6/2 mice compared with WT controls, shifting from dendritic spines to dendritic trunks at both 3 and 10 weeks of age. Conclusions These findings suggest that microglia may play an intimate role in synaptic alteration and loss during HD pathogenesis. Background Huntington's disease (HD) is a dominantly inherited neurodegenerative disorder characterized by loss of motor control, accompanied by cognitive and psychiatric impairments [1]. It is caused by a CAG repeat expansion within exon 1 of the huntingtin (HTT) gene [2], which is ubiquitously expressed across the body and is required for normal development [3,4]. A CAG expansion of 40 or more repeats (compared to the shorter repeats of healthy individuals [5]) impairs the protein's folding, eventually resulting in intracellular inclusions of mutant huntingtin (mHTT) across the brain [6]. Patients usually begin to manifest symptoms between 35 and 45 years of age, but the length of the CAG expansion is inversely correlated with the age of disease onset, and individuals with more than 50 CAG repeats present with symptoms before age 20 [7]. Several animal models, including the R6/2 mouse, have been generated to study the effects of mHTT in the brain [8]. These animals express exon 1 of human HTT with ~120-150 CAG repeats under its endogenous promoter. Disease onset in these animals is between 6 and 9 weeks of age, and the animals generally die between 12 and 14 weeks of age [8]. Medium-sized spiny neurons (MSSNs) make up 95% of the neurons within the striatum and are particularly vulnerable to the CAG repeat expansion [9,10].
They are among the first neurons to die within the striatum of HD patients [9], though by later disease stages there is widespread loss of pyramidal neurons in the cerebral cortex as well [11]. Striatal neurodegeneration is observable in HD patients and progresses along a dorsal-to-ventral and medial-to-lateral pattern [12]. Although the genetic cause of HD has been determined, the specific susceptibility of MSSNs has not been fully explained. Electron-dense, dark neurons containing condensed cytoplasm and other markers of cellular stress have been identified using transmission electron microscopy (TEM) in postmortem brains of HD patients as well as in late-stage (17-week-old) R6/2 mice [13]. Furthermore, reductions in synaptophysin and postsynaptic density 95 staining were measured in the striatum and cerebral cortex of late-stage R6/2 mice between 10 and 12 weeks of age [14,15]. In fact, reductions in synaptic markers within the somatosensory cortex are seen as early as 6 weeks of age in R6/2 mice, just before the onset of behavioral impairments [16]. There have been relatively few studies on the involvement of microglial cells in HD. mHTT inclusions have been found in all cell types of the brain in both mouse models and human cases of HD [17], while cell-type-specific expression of mHTT in glial cells, either oligodendrocytes or astrocytes, is sufficient to cause motor deficits and results in early death across several mouse models [18,19]. Compared with studies that focused on oligodendrocytes and astrocytes, relatively little attention has been paid to the involvement of microglia, the brain's resident macrophages, in HD pathogenesis. Microglia are responsible for normal synaptic pruning and maintenance and have been implicated in a number of disease states associated with synaptic loss and neurodegeneration [20,21]. Recently, microglia have begun to be studied in the specific context of HD [22].
Morphologically reactive microglia (defined by large, amoeboid-like cell bodies with short or absent processes) have been identified in postmortem samples of cerebral cortex and striatum from HD patients, as well as in the striatum of mouse models of HD [23,24]. In mice, microglia from wild-type (WT) animals co-cultured with striatal neurons expressing mHTT displayed increased proliferation and elevated levels of the cytokine IL-6 and the complement components C1qa and C1qb, and took on a more amoeboid morphology. In spite of their reactive phenotype, the presence of microglia within the culture increased the viability of mHTT neurons [25]. Interestingly, microglia, macrophages, and monocytes isolated from human mHTT carriers or from the YAC128 mouse model also expressed elevated levels of IL-6 and other proinflammatory markers in response to the proinflammatory stimulus lipopolysaccharide, as measured by multiplex ELISA [26]. More recent work determined that the aberrant reactivity of microglia in HD may be cell-autonomous, considering that mHTT expression within these cells led to increased expression of the transcription factors PU.1 and CEBP, which are responsible for macrophage and microglia development as well as maturation [27]. Increases in PU.1 and CEBP in mHTT-expressing microglia resulted in microglial "priming", or enhancement of proinflammatory gene expression, including IL-6 and TNFα, driven downstream of NFκB activation [27]. In human cases of HD, positron emission tomography (PET) imaging has identified increased microglial reactivity in the striatum and cortical regions of symptomatic HD patients, together with brain-wide increases of radiotracer binding in presymptomatic HTT carriers [28][29][30].
In all cases, ¹¹C-(R)-PK11195 was used as a translocator protein (TSPO) ligand, binding to mitochondrial peripheral benzodiazepine sites that are upregulated in microglia and other mononuclear phagocytes in response to proinflammatory stimuli or in neurodegenerative conditions such as Alzheimer's disease [31]. In two cases, increased microglial reactivity was correlated with decreased dopaminergic signaling, as read out by D2 receptor binding of the ¹¹C-raclopride PET ligand in the identified regions [28,29]. In addition, postmortem studies of human tissue have identified increases in complement components, including C1q, C3, C4, iC3b, and C9, in HD patients [32]. Together, these data suggest that the primed microglia observed in presymptomatic patients may react to normal stimuli in a hyperactive fashion, worsening disease pathogenesis. While there have been several recent studies investigating the potential role of microglia in HD, the literature has focused on the inflammatory function of these brain-resident immune cells. In addition to their neuroinflammatory roles in disease, microglia are now considered to exert beneficial physiological roles, notably in synaptic pruning and maintenance during development and adulthood [33]. It has been hypothesized that early HD symptoms may be a result of loss of synaptic input onto the MSSNs in the striatum [34]. In order to investigate the role that microglia might play in the synaptic loss seen in HD, we have performed light microscopy studies to uncover densitometric, morphological, and phagocytic alterations in microglia within the striatum of sex- and age-matched WT and littermate R6/2 mice. We investigated 3-week-old mice, prior to neuronal loss or the manifestation of motor phenotypes, as well as 10-week-old mice, when motor impairments were quantifiable, and 13-week-old mice, when motor phenotypes were severe.
Here, we present light microscopy data which demonstrate that mHTT microglia are morphologically more mature than WT microglia at 3 weeks of age and are hyperphagocytic at early ages. These microglial alterations were present even before disease-related signs unfold in the R6/2 model. Bolstered by this information, we delved further into the phagocytic alterations and investigated 3-week-old and 10-week-old animals using quantitative transmission electron microscopy (TEM). Microglial ultrastructure was already altered in the dorsomedial striatum of 3-week-old animals. Furthermore, mHTT microglia were found to interact differently with synapses: they were more likely to contact synaptic clefts prior to synaptic loss and less likely to contact synaptic clefts after motor symptoms were displayed. Finally, we utilized state-of-the-art focused ion beam scanning electron microscopy (FIB-SEM) to investigate synaptic structures in three dimensions (3D) in 3-week-old and 10-week-old R6/2 versus WT animals. While total synaptic input onto dendrites was not reduced in 3-week-old animals, we found that synaptic inputs onto dendrites in the dorsomedial striatum of R6/2 animals were more likely to make en face synapses, targeting dendritic trunks instead of dendritic spines. This occurred before synaptic loss and persisted throughout disease pathology, concurrent with altered microglia-synapse interactions. Together, these data suggest that mHTT microglia may be implicated in HD disease development and/or progression. Further studies are required to clarify this potentially important role. Animals Sex- and age-matched R6/2 B6CBA (120 ± 5 CAG) and nontransgenic littermate mice on a mixed C57BL/6/CBA background were purchased from The Jackson Laboratory and group-housed 3 to 5 animals per cage until sacrifice. Animals were kept under a 12-h light/dark cycle with food and water provided ad libitum.
All the experiments were approved and performed under the guidelines of the Institutional animal ethics committees, in conformity with the Canadian Council on Animal Care recommendations. Tissue collection Three-, 10-, and 13-week-old R6/2 mice or nontransgenic littermates were anesthetized with 80 mg/kg sodium pentobarbital (i.p. injection) prior to transcardiac perfusion. Prior to anesthetization, hindlimb clasping was verified in all 10-and 13-week-old R6/2 mice [35]. Animals were perfused through the aortic arch with 3.5% acrolein followed by 4% paraformaldehyde (PFA) for electron microscopy (EM), or solely with 4% PFA for light microscopy [36]. Brains collected for light microscopy were post-fixed for 48 h in 4% PFA and dehydrated in 15% and 30% sucrose solutions before coronal sections (40-μm thick) were cut using a freezing microtome [35]. For EM, brains were extracted and post-fixed for 90 min in 4% PFA before coronal sections (50-μm thick) were cut in phosphate-buffered saline (PBS, 50 mM, pH 7.4) using a Leica VT1000s vibratome [37]. Brain sections for both light and EM were collected and stored in cryoprotectant at − 20°C. Brain sections from Bregma levels 0.5 mm to 0.7 mm were selected based on the stereotaxic atlas of Paxinos and Franklin (4th edition) and examined for light, TEM, or FIB-SEM experiments. For FIB-SEM, sections were post-fixed flat in 2% osmium tetroxide and 1.5% potassium ferrocyanide for 1 h, followed by incubation in 1% thiocarbohydrazide for 20 min, and further incubated in 2% osmium tetroxide as described by the National Center for Microscopy and Imaging Research [38]. Following post-fixation, tissues for both TEM and FIB-SEM were dehydrated using increasing concentrations of ethanol and finally immersed in propylene oxide. Following dehydration, sections were impregnated with Durcupan resin (Electron Microscopy Sciences; EMS) overnight at room temperature, mounted between ACLAR embedding films (EMS) and cured at 55°C for 72 h. 
Specific regions of interest (1 mm × 1 mm squares of the dorsomedial region of the striatum) were excised and mounted on resin blocks for ultrathin sectioning. Immunofluorescent staining for light microscopy was performed as described previously, with minor modifications to adapt the protocol for free-floating sections [39]. Briefly, free-floating sections were washed in PBS, incubated in 0.1 M citrate buffer for 15 min at 90°C, washed, and incubated for 1 h in a blocking solution of 2% normal donkey serum in PBS containing 0.2% Triton X-100. Sections were incubated with primary antibodies (rabbit anti-IBA1, 1:1000, Wako; rat anti-CD68, 1:2500, BioRad) in blocking buffer overnight at 4°C. After primary antibody incubation, sections were washed and incubated with fluorescent secondary antibodies (donkey anti-rabbit Alexa Fluor 546 and donkey anti-rat Alexa Fluor 488, 1:1000, Invitrogen). Following staining, sections were counterstained with 4′,6-diamidino-2-phenylindole (DAPI, Thermo Fisher, 200 nM in PBS) and mounted on glass slides using Fluoromount G (Thermo Fisher). Microglial imaging and analysis Light microscopy IBA1-immunoreactive (+) cells from the dorsomedial, ventromedial, dorsolateral, and ventrolateral regions of the striatum were imaged using a Nikon Eclipse TE300 light microscope (DAB staining) or a confocal fluorescent microscope (IBA1 and CD68 double staining). Microglial density, distribution, morphology, and CD68+ puncta were analyzed as described previously, by researchers blinded to animal age and genotype, using ImageJ [39][40][41]. Microglial density within the striatum was measured across 4 sections per animal imaged at × 4 magnification by marking the center of each IBA1+ microglial cell with a dot using the paintbrush tool.
The "analyze particles" function was used to count cell numbers and to determine the nearest neighbor distance (NND) from the spatial coordinates, while microglial density was determined by dividing the number of cells by the total surface area of the regions of interest, measured in square millimeters, for each animal. The spacing index was calculated as the square of the average NND multiplied by the microglial density per animal. Microglial morphology studies were performed on 40 cells per animal imaged at × 40 magnification. Cell body size was determined by encircling the microglial soma using the freehand selection tool, and arborization area was measured using the polygon tool to connect the distal extremities of every process, both reported in square micrometers. The morphological index was calculated by dividing the soma area by the arborization area for each cell. The density of CD68+ puncta within IBA1+ microglial cells (number of puncta per cell body) was calculated within the dorsomedial region of the striatum. Between 20 and 27 cells across 5 sections per animal (60-100 cells in total per condition) were imaged at × 63 magnification using a Zeiss LSM800 confocal microscope. Regions of interest were drawn around IBA1+ cell bodies as described [42], and the number of CD68+ puncta was counted. TEM Ultrathin (65-80 nm) sections were cut with a diamond knife (Diatome) on a Leica UC7 ultramicrotome, collected on bare square-mesh grids (EMS), and imaged at 80 kV with an FEI Tecnai Spirit G2 transmission electron microscope. Profiles of neurons, synaptic elements, and microglia were identified according to well-established criteria [43]. Microglia were identified both by their IBA1 immunoreactivity and by their association with extracellular space pockets, distinctive long stretches of endoplasmic reticulum (ER), and a small elongated nucleus [37].
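The densitometric and morphometric indices defined above (NND, spacing index, morphological index) reduce to simple computations once cell coordinates and areas are exported from ImageJ. A minimal sketch with hypothetical values, assuming consistent units are used throughout:

```python
import math

def mean_nnd(points):
    """Mean nearest-neighbor distance for a list of (x, y) cell centers."""
    dists = []
    for i, (xi, yi) in enumerate(points):
        nearest = min(math.hypot(xi - xj, yi - yj)
                      for j, (xj, yj) in enumerate(points) if j != i)
        dists.append(nearest)
    return sum(dists) / len(dists)

def spacing_index(points, area):
    """Spacing index = (mean NND)^2 * density, as defined in the text."""
    density = len(points) / area
    return mean_nnd(points) ** 2 * density

def morphological_index(soma_area, arborization_area):
    """Morphological index = soma area / arborization area, per cell."""
    return soma_area / arborization_area
```

For four cells on a unit grid in a field of area 4, the mean NND is 1 and the spacing index is 1; a soma of 50 inside an arborization of 500 gives a morphological index of 0.1. The real pipeline would feed in the per-animal coordinates and areas measured in ImageJ.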
Between 7 and 11 microglial cell body profiles (imaged at magnifications between × 4800 and × 9300) and 70 to 100 microglial process profiles (imaged at × 9300) per animal were photographed using an ORCA-HR digital camera and analyzed by blinded researchers using ImageJ, as previously described [39,44]. Microglial processes were traced using the freehand selection tool and analyzed for their area and perimeter in ImageJ. Contacts with synaptic elements (presynaptic axon terminals identified by their synaptic vesicles, postsynaptic dendritic spines identified by their postsynaptic density, and synaptic clefts identified by the direct apposition, with less than 20 nm of extracellular space, between presynaptic terminals and dendritic spines) were measured by counting direct contacts with the microglial plasma membrane. Phagocytic activity was measured as the proportion of microglial cell body or process profiles containing phagosomes, defined as the presence of endosomes containing digested elements or fully lucent vacuoles larger than 300 nm [44]. The total number of phagosomes per microglial cell body or process profile was determined. Dilated ER, identified by gaps between cisternae membranes larger than approximately 100 nm, was counted, and the proportion of microglial cell body profiles containing dilated ER was reported [44]. Focused ion beam scanning electron microscopy (FIB-SEM) A Leica UC7 ultramicrotome equipped with a glass knife was used to trim the tissue into a roughly cubic frustum. A diamond knife (Diatome) was used to polish the surface of the tissue and to collect semithin sections that were used to identify a region of interest (ROI) within the block face. The trimmed tissue block was removed from the resin blank using a jeweler's blade and mounted on an aluminum stub (EMS) using conductive carbon paint (EMS) with the smooth surface facing up [45]. Finally, the sample was sputter-coated with 30 nm of platinum using a sputter coater (Zeiss).
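The two phagocytosis measures described above (the proportion of profiles containing at least one phagosome, and the number of phagosomes per profile) can be derived from per-profile counts. A minimal sketch with hypothetical counts:

```python
def phagocytic_summary(phagosome_counts):
    """From a list of per-profile phagosome counts, return a tuple:
    (proportion of profiles containing at least one phagosome,
     mean number of phagosomes per profile)."""
    n = len(phagosome_counts)
    profiles_with_phagosomes = sum(1 for c in phagosome_counts if c > 0)
    return profiles_with_phagosomes / n, sum(phagosome_counts) / n
```

For example, counts of [0, 2, 1, 0] across four profiles yield a proportion of 0.5 and a mean of 0.75 phagosomes per profile; the same summary applies whether the profiles are cell bodies or processes.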
The sample was loaded into a Zeiss Crossbeam 540 FIB-SEM. Once in the FIB-SEM, the region of interest was identified for iterative FIB milling and SEM imaging. ATLAS Engine 5 software (Fibics) was used to automate the steps involved in FIB-SEM data collection, including the deposition of a protective platinum surface followed by the sequential milling and deposition of a "wolverine claw" of fiducial markers to allow for precise automated focus and drift correction. Images were acquired with 5-nm pixels in the lateral dimensions using the ESB and SE2 detectors, with the SEM voltage set at 1.4 kV and a current of 1.2 nA. FIB milling steps of 10 nm were accomplished using a milling voltage of 30 kV and a current of 1.5 nA. Volumes of 125 μm³ were captured over 18-24-h imaging sessions. To maximize dendritic segments within the volumes, regions of the dorsomedial striatum outside of striosomes and devoid of blood vessels, myelinated axons, and cell bodies were selected. After acquisition, image stacks were evaluated for quality, finely aligned using ATLAS software, and exported as tiff stacks. The carving module of Ilastik software [46] was used for the semi-automated segmentation of 3-6 dendrites and their spines per animal (165 synapses from WT and 189 synapses from R6/2 animals were analyzed). The number and type of synapses (onto a dendritic spine versus en face synapses directly onto the dendritic trunk) were recorded manually. By segmenting individual dendrites in serial images, we were able to count exact synapse numbers rather than relying on the estimations used by other synapse counting methods [47]. We segmented and analyzed 280 dendritic spines and 74 synapses directly onto dendritic trunks, similar in number to those examined in comparable studies.

Statistical analysis

The software Prism (GraphPad, version 8) was used to analyze all the acquired data. Two-way ANOVA with Sidak post hoc tests for multiple comparisons was performed for light and EM experiments.
p < 0.05 was considered statistically significant. All reported data on graphs represent the mean ± standard error of the mean (SEM). For microglial density, distribution, and morphology studies, N = animal; for phagocytosis and EM studies, N = cell, process, or dendrite [39,44].

Results

Microglial morphological maturation is accelerated in the R6/2 mouse model

In order to investigate microglial maturation and function in the R6/2 mouse model of HD, we performed IBA1 immunostaining followed by densitometric analysis in the striatum of 3-week-, 10-week-, and 13-week-old animals (Fig. 1a-f). Densitometric studies included measurement of density (cells/mm²) as well as the nearest neighbor distance (NND, from each cell to its nearest neighbor) and the spacing index (combining the NND and density) between microglia. R6/2 animals displayed an age-dependent decrease in microglial density but had a higher microglial density compared with control animals at all ages investigated (Fig. 1g). The NND of both R6/2 and control animals increased with age, and WT animals had a significantly larger NND compared with R6/2 animals at both 10 and 13 weeks (Fig. 1h). However, the spacing index did not differ significantly between groups at any age (Fig. 1i). In addition to these changes in densitometric maturation, microglia in both genotypes exhibited an age-dependent morphological maturation. We performed morphometric analysis in the striatum of 3-week-, 10-week-, and 13-week-old animals (Fig. 2a-f). Microglial cell body area was stable across ages in WT animals, while cell body area decreased with age in R6/2 animals (Fig. 2g). Further morphological investigation found that microglia in control animals displayed an increase in their arborization area (Fig. 2h), while the microglial arborization area of R6/2 animals was already large at 3 weeks. Both genotypes displayed an age-dependent decrease in their morphological index (the ratio of cell body area to arborization area) (Fig. 2i).
In fact, R6/2 mouse microglia displayed a significantly reduced morphological index compared with WT microglia at 3 weeks of age (Fig. 2i). Together, these data indicate that microglial number remained elevated in the striatum of R6/2 mice compared with WT controls and that their morphology was significantly altered as early as 3 weeks. This age corresponds to a time point preceding any known inflammatory signaling or neurodegeneration in this model [53].

Microglial phagocytosis is increased in the R6/2 mouse model

Microglia are known to play a major role in synaptic removal and plasticity, notably via phagocytosis, in healthy and disease states [20,54]. To determine the functional implications of the decreased morphological indices in R6/2 mice, we quantified the immunolabeling for CD68, a transmembrane protein highly expressed by microglia and macrophages that is enriched in their phagolysosomal compartments [55]. IBA1+ microglia in both R6/2 and WT mice displayed abundant CD68+ puncta (Fig. 3a-f). We quantified the number of CD68+ puncta per microglial cell body (phagocytic index) in the striatum of 3-, 10-, and 13-week-old animals. WT microglia decreased their phagocytic index as the animals matured (Fig. 3g). However, microglia in R6/2 mice had elevated levels of CD68+ puncta at all ages and did not display a decrease in phagocytosis over time with maturation (Fig. 3g). Microglia in R6/2 mice may be performing aberrant excess phagocytosis, as often seen in neurodegenerative disease conditions [20], or their phagolysosomal system may be overwhelmed and not processing phagocytosed material properly, causing the cells to become overloaded with phagocytic debris.

Fig. 1 legend: Striatal microglia from WT mice at 3 weeks (a), 10 weeks (b), and 13 weeks (c) of age and R6/2 (HTT) mice at 3 weeks (d), 10 weeks (e), and 13 weeks (f) of age, identified using anti-IBA1 staining and captured using a brightfield microscope. Scale bar = 10 μm. Quantitative analysis of microglial density (g), nearest neighbor distance (h), and spacing index (i) performed at all three ages in both WT and HTT conditions. N = 4 animals per condition. Asterisk denotes the difference from WT, blue number sign denotes the difference between ages in WT mice, and red number sign denotes the difference between ages in R6/2 mice; *,# p < 0.05, **,## p < 0.01, ### p < 0.001.

Fig. 2 legend (significance): Asterisk denotes the difference from WT, blue number sign denotes the difference between ages in WT mice, red number sign denotes the difference between ages in R6/2 mice; *,# p < 0.05, ### p < 0.001, #### p < 0.0001.

Fig. 3 legend (significance): Asterisk denotes the difference from WT, red number sign denotes the difference between ages in R6/2 mice; *,# p < 0.05, **p < 0.01, ***p < 0.001.

Microglial cell body ultrastructure is altered in the R6/2 mouse model

To further investigate the types of phagocytic cargo associated with the CD68+ puncta, we performed immunoEM in the dorsomedial striatum of 3-week- versus 10-week-old WT and R6/2 animals. We focused on the dorsomedial region of the striatum, which is one of the earliest regions affected by HD pathology [12]. Microglial cell bodies in both WT and R6/2 mice displayed characteristic ultrastructural features, including a cheetah-like heterochromatin pattern in their ovoid nuclei surrounded by a narrow band of IBA1+ cytoplasm (Fig. 4a-d). Microglial cell bodies were often found directly juxtaposed with neuronal elements such as cell bodies and dendrites, as well as synaptic elements, including axon terminals and dendritic spines (Fig. 4a-d). While microglia in both 3-week- and 10-week-old WT mice rarely showed processes contiguous with their cell body in ultrathin sections, microglia in 10-week-old R6/2 mice often had long, ramified processes connected to their soma in ultrathin sections (Fig. 4d).
Microglial cell bodies across all experimental groups also displayed characteristic long stretches of ER and occasional lipidic inclusions, lipofuscin granules, and lysosomes, all common in microglial cell bodies. A higher percentage of microglia in R6/2 animals contained phagosomes (Fig. 4e), and microglia from R6/2 mice had more phagosomes per cell body than WT animals at 3 weeks (Fig. 4f), consistent with our light microscopy observations. Microglial phagosomes often held partially digested material, and microglia in R6/2 animals contained more partially digested inclusions than WT animals at 10 weeks (Fig. 4g), indicative of a possible impairment in phagolysosomal maturation. In addition to these alterations, microglia in the R6/2 striatum displayed an increased frequency of dilated ER (Fig. 4h). Dilated ER is a well-known marker of cellular stress that has been described using EM in numerous contexts of neurodegeneration, including amyotrophic lateral sclerosis and Alzheimer's disease pathology [44,56,57]. We also identified two microglia in 3-week-old R6/2 mice with reduced IBA1 immunoreactivity and a condensed cytoplasm as well as nucleoplasm (Supplemental Figure 1). These cells are reminiscent of the dark microglia seen in aging and other neurodegenerative disease models [44,57]. Their long processes formed acute angles and interacted with synaptic structures as well as the vasculature, all characteristic of dark microglia. However, we did not identify cells with the hallmark loss of nuclear chromatin pattern typically associated with dark microglia [57].

Microglial process ultrastructure is altered in the R6/2 mouse model

To complement the microglial cell body ultrastructure, we utilized immunoEM to gain insight into the activities of microglial processes in the striatum. Microglial processes are IBA1+, allowing them to be investigated at ultrahigh resolution in EM (Fig. 5a-d).
They are not usually contiguous with their cell body in ultrathin sections and form a variety of shapes and sizes as they move throughout the neuropil and survey their environment. Similarly to cell bodies, processes often contained phagocytosed material (Fig. 5a, b) and made frequent direct contacts with extracellular degraded elements or debris (referred to as "extracellular degradation," Fig. 5c, d). Microglial processes observed in WT and R6/2 mice displayed age-dependent decreases in their perimeter (Fig. 5e), suggesting that microglia take on a more surveillant morphology, as indicated by our light microscopy studies. Interestingly, R6/2 microglial processes had larger perimeters than WT processes at 3 weeks, but their process perimeter was significantly reduced with age and became smaller than that of WT processes at 10 weeks (Fig. 5e). Microglial processes in both WT and R6/2 striatum reduced their areas with age, but again, microglial processes in R6/2 mice became significantly smaller in area than those of WT mice by 10 weeks (Fig. 5f). Microglial processes in R6/2 mice were also more likely to perform extracellular degradation than processes in WT animals, although this phenomenon also decreased with age (Fig. 5g). Both WT and R6/2 processes displayed an age-dependent decrease in phagocytosed material (Fig. 5h). Interestingly, this contrasts with our findings for cell bodies, indicating there may be a shift in phagocytic cargo trafficking between processes and cell bodies with maturation.

Microglial processes make fewer contacts with synapses in the R6/2 mouse model

Some of the most striking changes in microglial process ultrastructure in R6/2 animals were their interactions with synaptic structures.
We found many instances of microglial processes interacting with synaptic structures, including axon terminals, dendritic spines, and direct contacts with excitatory synaptic clefts, defined as the junction between a presynaptic axon terminal and a postsynaptic dendritic spine (Fig. 6a-d). Microglia in the WT dorsomedial striatum consistently interacted with the same number of excitatory synapses onto dendritic spines regardless of the age investigated (Fig. 6e, f). However, microglia in the R6/2 dorsomedial striatum displayed an age-related decrease in synaptic interactions (Fig. 6e, f). Microglial processes in R6/2 mice also shifted from being more likely than WT to interact with synaptic clefts at 3 weeks of age to less likely at 10 weeks of age (Fig. 6e, f).

Fig. 5 legend: Microglial processes displaying IBA1+ immunoreactivity. The perimeter (E) and area (F) of microglial processes were calculated. The percentages of microglial processes surrounded by pockets of extracellular space containing degraded elements or debris (termed "extracellular degradation"; G) or containing phagocytic endosomes (H) were determined. Annotations are as follows: d, dendrite; ma, myelinated axon; np, neuronal perikaryon; s, dendritic spine; t, axon terminal. Black arrows point to excitatory synaptic clefts. Extracellular degradation is pseudocolored in pink; phagosomes are pseudocolored in purple. Scale bar = 500 nm. n = 70-100 microglial processes per animal for all conditions, N = 3-4 animals per condition. Asterisk denotes the difference from WT, blue number sign denotes the difference between ages in WT mice, red number sign denotes the difference between ages in R6/2 mice; *p < 0.05, **p < 0.01, ****,#### p < 0.0001.

Fig. 6 legend: Microglia-synapse interactions in the dorsomedial striatum during HD pathology. Microglial processes from 3-week-old WT (A) and R6/2 (B), as well as 10-week-old WT (C) and R6/2 (D) mice displaying IBA1+ immunoreactivity and contacting synaptic structures. The proportion of microglial processes making contact with excitatory synaptic clefts (E) as well as the number of synaptic cleft contacts per process (F) was calculated. Microglial process interaction with presynaptic axon terminals (G) and postsynaptic dendritic spines (H) was also determined. Annotations are as follows: d, dendrite; ma, myelinated axon; s, dendritic spine; t, axon terminal. Black arrows denote excitatory synaptic clefts. Scale bar = 500 nm. n = 70-100 microglial processes per animal for all conditions, N = 3-4 animals per condition. Asterisk denotes the difference from WT, blue number sign denotes the difference between ages in WT mice, red number sign denotes the difference between ages in R6/2 mice; *p < 0.05, **p < 0.01, ***p < 0.001, ****,#### p < 0.0001.

Synaptic density does not increase with maturation in the R6/2 mouse model

Microglia-synapse interactions may have an impact on synaptic density and could themselves be influenced by a large number of factors, including synaptic number. In order to investigate these changes, we analyzed dendrites in the dorsomedial striatum of 3-week-old and 10-week-old animals. We performed FIB-SEM experiments to image 150-250 μm³ synaptically dense regions, outside of striosomes and containing no blood vessels, myelinated axons, or cell bodies (Fig. 7a). Afterwards, we segmented randomly selected dendrites of lengths varying between 4.5 and 10 μm, depending upon their orientation through the imaged volume (Fig. 7b-e). We correlated the segmented dendrites with the original images to count the number of excitatory synapses and determine the number of synapses per micrometer of dendrite (Fig. 7f). This analysis revealed that synaptic density was not affected by genotype in 3-week-old animals. However, synaptic density increased between 3 and 10 weeks in WT animals, without a concomitant increase in synaptic density in R6/2 animals (Fig. 7f). These data support other studies finding impairment of corticostriatal communication in 10-12-week-old R6/2 mice [58] and could be related either to synaptic loss or to a defect of synapse formation or maturation.

Fig. 7 legend: Synaptic density in the dorsomedial striatum during HD pathology. A single image from the 125-μm³ three-dimensional FIB-SEM stack (A) displays dense neuropil containing many dendrites, dendritic spines, and axon terminals. The dendrite (d) in the inset displays an en face synaptic contact from an axon terminal (t, red arrow) and a postsynaptic dendritic spine (s) directly contacted by an axon terminal (blue arrow). FIB-SEM was performed to create 125-μm³ images from 3-week-old WT (B) and R6/2 (C) and 10-week-old WT (D) and R6/2 (E) mice. Dendrites were traced using Ilastik and rendered using MeshLab software. The number of synapses was calculated for each dendrite and normalized to its length (F). The number of synapses onto dendritic spines (G) and directly onto the dendrite itself (H) was calculated and normalized to dendritic length. The proportion of en face synapses was calculated for each dendrite and averaged (I). Scale bar = 5 μm in A, 1 μm in inset, 1 μm in B-E. n = 3-6 dendrites per animal, N = 3-4 animals per condition. Asterisk denotes the difference from WT, blue number sign denotes the difference between ages in WT mice; *p < 0.05, **p < 0.01, ***,### p < 0.001.

Synapses preferentially target dendritic trunks versus spines in the R6/2 mouse model

Because we segmented dendrites and counted synaptic density at nanoscale resolution, we were also able to discriminate between non-synaptic spines (spines not juxtaposed with an axon terminal containing synaptic vesicles) and synaptic spines (spines directly juxtaposed with an axon terminal containing synaptic vesicles). We were also able to count en face synapses formed directly onto the dendrite trunk (Fig.
7a, inset, and Supplemental Video 1). Although there was no change in synaptic number at 3 weeks of age (Fig. 7f), synapses in 3-week-old R6/2 animals were significantly less likely to contact spines, and synapses onto dendritic spines did not increase with age (Fig. 7g). Synapses in R6/2 animals were more likely to directly target the dendritic trunk itself compared with WT animals at 3 weeks of age (Fig. 7h). These data indicate that, while total synaptic density may not be affected in 3-week-old R6/2 animals, there is already a difference in the type of synaptic input made onto the medium-sized spiny neurons in the dorsomedial striatum. This shift in synaptic location (higher proportion of en face synapses relative to spine synapses) persisted in 10-week-old R6/2 animals (Fig. 7i).

Discussion

We investigated microglia in the R6/2 mouse model at early, mid, and late disease stages using a combination of light and state-of-the-art ultrastructural analyses. Microglial density was higher in the striatum of R6/2 mice compared with WT mice at all ages investigated and significantly decreased with age in R6/2 mice. This is in line with prior studies finding that microglial brain density decreases in mice from 3 to 6 weeks of age, at which point it stabilizes [59]. Interestingly, we also found that microglia from 3-week-old WT mice had an increased NND, implying more distance between individual cells. There were no overt changes in cell body area between WT and R6/2 mice, although microglial cell body area decreased significantly between 3 and 13 weeks of age in R6/2 animals. We also noted a decreased morphological index (ratio of cell body to arborization area) in 3-week-old R6/2 mice versus WT littermates. These densitometric and morphological changes are all consistent with a more mature microglial phenotype [33], indicating long processes surveilling large areas of neuropil and monitoring synaptic activity in 3-week-old R6/2 mice.
Increased microglial process arborization may indicate increased synaptic interactions with the still-healthy MSSNs, which make up 95% of the neurons residing in the striatum. Ma and colleagues described microglial morphological changes occurring as early as 7 weeks of age in R6/2 mice; however, our work provides the first quantification of microglial morphological changes in this model, and we focused on 3-week-old animals as well as animals with established behavioral deficits [60]. These overall data raise the intriguing possibility that microglial function may differ in R6/2 mice prior to the development of any motor impairments or synaptic loss, which are known to emerge at 6-7 weeks of age in this model [16,35]. Recent single-cell mass cytometry (CyTOF) studies of microglia isolated from the whole brain of R6/2 mice identified three populations of microglia, including disease-associated cells found only in R6/2 mice. The disease-associated cells were present at all ages investigated (4, 7, 10, and 13 weeks of age) but did not increase in number as the disease progressed [61]. Our data also showed an age-dependent decrease in microglial density in the striatum of R6/2 mice across all investigated ages. In 4-, 7-, and 10-week-old R6/2 mice, the disease-associated cells displayed a high expression of the canonically anti-inflammatory cytokine IL-10 [61]. This is in line with our morphological analysis of microglia in 3-week- and 10-week-old animals, which defined a surveillant, phagocytic, but not proinflammatory, phenotype. Further investigation into the microglial phenotype, especially at early ages, is warranted in R6/2 and other mouse models of HD. One of the unexplained aspects of HD lies in the functional changes that occur in microglia in presymptomatic and symptomatic carriers of mHTT.
Changes in microglial metabolism are apparent in presymptomatic human carriers of mutant HTT, as visualized by positron emission tomography imaging [28,29], and microglia in mouse models of HD express higher levels of phagocytic genes, including those from the complement family [25,27]. While microglia express increased phagocytic receptors, no changes in synaptic marker levels are apparent in the R6/2 mouse model until at least 6 weeks of age [16]. Our data revealed increased microglial phagocytosis in the dorsomedial striatum of 3-week-old R6/2 mice, prior to the model's development of overt neurological phenotypes or changes in synaptic markers. Our ultrastructural studies also uncovered increased microglia-synapse interactions in the dorsomedial striatum of 3-week-old R6/2 mice. These data may point to an early role of microglia in the loss of synaptic input into the striatum seen in HD pathogenesis. The loss of synapses seen at later ages in R6/2 animals could be caused by alterations in the formation or maturation of synapses, by synaptic loss, or by excess pruning of synapses by microglia. It is also possible that the microglia-synapse interactions altered in R6/2 animals are associated with the instability of dendritic spines in this model [62]. Dendritic spines in R6/2 mice were shown to be less stable than those of WT animals as early as 5 weeks of age [62], although further research will be required to determine whether this is a cause or a result of the changed microglial interactions seen here as early as 3 weeks, which also persisted at 10 weeks. Our use of 3D EM to investigate synaptic structure and density revealed early alterations in synaptic location in a presymptomatic HD mouse model. This technique could also be useful for investigating a number of other pathologies. It has been well described that synaptic loss precedes much of the cognitive impairment seen in Alzheimer's disease patients and animal models of the disease [63].
In fact, recent 3D FIB-SEM studies have uncovered alterations in synaptic structure in human tissue in regions both near and far from amyloid plaques [64,65]. Synapses in the transentorhinal cortices of AD individuals were more likely to target dendritic trunks, similar to our data in the striatum of the R6/2 model of HD [65]. Further research on larger datasets is required to determine whether this shift in synaptic placement occurs across other brain regions and in other neurodegenerative diseases. This technique also offers the opportunity to determine whether presynaptic terminals or postsynaptic spines change in size or morphology in R6/2 mice, which is the focus of ongoing studies. Microglia-specific expression of mHTT causes increases in proinflammatory signaling and an exaggerated response (or priming) to sterile inflammatory factors [27]. Interestingly, the changes in microglial responsiveness in the model expressing mHTT in microglia alone occur during early adulthood (8 weeks of age), significantly before the neurological deficits identified in mice expressing mutant HTT in astrocytes or oligodendrocytes using cell-type-specific promoters [18,19]. Striatal microglia in presymptomatic R6/2 mice also contained increased levels of ferritin, which remained elevated throughout disease development [66]. While several studies underline the possibility that mHTT expression causes microglia-autonomous impairments, a recent study found that microglial expression of mHTT was insufficient to cause HD symptoms and that removing mHTT specifically from microglia did not ameliorate HD-associated features [67]. However, the microglial phenotype (density, morphology, phagocytosis, etc.) was not specifically investigated. It is also important to note that, while the R6/2 mouse model uses the endogenous HTT promoter and microglia express mHTT RNA, there have been no direct observations of HTT inclusions within microglial cell bodies [68,69].
It is possible that microglia play large roles in HD pathogenesis even without expressing mHTT, and that most of the microglial alterations seen in our studies are a result of microglial responses to impaired neuronal function. In addition to microglia, recent research has uncovered potential cell-type-specific roles of various glial cells in HD. Expression of mHTT specifically in either astrocytes or oligodendrocytes causes neurological deficits, impaired motor functions, and early death [18,19]. Researchers engrafted Rag1-null mice with human glial progenitor cells expressing normal (18Q) or mutant (48Q) HTT and found impaired coordination, as evidenced by the latency to fall from the rotarod, as well as hyperexcitable striatal neurons. Conversely, R6/2 mice injected with human glial progenitor cells expressing normal HTT displayed improved motor skills and longer survival than mice injected with mHTT-expressing human glial progenitor cells [70]. These studies draw attention to the importance of studying microglial interactions with other cell types expressing mHTT in the pathogenesis of HD. Overall, these data indicate that microglia may play an intimate role in the development and pathogenesis of HD pathology. Given that microglial alterations occur as early as 3 weeks of age, microglia remain a promising target for early therapeutic intervention. Further studies of microglial cytokine expression and transcriptome alterations are warranted to determine whether pharmacological intervention to shift the microglial phenotype could affect disease pathogenesis.

Conclusion

In our study, striatal microglia displayed significant differences in density, distribution, morphology, phagocytosis, ultrastructure, and synaptic interactions before any previously reported neuronal loss or behavioral deficits in the R6/2 mouse model. These alterations observed during HD pathology occurred concurrently with the synaptic alterations we describe at nanoscale resolution.
Considering these findings together with previously obtained information about the changes in inflammatory cells, both in the brain and in the periphery, of human cases and animal models of HD, it is apparent that further studies into the potential role of microglia in HD are warranted.

Additional file 1: Figure S1. Stressed microglia in the dorsomedial striatum of 3-week-old R6/2 mice. A microglia with condensed cytoplasm (A, inset in B) but normal heterochromatin (hc) patterning. The cell body is lightly IBA1+ and contains dilated ER (er, identified by a white arrow). Another stressed microglia with condensed cytoplasm (C) displaying dilated ER (white arrow) occupies a satellite position next to a neuronal perikaryon (np) and shows an invaginated nucleus (identified by a black arrow). It is also contacted by IBA1+ microglial processes. Scale bar = 1 μm. bv, blood vessel; ma, myelinated axon; mt, mitochondrion; s, dendritic spine; t, axon terminal.

Additional file 2: Video S1. Segmentation and 3D rendering of dendritic spine versus en face synapses. A dendrite from the dorsomedial striatum of a 3-week-old WT mouse is rendered in blue. A segmented axon (orange) makes a synapse directly onto a dendritic spine. A separate segmented axon (yellow) makes a synapse directly onto the dendritic trunk.
Stokes flow around an obstacle in viscous two-dimensional electron liquid

The electronic analog of the Poiseuille flow is transport in a narrow channel with disordered edges that scatter electrons diffusely. In the hydrodynamic regime, the resistivity decreases with temperature, referred to as the Gurzhi effect, distinct from conventional Ohmic behaviour. We studied experimentally an electronic analog of the Stokes flow around a disc immersed in a two-dimensional viscous liquid. The circular obstacle results in an additive contribution to the resistivity. If specular boundary conditions apply, it is no longer possible to detect Poiseuille-type flow and the Gurzhi effect. However, in flow through a channel with a circular obstacle, the resistivity still decreases with temperature. By tuning the temperature, we observed the transport signatures of the ballistic and hydrodynamic regimes on the length scale of the disc size. Our experimental results confirm theoretical predictions.

Introduction

In the absence of disorder, an interacting many-body electron system can be described within the hydrodynamic framework [1-3]. Typical three-dimensional metals rarely enter the hydrodynamic regime because electron-impurity (phonon) scattering is stronger than the corresponding electron-electron interactions [4]. However, it is expected that in a clean two-dimensional (2D) electron system, such as modulation-doped GaAs systems and high-quality graphene layers, the requirements for hydrodynamics can easily be satisfied. Hydrodynamic characteristics are enhanced in a Poiseuille geometry, where a parabolic flow profile can be realized in a narrow pipe. The fluid in this regime has zero velocity at the boundaries. The electronic analog of viscous flow in a pipe is transport in a narrow channel of width W with diffusive scattering at the boundary, driven by the electric field.
Viscous electron flow is expected to occur when the mean free path for electron-electron collisions, l_ee, is much shorter than the sample width, while the mean free path due to impurity and phonon scattering, l, is larger than W. It has been predicted that the electrical resistivity of a 2D system is proportional to the electron shear viscosity, η = (1/4) v_F² τ_ee, where v_F is the Fermi velocity and τ_ee = l_ee/v_F is the electron-electron scattering time [5-9]. For example, the resistance decreases with the square of temperature, ρ ∼ η ∼ τ_ee ∼ T⁻², referred to as the Gurzhi effect, and with the square of the sample width, ρ ∼ W⁻². The boundary conditions can be characterized by diffusive scattering or by a slip length l_s, with the extreme cases being no-slip (l_s → 0) and no-stress (l_s → ∞) conditions. It is expected that for l_s → ∞ no Gurzhi effect should be detected. Recently, interest in electronic hydrodynamics has arisen from measurements of transport in graphene, where electron-phonon scattering is relatively weak [10-13]. Moreover, a series of updated theoretical approaches has been published [14-17] considering a viscous system in the presence of a magnetic field, which provides additional possibilities to study magnetohydrodynamics. Experiments on PdCoO₂ [18], WP₂ [19], and GaAs [20-23] have revealed many features demonstrating the viscous flow of electrons. Moreover, previous studies of the giant negative magnetoresistance in high-mobility GaAs structures [24-27] could be interpreted as a manifestation of viscosity effects, or of an interplay between ballistic and hydrodynamic effects [28]. The diffusive scattering condition is the relevant one for most liquid-solid interfaces. The absence of Poiseuille flow and the Gurzhi effect in graphene has been taken as evidence for a specular limit with a very large slip length [13]. If the slip length is larger than the sample size, viscous shear forces can still arise if the fluid flows around an obstacle.
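The regime inequalities quoted above (l_ee ≪ W ≪ l for hydrodynamic flow) can be encoded in a toy classifier. This is an illustrative sketch with sharp thresholds, whereas real crossovers between regimes are gradual:

```python
def transport_regime(l_ee_um: float, l_um: float, w_um: float) -> str:
    """Crudely classify the dominant transport regime of a channel of width w.

    Follows the inequalities in the text (sharp-threshold version):
      ohmic:        l << w        (impurity/phonon scattering dominates)
      hydrodynamic: l_ee << w << l (electron-electron collisions dominate)
      ballistic:    w << l_ee, l   (boundary scattering dominates)
    """
    if l_um < w_um:
        return "ohmic"
    if l_ee_um < w_um:
        return "hydrodynamic"
    return "ballistic"

# Numbers of the same order as the device studied here (W = 6 um, l = 35 um):
print(transport_regime(l_ee_um=3, l_um=35, w_um=6))    # short l_ee (higher T): hydrodynamic
print(transport_regime(l_ee_um=20, l_um=35, w_um=6))   # long l_ee (lower T): ballistic
```

Since τ_ee ∼ T⁻², l_ee shrinks as the temperature rises, so sweeping T moves the same device from the ballistic toward the hydrodynamic branch of this classifier.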
Flow around a circular disc was considered by Stokes a long time ago 29,30. In classical two-dimensional fluid mechanics, this leads to a phenomenon referred to as the "Stokes paradox": no solution of the Stokes equations can be found for which the fluid velocity satisfies both the boundary conditions on the body and at infinity 31. Recently an electronic analog of the Stokes paradox has been proposed for two-dimensional Fermi liquids 4,32,33. Schematically this proposal is illustrated in Figure 1a: the resistance of a sample with length L ∼ W is studied when a circular obstacle of radius a_0 << L is located in the middle of the sample 34,35. In an electronic liquid, the Stokes paradox has been resolved within the framework of the semiclassical description of quasiparticle dynamics, and a linear response has been obtained due to the momentum relaxation process [32][33][34]. Indeed, Ohmic theory predicts that the obstacle enhances the total resistance 34: R = R_0 + R_obst, where R_0 is the obstacle-free resistance and R_obst = c R_0 a_0^2 / L^2, with c a geometric factor. It is interesting that the Stokes flow around a disc leads to a dramatic consequence beyond Ohmic behaviour: the effective radius of the obstacle a_eff is always larger than the geometric radius, a_eff >> a_0 34. More importantly, the obstacle resistance decreases with temperature, suggesting that the viscous liquid is essentially always in the regime of specular scattering boundary conditions. In the present work, we have experimentally examined the transport properties of a mesoscopic 2D electron system with a circular obstacle (antidot, or micro-hole). As a reference we also studied a device without an antidot in order to extract the obstacle resistance and determine all relevant viscous parameters, which enables a comparative analysis between theory and experiment.
By tuning the temperature over a wide interval, 1.5 K < T < 70 K, we show that the obstacle resistance R_obst drops as the temperature increases (even though dR_0/dT > 0), consistent with predictions for the ballistic and hydrodynamic regimes. Methods The samples were grown by molecular beam epitaxy. They are high-quality GaAs quantum wells with a width of 14 nm, an electron density n_s = 6 × 10^11 cm^-2, and a mobility of µ = 2.5 × 10^6 cm^2/V s at T = 1.4 K. Other parameters, such as the Fermi velocity and mean free paths, are given in Table 1. We present experimental results on Hall-bar devices. They consist of three 6 µm wide segments of different lengths (6, 20, 6 µm) and 10 contacts. Figure 1b shows the image of a typical multiprobe Hall device I. The antidots are located in the middle of the right-side and left-side segments of the Hall bar, defined by chemical wet etching through the quantum well. The measurements were carried out in a VTI cryostat, using a lock-in technique to measure the longitudinal resistivity ρ_xx with an ac current of 0.1-1 µA through the sample. Three Hall bars from the same wafer were studied and showed consistent behaviour. As a reference we also measured a Hall bar without an antidot. Additionally, we studied macroscopic samples, where the viscous effects are expected to be small. These samples have a Hall-bar geometry (length L × width W = 500 µm × 200 µm) with six contacts. Experiment in reference device and discussion The electronic analog of the hydrodynamic regime in a pipe is an electric current in a narrow channel of width W ∼ 1-10 µm. Figure 1b shows the image of the Hall bar device with a micro-hole in the center of the Hall bridge. The resistance between different probes has been measured.

Table 1. Parameters of the electron system at T = 1.4 K:
W (µm) | n_s (10^11 cm^-2) | v_F (10^7 cm/s) | l (µm) | l_2 (µm) | η (m^2/s)
6      | 6.0               | 3.3             | 35     | 3        | 0.25

Figure 2a shows the longitudinal magnetoresistance for a sample with an antidot and
a reference sample without an antidot; the parameters l, l_2 and η are determined in the text. The longitudinal magnetoresistance of a viscous high-mobility 2D system in GaAs has been studied in previous work for different configurations of current and voltage probes [21][22][23]. Remarkably, we find that the probe configuration and sample geometry strongly affect the temperature evolution of the local resistance and its value at zero magnetic field. For example, when the current is applied between probes 1 and 6 and the voltage is measured between probes 4 and 5 (referred to below as configuration C1), the corresponding resistance R_I=1-6;V=4-5 increases with temperature T, while the resistance R_I=8-7;V=4-5, when the current is applied between probes 8 and 7 and the voltage is measured between probes 4 and 5 (configuration C2), decreases with T and is always larger than R_I=1-6;V=4-5. We attribute this behaviour to enhanced viscosity due to diffusive scattering on the rough edge and to the inhomogeneity of the velocity field, as predicted in paper 14. Indeed, we reproduced these results in the samples studied in this work, and Figure 2a shows that the resistance at B = 0 in configuration C2 is larger than the resistance in configuration C1. Moreover, the resistance with an antidot is enhanced in comparison with the reference sample in both configurations. One more striking feature is the anomalously large negative magnetoresistance, which is strongly enhanced in configuration C2. Satellite peaks are clearly observed in samples with antidots, resulting in additional broadening of the total magnetoresistance curve. Therefore, we may conclude that the effect of the obstacle is to add a series resistor, as predicted in paper 34. Before analyzing the obstacle effect, and in order to make this analysis more complete, we present the results of measurements of the longitudinal magnetoresistivity ρ_xx(B) in samples without a micro-hole.
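As a quick consistency check of Table 1, the Fermi velocity implied by the quoted density and the viscosity η = (1/4) v_F l_2 can be recomputed in a few lines; the GaAs effective mass m* = 0.067 m_e is an assumption here, as it is not quoted in the text:

```python
import math

# Cross-check of the Table 1 parameters (assuming m* = 0.067 m_e for GaAs,
# which is not stated in the text).
HBAR = 1.0545718e-34      # J s
M_E = 9.1093837e-31       # kg
m_eff = 0.067 * M_E       # GaAs effective mass (assumed)

n_s = 6.0e15              # electron density, m^-2 (= 6.0 x 10^11 cm^-2)
l_2 = 3.0e-6              # second-moment mean free path l_2, m (= 3 um)

# Spin-degenerate 2D gas: k_F = sqrt(2 pi n_s), v_F = hbar k_F / m*
k_F = math.sqrt(2.0 * math.pi * n_s)
v_F = HBAR * k_F / m_eff

# Shear viscosity eta = v_F^2 tau_2,ee / 4 = v_F l_2 / 4
eta = 0.25 * v_F * l_2

print(f"v_F = {v_F:.2e} m/s, eta = {eta:.2f} m^2/s")
```

Both values land on the Table 1 entries (v_F ≈ 3.3 × 10^7 cm/s, η ≈ 0.25 m^2/s), so the quoted parameters are mutually consistent.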
In order to enhance the viscosity effect, we study the resistance in configuration C2. Figure 2b shows ρ_xx(B) as a function of magnetic field and temperature. In the hydrodynamic approach, the semiclassical treatment of transport describes the motion of carriers when the higher-order moments of the distribution function are taken into account. The momentum relaxation rate 1/τ is determined by electron interaction with phonons and static defects (boundary). The second-moment relaxation rate 1/τ_2,ee leads to the viscosity and contains contributions from electron-electron scattering and from temperature-independent scattering by disorder 14,15. It has been shown that the conductivity is determined by two independent parallel channels of electron momentum relaxation: the first due to the momentum relaxation time and the second due to viscosity 14,15. This approach allows the introduction of a magnetic-field-dependent viscosity tensor and the derivation of the magnetoresistivity tensor [14][15][16]:

ρ_xx(B) = ρ_bulk + (m/e^2 n_s)(1/τ*) · 1/(1 + (2ω_c τ_2,ee)^2), (2)

where ρ_bulk = m/(e^2 n_s τ) and the viscosity η = (1/4) v_F^2 τ_2,ee. The relaxation rates are given by:

1/τ_2,ee(T) = 1/τ_2,0 + A_ee^FL T^2, (3)

where the coefficient A_ee^FL can be expressed via the Fermi energy E_F and the Landau interaction parameter. The relaxation rate 1/τ_2,0 is not related to electron-electron collisions, since any process responsible for relaxation of the second moment of the distribution function, even scattering by static defects, gives rise to viscosity 14. The momentum relaxation rate is expressed as:

1/τ(T) = 1/τ_0 + A_ph T, (4)

where A_ph is the term responsible for acoustic-phonon scattering and 1/τ_0 is the scattering rate due to static disorder (not related to the second-moment rate 1/τ_2,0). It is worth noting that above 40 K the scattering from polar LO phonons becomes important and the scattering time deviates from a simple linear dependence on temperature 36,37.
We fit the magnetoresistance curves in Figure 2b and the resistance in zero magnetic field with three fitting parameters: τ(T), τ*(T) and τ_2,ee(T). We compare the temperature dependence of 1/τ_2,ee(T) and 1/τ(T) with the theoretical predictions given by Equations 3 and 4, as shown in Figure 2c. The following parameters are extracted: 1/τ_2,0 = 0.95 × 10^11 s^-1, A_ee^FL = 0.35 × 10^9 s^-1 K^-2, A_ph = 0.5 × 10^9 s^-1 K^-1 and 1/τ_0 = 0.65 × 10^10 s^-1, consistent with previous studies 21,23. Note, however, that a discrepancy with Equations 3 and 4 is found at high temperatures, which we attribute to the inelastic process due to scattering by LO phonons. The relaxation time τ*(T) depends on τ_2,ee(T) and on the boundary slip length l_s. Comparing these values, we find that l_s = 3.2 µm < L. Our data are in good agreement with the theoretical prediction for the case when the slip length is temperature independent. Table 1 shows the mean free paths l = v_F τ and l_2 = v_F τ_2,ee and the viscosity, calculated with parameters extracted from the fit of the experimental data. In the last part of this section, we wish to discuss the influence of the ballistic effect on the negative magnetoresistance in our reference samples. As already mentioned in the introduction, a previous study of the magnetoresistance in a high-mobility two-dimensional GaAs system demonstrated a giant two-scale negative magnetoresistance consisting of a narrow temperature-independent peak near zero magnetic field and a shoulder-like magnetoresistance, which strongly depends on temperature 27. The model 28 proposes that the temperature-independent peak is attributed to ballistic effects, while the shoulder is attributed to hydrodynamic effects due to flow around randomly located macroscopic "oval" defects.
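A minimal sketch evaluating Equations 3 and 4 with the fitted rates quoted above shows how the implied viscosity η(T) = (1/4) v_F^2 τ_2,ee(T) collapses with temperature (the Gurzhi-like trend); all numbers are the fit values from the text:

```python
# Fitted rates from the text (reference sample) and v_F from Table 1.
inv_tau_20 = 0.95e11   # 1/tau_{2,0}, s^-1
A_ee = 0.35e9          # A_ee^FL, s^-1 K^-2
inv_tau_0 = 0.65e10    # 1/tau_0, s^-1
A_ph = 0.5e9           # A_ph, s^-1 K^-1
v_F = 3.3e5            # m/s

def inv_tau_2ee(T):
    """Second-moment relaxation rate, Eq. (3): 1/tau_2,ee = 1/tau_{2,0} + A_ee T^2."""
    return inv_tau_20 + A_ee * T ** 2

def inv_tau(T):
    """Momentum relaxation rate, Eq. (4), acoustic-phonon branch."""
    return inv_tau_0 + A_ph * T

def eta(T):
    """Shear viscosity eta = v_F^2 tau_2,ee / 4."""
    return 0.25 * v_F ** 2 / inv_tau_2ee(T)

for T in (1.5, 10.0, 40.0):
    print(f"T = {T:4.1f} K: eta = {eta(T):.3f} m^2/s")
```

The viscosity, and with it the viscous contribution to the resistivity, decreases monotonically with T, approaching the η ∼ T^-2 law once the T^2 term dominates 1/τ_2,0.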
It is worth noting that, because we observe small satellite peaks in the magnetoresistance in configuration C1 (Figure 2a), the ballistic contribution predicted in the model 28 can have a non-negligible effect, at least at low temperature. We present two arguments justifying that the ballistic effect is smaller than the hydrodynamic contribution. First, we have demonstrated that the magnetoresistance and R(T) strongly depend on the configuration (C1 or C2), which is unlikely to be attributable to the ballistic effect [21][22][23]. For example, a ballistic contribution cannot describe the resistance drop with temperature (the Gurzhi effect) observed in our samples 21. Second, our giant negative magnetoresistance strongly depends on temperature and can be successfully described within a hydrodynamic framework 14 in a wide temperature range, in contrast to the T-independent peak observed in paper 27. However, even if the ballistic and hydrodynamic contributions are equally important at low temperature, at high temperature the viscosity effect becomes dominant, and all our conclusions apply equally well to the samples with and without an obstacle. Experiment: obstacle resistance In this section, we focus on the study of the resistance in samples with an obstacle. Figure 3a shows the magnetoresistance for samples with an obstacle in both configurations C1 and C2. One can see small satellite peaks making the central peak wider in comparison with the reference sample. We attribute these oscillations to geometrical resonance effects, which are pronounced in 2D charged liquids 38,39. We performed numerical simulations of the electron trajectories in ballistic structures for different obstacle sizes (for details see Supplementary material). The results of these simulations (dots) for a_0 = 1 µm are compared to the experimental data.
One can see that the width of the magnetoresistance curve roughly corresponds to the experimental data, while the position of the peak is slightly shifted to higher magnetic field in comparison with the experiment. The magnetoresistance as a function of magnetic field for different radii a_0 is shown in Figures 3b,c for the two configurations C1 and C2. The diameter of the antidot has been measured directly from an optical microscope image (Figure 1b) with a precision of 0.1 µm. The effective antidot diameter is larger than the lithographic one due to the depletion region, which, however, in our high-density sample does not exceed 0.05 µm. We estimate this value from the assumption that the width of the region where the potential increases from the bottom to the Fermi energy is of the same order as the Fermi wavelength for typical electron concentrations 40. Traces for the reference sample without an obstacle are shown for comparison. One can see that the resistance with an obstacle is always larger than the reference resistance. The resistance of a sample with an antidot radius of a_0 = 1.3 µm is higher than the resistance with a_0 = 1.4 µm, probably due to the radius uncertainty (±0.05 µm). Viscosity effects are enhanced in configuration C2, and below we focus on the results obtained from this probe configuration. Figure 4a shows the evolution of the magnetoresistance with temperature for samples with an obstacle in configuration C2. We fit the central peak with the Lorentzian curve (Eq. 2). Note that this peak is absent in the magnetoresistance for configuration C1 (Figure 2a and Figure 3a) because it is overlapped by satellite peaks. As for the reference sample, we used three fitting parameters: τ(T), τ*(T) and τ_2,ee(T). Figure 2c shows the relaxation rates 1/τ(T) and 1/τ_2,ee(T) for an obstacle sample in comparison with the reference sample as a function of temperature. One can see that the rate 1/τ_2,ee(T) follows the dependencies of Eqs.
3-4 with the parameters 1/τ_2,0 = 1.15 × 10^11 s^-1 and A_ee^FL = 0.9 × 10^9 s^-1 K^-2, while the rate 1/τ(T) saturates at low temperatures, and it is unlikely that it can be described by the acoustic-phonon scattering mechanism. The difference between the rates 1/τ_2,ee(T) for the obstacle and reference samples can be attributed to the uncertainty in the determination of the Lorentzian curve width due to the satellite ballistic peak. The momentum relaxation rate is extracted from the resistivity at zero magnetic field, which is enhanced in the obstacle samples. The temperature dependence of the resistivity at zero magnetic field for different obstacle radii and the reference sample in configuration C2 is shown in Figure 4b. Note that for our approximately square-shaped devices (Figure 1b), the resistance and the 2D resistivity differ only by a geometric factor, R = 1.6ρ, and below we discuss the resistivity behaviour. One can see that the resistance (resistivity) decreases in the temperature interval 1.5 K < T < 45 K and increases at higher temperatures. We argue here that the ballistic (quasiballistic) contribution is described by the first term of Equation 2, and comparison with theory shows that it is much smaller than the viscosity contribution described by the second term. Below we repeat several key arguments which justify this conclusion and which have been discussed in previous publications 21,23. First, the resistivity in configuration C2 decreases with temperature and follows the Gurzhi law ρ ∼ T^-2, at least at low T (see Figure 2c) 21. In contrast, the resistivity in macroscopic samples increases with T and follows the linear law ρ ∼ T (below 40 K) due to acoustic-phonon scattering (see Figure 4b) 36,37. Therefore, we would expect the resistivity due to momentum relaxation to be temperature independent (scattering by static defects or the boundary) or to increase with T (due to the phonon scattering mechanism).
Second, the electron-electron scattering obeys the power law 1/τ_2,ee(T) ∼ T^2 (the logarithmic term is only weakly T-dependent) 21,23, instead of the linear-in-T law expected for phonon scattering. We compared the experimental dependence of ρ(T) in zero magnetic field with the theoretical models and obtained good agreement (see Figure 4b, triangles). Finally, the resistivity strongly depends on the probe configuration (Figure 2a), which is unlikely to be attributable to the ballistic effect. Indeed, we calculated the ballistic contribution in our sample geometry and found only a weak dependence on the configuration, which disagrees with our observations. In Figure 4b, we can see that the resistivity of the samples with obstacles is always larger than the resistivity of the reference sample within the investigated temperature range. The enhanced obstacle resistivity ρ_obst(T) = ρ_total(T) − ρ_0(T) as a function of temperature is shown in Fig. 4c for two obstacle radii. For comparison we show the resistivity measured in a macroscopic sample, ρ_macr. Conventional Ohmic behaviour is expected in this device: below 40 K, the macroscopic resistivity displays a simple linear temperature dependence due to acoustic-phonon scattering (shown by the solid line), while at higher temperatures scattering from polar LO phonons starts to become important. Indeed, dρ_macr/dT > 0 in the entire temperature interval. In contrast, the obstacle resistance shows dρ_obst/dT < 0 in the same temperature region. Theory and discussion Simplified Ohmic theory predicts that the obstacle resistivity should be proportional to the obstacle-free resistance and to the square of the obstacle radius: ρ_obst(T) ∼ ρ_0(T)(a_0/L)^2. Therefore, one might expect that the obstacle resistivity simply reproduces the temperature dependence of the Ohmic resistivity.
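A one-line estimate shows why the purely Ohmic picture struggles to account for the data: with the geometric factor c set to 1 for illustration (its value is not specified here), the predicted obstacle contribution is tiny:

```python
# Ohmic estimate rho_obst ~ rho_0 * (a_0 / L)^2, with c = 1 for illustration.
a_0 = 1.0   # obstacle radius, um
L = 6.0     # sample length ~ width, um

ohmic_fraction = (a_0 / L) ** 2
print(f"Ohmic estimate: rho_obst ~ {ohmic_fraction:.1%} of rho_0")
```

A purely Ohmic obstacle would add only about 3% to the resistivity, far less than observed, consistent with an effective radius a_eff >> a_0.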
The solid line in Figure 4c represents the resulting obstacle resistivity without viscosity effects, when only phonon scattering (acoustic and LO phonons) is taken into account. It predicts a very strong (∼10 times) increase of R_obst(T) with temperature, which clearly disagrees with our experiments. One may conclude that the T-coefficient of the obstacle resistance is attributable to the combination of two effects: the viscous flow of electrons in a narrow sample and the hydrodynamic flow around the obstacle. As already mentioned in the introduction, a lot of theoretical effort has gone into the resolution of the Stokes paradox in two-dimensional charged liquids. The main result is that the effective radius of the obstacle is larger than the geometric radius a_0 and depends on temperature. The inverse effective scattering length, which drastically affects the electron flow behaviour in the presence of an obstacle, combines both scattering processes: 1/l_eff = 1/l + 1/l_2. Three different transport regimes have been considered 34: (i) diffusive, a_0 >> l_eff l_2; (ii) ballistic, l_eff >> a_0; and (iii) hydrodynamic, l_eff << a_0 << l_eff l_2; in each regime the effective radius takes a different form. This difference in the parameter regimes leads to markedly different physical behavior of the transport. It is remarkable that, in the hydrodynamic regime, the effective radius only weakly depends on the actual radius a_0. In order to compare our results with the theoretical predictions for the corresponding transport limits, we calculate the relevant electron parameters as a function of temperature. Figure 5 represents the temperature dependence of the characteristic lengths l, l_2,ee and l_eff extracted from experiments on the two reference samples. One can see that the viscous-regime conditions l_2,ee < W < l are satisfied over the entire temperature interval, consistent with the observation of the Gurzhi effect below T < 40 K.
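The regime bookkeeping above can be sketched numerically with the rates fitted in the text; the classification thresholds below (comparing l_eff and l_2 with a_0) are illustrative simplifications of the inequalities of Ref. 34, not the paper's exact criteria:

```python
# Characteristic lengths vs. temperature, using the rates fitted in the text,
# and a simplified regime classification relative to the obstacle radius.
v_F = 3.3e5    # m/s
a_0 = 1.0e-6   # obstacle radius, m (1 um)

def lengths(T):
    l = v_F / (0.65e10 + 0.5e9 * T)          # l = v_F * tau, Eq. (4) parameters
    l_2 = v_F / (0.95e11 + 0.35e9 * T ** 2)  # l_2 = v_F * tau_2,ee, Eq. (3)
    l_eff = 1.0 / (1.0 / l + 1.0 / l_2)      # combined scattering length
    return l, l_2, l_eff

def regime(T):
    # Illustrative thresholds: obstacle smaller than every scattering length
    # -> ballistic; e-e length below the obstacle size -> hydrodynamic.
    l, l_2, l_eff = lengths(T)
    if l_eff > a_0 and l_2 > a_0:
        return "ballistic"
    if l_2 < a_0:
        return "hydrodynamic"
    return "intermediate"

for T in (1.5, 20.0, 60.0):
    print(T, regime(T))
```

With these numbers the crossover to the hydrodynamic limit on the obstacle scale indeed falls at a few tens of kelvin, in line with the T > 40 K estimate in the text.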
Since the obstacle radius is much smaller than the width of the sample, the hydrodynamic limit for the Stokes effect requires higher temperatures, T > 40 K. The model 34 predicts a general expression for the effective obstacle radius that covers all transport regimes and interpolates between the limits above. We compared the prediction of this model with our results, as shown in Figure 4c. The theory predicts slightly non-monotonic behaviour of ρ_obst(T) due to the interplay between the ρ_0(T) and a_eff(T) dependencies: at low temperatures, the contribution from the obstacle-free resistivity is dominant, while at higher temperatures the effective radius exhibits a sharp drop due to viscosity. The predicted results agree only roughly with the experimental observations, owing to the approximate character of the analytical calculations. This is because the theory 34 does not consider collisions with the sample boundary, which lead to a quadratic velocity profile in the sample and a viscous character of the flow even without an obstacle. Summary and conclusion We have studied an electronic analog of the Stokes flow around an obstacle in a two-dimensional system in high-quality GaAs quantum wells. The resistance of 2D electrons with a micro-hole fabricated in the center of the sample is always enhanced in comparison with obstacle-free devices. The obstacle resistance decreases with temperature even when dρ_0/dT > 0. Our experimental results confirm the theoretically predicted significance of momentum relaxation in the ballistic and hydrodynamic regimes, which is markedly distinct from conventional Ohmic behaviour.
Cutting Barnette graphs perfectly is hard A perfect matching cut is a perfect matching that is also a cutset, or equivalently a perfect matching containing an even number of edges on every cycle. The corresponding algorithmic problem, Perfect Matching Cut, is known to be NP-complete in subcubic bipartite graphs [Le & Telle, TCS '22] but its complexity was open in planar graphs and in cubic graphs. We settle both questions at once by showing that Perfect Matching Cut is NP-complete in 3-connected cubic bipartite planar graphs, or Barnette graphs. Prior to our work, among problems whose input is solely an undirected graph, only Distance-2 4-Coloring was known NP-complete in Barnette graphs. Notably, Hamiltonian Cycle would only join this private club if Barnette's conjecture were refuted. Introduction Deciding if an input graph admits a perfect matching, i.e., a subset of its edges touching each of its vertices exactly once, is notoriously a tractable task. There is indeed a vast literature, starting arguably in 1947 with Tutte's characterization via determinants [41], of polynomial-time algorithms deciding Perfect Matching (or returning actual solutions) and its optimization generalization, Maximum Matching.
In this paper, we are interested in another containment of a spanning set of disjoint edges (a perfect matching) than as a subgraph. As containing such a set of edges as an induced subgraph is a trivial property 1 (only shared by graphs that are themselves disjoint unions of edges), the meaningful other containment is as a semi-induced subgraph. By that we mean that we look for a bipartition of the vertex set, or cut, such that the edges of the perfect matching are "induced" in the corresponding cutset (i.e., the edges going from one side of the bipartition to the other), while we do not set any requirement on the presence or absence of edges within each side of the bipartition. This problem was in fact introduced as the Perfect Matching Cut (PMC for short) problem 2 by Heggernes and Telle, who showed that it is NP-complete [17]. As the name Perfect Matching Cut suggests, we indeed look for a perfect matching that is also a cutset. Le and Telle further show that PMC remains NP-complete in subcubic bipartite graphs of arbitrarily large girth, whereas it is polynomial-time solvable in a superclass of chordal graphs, and in graphs without a particular subdivided claw as an induced subgraph [27]. An in-depth study of the complexity of PMC when forbidding a single induced subgraph or a finite set of subgraphs has been carried out [13,30].
We look at Le and Telle's hardness constructions and wonder what other properties could make PMC tractable (aside from chordality, and forbidding a finite list of subgraphs or induced subgraphs). A simpler reduction for bipartite graphs is first presented. Let us briefly sketch their reduction (without thinking about its correctness) from Monotone Not-All-Equal 3-SAT, where, given a negation-free 3-CNF formula, one seeks a truth assignment that sets in each clause a variable to true and a variable to false. Every variable is represented by an edge, and each 3-clause by a (3-dimensional) cube with three anchor points at three pairwise non-adjacent vertices of the cube. One endpoint of the variable gadget is linked to the anchor points corresponding to this variable among the clause gadgets. Note that this construction creates three vertices of degree 4 in each clause gadget, and vertices of possibly large degree in the variable gadgets. Le and Telle then reduce the maximum degree to at most 3, by appropriately subdividing the cubes and tweaking the anchor points, and by replacing the variable gadgets by cycles.
Notably, the edge subdivision of the clause gadgets creates degree-2 vertices, which are not easy to "pad" with a third neighbor (even more so while keeping the construction bipartite). And indeed, prior to our work, the complexity of PMC in cubic graphs was open. Let us observe that on cubic graphs, the problem becomes equivalent to partitioning the vertex set into two sets each inducing a disjoint union of (independent) cycles. The close relative, Matching Cut, where one looks for a mere matching that is also a cutset, while NP-complete in general [5], is polynomial-time solvable in subcubic graphs [35,2]. The complexity of Matching Cut has further been examined in subclasses of planar graphs [37,2], when forbidding some (induced) subgraphs [13,31,30,12], on graphs of bounded diameter [31,26], and on graphs of large minimum degree [4]. Matching Cut has also been investigated with respect to parameterized complexity, exact exponential algorithms [25,22], and enumeration [15]. It was also open whether PMC is tractable on planar graphs. Note that Bouquet and Picouleau show that a related problem, Disconnected Perfect Matching, where one looks for a perfect matching that contains a cutset, is NP-complete on planar graphs of maximum degree 4, on planar graphs of girth 5, and on 5-regular bipartite graphs [3]. They incidentally call this related problem Perfect Matching Cut, but subsequent references [13,27] use the name Disconnected Perfect Matching to avoid confusion. We will observe that PMC is equivalent to asking for a perfect matching containing an even number of edges from every cycle of the input graph. The sum of even numbers being even, it is in fact sufficient that the perfect matching contains an even number of edges from every element of a cycle basis. There is a canonical cycle basis for planar graphs: the bounded faces. This gives rise to the following neat reformulation of PMC in planar graphs: is there a perfect matching containing an even number of edges along each
face? While Matching Cut is known to be NP-complete on planar graphs [37,2], it could have gone differently for PMC for the following "reasons." Not-All-Equal 3-SAT, which appears as the right starting point to reduce to PMC, is tractable on planar instances [34]. In planar graphs, perfect matchings are simpler than arbitrary matchings in that they alone [42] can be counted efficiently [40,21]. Let us finally observe that Maximum Cut can be solved in polynomial time in planar graphs [16]. In fact, we show that the reformulations for cubic and planar graphs cannot help algorithmically, by simultaneously settling the complexity of PMC in cubic and in planar graphs, with the following stronger statement. Theorem 1. Perfect Matching Cut is NP-hard in 3-connected cubic bipartite planar graphs. Not very many problems are known to be NP-complete in cubic bipartite planar graphs. Of the seven problems defined on mere undirected graphs from Karp's list of 21 NP-complete problems [20], only Hamiltonian Path is known to remain NP-complete in this class, while the other six problems admit a polynomial-time algorithm. Restricting ourselves to problems where the input is purely an undirected graph, 3 besides Hamiltonian Path/Cycle [36,1], Minimum Independent Dominating Set was also shown NP-complete in cubic bipartite planar graphs [29], as well as P3-Packing [24] (hence, an equivalent problem phrased in terms of disjoint dominating and 2-dominating sets [33]), and Distance-2 4-Coloring [10]. To our knowledge, Minimum Dominating Set is only known NP-complete in subcubic bipartite planar graphs [14,23].
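On tiny graphs, the definition (a bipartition whose cutset is a perfect matching, i.e. every vertex has exactly one neighbor across the cut) can be checked by brute force over all vertex bipartitions; a minimal sketch:

```python
from itertools import combinations

# Brute-force test for a perfect matching cut: try every bipartition (X, V\X)
# and check that each vertex has exactly one neighbor on the other side.
def has_pmc(n, edges):
    for r in range(1, n):
        for X in combinations(range(n), r):
            side = set(X)
            cross = [0] * n  # per-vertex count of edges crossing the cut
            for u, v in edges:
                if (u in side) != (v in side):
                    cross[u] += 1
                    cross[v] += 1
            if all(c == 1 for c in cross):
                return True
    return False

# The 3-cube Q3 has a PMC: split it by one coordinate; the four "vertical"
# edges form a perfect matching that is also a cutset.
cube = [(u, u ^ b) for u in range(8) for b in (1, 2, 4) if u < (u ^ b)]
print(has_pmc(8, cube))   # True
# K4 has none: every bipartition's cutset meets some vertex twice or thrice.
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(has_pmc(4, k4))     # False
```

This exhaustive check is of course exponential; the point of Theorem 1 is that no polynomial-time algorithm is expected even on Barnette graphs.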
It is interesting to note that the reductions for Hamiltonian Path, Hamiltonian Cycle, Minimum Independent Dominating Set, and P3-Packing all produce cubic bipartite planar graphs that are not 3-connected. Notoriously, lifting the NP-hardness of Hamiltonian Cycle to the 3-connected case would require disproving Barnette's conjecture 4 (and that would indeed be sufficient [11]). Note that hamiltonicity in cubic graphs is equivalent to the existence of a perfect matching that is not an edge cut (i.e., whose removal does not disconnect the graph). We wonder whether there is something inherently simpler about 3-connected cubic bipartite planar graphs, which would go beyond hamiltonicity (assuming that Barnette's conjecture is true). Let us call Barnette a 3-connected cubic bipartite planar graph. It appears that, prior to our work, Distance-2 4-Coloring was the only vanilla graph problem shown NP-complete in Barnette graphs [10]. Arguing that Distance-2 4-Coloring is a problem on squares of Barnette graphs more than it is on Barnette graphs, a case can be made for Perfect Matching Cut to be the first natural problem proven NP-complete in Barnette graphs. Provably tight subexponential-time algorithm. Note that our reduction together with existing results and a known methodology gives a fine-grained understanding, under the Exponential-Time Hypothesis 5 (or ETH) [18], of solving Perfect Matching Cut in planar graphs.
On the algorithmic side, there is a 2^O(√n)-time algorithm for PMC in n-vertex planar graphs, as a consequence of a 2^O(w) n^O(1)-time algorithm for n-vertex graphs given with a tree-decomposition of width w, and the fact that tree-decompositions of width O(√n) always exist in planar graphs and can be computed in polynomial time [28]. The 2^O(w) n^O(1)-time algorithm can be obtained directly or as a consequence of a result of Pilipczuk [38] that any problem expressible in Existential Counting Modal Logic (ECML) admits a single-exponential fixed-parameter algorithm in treewidth. ECML allows existential quantifications over vertex and edge sets followed by a counting modal formula to be satisfied from every vertex. Counting modal formulas enrich quantifier-free Boolean formulas with ♦_S φ, whose semantics is that the current vertex v has a number of neighbors satisfying φ that lies in the ultimately periodic set S of non-negative integers. One can thus express Perfect Matching Cut in ECML as ∃X (X ⇒ ♦_{1} ¬X) ∧ (¬X ⇒ ♦_{1} X), which states that there is a set X such that every vertex in X has exactly one neighbor outside X, and vice versa. On the complexity side, the Sparsification Lemma [19], the folklore linear reductions from bounded-occurrence 3-SAT to bounded-occurrence Monotone Not-All-Equal 3-SAT and to Monotone Not-All-Equal 3-SAT-E4 [6], and finally our quadratic reduction imply that 2^Ω(√n) time is required to solve PMC in n-vertex planar graphs. Our reduction (as we will see) indeed has a quadratic blow-up, as it creates O(1) vertices per variable and clause, and O(1) vertices for each of the O(n^2) crossings in a (non-planar) drawing of the variable-clause incidence graph.
Outline of the proof. We reduce the NP-complete problem Monotone Not-All-Equal 3-SAT with exactly 4 occurrences of each variable [6] to PMC. Observe that flipping the value of every variable of a satisfying assignment results in another satisfying assignment. We thus see a solution to Monotone Not-All-Equal 3-SAT simply as a bipartition of the set of variables. As we already mentioned, Not-All-Equal 3-SAT restricted to planar instances (i.e., where the variable-clause incidence graph is planar) is in P. We thus have to design crossing gadgets in addition to variable and clause gadgets. Naturally, our gadgets are bipartite graphs with vertices of degree 3, except for some special anchors, vertices of degree 2 with one incident edge leaving the gadget. The variable gadget is designed so that there is a unique way a perfect matching cut can intersect it. It might seem odd that no "binary choice" happens within it. The role of this gadget is only to serve as a baseline for which side of the bipartition the variable lands in, while the "truth assignments" take place in the clause gadgets. (Actually the same happens with Le and Telle's first reduction [27], where the variable gadget is a single edge, which has to be in any solution.) Our variable gadget consists of 36 vertices, including 8 anchor points; see Figure 1. (We will later explain why we have 8 anchor points and not simply 4, that is, one for each occurrence of the variable.) Note that in all the figures, we adopt the following convention: black edges cannot (or can no longer) be part of a perfect matching cut, red edges are in every perfect matching cut, each blue edge e is such that at least one perfect matching cut within its gadget includes e and at least one excludes e, and brown edges are blue edges that were indeed chosen in the solution.
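The two observations above, that a not-all-equal clause needs both truth values and that solutions are closed under a global flip (so a solution is really a bipartition of the variables), can be checked on a toy Monotone NAE-3-SAT instance (the clauses below are made up for illustration):

```python
from itertools import product

def nae_satisfied(clauses, assignment):
    # A monotone NAE clause is satisfied iff its variables take both values.
    return all(len({assignment[x] for x in c}) == 2 for c in clauses)

clauses = [("a", "b", "c"), ("a", "c", "d"), ("b", "c", "d")]
variables = sorted({x for c in clauses for x in c})

solutions = [dict(zip(variables, vals))
             for vals in product([False, True], repeat=len(variables))
             if nae_satisfied(clauses, dict(zip(variables, vals)))]

# Flip symmetry: complementing every variable maps solutions to solutions,
# so satisfying assignments come in complementary pairs.
flip_closed = all({x: not v for x, v in s.items()} in solutions
                  for s in solutions)
print(len(solutions), flip_closed)
```

Since no assignment equals its own complement, the solutions pair up and their number is even, matching the bipartition view used in the reduction.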
Let us recall that PMC consists of finding a perfect matching containing an even number of edges from each cycle. Thus we look for a perfect matching M such that every path (or walk) between v and w contains a number of edges of M whose parity only depends on v and w. If this parity is even, v and w are on the same side; if it is odd, v and w are on opposite sides. The 8 anchor points of each variable gadget are forced on the same side. This is the side of the variable.

At the core of the clause gadget is a subdivided cube of blue edges; see Figure 2. There are three vertices (u_1, u_8, u_14 on the picture) of the subdivided cube that are forced on the same side as the corresponding three variables. Three perfect matching cuts are available in the clause gadget, each separating (i.e., putting on opposite sides) a different vertex of {u_1, u_8, u_14} from the other two. Note that this is exactly the semantics of a not-all-equal 3-clause. We in fact need two copies of the subdivided cube, partly to increase the degree of some subdivided vertices, partly for the same reason we duplicated the anchor vertices in the variable gadgets. (The latter will be explained when we present the crossing gadgets.) Increasing the degree of all the subdivided vertices complicates the gadget further and creates two odd faces. Fortunately these two odd faces have a common neighboring even face. We can thus "fix" the parity of the two odd faces by plugging the sub-gadget D_j in the even face. We eventually need a total of 112 vertices, including 6 anchor points.
Let us now describe the crossing gadgets. Basically we want to replace every intersection point of two edges by a 4-vertex cycle. This indeed propagates black edges (those that cannot be in any solution). The issue is that going through such a crossing gadget flips one's side. As we cannot guarantee that a variable "wire" has the same parity of intersection points towards each clause gadget it is linked to, we duplicate these wires. At a previous intersection point, we now have two parallel wires crossing two other parallel wires, making four crossings. The gadget simply consists of four 4-vertex cycles; see Figure 3. Check in Figure 7 that the sides are indeed preserved. This explains why we have 8 anchor points (not 4) in each variable gadget, and 6 anchor points (not 3) in each clause gadget.

Preliminaries

For a graph G, we denote by V(G) its set of vertices and by E(G) its set of edges. If X, Y ⊆ V(G), we denote by E(X, Y) the set of edges with one endpoint in X and the other in Y. A cut of G is a bipartition (A, B) of V(G), and its cutset is E(A, B). Note that a cut fully determines a cutset, and among connected graphs a cutset fully determines a cut. When dealing with connected graphs, we may speak of the cut of a cutset. For X ⊆ V(G), the set of outgoing edges of X is E(X, V(G) \ X). For a cutset M of a connected graph G, and u, v ∈ V(G), we say that u and v are on the same side (resp. on opposite sides) of M if u and v are in the same part (resp. in different parts) of the cut of M.

A matching (resp. perfect matching) of G is a set M ⊆ E(G) such that each vertex of G is incident to at most (resp. exactly) one edge of M. A perfect matching cut is a perfect matching that is also a cutset.
A graph is planar if it can be embedded in the plane, i.e., drawn such that edges (simple curves) may only intersect at their endpoints (the vertices). A plane graph is a planar graph together with such an embedding. Given a plane graph G, a face of G is a connected component of the plane after removing the embedding of G. A facial cycle of a plane graph G is a cycle of G that bounds a face of G. We say that two plane graphs G and H are translates if the embedding of G is a translate of the embedding of H.

Proof of Theorem 1

Before we give our reduction, we start with a handful of useful lemmas and observations, which we will later need.

Preparatory lemmas

Lemma 2. Let G be a graph, and M ⊆ E(G). Then M is a cutset if and only if every cycle C of G satisfies that |E(C) ∩ M| is even.

Proof. Suppose that M is a cutset, and let (A, B) be a cut of M. Every closed walk (and in particular, cycle) contains an even number of edges of M, since the edges of M go (along the walk) from A to B, and from B to A. Now assume that every cycle of G has an even number of edges in common with M. We build a cut (A, B). For each connected component H of G, we fix an arbitrary vertex v ∈ V(H), and do the following. For each vertex w ∈ V(H), put w in A if there is a path from v to w taking an even number of edges from M, and in B if there is a path from v to w taking an odd number of edges from M. It holds that A ∪ B = V(G). By our assumption on the cycles of G, A ∩ B = ∅. Hence (A, B) is indeed a cut. The cutset of (A, B) is, by construction, M.

Lemma 3. Let G be a plane graph, and M ⊆ E(G). Then M is a cutset if and only if every facial cycle C of G satisfies that |E(C) ∩ M| is even.

Proof. The forward implication is a direct consequence of Lemma 2. The converse comes from the known fact that the bounded faces form a cycle basis; see for instance [9]. If H is a subgraph of G, let 1_H be the vector of F_2^{E(G)} with 1 entries at the positions corresponding to edges of H.
Thus, for any cycle C of G, we have 1_C = Σ_{1≤i≤k} 1_{F_i} where the F_i are facial cycles of G, and |M ∩ E(C)| has the same parity as Σ_{1≤i≤k} |M ∩ E(F_i)|.

Suppose that E(C) ∩ M = ∅. As M is a perfect matching, for every v ∈ V(C) there is an edge in M incident to v and not in E(C). As G is cubic, every outgoing edge of V(C) is then in M. Suppose now that |E(C) ∩ M| = 2. As M is a matching, the two edges of E(C) ∩ M do not share an endpoint. It implies that all the four vertices of C are touched by these two edges. Thus no outgoing edge of V(C) can be in M.

Corollary 5. Let M be a perfect matching of a cubic graph G. Thus, there is an outgoing edge of V(C_2) that is not in M. Applying Lemma 4 on C_2, we have E(C_2) ∩ M ≠ ∅. We get the converse symmetrically.

Lemma 6. Let M be a perfect matching cut of a cubic graph G. If a 6-cycle has three outgoing edges in M, then all six outgoing edges are in M.

Proof. Let C be our 6-cycle. Remember that, as M is a perfect matching cut,

Proof. By applying Lemma 4 on the 4-cycle containing v_2v_3, and the one containing

Reduction

We will prove Theorem 1 by reduction from the NP-complete Monotone Not-All-Equal 3-SAT-E4 [6]. In Monotone Not-All-Equal 3-SAT-E4, the input is a 3-CNF formula where each variable occurs exactly four times, each clause contains exactly three distinct literals, and no clause contains a negated literal. Here we say that a truth assignment on the variables satisfies a clause C if at least one literal of C is true and at least one literal of C is false. The objective is to decide whether there is a truth assignment that satisfies all clauses. We can safely assume (and we will) that the variable-clause incidence graph inc(I) of I has no cutvertex among its "variable" vertices. Indeed the reduction from Monotone Not-All-Equal 3-SAT to its four-occurrence variant does not create such cutvertices if they do not exist originally. Now if there is a "variable" cutvertex v in a Monotone Not-All-Equal 3-SAT instance J, one can split J into J_1, made of one connected component X of inc(J) − {v} plus v, and J_2, made of inc(J) \ X. One can observe that J is positive if and only if J_1 and J_2 are positive. As inc(J_1) and inc(J_2) sum up to one more vertex than inc(J), such a scheme is a polynomial-time Turing reduction to subinstances without "variable" cutvertices.

Let I be an instance of Monotone Not-All-Equal 3-SAT-E4 with variables x_1, x_2, ..., x_n and m = 4n/3 clauses C_1, C_2, ..., C_m. We shall construct, in polynomial time, an equivalent PMC-instance G(I) that is Barnette.

Our reduction consists of three steps. First we construct a cubic graph H(I) by introducing variable gadgets and clause gadgets. Then we draw H(I) on the plane, i.e., we map the vertices of H(I) to a set of points on the plane, and the edges of H(I) to a set of simple curves on the plane. We shall refer to this drawing as R. Note that this drawing may not be planar, i.e., two simple curves (or analogously the corresponding edges) might intersect at a point which is not their endpoint. Finally, we eliminate the crossing points by introducing crossing gadgets. (Recall that if the variable-clause incidence graph of a Not-All-Equal 3-SAT instance is planar, then its satisfiability can be tested in polynomial time [34]; hence, we do need crossing gadgets.) The resulting graph G(I) is Barnette, and we shall prove that G(I) has a perfect matching cut if and only if I is a positive instance of Monotone Not-All-Equal 3-SAT-E4. We now describe the above steps.

1. For each variable x_i, let X_i denote a fresh copy of the graph shown in Figure 1. Note that the variable x_i appears in exactly four clauses, say, C_j, C_k, C_p, C_q with j < k < p < q. The variable gadget X_i contains the special vertices t_{i,j}, b_{i,j}, t_{i,k}, b_{i,k}, t_{i,p}, b_{i,p}, t_{i,q}, b_{i,q}, as shown in the figure. We recall that red edges are those forced in any perfect matching cut, while black edges cannot be in any solution. An essential part of the proof will consist of justifying the edge colors in our figures.
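As a concrete illustration of the source problem's semantics (our own sketch, not part of the reduction), the brute-force checker below enumerates assignments and keeps those in which every monotone clause has both a true and a false literal; the flip symmetry noted in the proof outline is visible in its output.

```python
from itertools import product

def nae_satisfying_assignments(n_vars, clauses):
    """Return all assignments satisfying a Monotone Not-All-Equal formula.

    clauses: iterable of 3-tuples of variable indices (no negations).
    A clause is satisfied when its literals are not all equal.
    """
    solutions = []
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[i] for i in clause)
               and not all(assignment[i] for i in clause)
               for clause in clauses):
            solutions.append(assignment)
    return solutions
```

For a single clause over three variables, 6 of the 8 assignments survive, and the solution set is closed under flipping every variable, which is why a solution can be seen simply as a bipartition of the variables.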
For each clause C_j = (x_a, x_b, x_c) with a < b < c, let C_j denote a new copy of the graph shown in Figure 2. The clause gadget C_j contains the special vertices t'_{a,j}, b'_{a,j}, t'_{b,j}, b'_{b,j}, t'_{c,j}, b'_{c,j}, as shown in the figure. Then for each variable x_i that appears in the clause C_j, introduce two new edges E_{ij} = {t_{i,j} t'_{i,j}, b_{i,j} b'_{i,j}}. Let H(I) denote the graph defined as follows.

We assign to each edge e ∈ E_{ij} its variable as var(e) = i. Note that, for a variable gadget X_i, there are exactly eight outgoing edges of V(X_i). A red edge is selected in any perfect matching cut. A blue edge is selected in some perfect matching cut. A black edge is never selected in any perfect matching cut.

2. In the next step, we generate a drawing R of H(I) on the plane according to the following procedure.
a. For each variable x_i, we embed X_i as a translate of the variable gadget of Figure 1, so that distinct gadgets do not intersect in R. For two variables x_i, x_{i'} and clauses C_j, C_{j'} with x_i ∈ C_j, x_{i'} ∈ C_{j'}, exactly one of the following holds:
i. For each pair of edges (e, e') ∈ E_{ij} × E_{i'j'}, e and e' intersect exactly once in R. When this condition is satisfied, we call (E_{ij}, E_{i'j'}) a crossing quadruple. Moreover, we ensure that the interior of the subsegment of e ∈ E_{ij} between its two intersection points with edges of E_{i'j'} is not crossed by any edge;
ii. There is no pair of edges (e, e') ∈ E_{ij} × E_{i'j'} such that e and e' intersect in R.
3. For each crossing quadruple (E_{ij}, E_{i'j'}), replace the four crossing points shown in Figure 3a by the crossing gadget shown in Figure 3b. Let G(I) denote the resulting graph.

We shall need the following definitions.

Definition 11. An (induced) 4-cycle

The special 4-cycles of a particular clause gadget C_j are highlighted in Figure 2. In the next section, we show that G(I) is indeed a 3-connected cubic bipartite planar graph.

G(I) is Barnette

We shall show that the constructed graph is Barnette.

Lemma 12. The graph G(I) is 3-connected.
Proof. Observe that, for any two adjacent gadgets X, Y, there are two disjoint connector edges from X to Y. We consider G(I) after the removal of two vertices u, v.

First assume that u and v are not both connector vertices. Thus two gadgets X and Y are adjacent in G(I) if and only if they are adjacent in G(I) − {u, v}. And in particular, each pair of gadgets is then connected in G(I) − {u, v}. Then G(I) − {u, v} can only be disconnected if there exist, inside a same gadget, two vertices that are disconnected in G(I) − {u, v}. In particular, this gadget is disconnected by the removal of u, v, which forces both u and v to be picked inside it. Indeed every gadget is 2-connected. We go through the three kinds of gadgets.

If it is a variable gadget X_i, then X_i is split into two components, each containing a connector vertex. Let a (resp. b) be a connector vertex in the first (resp. second) component. As no "variable" vertex is a cutvertex in inc(I), there is a path P in inc(I) − {x_i} from "clause C_a" to "clause C_b", where C_a and C_b are the clauses corresponding respectively to a and b (i.e., with the notations of Figure 1, the second indices of a and b, respectively). Thus a and b are connected in G(I) − {u, v}.

If it is a clause gadget C_j, it is split into two connected components such that one contains a connector vertex t'_{i,j} and the other contains b'_{i,j}. Thus we can simply follow the path from t'_{i,j} to t_{i,j}, then to b_{i,j} and finally back to b'_{i,j} to connect the two parts of the clause gadget. If it is a crossing gadget X, then the split separates X into two connected components, but note that there exists a gadget Y incident to both components. Thus, as Y is connected, the subgraph induced by their union, and hence G(I) − {u, v}, is connected.
We now deal with the case when both u and v are connector vertices. Observe that every gadget remains connected after removing up to two connector vertices inside it. Therefore every gadget is connected in G(I) − {u, v}. By the first paragraph, the only interesting case is when u and v are the endpoints of two distinct connector edges between the same pair of gadgets. Then, the effect of removing u, v is to remove the link between the two gadgets. However inc(I) cannot have a bridge, for otherwise it would have a "variable" vertex that is a cutvertex. In turn, one can see that this implies that the gadget adjacency graph is bridgeless.

Lemma 13. The graph G(I) is Barnette.

Proof. By the plane embedding of the crossing gadgets, G(I) is planar. One can check that G(I) is cubic, by observing that within each gadget (variable, clause, crossing), all the vertices have degree 3, except vertices of degree 2, which are exactly those with an incident edge leaving the gadget. By Lemma 12, G(I) is 3-connected. It remains to prove the bipartiteness of G(I). Recall that our construction had three main components: variable gadgets, clause gadgets and crossing gadgets. For a particular gadget H, observe that all the outgoing edges of H lie in the external face of H. Circularly order the outgoing edges of H by e_1, ..., e_p, when going, say, clockwise. Take any two consecutive outgoing edges e_i, e_{i+1}. Let a_i = a_i(H), a_{i+1} = a_{i+1}(H) be the vertices of H that are also incident to e_i and e_{i+1}, respectively. We can observe from our construction that the path from a_i to a_{i+1}, denoted as P(H, a_i, a_{i+1}), along the external face of H in clockwise order always has an even number of vertices. We call the path P(H, a_i, a_{i+1}) an exposed path of H.
(Observe that a particular gadget has several exposed paths.) Let F be a bounded face of G(I). If F is a finite face of a variable, clause, or crossing gadget H, then |V(F)| is even because H is bipartite. Otherwise, F is a union of exposed paths, and since all exposed paths have an even number of vertices, we have that |V(F)| is even.

Properties of variable and crossing gadgets

Lemma 14. Let M be a perfect matching cut of G(I). Then for any variable gadget X_i, M ∩ E(X_i) is the matching formed by the red edges in Figure 1. In particular, M does not contain any connector edge incident to a variable gadget.

Proof. Consider the variable gadget X_i. By applying Lemma 7 on the 6-cycle S^2_i (which satisfies the requirement of having four particular edges in some 4-cycles), we get that all

Figure 4 The three types of perfect matching cuts within a clause gadget. (c) Edges of L^3_j are in brown.

From now on we assume that, for a clause gadget C_j, the two sets U_j = {u_1, ..., u_20} and V_j = {v_1, ..., v_20} are defined. Now we shall prove that for every clause gadget C_j and perfect matching cut M, the set M ∩ E(U_j ∪ V_j) can be of only three types. Before we prove the corresponding lemma, we introduce the following notations. Let H denote the subgraph of G(I) induced by the vertices of U_j ∪ V_j of the clause gadget C_j. Let the vertices of H be named as shown in Figure 2.
We define the following sets:

Definition 20. We say that a perfect matching cut M is of type i in C_j if M ∩ U_j = L^i_j.

Lemma 21. Let M be a perfect matching cut of G(I) and C_j be a clause gadget. Then there exists exactly one integer i ∈ {1, 2, 3} such that M is of type i in C_j.

induced by u_5, u_6, u_19, u_20, and infer this time that u_5u_6 ∉ M. As no outgoing edge of U_j is in M, it is now easy to verify that L^3_j ⊂ M; see Figure 4c. Observe that the three sets L^1_j, L^2_j, L^3_j are pairwise distinguished by which of the edges u_9u_10 and u_12u_13 they contain. Thus M ∩ U_j is determined by the containment of u_9u_10 and of u_12u_13 in M. This is also the fact, by symmetry, for V_j ∩ M, when considering the edges v_9v_10 and v_12v_13.

Proof. At this point, apply Lemma 4 to the two 4-cycles u_9, u_10, v_10, v_9 and v_17, v_18, v_19, v_20. We have that u_9u_10 ∈ M if and only if v_9v_10 ∈ M, and u_12u_13 ∈ M if and only if v_12v_13 ∈ M. Thus L^i_j propagates to L^i_j ∪ R^i_j.

As a direct consequence of Lemma 21, we get the following.

Lemma 22. Let M be a perfect matching cut of G(I) and (A, B) be the cut of M. The vertices u_1, u_8, u_14 of a clause gadget C_j cannot all be on the same side of M. More precisely:
1. L^1_j sets u_1 to one side of M, and u_8, u_14 to the other;
2. L^2_j sets u_14 to one side of M, and u_1, u_8 to the other;
3. L^3_j sets u_8 to one side of M, and u_1, u_14 to the other.

Note that for a clause gadget C_j, if M is of type 1 (type 2, type 3, respectively) in C_j, then the edges in M ∩ E(C_j) are indicated in Figure 5 (Figure 6a, Figure 6b, respectively).
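The correspondence between the three types and the isolated vertex can be phrased as a tiny selection rule. The helper below is our own illustrative sketch, using the pairing u_1 ↔ x_b, u_8 ↔ x_a, u_14 ↔ x_c inferred from Lemmas 22 and 24 (this pairing is an assumption of the sketch, not an explicit statement of the paper): given the sides of the three variables of a clause, it picks the type isolating the variable whose side differs from the other two.

```python
def clause_gadget_type(side_a, side_b, side_c):
    """Pick the clause-gadget matching type (1, 2 or 3) per Lemma 22.

    Assumed pairing (inferred from Lemmas 22 and 24): type 1 isolates the
    vertex tied to x_b, type 2 the one tied to x_c, type 3 the one tied
    to x_a. Sides are booleans; the not-all-equal condition must hold.
    """
    if side_a == side_b == side_c:
        raise ValueError("not-all-equal constraint violated")
    if side_b != side_a and side_b != side_c:
        return 1  # x_b alone on its side
    if side_c != side_a and side_c != side_b:
        return 2  # x_c alone on its side
    return 3      # x_a alone on its side
```

With three boolean sides satisfying not-all-equal, exactly one variable differs from the other two, so exactly one branch fires.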
Relation between variable and clause gadgets

Lemma 23. Let M be a perfect matching cut of G(I). Then for a variable x_i and a clause C_j with x_i ∈ C_j, the vertices t_{i,j}, t'_{i,j}, b_{i,j}, b'_{i,j} are on the same side of M.

Proof. Let z be any vertex of the cycle S^2_i in the variable gadget X_i; see Figure 1. Observe that there exists a path P (resp. P') between z and t_{i,j} (resp. b_{i,j}) such that |M ∩ E(P)| (resp. |M ∩ E(P')|) is even. Hence, due to Observation 8, t_{i,j} and b_{i,j} are on the same side of M.

Our construction of G(I) ensures that there exists an even non-negative integer k (where k = 0 if t_{i,j} and t'_{i,j} are adjacent) such that all the following holds: there are k crossover 4-cycles F_1, F_2, ..., F_k and a path P between t_{i,j} and t'_{i,j} where for each 1 ≤ l ≤ k, E(P) ∩ E(F_l) is a 2-edge subpath. Now due to Lemma 15 we know that for any 1 ≤ l ≤ k, |M ∩ E(F_l)| = 2. The above arguments further imply that |M ∩ E(F_l) ∩ E(P)| = 1. This implies that |E(P) ∩ M| = k, which is even. Hence due to Observation 8 we have that t_{i,j} and t'_{i,j} are on the same side of M.

Using similar reasoning we can infer that b_{i,j} is on the same side as b'_{i,j}. Hence t_{i,j}, t'_{i,j}, b_{i,j}, b'_{i,j} are all on the same side.

Proof. First we prove (a). Using Figure 2, observe that there exists a path P between t_{c,j} and u_14 such that P can be written as t_{c,j} z_1 z_2 d_1 d_2 d_3 z_3 z_4 z_5 u_14 where {z_1, z_2} ⊂ V(F_6) and {z_3, z_4, z_5} ⊂ V(F_5). Note that F_5 and F_6 are special 4-cycles. Due to Lemma 18, we have that |M ∩ E(F_5)| = 2 and |M ∩ E(F_6)| = 2. This implies there exists exactly one edge e ∈ {t_{c,j} z_1, z_1 z_2} such that e ∈ M. Similarly, there exists exactly one edge e' ∈ {z_3 z_4, z_4 z_5} such that e' ∈ M. Moreover, from Lemma 17 it follows that none of {z_2 d_1, d_1 d_2, d_2 d_3, d_3 z_3} belongs to M. Hence M ∩ E(P) = {e, e'}, and |M ∩ E(P)| is even. Now invoking Observation 8 we conclude that t_{c,j} and u_14 are on the same side of M.

Now we prove (b). Using Figure 2, observe that there exists a path P' between b_{a,j} and u_8 such that P' can be written as b_{a,j} z_1 z_2 z_3 z_4 z_5 z_6 z_7 z_8 z_9 z_10 z_11 u_8 where {z_1, z_2} ⊂ V(F_1), {z_3, z_4, z_5} ⊂ V(F_2), {z_6, z_7, z_8} ⊂ V(F_3), and {z_9, z_10, z_11} ⊂ V(F_4). Now arguing similarly as in (a) on the special 4-cycles F_1, F_2, F_3, F_4, we have that |M ∩ E(P')| is even. By Observation 8, we conclude that b_{a,j} and u_8 are on the same side of M.

Lemma 24.
Let M be a perfect matching cut of G(I). Then for any clause gadget

For any crossing gadget X_j as drawn in Figure 7, we consider the two perfect matching cuts P^1_j of Figure 7a and P^2_j of Figure 7b on X_j.

Lemma 25. Let X_j be a crossing gadget of G(I). For any M ∈ {P^1_j, P^2_j}, M is a perfect matching cut of X_j. Vertices t_a, b_a, t'_a, b'_a are always on the same side of M, and t_c, b_c, t'_c, b'_c are always on the same side of M. Moreover, if M = P^1_j, t_a and t_c are on the same side of M (in X_j), otherwise they are not.

Proof. We refer to Figure 7 for the notations on X_j. M is a perfect matching cut of X_j by Lemma 3. Let C be the external facial cycle of X_j. We conclude by Observation 8 on subpaths of C starting at t_a or t_c.

Existence of a perfect matching cut implies satisfiability

In this section, we show that if G(I) has a perfect matching cut then I has a satisfying assignment. Let M be a perfect matching cut of G(I) and (A, B) be the cut of M. As we already observed, a potential solution to I can be seen as a partition (V_A, V_B) of the variables. We set x_i in V_A if and only if V(S^2_i) ⊂ A, and we show that P = (V_A, V_B) satisfies the Monotone Not-All-Equal 3-SAT-E4 instance I.

Assume for contradiction that there exists a clause C_j such that all variables x_a, x_b, x_c of C_j are on the same side of P. Thus all the vertices in ∪_{i∈{a,b,c}} V(S^2_i) are on the same side of M. Assume without loss of generality that this side is A. Let z_i be any vertex of S^2_i. Now fix an integer i ∈ {a, b, c}. Observe, using Figure 1, that there exists a path P between z_i and t_{i,j} such that |M ∩ E(P)| is even. Hence, due to Observation 8, we infer that t_{i,j} lies in A. Now due to Lemma 23 we have that {t_{i,j}, t'_{i,j}, b_{i,j}, b'_{i,j}} ⊂ A. The above discussion implies that ∪_{i∈{a,b,c}} {t_{i,j}, t'_{i,j}, b_{i,j}, b'_{i,j}} ⊂ A. Now invoking Lemma 24, we have that all three vertices in {u_1, u_8, u_14} lie in A. But this contradicts Lemma 22.
Hence we get the following.

Lemma 26. If G(I) has a perfect matching cut then I is a positive instance.

Satisfiability implies the existence of a perfect matching cut

In this section, we show that given a Monotone Not-All-Equal 3-SAT-E4 instance I and a partition P = (V_A, V_B) satisfying I, we can construct a perfect matching cut M_P of G(I), as follows: for each variable gadget X_i, M_P ∩ E(X_i) is the matching imposed by Lemma 14; for each crossing gadget X_j incident to two edges e, f, we choose P^1_j if var(e) and var(f) are on the same side of P, and P^2_j otherwise; for each clause gadget C_j over variables a, b, c, we choose the matching of Figure 5 if b is not on the same side of P as a and c, the matching of Figure 6a if c is not on the same side of P as a and b, and the matching of Figure 6b in the last case.

As M_P is a perfect matching on each gadget, and as every vertex belongs to some gadget, M_P is a perfect matching of G(I). By construction, M_P contains no connector edges. Recall that any edge that does not have both endpoints inside the same gadget is a connector edge; we call connector vertex a vertex v incident to a connector edge e, and set var(v) = var(e).

Lemma 27. For any path Q between two connector vertices u and v, |E(Q) ∩ M_P| is even if and only if var(u) and var(v) are on the same side of P.

Proof. As M_P does not contain any connector edges, |E(Q) ∩ M_P| is determined by the parts of Q inside a gadget. Let Q_1, ..., Q_k be spanning vertex-disjoint subpaths of Q such that for any i, Q_i lies inside a gadget and there is a connector edge from the last vertex of Q_i to the first one of Q_{i+1}, for every 1 ≤ i < k. We prove the property by induction on k. If k = 1, the whole Q lies inside a gadget, and the property is true by Lemma 14 for variable gadgets, Lemma 22 for clause gadgets and Lemma 25 for crossing gadgets.
Assume the property true for k − 1, and let u' be the last vertex of Q_{k−1} and v' the first vertex of Q_k. By induction, var(u) and var(u') are on the same side of P if and only if |E(Q_1 ∪ · · · ∪ Q_{k−1}) ∩ M_P| is even, and var(v') and var(v) are on the same side of P if and only if |E(Q_k) ∩ M_P| is even. As var(u') = var(v'), we know that var(u') is on the same side of P as var(v'); moreover u'v' ∉ M_P. Thus |E(Q) ∩ M_P| = |E(Q_1 ∪ · · · ∪ Q_{k−1}) ∩ M_P| + |E(Q_k) ∩ M_P| is even if and only if var(u) and var(v) are on the same side of P.

Lemma 28. M_P is a perfect matching cut of G(I).

Proof. We already know that M_P is a perfect matching. Moreover, M_P is a cutset by Lemma 3. Indeed, let C be any cycle in G(I). If C is contained in a gadget then, as M_P is a cutset when restricted to a gadget, |E(C) ∩ M_P| is even. Otherwise, C contains a connector edge uv, so we can see C as the concatenation of the edge uv and a path Q from v to u. We know that uv ∉ M_P, and var(u) = var(v). By Lemma 27, |E(C) ∩ M_P| = |E(Q) ∩ M_P| is even.

Lemma 4. Let M be a perfect matching cut of a cubic graph G. Let C be an induced 4-vertex cycle of G. Then, exactly one of the following holds: (a) E(C) ∩ M = ∅ and the four outgoing edges of V(C) belong to M; (b) |E(C) ∩ M| = 2, the two edges of E(C) ∩ M are disjoint, and none of the outgoing edges of V(C) belongs to M.

Proof. The number of edges of M within E(C) is even by Lemma 3. Thus |E(C) ∩ M| ∈ {0, 2}, as all four edges of E(C) do not make a matching.

Thus none of these three edges can be in M, because C would have an odd number of edges in M. Symmetrically, no edge among v_2v_3, v_4v_5 and v_6v_1 can be in M. Thus no edge of C is in M.

Observation 8. Let G be a graph and M be a perfect matching cut of G. Let u, v be two vertices of G. Then for any path P between u and v, |E(P) ∩ M| is even if and only if u and v are on the same side of M. Note that this implies that for any paths P, Q from u to v, |E(P) ∩ M| and |E(Q) ∩ M| have the same parity.
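Lemma 4 and the cycle-parity characterisation of cutsets (Lemma 2) can be machine-checked on a small Barnette graph. The following sketch is our own illustration, not part of the paper: it enumerates the perfect matching cuts of the 3-dimensional cube graph and verifies the dichotomy of Lemma 4 on every induced 4-cycle.

```python
from itertools import combinations

# The 3-cube Q3: cubic, bipartite, planar, 3-connected. Vertices are
# 3-bit integers; edges join integers differing in exactly one bit.
V = list(range(8))
E = [(u, u ^ (1 << b)) for u in range(8) for b in range(3)
     if u < u ^ (1 << b)]

def is_cutset(M):
    # Lemma 2 constructively: assign sides by parity of M-edges along
    # paths from vertex 0; a conflict witnesses a cycle odd in M.
    side, stack = {0: 0}, [0]
    while stack:
        u = stack.pop()
        for a, b in E:
            if u in (a, b):
                v = b if u == a else a
                want = side[u] ^ ((a, b) in M)
                if v not in side:
                    side[v] = want
                    stack.append(v)
                elif side[v] != want:
                    return False
    return True

def perfect_matching_cuts():
    # 4 vertex-disjoint edges covering all 8 vertices, also forming a cutset.
    return [set(M) for M in combinations(E, 4)
            if len({w for e in M for w in e}) == 8 and is_cutset(set(M))]

def satisfies_lemma_4(M):
    for quad in combinations(V, 4):
        inner = [e for e in E if e[0] in quad and e[1] in quad]
        if len(inner) != 4:      # these four vertices induce no 4-cycle
            continue
        out = [e for e in E if (e[0] in quad) != (e[1] in quad)]
        chosen = [e for e in inner if e in M]
        if not chosen:           # case (a): all outgoing edges are in M
            if not all(e in M for e in out):
                return False
        else:                    # case (b): two disjoint edges, no outgoing
            if len(chosen) != 2 or set(chosen[0]) & set(chosen[1]):
                return False
            if any(e in M for e in out):
                return False
    return True
```

On Q3 the search finds exactly the three "dimension" matchings (each separates the cube along one coordinate), and every one of them satisfies the dichotomy.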
Figure 1 Variable gadget X_i corresponding to the variable x_i appearing in the clauses C_j, C_k, C_p, C_q with j < k < p < q.

Figure 2 Clause gadget C_j = (x_a, x_b, x_c) with a < b < c, with its special 4-cycles F_1, F_2, F_3, F_4, F_5, F_6. A red edge is selected in any perfect matching cut. A blue edge is selected in some perfect matching cut. A black edge is never selected in any perfect matching cut.

Figure 3 Replacement of a crossing by a crossing gadget.

Let H denote the subgraph of G(I) induced by the vertices of U_j ∪ V_j of the clause gadget C_j. Let F = {u_9v_9, u_10v_10, u_12v_12, u_13v_13}. Consider the 4-cycle C induced by u_17, u_18, u_19, u_20. Consider the case when M ∩ E(C) = ∅. In this case, applying Lemma 4 on C, we know that {u_19u_5, u_20u_6, u_17u_16, u_18u_11} ⊂ M; see Figure 4a. Since no outgoing edge of U_j is in M, due to Lemma 19, it is now easy to verify that L^1_j ⊂ M. In the case where M ∩ E(C) = {u_19u_20, u_17u_18}, applying Lemma 4 on the 4-cycle induced by u_5, u_6, u_19, u_20, we infer that u_5u_6 ∈ M, and once again it is now easy to verify that L^2_j ⊂ M; see Figure 4b. In the last case, M ∩ E(C) = {u_19u_17, u_18u_20}. We again apply Lemma 4 on the 4-cycle

Figure 5 The brown and red edges make the only intersection of a clause gadget with a perfect matching cut M such that M ∩ U_j = L^1_j.

Figure 6 Same as Figure 5 for L^2_j (left: edges, brown and red, implied by L^2_j) and L^3_j (right: edges, brown and red, implied by L^3_j).

For any clause gadget C_j corresponding to the clause C_j = (x_a, x_b, x_c) with a < b < c, the following hold: (a) t_{c,j} and u_14 are on the same side of M, and (b) b_{a,j} and u_8 are on the same side of M.
Figure 7 P^1_j and P^2_j are the only possible restrictions of M to a crossing gadget; edges of P^2_j are drawn in brown.
11,251.8
2023-02-22T00:00:00.000
[ "Mathematics", "Computer Science" ]
Hydromechanical behaviour of hydrophobised soils of varying degrees of saturation: a comprehensive review

Artificially hydrophobised soil has recently been considered as an alternative engineering material that may be used to reduce water (or rainfall) infiltration and hence to enhance the geotechnical performance and stability of earthen structures such as slopes and landfill covers. Thorough research has been conducted to study the hydrological behaviour and properties of hydrophobised soil over the last four decades. The mechanical properties of this kind of material have received some attention only since 2011, focusing on how hydrophobisation may affect the shearing behaviour and shear strength parameters, including friction angle. Knowledge of the effects of hydrophobisation on other hydromechanical properties of soil that are relevant to geotechnical engineering applications is lacking. This paper therefore aims to conduct a comprehensive review and carry out some reinterpretation of selected literature with reference to existing theories or frameworks of soil mechanics. Attempts are made to generalise and highlight not only the shearing behaviour, but also the dilatancy, compressibility and stiffness of hydrophobised soil. Research gaps that may be worth exploring are given after the review.

Introduction

Naturally-occurring hydrophobic soil (sometimes known as water repellent or non-wettable soil) is often found at shallow depth (< 0.5 m) due to wildfire, liquids released from decomposition of plant litter, hydrophobic organic matter released from plant roots and waxes eroded from plant leaves [1]. The surface of this kind of soil is often coated with hydrophobic organic matter, which makes the soil have low to no affinity for water. This may cause problems, such as the increase of overland flow and consequently soil erosion during rainfall, and also possibly the decrease of vegetation cover [1-3].
Because of this low to no affinity for water, engineers are inspired to apply hydrophobic or hydrophobised soil as a cover to reduce water infiltration and hence enhance the stability of earthen structures such as slopes and landfill covers [4,5]. The process of artificially making soil hydrophobic is called hydrophobisation: chemical reactions between a soil and a hydrophobic agent (e.g., dimethyldichlorosilane (DMDCS)) that produce a hydrophobic coating (e.g., polydimethylsiloxane (PDMS) in the case of DMDCS; [6,7]; see Table 1 for other agents). In this paper, soils with and without hydrophobisation are denoted as treated and untreated soil, respectively. In the literature, attempts were also made to increase soil hydrophobicity by mixing with some hydrophobic agents (e.g., polytetrafluoroethylene (PTFE) [8]). The scope of this paper is limited to the review of the engineering behaviour of artificially and chemically hydrophobised soil, as mainly found in the existing literature.
Extensive studies have been carried out to quantify some hydrological properties of hydrophobised soil, such as water retention [9] and infiltrability [10]. In addition to element tests, flume tests have also been carried out to examine the effectiveness of a hydrophobised soil cover in reducing water infiltration in small-scale model slopes [4,11]. The infiltration rate of hydrophobised soil has also been measured for landfill cover applications [7]. Apart from the hydrological properties, studies of the mechanical behaviour of hydrophobised soil relevant to geotechnical engineering can be found in the literature (Table 1). In 2016, a review was conducted to study the effects of hydrophobicity on water retention, infiltration and shear strength properties with reference to existing theories of unsaturated soil mechanics [5]. This study conducts an updated review, covering not only the shear strength properties, but also other crucial geotechnical properties including dilatancy, compressibility and stiffness. By reinterpreting some selected studies (Table 1), attempts are made to generalise the hydromechanical behaviour of artificially hydrophobised soil. Some further research is also proposed after the review.
Index properties

Existing studies (e.g., [8,19]) have demonstrated that the soil surface properties, after hydrophobisation, could be changed due to the formation of a surface coating (e.g., PDMS when using DMDCS as the hydrophobic agent). Understanding how this surface coating would affect the soil index properties, such as particle size distribution, maximum and minimum void ratio and specific gravity (G_s), is crucial for geotechnical engineering applications. In the existing literature, the effects of hydrophobisation on G_s have received some attention. The G_s of kaolin was found to be reduced as the concentration of the hydrophobic agent increased [7], whereas that of silica sand did not change much after treatment [20]. Based on the limited findings, it seems that whether the G_s should decrease or remain unchanged depends on the soil type, the hydrophobic agent as well as the wetting agent used to measure the G_s [21]. For the last factor, in particular, it was pointed out in ASTM standard D854 [22] that a wetting agent such as kerosene is more appropriate than water for measuring the G_s of a hydrophobic/hydrophobised sample [21]. Using water to wet this kind of soil does not guarantee full saturation (a crucial requirement for accurate G_s measurement) due to air entrapment [21]. Any effects of hydrophobisation on other soil index properties are currently missing from the literature.
Degree of hydrophobicity

The degree of hydrophobicity of a treated soil is normally quantified by two parameters, namely the apparent contact angle (ACA) and the water drop penetration time (WDPT) [5]. The ACA is the angle between the apparent soil surface and the tangent to the liquid-fluid interface [23]. Based on the ACA, a soil can be classified as hydrophilic (<45°), hydrophobic (>90°) or somewhere in between (45-90°) [24]. The WDPT is the time required for a water drop to penetrate the soil surface. According to the WDPT, a soil can be classified as hydrophilic (<5 s), slightly hydrophobic (5-60 s), moderately hydrophobic (61-600 s), severely hydrophobic (601-3600 s) or extremely hydrophobic (3601-18,000 s) [25]. Most studies (except [26-28]) quantified the degree of hydrophobicity of treated soils under completely dry conditions (e.g., [7,29]). However, it has been shown that the degree of hydrophobicity depends strongly on water content [26-28]. In general, the degree of hydrophobicity decreases with an increase in water content, as there are more channels for water to flow through. This means that a soil with a high degree of saturation (S) could potentially be mischaracterised even after hydrophobisation. It is thus important to always report the S of a hydrophobic/hydrophobised soil, together with the corresponding ACA and WDPT, when defining the degree of hydrophobicity.
Stress-strain relationships

Based on observations from the literature, the stress-strain relationships of untreated and treated soils (using n-octyltriethoxysilane and Zycosil+ as the hydrophobic agents [16,17]) at dry and unsaturated conditions may be generalised as shown in Figure 1. For non-dilative soil at dry condition, hydrophobisation reduces the soil shear stress. Existing microscopic image analyses revealed that this strength reduction could be due to smoothening of inter-particle friction by the hydrophobic coating formed after treatment [20], though the reduction becomes less prominent as the confining pressure increases [20]. At unsaturated condition, because of suction-induced dilatancy [30], untreated soil would normally exhibit a peak shear stress followed by strain softening. Based on a laboratory study reported by [17], hydrophobisation switched the stress-strain behaviour from strain softening to strain hardening, which suggests that the effects of suction-induced dilatancy vanished. Indeed, microscopic analysis of a hydrophobised soil suggested that the water menisci appearing on the particle surfaces are convex [5]. This causes ambiguities when defining matric (or capillary) suction (i.e., the difference between pore-air and pore-water pressure) and the contribution of 'suction' (if definable) to the strength of treated soil. [31] pointed out that the axis-translation technique is inappropriate for controlling suction in a hydrophobic/hydrophobised soil because it undesirably forces the convex menisci to become concave in order to maintain pressure equilibrium between air and water.
As far as the authors are aware, there is only one study that reports the stress-strain behaviour of hydrophobised soil at varying S, and only one confining pressure (50 kPa) was considered [17]. Clearly, more research is needed to characterise and improve the understanding of the stress-strain behaviour of hydrophobic/hydrophobised soils over varying S levels and wider ranges of confining pressure. For slope applications in particular, extra care is needed to study how hydrophobisation affects soil nonlinearity at low stress regimes (e.g., <100 kPa [32,33]).

Shear strength parameters

Based on the stress-strain relationships available in the literature, the peak shear stress can be plotted against the corresponding vertical normal stress, and the gradient of this plot can be determined as the peak friction angle following the Mohr-Coulomb failure criterion (Fig. 2). In this study, two methods are used to reinterpret selected studies from Table 1. The first method assumes the cohesion to be zero (i.e., forcing the fitted line to pass through the origin), while the second does not. In general, hydrophobisation reduces the friction angle by 2-16°, depending on the soil type, the hydrophobic agent and the value of S considered. The reduction of the peak friction angle is again attributed to the smoothening effect of the hydrophobic coatings on the soil surface. It is interesting to note that, for glass beads, the reduction of the peak friction angle is much more significant when the beads are in an unsaturated state (by 10°) than in the dry state (by 3°). Unfortunately, in the existing literature, there is only one set of data that illustrates how hydrophobisation affects the peak friction angle of unsaturated glass beads ([12] and [14]; refer to Table 1). No data is available for mineral soil, however.
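The two fitting methods described above, with and without forcing zero cohesion, amount to a short least-squares routine. The sketch below is an illustrative reconstruction under the linear Mohr-Coulomb envelope, not the exact procedure used in the review:

```python
import numpy as np


def mohr_coulomb_fit(sigma_n, tau_peak, zero_cohesion=False):
    """Fit tau = c + sigma_n * tan(phi) to peak shear stress data.

    sigma_n, tau_peak: sequences of normal stress and peak shear stress (same units).
    zero_cohesion: if True, force the failure envelope through the origin.
    Returns (cohesion, friction_angle_deg).
    """
    sigma_n = np.asarray(sigma_n, dtype=float)
    tau_peak = np.asarray(tau_peak, dtype=float)
    if zero_cohesion:
        # Least-squares slope of a line through the origin: tau = sigma_n * tan(phi)
        slope = np.sum(sigma_n * tau_peak) / np.sum(sigma_n ** 2)
        cohesion = 0.0
    else:
        # Ordinary least-squares line with a free intercept (the cohesion)
        slope, cohesion = np.polyfit(sigma_n, tau_peak, 1)
    return cohesion, np.degrees(np.arctan(slope))
```

Consistent with the observation in the text, applying the zero-cohesion fit to data that actually possess cohesion tends to overestimate the peak friction angle.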
For dry, untreated clean sand and glass beads, no cohesion is expected, so it is reasonable to set cohesion to zero when determining the friction angle. When they are unsaturated, (apparent) cohesion exists because of matric suction. For dry, treated sand and glass beads, on the other hand, previous studies ([34] and [35]) have reported that some hydrophobic coatings such as PDMS can induce soil adhesion, yet the effect on cohesion is unknown. Thus, it may not be appropriate to always assume the cohesion of treated soil to be zero. From Fig. 2, presuming zero cohesion in treated soil tends to (though not always) overestimate the peak friction angle. Nonetheless, it is worth noting that the determination of the peak friction angle, whether or not cohesion is presumed to be zero, follows a simple linear Mohr-Coulomb failure envelope; any soil nonlinearity exhibited at low stress regimes is ignored, and any cohesion obtained from a best-fitted linear envelope should be treated with caution. More study is needed to quantify how the adhesion induced by hydrophobisation might affect soil cohesion. So far, there is only one study determining the critical state of hydrophobised sand [20]. This unique study revealed a reduction of the critical-state friction angle from 29.9° to 27.1° after hydrophobisation by DMDCS.

Fig. 2. Peak friction angles of untreated and treated soils at different values of S, calculated based on the reported data from the literature in Table 1. In each reference, the friction angle was determined by assuming cohesion to be zero (the data on the left side) or non-zero (the data on the right side).

Dilatancy

Based on the relationships between vertical and horizontal displacement obtained from direct-shear box tests, the dilation angle mobilised during the shearing process is determined in this study. Fig.
3 shows the dilatancy of dry glass beads, obtained by analysing the shear test data reported by [12,14]. Both untreated and treated glass beads showed dilative behaviour, while the treated case appears to require more horizontal displacement to mobilise the peak dilation angle. The observation from glass beads is, however, not consistent with the findings from mineral soil, probably due to the different hydrophobic agents used (Fig. 4, silica sand in this case [16,17]). At both low (Fig. 4(a)) and high (Fig. 4(b)) stress regimes, hydrophobisation made the dilation vanish. As expected, soil dilatancy was more prominent at the low stress regime than at the high stress regime. It is thus not surprising to see a greater drop of the peak dilation angle at the low stress regime (by 13°, compared with 2° in the high stress case). The underlying reason for the observed difference in dilatancy behaviour is unclear and needs further investigation.

The dilation angles of both treated and untreated sand derived from [17] are shown in Fig. 5. As expected, before treatment, the unsaturated sand exhibited slightly greater dilatancy than the dry case because of suction hardening. After hydrophobisation by Zycosil+, the sand was more contractive at the beginning of shearing and much less dilative at large shear displacements. The reduction of the dilation angle for the unsaturated treated sand (12°) was more significant. This finding, however, was made at a single S and a single confining pressure. Because of the lack of data, it is not yet possible to generalise the observation to wider ranges of confining pressures and S. The underlying mechanism(s) causing the behavioural change due to hydrophobisation is/are unclear.

Fig. 4. Dilatancy of untreated and treated silica sands ([16,17]) at (a) low stress levels (<100 kPa); and (b) high stress levels (>100 kPa).
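The mobilised dilation angle extracted from direct-shear displacement records, as used for Figs. 3-5, can be sketched as below. The incremental-gradient implementation is an assumption of this sketch, using the sign convention stated for Fig. 3 (negative vertical displacement means dilation):

```python
import numpy as np


def mobilised_dilation_angle(dx, dy):
    """Mobilised dilation angle (degrees) along a direct-shear test record.

    dx: horizontal (shear) displacement readings.
    dy: vertical displacement readings; negative values denote dilation,
        following the sign convention used for Fig. 3.
    The angle is taken as psi = arctan(-d(dy)/d(dx)) between successive readings,
    so dilation (upward movement, dy decreasing) gives a positive angle.
    """
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    ddy = np.diff(dy)  # incremental vertical movement
    ddx = np.diff(dx)  # incremental horizontal movement
    return np.degrees(np.arctan(-ddy / ddx))
```

The peak of the returned array corresponds to the peak dilation angle discussed in the text.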
Compressibility

Regarding soil compressibility, there is only one study showing how hydrophobisation affects the critical-state line (CSL) in e-log p' space (where e is the void ratio and p' is the mean effective stress) [20]. It was found that the CSLs of untreated and DMDCS-treated sand have the same shape, but the CSL of the treated sand sits lower than that of the untreated sand. It is also interesting to note from that study that the chemical concentration used to hydrophobise the sand has an evident effect on the position of the CSL: the higher the concentration, the lower the position of the CSL. As a higher chemical concentration results in a thicker hydrophobic coating, the difference between the CSLs of the untreated and treated cases becomes larger. Because of the lack of data, how hydrophobisation may affect the compression and swelling indices of a soil, both dry and unsaturated, is unknown.

Fig. 5. Mobilisation of dilatancy angle of untreated and treated silica sands during shearing, calculated based on the data from [17], at dry and unsaturated conditions.

Stiffness

In the literature, bender elements have been used to determine the shear wave velocity of both untreated and treated glass beads [14]. By knowing the bulk density (given in [14]), the maximum shear modulus (Gmax) is determined in this study. In Fig.
6(a), as expected, an increase in confining pressure led to an increase in the Gmax of both untreated and treated glass beads at any S. At dry condition, both untreated and treated glass beads have a similar Gmax at a confining pressure of 1 kPa, but the treated glass beads became stiffer at the higher confining pressure of 20 kPa. This was explained by the observation that, although all specimens were prepared at the same initial e, the treated glass beads were more compressible than the untreated ones (though the compression curve, i.e., the e-log p' curve, is not provided) after applying a confining pressure [14]. The treated case was thus denser and had a higher stiffness.

However, at unsaturated condition, it is interesting to observe that at low confining pressure (<5 kPa) the untreated glass beads were stiffer than the treated case, but the trend reversed when a higher confining pressure of 5 kPa was applied. This switch of behaviour may be caused by the combined effects of suction and compressibility [14]. As mentioned in Section 3.1, it is ambiguous to define matric suction for treated soil due to the convex shape of the water menisci. If it is hypothesised that capillary effects vanish upon hydrophobisation, this could explain why the treated glass beads have a lower stiffness than the untreated case, where suction exists. At relatively high confinement (e.g., 5 kPa), the treated glass beads were more compressible than the untreated case [14], resulting in a denser packing and hence a higher stiffness, even though suction had disappeared. The underlying mechanism(s) causing this switch of behaviour due to the combined effects of confining pressure and the 'loss' (by hydrophobisation) of matric suction is/are unclear. Nonetheless, it seems that knowing the change in soil compressibility caused by hydrophobisation is crucial for understanding the stiffness behaviour. Fig.
6. Effects of (a) confining pressure; and (b) degree of saturation on the maximum shear modulus of untreated and treated glass beads, calculated based on the shear wave velocity data from [14]. Note that a trend line is provided only when there are more than three data points in a data series.

The Gmax data from the same study are related to S in Fig. 6(b). Unfortunately, due to the lack of data, no clear trend can be observed or generalised: Gmax can increase or decrease as S increases.

Effects of S on the Gmax of untreated and treated glass beads were also studied by [15], where a clearer trend can be identified (not shown in the present paper). Consistent with the theories of unsaturated soil mechanics, the Gmax of untreated glass beads increased as they became drier (i.e., with reduction of S), peaking at S = 5%. Beyond this, Gmax dropped abruptly to its lowest value at S = 0%. In contrast, the Gmax of the treated glass beads was independent of S and remained almost constant at a value very similar to that of the dry untreated case. This implies that the capillary effects and associated suction (as existed in the untreated case) may have been eliminated by hydrophobisation.

Apart from glass beads, shear wave velocities measured on treated and untreated silica sand are reported by [18]. Similarly, the published data are reinterpreted in this study to relate Gmax to confining pressure and S (Fig. 7). As can be seen in Fig. 7(a), the Gmax of treated silica sand at a confining pressure of 50 kPa was higher than that of the untreated case. Again, the compression curves of the sand before and after hydrophobisation are not available for more in-depth interpretation (i.e., it is unclear whether the observed higher stiffness of the treated case was due to greater compressibility). Fig. 7.
Effects of (a) confining pressure; and (b) degree of saturation on the maximum shear modulus of untreated and treated silica sand, reinterpreted based on the data from [18]. Note that a trend line is provided only when there are more than three data points in a data series.

Attempts were also made to relate Gmax to S in Fig. 7(b). Because of the limited data available from that study and from the literature, it is difficult to identify any plausible trend.

Future work

Based on the literature review and the reinterpretation of data from existing studies, the following research gaps may be identified for future work:

1. Due to the formation of a hydrophobic coating on the soil particle surface after hydrophobisation, it is important to quantify how this chemical process and the associated surface modification affect the basic soil index properties. While the effect of hydrophobisation on the soil specific gravity has received some recent attention, how it modifies the particle-size distribution and the maximum and minimum void ratios is currently unknown.

2. The degree of hydrophobicity of a soil, as indicated by the apparent contact angle or/and the water drop penetration time, is normally defined at a fully dry condition. How the water content or degree of saturation of a hydrophobised soil may affect the definition of the soil's degree of hydrophobicity is unclear.

3. In terms of shear behaviour, it is currently unknown how a hydrophobised soil, whether dry or unsaturated, behaves at a low-stress regime (say, less than 100 kPa), given that in slope applications hydrophobised soil is normally applied at relatively superficial depths where the overburden pressure is low. Moreover, how the formation of a hydrophobic coating affects the soil cohesion (or/and friction angle) has not been adequately investigated.

4.
In terms of soil dilatancy, it is a general observation that hydrophobisation makes the dilatancy of a fully dry soil vanish. The stress-dilatancy behaviour of unsaturated treated soil is missing from the literature.

5. In terms of soil compressibility, there is only one study reporting a possible shift of the critical-state line in e-log p' space due to hydrophobisation by DMDCS. How the treatment changes the compression and swelling indices of both dry and unsaturated soil is not available from the literature.

6. In terms of stiffness, there are only a few studies (<5) available in the literature, and the data are too limited to gain an adequate understanding of the effects of hydrophobisation on stiffness. Nonetheless, from the reinterpretation, it seems that knowing any changes of soil compressibility due to hydrophobisation is one of the keys to understanding the combined effects of confining pressure and matric suction (or degree of saturation) on stiffness behaviour.

It is important to emphasise that this review discusses only the behaviour of artificially hydrophobised granular materials (i.e., silica sand or glass beads), because of the limited datasets available in the literature (Table 1). Clearly, the soil texture (hence surface chemistry), the choice of hydrophobic agent and the treatment conditions adopted (e.g., temperature) directly affect the properties of the hydrophobic coating formed, and hence the contact properties of the treated soil. Caution should be taken before attempting to generalise the behavioural changes due to hydrophobisation observed in this review.

Fig. 1. Generalised stress-strain relationships of untreated and treated mineral soils at varying degrees of saturation.

Fig. 3.
Mobilisation of dilatancy angle of dry untreated and treated glass beads during shearing, calculated based on the data from [12,14]. δy and δx are the vertical and horizontal displacements, respectively; a negative value means dilation.

Table 1. A summary of the reviewed references which examined the hydromechanical properties of hydrophobic/hydrophobised soil.
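The stiffness reinterpretation described in the Stiffness section converts measured shear wave velocity and bulk density into the maximum shear modulus via the standard elastodynamic relation G_max = ρ·Vs². A minimal sketch, with the unit conventions assumed here:

```python
def g_max_from_vs(bulk_density_kg_m3: float, vs_m_per_s: float) -> float:
    """Maximum (small-strain) shear modulus from shear wave velocity.

    G_max = rho * Vs^2 (rho in kg/m^3, Vs in m/s) gives G_max in Pa;
    the result is returned in MPa for convenience.
    """
    return bulk_density_kg_m3 * vs_m_per_s ** 2 / 1e6
```

For example, a bulk density of 2000 kg/m³ and Vs = 100 m/s give G_max = 20 MPa.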
4,936.4
2020-01-01T00:00:00.000
[ "Geology" ]
The External Performance Appraisal of China Energy Regulation: An Empirical Study Using a TOPSIS Method Based on Entropy Weight and Mahalanobis Distance

In China's industrialization process, the effective regulation of energy and environment can promote the positive externality of energy consumption while reducing the negative externality, which is an important means for realizing the sustainable development of an economic society. This study puts forward an improved technique for order preference by similarity to an ideal solution based on entropy weight and Mahalanobis distance (briefly referred to as E-M-TOPSIS). The performance of the approach was verified to be satisfactory. Using the traditional and improved TOPSIS methods separately, the study carried out empirical appraisals of the external performance of China's energy regulation during 1999~2015. The results show that the correlation between the performance indexes causes a significant difference between the appraisal results of E-M-TOPSIS and traditional TOPSIS. The E-M-TOPSIS takes the correlation between indexes into account and generally softens the closeness degree compared with traditional TOPSIS. Moreover, it makes the relative closeness degree fluctuate within a small amplitude. The results conform to the practical condition of China's energy regulation, and therefore the E-M-TOPSIS is favorably applicable to the external performance appraisal of energy regulation. Additionally, the external economic performance and the social responsibility performance (including environmental and energy safety performances) based on the E-M-TOPSIS exhibit significantly different fluctuation trends. The external economic performance fluctuates dramatically with a larger amplitude, while the social responsibility performance exhibits a relatively stable interval fluctuation.
This indicates that, compared to the social responsibility performance, the fluctuation of the external economic performance is more sensitive to energy regulation.

Introduction

High energy consumption, high pollution, and low energy efficiency in China have become more prominent due to various factors including extensive production modes, the absence of energy regulation, and incomplete policy execution. Although energy management practice and the improvement of energy efficiency bring about a significant marginal improvement, numerous energy-intensive enterprises do not carry out effective energy management practices for diverse reasons, such as the lack of a synergistic effect between various stakeholders and having little competitive pressure to conduct environment-friendly management practices [1,2]. Therefore, it is necessary to perform energy regulation. Moreover, the energy industry shows significant positive and negative externalities, and energy regulation aims to promote the positive externality while reducing and even eliminating the negative externality through regulations. Thus, appraising the external performance of energy regulation can provide the appraisal indexes and method for reasonably, effectively, and orderly conducting energy regulation so as to further elevate the quality level of the energy regulation. This has practical significance for further improving the energy utilization efficiency of industry and reducing energy intensity. The extensive previous literature on energy regulation mainly analyzed the influences and methods of energy regulation; scholars have hardly appraised the quality level of energy industry regulation in terms of the external performance of the regulation. Through empirical analysis, Cubbin and Stern pointed out that the quality level of regulation has a significant positive correlation with productivity per capita and the utilization of productive capacity [3].
This indicates that the regulation quality of energy exerts an important effect on the implementation of the regulations. Therefore, appraising and comparatively analyzing the performance level of China's energy regulation by establishing an external performance index system of energy regulation has practical significance for the measurement and orientation of the quality level of China's energy regulation. Additionally, as an important method for solving multi-attribute decision making problems, the technique for order preference by similarity to an ideal solution based on entropy weight (E-TOPSIS) has rarely been used to appraise the performance of energy regulation. Thus, this study establishes two indexes (external economic performance and social responsibility performance) based on related data of China's energy industry during 1995~2015. On this basis, the performance level of China's energy regulation is appraised and compared separately using traditional TOPSIS, E-TOPSIS, and E-M-TOPSIS to analyze the development trend of the quality level of China's energy regulation. The rest of the study is organized as follows: Section 2 introduces and reviews the literature related to energy regulation and the TOPSIS method. Section 3 introduces traditional TOPSIS and E-M-TOPSIS, and proves the properties of the E-M-TOPSIS. Section 4 establishes an index system for the external performance appraisal of energy regulation and conducts a descriptive statistical analysis of the index data. Section 5 appraises and analyzes the index data concerning the external performance of China's energy regulation during 1999~2015 using the appraisal methods of Section 3 and gives corresponding policy suggestions. Section 6 concludes.
Research on Energy Regulation

Energy regulation refers to a series of activities aiming to promote the positive externality while reducing and even eliminating the negative externality by implementing the regulation function in the energy field. An increasing number of scholars have investigated the influence of energy regulation. For example, Matsumura et al. internalized the negative externality of energy consumption by introducing a Pigovian tax to analyze the influence of additional energy regulation on welfare effects [4]. Their results showed that additional energy-conservation regulation harms long-term social welfare under a perfectly competitive market. However, under an imperfectly competitive market, energy-conservation regulation reduces the cost of energy consumption and accelerates market competition by increasing the investment of enterprises in energy conservation, thereby further enhancing social welfare. Additionally, numerous scholars have explored the influence of regulations on specific energy industries. Employing an autoregressive distributed lag (ARDL) bounds test and an error correction model (ECM), Zhao et al. studied the effect of regulation on renewable energy power generation; their results indicated that regulation has a significant positive effect on the development of renewable energies [5]. From the perspectives of electricity regulation and new energy, Bradshaw suggested that the regulation innovation of power system reform has an important effect on overcoming the technological and institutional lock-in of wind and solar energies [6]. In terms of the regulation of energy prices, Ju et al. investigated the prices of five energy products (natural gas, gasoline, fuel oil, steam coal, and coking coal) and pointed out that the energy price distortion caused by energy price regulation is favorable for China's economic development [7].
However, Shi and Sun shared a different point of view: studying China's industrial output using a two-sector growth model, they showed that the regulatory price distortion exerts a negative influence on both the short- and long-term output growth of China [8]. Apart from the aforementioned research on the influence of energy regulation, specific energy regulation methods have also gradually become a research hotspot. Abrardi and Cambini suggested that an optimal tariff structure is able to drive regulated public utilities to decrease energy consumption and enhance energy efficiency so as to obtain a low oil price for attracting consumers [9]. Under performance-based regulation, Mandel simulated the influence of performance incentive measures on upstream energy efficiency [10]. Additionally, in order to realize energy management and energy-conservation improvement in the machine manufacturing industry, Cai et al. determined a multi-target energy benchmark using TOPSIS, putting forward a multi-target energy benchmarking method based on energy-consumption prediction and comprehensive appraisal [11]. However, only a few studies analyze the regulation performance appraisal of the energy industry, and the existing ones mainly analyze subsectors of the electric power industry. For example, Thamae et al. appraised the regulation performance of Lesotho's electric power industry during 2004~2014 from the aspects of governance, substance, and impact [12].

Research Related to the TOPSIS Appraisal Method

The external performance appraisal of China's energy regulation is a multi-attribute decision making problem, and there are numerous multi-attribute decision making methods [13,14].
Therein, the TOPSIS method is widely used in various fields such as economics [15-17] and management [18-20] due to its characteristics, including its simple principle, intuitive geometric interpretation, and lack of special requirements on sample data. As a multi-attribute decision making method, TOPSIS was first proposed by Hwang and Yoon in 1981, and was improved and expanded by Zavadskas et al. and Triantaphyllou [21-23]. Therein, Triantaphyllou [23] pointed out that using different distance measures for the same multi-attribute decision problem may yield different results. On this basis, Chen and Tsao [24] compared and analyzed the intuitionistic fuzzy TOPSIS results yielded by different distance measures. Chang et al. [25] evaluated the performance of mutual funds by extending TOPSIS with two different distance measures, namely the Minkowski metric and the Mahalanobis distance. Furthermore, Antuchevičienė et al. [26] and Wang and Wang [27] put forward an improved TOPSIS appraisal method based on the Mahalanobis distance, with the aim of favorably solving the problem of linear correlation between indexes. Afterwards, the TOPSIS method was integrated with different weighting methods. For example, You et al. determined the weights of indexes using the best-worst method (BWM) to establish a BWM-TOPSIS method for appraising the operation performance of power grid enterprises [20]. By combining the information entropy method for determining weights, Wang et al. and Chauhan et al. established improved TOPSIS methods to investigate energy performance [28,29]. These methods require linear independence between the indexes when calculating the distances of the schemes to the positive and negative ideal solutions using the Euclidean distance. Xin et al.
transformed the second-order indexes of a social security index system into linearly independent variables using principal component analysis (PCA). On this basis, they conducted a comprehensive appraisal and ranking of the social security levels of 31 provinces in mainland China using the TOPSIS comprehensive appraisal method [30]. Although the PCA method can deal with the linear correlation between indexes to some extent, it has the drawback of information loss. Thus, based on the M-TOPSIS method, this study determines the weights of the indexes using information entropy. Afterwards, the performance levels of China's energy regulation are appraised and compared using E-M-TOPSIS to analyze the development trend of the quality level of China's energy regulation.

Traditional TOPSIS Method

TOPSIS is a widely used method for solving uncertain multi-attribute decision making problems due to its superiorities, including its rational and understandable logic, limited subjective input, and ability to identify the best alternative quickly while incorporating relative weights of criterion importance [31-35]. Its ranking standard is to evaluate the distances between the appraisal objects and the positive (S^+) and negative (S^-) ideal solutions. Therein, the positive ideal solution is composed of the optimal values of all indexes, while the negative ideal solution consists of the worst values of all indexes. According to the distances between the appraisal objects and the positive and negative ideal solutions, the relative closeness degree c_i is calculated and the schemes are ranked: the larger c_i is, the more optimal the scheme. Specifically, it is assumed that there are m schemes A = {A_1, A_2, ..., A_m} and n indexes F = {f_1, f_2, ..., f_n}, and all indexes are divided into benefit and cost types. The decision judgment matrix X = (x_ij)_{m×n}, i = 1, 2, ..., m; j = 1, 2, ...
, n, is established, in which x_ij refers to the value of the jth index in the ith scheme. The weight vector of the indexes is W = {ω_1, ω_2, ..., ω_n}. The TOPSIS method used in the performance appraisal is summarized as follows [21-26]:

A. The standardized decision matrix R = (r_ij)_{m×n} is built by normalizing the judgment matrix, with r_ij = x_ij / sqrt(Σ_i x_ij^2).

B. The weighted standardized decision matrix Z = (z_ij)_{m×n} is built, with z_ij = ω_j r_ij.

C. The positive (S^+) and negative (S^-) ideal solutions are determined. For a benefit index, s_j^+ = max_i z_ij and s_j^- = min_i z_ij; for a cost index, s_j^+ = min_i z_ij and s_j^- = max_i z_ij.

D. The Euclidean distances (d_i^+ and d_i^-) between each scheme and the positive and negative ideal solutions are calculated: d_i^+ = sqrt(Σ_j (z_ij - s_j^+)^2) and d_i^- = sqrt(Σ_j (z_ij - s_j^-)^2).

E. The relative closeness degree c_i between each scheme and the positive ideal solution is calculated: c_i = d_i^- / (d_i^+ + d_i^-).

F. The schemes are ranked according to the value of c_i; the larger c_i is, the more optimal the scheme.

The traditional TOPSIS appraisal method can objectively reflect the differences between the appraisal schemes by introducing positive and negative ideal solutions. However, when there is a significant linear correlation between the indexes, the column vectors composed of the n attribute indexes cannot make up a group of bases for the linear space. Some problems therefore appear when using the Euclidean distance to calculate the distances of the schemes to the positive and negative ideal solutions, which leads to deviations in the final ranking results.

An Improved TOPSIS Method Based on Entropy Weight and Mahalanobis Distance

Wang and Wang [27] improved the traditional TOPSIS method by introducing the Mahalanobis distance.
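Before turning to the improved method, steps A-F of the traditional entropy-weighted TOPSIS can be sketched as follows. This is an illustrative implementation: the function layout and the use of raw-data shares for the entropy calculation are assumptions of the sketch, not prescribed by the paper:

```python
import numpy as np


def entropy_weights(X):
    """Entropy weights from an (m schemes x n indexes) matrix of positive data."""
    P = X / X.sum(axis=0)                       # share of each scheme within an index
    P_safe = np.where(P > 0, P, 1.0)            # log(1) = 0, so zero shares contribute nothing
    H = -(P * np.log(P_safe)).sum(axis=0) / np.log(X.shape[0])  # information entropy H_j
    return (1 - H) / (1 - H).sum()              # entropy weight w_j


def e_topsis(X, benefit):
    """Traditional TOPSIS (steps A-F) with entropy weights.

    X: (m x n) decision matrix; benefit: boolean array marking benefit-type
    indexes (cost-type otherwise). Returns the relative closeness degrees c_i.
    """
    R = X / np.sqrt((X ** 2).sum(axis=0))       # A. vector-normalised decision matrix
    Z = R * entropy_weights(X)                  # B. weighted standardised matrix
    s_pos = np.where(benefit, Z.max(axis=0), Z.min(axis=0))  # C. positive ideal solution
    s_neg = np.where(benefit, Z.min(axis=0), Z.max(axis=0))  #    negative ideal solution
    d_pos = np.linalg.norm(Z - s_pos, axis=1)   # D. Euclidean distances to the ideals
    d_neg = np.linalg.norm(Z - s_neg, axis=1)
    return d_neg / (d_pos + d_neg)              # E. relative closeness degree c_i
```

A scheme that attains the best value on every index obtains c_i = 1, and one attaining the worst value on every index obtains c_i = 0; ranking by c_i is step F.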
On this basis, in order to solve the information overlap problem caused by the correlation between variables, the study further determines the weight of each index using information entropy, establishing an objective E-M-TOPSIS method for solving the multi-attribute decision making problem. Moreover, the study verifies the properties of the method. Definition of Mahalanobis Distance The Mahalanobis distance is a statistical distance measure introduced by Mahalanobis, which accounts for the correlations of the data set and is scale-invariant [27,36]. This measure is widely used in fields such as data clustering [37,38] and multivariate diagnosis and pattern recognition [39,40]. E-M-TOPSIS Method The study improves the traditional TOPSIS method by introducing the Mahalanobis distance and further determines the weight of each index using information entropy. The Mahalanobis distance is a statistical distance that is independent of the measurement scale, free from the influence of the dimensions of the coordinates, and able to remove the disturbance of the correlation between variables; that is, it can offset the influence of linear correlation between attribute indexes. Meanwhile, information entropy can objectively and reasonably determine the weight of each index. Suppose there is an appraisal system with m scheme sets A = {A 1 , A 2 , . . . , A m } and n index sets F = { f 1 , f 2 , . . . , f n }. All indexes are divided into benefit and cost types. Following Wang and Wang [27], the improved TOPSIS method based on entropy weight and Mahalanobis distance used for the performance appraisal is described in detail as follows: A. The vector of the appraisal scheme A i is constructed as follows: where r i refers to the corresponding spatial coordinate of the attribute values of the ith appraisal scheme.
The corresponding appraisal matrix is displayed as follows: B. The appraisal matrix is subjected to standardized processing, and therefore the following formula can be obtained: where o ij represents the value of the jth appraisal index in the ith appraisal scheme and o ij ∈ [0, 1]. C. Following Shannon and Zhang et al. [41,42], the information entropy H j of each appraisal index is calculated as follows: where k = 1/ln m. On the condition that o ij = 0, the term o ij ln o ij is taken to be zero. D. The entropy weight ω j of each appraisal index is calculated as follows: Also, 0 ≤ ω j ≤ 1 and ∑ j ω j = 1. E. The positive (S + ) and negative (S − ) ideal solutions are determined. For the benefit indexes, we obtain: For the cost indexes, we obtain: F. The two Mahalanobis distances (mahal i + and mahal i − ) between each scheme and the positive and negative ideal solutions are calculated, where ∑ −1 is the inverse of the covariance matrix ∑ of the attribute variables r 1 , r 2 , . . . , r n . G. The relative closeness degree c i between each scheme and the positive ideal solution can be expressed as follows: H. The ranking is obtained according to the value of c i ; the larger c i is, the better the scheme. Properties of the E-M-TOPSIS Method The E-M-TOPSIS method has two properties. Property 1. The relative closeness degree c i calculated using the E-M-TOPSIS method is unchanged under any non-singular linear transformation. For a non-singular transformation B with ∑′ = B ∑ B T , we have ∑′ −1 = (B −1 ) T ∑ −1 B −1 , so the Mahalanobis distances are preserved, and the relative closeness degree after the non-singular transformation can be expressed as follows: Property 1 indicates that if the standardization of the original data is a non-singular transformation, the standardization process cannot affect the decision-making result. Property 2. On the condition that the appraisal indexes f 1 , f 2 , . . . , f n are linearly independent, the weighted Mahalanobis distance reduces to the weighted Euclidean distance. Proof of Property 2.
It is assumed that r i and S + are taken from the same n-dimensional appraisal system, with mean µ = (µ 1 , µ 2 , . . . , µ n ) T and covariance ∑. The weight vector of the indexes is W and Ω = diag( √ ω 1 , √ ω 2 , . . . , √ ω n ). Due to the linear independence of the indexes, ∑ = diag(σ 1 2 , σ 2 2 , . . . , σ n 2 ) and ∑ −1 = diag(1/σ 1 2 , 1/σ 2 2 , . . . , 1/σ n 2 ). Thus, when the appraisal indexes are independent of each other, the weighted Mahalanobis distance is equivalent to the weighted Euclidean distance. However, when the appraisal indexes are correlated with each other, the Mahalanobis distance is little influenced by the dimensions of the indexes and is able to eliminate the information overlap caused by the linear correlation between indexes. Therefore, the Mahalanobis distance is more applicable for solving complex practical problems. Additionally, in practical applications the population covariance matrix is generally unknown, and it can therefore be replaced by the sample covariance matrix. In conclusion, the properties, advantages, and limitations of the traditional TOPSIS, E-TOPSIS, and E-M-TOPSIS methods are shown in Table 1. The Appraisal Indexes Concerning External Performance of Energy Regulation The study selected and constructed performance indexes concerning the external responsibility of energy regulation based on a result-oriented principle [43]. The result-oriented principle is one of the basic concepts and core ideas of performance management theory, which emphasizes the results of operation, management, and work, namely economic and social benefits as well as customer satisfaction. The result-oriented principle for the external performance appraisal of energy regulation likewise focuses the analysis on the economic and social benefits, as well as the public's degree of satisfaction, produced by energy regulation.
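The E-M-TOPSIS procedure described above (steps A–H) can be sketched numerically as follows. This is an illustrative implementation, not the authors' code: it assumes column-sum normalization for the entropy probabilities, and it uses the sample covariance of the standardized matrix in place of the unknown population covariance, as suggested at the end of the proof discussion.

```python
import numpy as np

def entropy_weights(O):
    """Entropy weights (steps C-D): H_j = -(1/ln m) * sum_i p_ij ln p_ij,
    w_j = (1 - H_j) / sum_k (1 - H_k), with p_ij = o_ij / sum_i o_ij (assumed)."""
    O = np.asarray(O, dtype=float)
    m = O.shape[0]
    P = O / O.sum(axis=0)
    # convention: terms with p_ij = 0 contribute zero (0 * ln 0 := 0)
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    H = -(P * logs).sum(axis=0) / np.log(m)
    d = 1.0 - H
    return d / d.sum()

def em_topsis(O, benefit):
    """Relative closeness c_i from weighted Mahalanobis distances to the
    positive/negative ideal solutions (steps E-G)."""
    O = np.asarray(O, dtype=float)
    w = entropy_weights(O)
    Omega = np.diag(np.sqrt(w))                        # Omega = diag(sqrt(w_j))
    benefit = np.asarray(benefit)
    s_pos = np.where(benefit, O.max(axis=0), O.min(axis=0))
    s_neg = np.where(benefit, O.min(axis=0), O.max(axis=0))
    cov_inv = np.linalg.inv(np.cov(O, rowvar=False))   # sample covariance stand-in
    def mahal(row, s):
        diff = (row - s) @ Omega
        return np.sqrt(diff @ cov_inv @ diff)
    d_pos = np.array([mahal(r, s_pos) for r in O])
    d_neg = np.array([mahal(r, s_neg) for r in O])
    return d_neg / (d_pos + d_neg)
```

Note that a column with identical values across all schemes has entropy 1 and hence entropy weight 0, which is how the method discounts uninformative indexes.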
Given the lack of data on the public's degree of satisfaction with energy regulation, the study divided the performance indexes concerning the external responsibility of energy regulation into external economic performance and social responsibility performance. The economic performance refers to the efficiency appraisal of resource allocation and utilization. Following Wang [44], the external economic performance of energy regulation mainly involves four indexes: the energy consumption elasticity coefficient, the power consumption elasticity coefficient, and the outputs of energy and power consumption per unit. In detail, the energy and power consumption elasticity coefficients are, respectively, the ratios of the growth rates of energy and power consumption to the growth rate of the national economy. They reflect the structural relationship between the development rate of the national economy and energy or power consumption. The outputs of energy and power consumption per unit denote the Gross Domestic Product (GDP) produced per unit of energy or power consumed by a country or region within a certain period. These two indexes reflect the utilization degree and output efficiency of energy or power in the economic activities of a country or region. The social responsibility performance index mainly involves indexes concerning environmental performance and energy safety related to energy consumption. The environmental performance represents the negative external effect on society during energy utilization; its specific indexes include the SO 2 emission amount per GDP, the dust emission amount per GDP, and the wastewater discharge amount per GDP. These indexes reflect the influence of energy utilization on the environment.
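The elasticity coefficients and per-unit outputs just defined are simple ratios; a minimal sketch (the function names are mine, not the paper's):

```python
def elasticity_coefficient(consumption_growth, gdp_growth):
    """Energy (or power) consumption elasticity: the ratio of the consumption
    growth rate to the growth rate of the national economy."""
    return consumption_growth / gdp_growth

def output_per_unit(gdp, consumption):
    """GDP produced per unit of energy (or power) consumed."""
    return gdp / consumption
```

For example, if energy consumption grows 3% while GDP grows 6%, the elasticity coefficient is 0.5, indicating that economic growth outpaces energy use.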
The energy safety performance mainly addresses the core problem of energy safety, namely whether the energy supply is sufficient and stable; its specific indexes include external dependence, the proportion of primary energy yield in the total world yield, and the primary energy self-sufficiency rate. Here, external dependence reflects a country's degree of dependence on foreign energy, while the proportion of primary energy yield in the world yield and the primary energy self-sufficiency rate both reflect the supply capability of China's energy sector. Overall, this study establishes the performance index system for the external responsibility of energy regulation shown in Table 2. Here, the output of energy consumption per unit (X 3 ), the output of power consumption per unit (X 4 ), the proportion of primary energy yield in the world yield (X 9 ), and the primary energy self-sufficiency rate (X 10 ) are benefit indexes. The energy consumption elasticity index (X 1 ), the power consumption elasticity index (X 2 ), the SO 2 emission amount per GDP (X 5 ), the dust emission amount per GDP (X 6 ), the wastewater discharge amount per GDP (X 7 ), and external dependence (X 8 ) are cost indexes. Descriptive Statistical Analysis The data are taken from the China Stock Market & Accounting Research (CSMAR) database, the Wind database, and the annual China Energy Statistical Yearbook. (In detail, the data on energy consumption elasticity coefficients, power consumption elasticity coefficients, total energy consumption, energy import volumes, total power consumption, GDP, GDP deflators, SO 2 and dust emission amounts, and wastewater discharge amounts for 1999~2015 are taken from the CSMAR database. The data on the yields, import volumes, and consumption of primary energy during 1999~2015 are collected from the Wind database. The total world energy yields during 1999~2015 are taken from the annual China Energy Statistical Yearbook.
It is worth noting that the total world energy yield in 2015 was not available because the China Energy Statistical Yearbook of 2017 had not yet been published. The study estimated the total world energy yield in 2015 by applying the average growth rate of the total world energy yields over the most recent five years, 2010 to 2014. Additionally, GDP is deflated using the GDP deflators with 1999 as the base period.) All index data were subjected to a descriptive statistical analysis and the specific results are shown in Table 3. Note: the superscript a indicates that the index has several modes; the minimum of the modes is reported. It can be seen that the maxima and minima of all indexes lie within reasonable intervals and the means of the indexes are far larger than the standard deviations, implying a low degree of dispersion in the data. Moreover, the probability of extreme outliers is low. The mean, median, and mode of the dust emission amount per GDP (X 6 ) were close to those of the wastewater discharge amount per GDP (X 7 ), indicating that the data of the two indexes were approximately symmetrically distributed. It can be inferred from the skewness that the data of the power consumption elasticity index (X 2 ), external dependence (X 8 ), proportion of primary energy yield in the world yield (X 9 ), and primary energy self-sufficiency rate (X 10 ) were left-skewed, while the other index data were right-skewed. Figures 1-4 separately display the fluctuation trends of the performance indexes concerning external economic and social responsibility. It can be seen that correlations between the various performance indexes are common: 71% of the correlation coefficients between indexes are statistically significant at the 5% level.
In particular, the output of energy consumption per unit (X 3 ) and the SO 2 emission amount per GDP (X 5 ) show a significant correlation with essentially all the other indexes. Therefore, when selecting methods for the performance appraisal, it is necessary to choose one that handles this correlation in order to avoid the information overlap problem. The External Performance Appraisal of China's Energy Regulation Based on the E-M-TOPSIS Method Based on the designed appraisal method for the external performance of energy regulation and the selected appraisal indexes, the study evaluates the external performance of China's energy regulation using E-M-TOPSIS. Firstly, this study calculated the information entropies of the various indexes according to Formulas (7) and (8). Finally, the study separately calculated the Mahalanobis distances between r i and S + , as well as r i and S − , and then the relative closeness degrees according to Formula (12). The results are displayed in Table 5. Additionally, there are differences between the appraisal results obtained using different appraisal methods for the external performance of energy regulation. In order to compare these differences, the study also displays the appraisal results of the external performance of China's energy regulation obtained using the E-TOPSIS method in Table 5. It can be seen from the table that the presence of correlation between indexes leads to a significant disparity between the external performance rankings calculated using the two different appraisal methods. The E-M-TOPSIS method, which accounts for the correlation between the performance indexes concerning external economic and social responsibility, exhibits a lower relative closeness degree than the E-TOPSIS method because it avoids the information overlap problem. This also implies that the correlation between indexes is not ignorable.
Therefore, by taking the correlation between the various performance indexes into account, the E-M-TOPSIS method can more truly show the external performance characteristics of energy regulation and reflect the performance level of energy regulation. On this basis, the method can be used for scientific decision-making. Figure 5 shows the fluctuation trends of the relative closeness degrees from three different appraisal methods for the external performance of energy regulation. As shown in Figure 5, the traditional TOPSIS method enlarges the fluctuation interval of the relative closeness degree and increases its fluctuation amplitude to some extent because it cannot effectively address the information overlap problem. The E-M-TOPSIS method, by contrast, avoids information overlap and keeps the relative closeness degree within a fluctuation interval with a lower amplitude. Meanwhile, the method softens the closeness level, yielding performance appraisal results that reflect the true level of energy regulation based on independent performance indexes. Figure 6 shows the fluctuation trends of the external economic and social responsibility performances obtained using the E-M-TOPSIS method. Specifically, the external economic performance fluctuates dramatically with a large amplitude: it rose rapidly but unevenly after reaching its trough (minimum value) around 2004-2005, and the increase accelerated from 2013 with the continually deepening reform of China's energy regulation institutions. The social responsibility performance, however, fluctuated relatively stably within an interval. It can be seen that the external economic performance is more sensitive to energy regulation than the social responsibility performance.
Additionally, by comparing the appraisal results of the E-TOPSIS method with those of the equal-weighting traditional TOPSIS method, it can be seen that determining weights using information entropy is indeed reasonable and objective to some extent. Therefore, the E-M-TOPSIS method is applicable for the external performance appraisal of energy regulation once the correlation problem between indexes is solved and weights are determined through information entropy. This has practical significance for the scientific appraisal of, and decision-making on, energy regulation policies. Discussion and Policy Implications Compared with the traditional TOPSIS method, the E-M-TOPSIS method is more applicable for evaluating the actual external performance level of China's energy regulation, and it can provide useful policy guidance for the management practice of energy regulation. As shown by the fluctuation trend of the relative closeness degree of the E-M-TOPSIS method in Figure 5, the external performance of China's energy regulation fluctuated stably within an interval overall, and China's quality level of energy regulation remained stable during 1999~2005. To be specific, the external performance of China's energy regulation rose unevenly after reaching its low ebb in 2004, which conforms to the following facts: the institutional and regulatory organizations of China's Electricity Regulatory Commission were successively built and gradually matured, and Regional Electricity Regulatory Bureaus were successively established in 2004. Additionally, China's energy institutional reform deepened in 2013, when China's Electricity Regulatory Commission was officially merged into the National Energy Administration of the People's Republic of China. This transformed the regulatory mode from separation to union between government and regulation.
Moreover, the content of energy regulation was broadened from electricity regulation to energy regulation generally (covering electricity, coal, oil, and new energy). As shown in Figure 5, the external performance level of China's energy regulation rose continually from 2013, which implies that a comprehensive energy regulation system uniting government and regulation suits the current development phase of China's energy sector. Thus, the quality level of China's energy regulation can be raised by promoting energy institutional reform, perfecting the legal and executive systems for broad energy regulation, and guaranteeing the steady operation of the energy regulation system. Additionally, the weights of the performance index system for the external responsibility of energy regulation can be determined based on information entropy. The output of energy consumption per unit turns out to be the most important index influencing the external performance level of energy regulation; it reflects the utilization degree and output efficiency of energy in economic activities and is therefore also an orientation index for the efficiency and intensity of energy utilization. Hence, improving the utilization efficiency of energy and reducing energy intensity both have a direct effect on improving the external performance level of energy regulation, and vice versa. Conclusions The energy industry exhibits significant positive and negative externalities, and the purpose of energy regulation is to promote the positive externalities while reducing or even eliminating the negative ones through regulation.
Appraising the external performance of China's energy regulation provides appraisal indexes and methods for conducting energy regulation reasonably, effectively, and in an orderly fashion, and has practical significance for further improving the quality level of energy regulation. The external performance appraisal of China's energy regulation is in essence a multi-attribute decision making problem. However, inconsistent with the actual data, existing multi-attribute appraisal methods assume that the sample data are independent and identically distributed. Therefore, in order to avoid the information overlap resulting from the correlation of indexes, the study evaluated the external performance of China's energy regulation using the E-M-TOPSIS method. The appraisal results indicate that the presence of correlation between indexes causes a great difference between the external performance levels of China's energy regulation appraised by the E-M-TOPSIS and traditional TOPSIS methods. Compared with the traditional TOPSIS method, the E-M-TOPSIS method, which considers the correlation between indexes, softens the closeness level overall and causes the closeness to fluctuate within a small-amplitude interval. The appraisal result obtained using the E-M-TOPSIS method is consistent with the actual condition of China's energy regulation. Moreover, the E-M-TOPSIS method is well suited to the external performance appraisal of energy regulation, which has practical significance for the scientific appraisal of, and decision making on, energy regulation policies. Future Work The study appraises and analyzes the performance level of China's energy regulation from the aspect of external performance.
Another important factor influencing the performance level of energy regulation is internal performance; the indexes related to internal performance were not introduced into the performance index appraisal system of this research due to limits on the availability and completeness of data. They will be a primary consideration in future appraisals of the performance level of China's energy regulation.
Local phase space and edge modes for diffeomorphism-invariant theories We discuss an approach to characterizing local degrees of freedom of a subregion in diffeomorphism-invariant theories using the extended phase space of Donnelly and Freidel [36]. Such a characterization is important for defining local observables and entanglement entropy in gravitational theories. Traditional phase space constructions for subregions are not invariant with respect to diffeomorphisms that act at the boundary. The extended phase space remedies this problem by introducing edge mode fields at the boundary whose transformations under diffeomorphisms render the extended symplectic structure fully gauge invariant. In this work, we present a general construction for the edge mode symplectic structure. We show that the new fields satisfy a surface symmetry algebra generated by the Noether charges associated with the edge mode fields. For surface-preserving symmetries, the algebra is universal for all diffeomorphism-invariant theories, comprised of diffeomorphisms of the boundary, SL(2, ℝ) transformations of the normal plane, and, in some cases, normal shearing transformations. We also show that if boundary conditions are chosen such that surface translations are symmetries, the algebra acquires a central extension. Introduction In gravitational theories, the problem of defining local subregions and observables is complicated by diffeomorphism invariance. Because it is a gauge symmetry, diffeomorphism invariance leads to constraints that must be satisfied by initial data for the field equations. These constraints relate the values of fields in one subregion of a Cauchy slice to their values elsewhere, so that the fields cannot be interpreted as observables localized to a particular region. 
While this is true in any gauge theory, a further challenge for diffeomorphism-invariant theories is that specifying a particular subregion is nontrivial, since diffeomorphisms can change the subregion's coordinate position. A related issue in quantum gravitational theories is the problem of defining entanglement entropy for a subregion. The usual definition of entanglement entropy assumes a factorization of the Hilbert space H = H A ⊗ H Ā into tensor factors H A and H Ā associated with a subregion A and its complement Ā. However, all physical states in a gauge theory are required to be annihilated by the constraints, and the nonlocal relations the constraints impose on the physical Hilbert space prevent such a factorization from occurring (JHEP02(2018)021). 1 One way of handling this nonfactorization is to define the entropy in terms of the algebra of observables for the local subregion [1]. This necessitates a choice of center for the algebra, which roughly corresponds to Wilson lines that are cut by the entangling surface. This procedure is further complicated in gravitational theories, since the local subregion and its algebra of observables must be defined in a diffeomorphism-invariant manner. Thus, the issues of local observables and entanglement in gravitational theories are intertwined. Despite these challenges, there are indications that a well-defined notion of local observables and entanglement should exist in gravitational theories. Holography provides a compelling example, where the entanglement of bulk regions bounded by an extremal surface may be expressed in terms of entanglement in the CFT via the Ryu-Takayanagi formula and its quantum corrections [2,3]. Such regions are defined relationally relative to a fixed region on the boundary, and hence give a diffeomorphism-invariant characterization of the local subregion.
Work regarding bulk reconstruction suggests that the algebra of observables for this subregion is fully expressible in terms of the subregion algebra of the CFT [4][5][6][7][8][9]. In addition, there are various pieces of circumstantial evidence suggesting that entanglement entropy is a well-defined and useful concept in quantum gravity. The gravitational field equations have been shown to follow from applying the first law of entanglement entropy [10,11] to subregions, both in holography [12][13][14][15][16] and for more general gravitational theories [17][18][19][20], all of which is predicated on a well-defined notion for entanglement for the local subregion. In fact, it is conjectured that connectivity of the spacetime manifold arises from entanglement between the microscopic degrees of freedom from which the gravitational theory emerges [21]. Furthermore, entanglement entropy provides a natural explanation for the proportionality between black hole entropy and horizon area [22][23][24][25], while finessing the issue of entanglement divergences through renormalization of the gravitational couplings [26][27][28]. However, in the case of gauge theories, the matching between entanglement entropy divergences and the renormalization of gravitational couplings is subtle. The entropy computed using conical methods [29] contains contact terms [30][31][32], which are related to the presence of edge modes on the entangling surface. These arise as a consequence of the nonfactorization of the Hilbert space due to the gauge constraints. Only when the entanglement from these edge modes is properly handled does the black hole entropy have a statistical interpretation in terms of a von Neumann entropy [33][34][35]. Recently, Donnelly and Freidel presented a continuum description of the edge modes that arise both in Yang-Mills theory and general relativity [36]. 
Using covariant phase space techniques [37][38][39][40], they construct a symplectic potential and symplectic form associated with a local subregion. These are expressed as local integrals of the fields and their variations over a Cauchy surface Σ. However, one finds that they are not fully gauge-invariant: gauge transformations that are nonvanishing at the boundary ∂Σ change the symplectic form by boundary terms. Invariance is restored by introducing new fields in a neighborhood of the boundary, whose change under gauge transformations cancels the boundary term from the original symplectic form. These new edge modes thus realize the idea that boundaries break gauge invariance, and cause some would-be gauge modes to become degrees of freedom associated with the subregion [41,42]. The analysis of diffeomorphism-invariant theories in [36] was restricted to general relativity with vanishing cosmological constant. However, the construction can be generalized to arbitrary diffeomorphism-invariant theories, and it is the purpose of the present work to show how this is done. The symplectic potential for the edge modes can be expressed in terms of the Noether charge and the on-shell Lagrangian of the theory, and the symplectic form derived from it has contributions from the edge modes only at the boundary. These edge modes come equipped with a set of symmetry transformations, and the symmetry algebra is represented on the phase space as a Poisson bracket algebra. The generators of the surface symmetries are given by the Noether charges associated with the transformations. We find that for generic diffeomorphism-invariant theories, the transformations that preserve the entangling surface generate the algebra Diff(∂Σ) ⋉ (SL(2, R) ⋉ R 2·(d−2) ) ∂Σ . In certain cases, including general relativity, the algebra reduces to Diff(∂Σ) ⋉ SL(2, R) ∂Σ , consistent with the results of [36].
Furthermore, for any other theory, there always exists a modification of the symplectic structure in the form of a Noether charge ambiguity [43] that reduces the algebra down to Diff(∂Σ) ⋉ SL(2, R) ∂Σ . We also discuss what happens when the algebra is enlarged to include surface translations, the transformations that do not map ∂Σ to itself. In order for these transformations to be Hamiltonian, the dynamical fields generically have to satisfy boundary conditions at ∂Σ. Assuming the appropriate boundary conditions can be found, the full surface symmetry algebra is a central extension of either Diff(∂Σ) ⋉ (SL(2, R) ⋉ R 2 ) ∂Σ or a larger, simple Lie algebra. The appearance of central charges in these algebras is familiar from similar constructions involving edge modes at asymptotic infinity or black hole horizons [42,44,45]. The construction of the extended phase space for arbitrary diffeomorphism-invariant theories is useful for a number of reasons. For one, higher curvature corrections to the Einstein-Hilbert action generically appear due to quantum gravitational effects. It is useful to have a formalism that can compute the corrections to the edge mode entanglement coming from these higher curvature terms. Additionally, there are several diffeomorphism-invariant theories that are simpler than general relativity in four dimensions, such as two-dimensional dilaton gravity or three-dimensional gravity in anti-de Sitter space. These could be useful testing grounds in which to understand the edge mode entanglement entropy before trying to tackle the problem in four or higher dimensions. Finally, the general construction clarifies the relation of the extended phase space to the Wald formalism [46,47], a connection that was also noted in [48]. This paper begins with a review of the covariant canonical formalism in section 2.
Care is taken to describe vectors and differential forms on this infinite-dimensional space, and also to understand the effect of diffeomorphisms of the spacetime manifold on the covariant phase space. Section 3 discusses the X fields that appear in the extended phase space, which give rise to the edge modes. Following this, the construction of the extended phase space is given in section 4, which describes how the edge mode fields contribute to the extended symplectic form. Ambiguities in the construction are characterized in section 5, and the surface symmetry algebra is identified in section 6. Section 7 gives a summary of results and ideas for future work. Covariant canonical formalism The covariant canonical formalism [37][38][39][40] provides a Hamiltonian description of a field theory's degrees of freedom while maintaining spacetime covariance. This is achieved by working with the space S of solutions to the field equations. As long as the field equations admit a well-posed initial value formulation, each solution is in one-to-one correspondence with its initial data on some Cauchy slice. S may therefore be used to construct a phase space that is equivalent to Hamiltonian formalisms coordinatized by initial positions and momenta. Since a solution need not refer to a choice of initial Cauchy slice and decomposition into spatial and time coordinates, spacetime covariance remains manifest in a phase space constructed from S. The specification of a Cauchy surface and time variable can be viewed as a choice of coordinates on S, with each solution being identified by its initial data. An important subtlety in this construction occurs for field equations with gauge symmetry. The space S involves all solutions to the field equations and so, in particular, treats two solutions that differ only by a gauge transformation as distinct.
In this case, S is too large to be the correct phase space for the theory, since gauge-related solutions should represent physically equivalent configurations. Instead, the true phase space P should be obtained by quotienting S by the action of the gauge group. It is useful to view S as a fiber bundle, with each fiber consisting of all solutions related to each other by a gauge transformation, in which case P is simply the base space of this fiber bundle. As discussed in section 4, the Lagrangian for the theory imbues S with the structure of a presymplectic manifold, equipped with a degenerate presymplectic form. This degeneracy is necessary in order for it to project to a well-defined symplectic form on P. The remainder of this section is devoted to describing the geometry of the space S, while the requirements for various functions and forms (including the presymplectic form) to descend to well-defined objects on P are discussed in section 3. Working directly with S allows coordinate-free techniques to be applied to both the spacetime manifold and the solution space itself. In particular, the exterior calculus on S gives a powerful language for describing the phase space symplectic geometry. We will follow the treatment of the exterior calculus given in [36] (for an extended review of this formalism, see [49] and references therein), where it was used to provide an extremely efficient way of identifying edge modes for a local subregion in a gauge theory. This section provides a review of the formalism, on which the remainder of this paper heavily relies. The theories under consideration consist of dynamical fields, including the metric g_ab and any matter fields, propagating on a spacetime manifold M. These fields satisfy diffeomorphism-invariant equations of motion, and the phase space is constructed from the infinite-dimensional space of solutions to these equations, S. (Identifying solutions with initial data is still possible if one supplements the original field equations with suitable gauge-fixing conditions; one could therefore consider S as being coordinatized by initial data along with a choice of gauge.) Despite being infinite-dimensional, many concepts from finite-dimensional differential geometry, such as vector fields, one-forms, and Lie derivatives, extend straightforwardly to S, assuming it satisfies some technical requirements such as being a Banach manifold [50,51]. One begins by understanding the functions on S, a wide class of which is provided by the dynamical fields themselves. Given a spacetime point x ∈ M and a field φ, the function φ_x associates to each solution the value of φ(x) in that solution. More generally, functionals of the dynamical fields, such as integrals over regions of spacetime, also define functions on S by simply evaluating the functional in a given solution. We will often denote φ_x simply by φ, with the dependence on the spacetime point x implicit. A vector at a point of S describes an infinitesimal displacement away from a particular solution, and hence corresponds to a solution of the linearized field equations. Specifying a linearized solution about each full solution then defines a vector field V on all of S. The vector field acts on S-functions as a directional derivative; in particular, its action on the functions φ_x gives a new function, denoted φ^V_x, which in a given solution evaluates the linearized solution associated with V at the point x. This also allows us to define the exterior derivative of the functions φ_x, denoted δφ_x. When contracted with the vector field V, the one-form δφ_x simply returns the scalar function φ^V_x. The one-forms δφ_x form an overcomplete basis, so that arbitrary one-forms may be expressed as sums (or integrals over the spacetime point x) of δφ_x.
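As a small concrete illustration of this calculus (our own example, not taken from the original text): for products of the solution-space functions built from the field at two points, the exterior derivative and the contraction with a vector field obey the familiar Leibniz behavior. Writing I_V δφ_x for the linearized solution associated with V, evaluated at x,

```latex
\delta(\varphi_x \varphi_y) = \varphi_y \,\delta\varphi_x + \varphi_x \,\delta\varphi_y ,
\qquad
I_V\,\delta(\varphi_x \varphi_y) = \varphi_y\,\big(I_V \delta\varphi_x\big) + \varphi_x\,\big(I_V \delta\varphi_y\big),
```

so contracting with V simply reproduces the directional derivative of the product along the linearized solution.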
This basis is overcomplete because the functions φ_x at different points x are related through the equations of motion, so that the forms δφ_x are related as well. Forms of higher degree can be constructed from the δφ_x one-forms by taking exterior products. The exterior product of a p-form α and a q-form β is simply written αβ, and satisfies αβ = (−1)^{pq} βα. Since we only ever deal with exterior products of forms defined on S instead of more general tensor products, no ambiguity arises by omitting the ∧ symbol, which we instead reserve for spacetime exterior products. The action of the exterior derivative on arbitrary forms is fixed as usual by its action on scalar functions, along with the requirements of linearity, nilpotency δ² = 0, and that it acts as an antiderivation,

δ(αβ) = (δα)β + (−1)^p α δβ. (2.1)

The exterior derivative δ always increases the degree of the form by one. On the other hand, each vector field V defines an antiderivation I_V that reduces the degree by one through contraction. I_V can be completely characterized by its action on one-forms, I_V δφ_x = φ^V_x, where φ^V_x denotes the linearized solution associated with V evaluated at x, along with the antiderivation property, linearity, nilpotency I_V² = 0, and requiring that it annihilate scalars. Just as in finite dimensions, the action of the S Lie derivative, denoted L_V, is related to δ and I_V via Cartan's magic formula [51],

L_V = I_V δ + δ I_V, (2.2)

which preserves the degree of the form. We next discuss the consequences of working with diffeomorphism-invariant theories. A diffeomorphism Y is a smooth, invertible map, Y : M → M, sending the spacetime manifold M to itself. The diffeomorphism induces a map of tensors at Y(x) to tensors at x through the pullback Y^* [52]. Diffeomorphism invariance is simply the statement that if a configuration of tensor fields φ satisfies the equations of motion, then so do the pulled back fields Y^*φ. Now consider a one-parameter family of diffeomorphisms Y_λ, with Y_0 the identity.
This yields a family of fields Y_λ^*φ that all satisfy the equations of motion. The first order change induced by Y_λ^* defines the spacetime Lie derivative £_ξ with respect to ξ^a, the tangent vector to the flow of Y_λ. Consequently, £_ξφ must be a solution to the linearized field equations, and the infinitesimal diffeomorphism generated by ξ^a defines a vector field on S, which we denote ξ̂, whose action on δφ is

I_ξ̂ δφ = £_ξ φ. (2.3)

The diffeomorphisms we have considered so far have been taken to act the same on all solutions. A useful generalization of this are the solution-dependent diffeomorphisms, defined through a function, Y : S → Diff(M), valued in the diffeomorphism group of the manifold, Diff(M). Letting Y denote the image of this function, we would like to understand how the Lie derivative L_V and exterior derivative δ on S combine with the action of the pullback Y^*. In the case that Y is constant on S, the Lie derivative simply commutes with Y^*, so that L_V Y^*α = Y^* L_V α, where α is any form constructed from fields and their variations at a single spacetime point. When Y is not constant, V generates one-parameter families of diffeomorphisms Y_λ and forms α_λ along the flow in S. At a given solution s_0, define a solution-independent diffeomorphism Y_0 ≡ Y(s_0) by the value of Y at s_0. Then Y_λ^* α_λ and Y_0^* α_λ are related to each other at all values of λ by a diffeomorphism, Y_λ^* (Y_0^{−1})^*. The first order change in these quantities at λ = 0 is given by L_V, and since the two quantities differ at first order by an infinitesimal diffeomorphism, we find

L_V (Y^*α) = Y^* (L_V α + £_{χ(Y;V)} α). (2.4)

It is argued in appendix A, identity A.3, that the vector χ^a(Y;V) depends linearly on V, and hence defines a one-form on S, denoted χ^a_Y.
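The object χ^a(Y;V) can be made concrete in a finite-dimensional model (our illustration, not from the paper): for a one-parameter family of diffeomorphisms Y_a of the real line, with solution space modeled by the parameter a, the variational vector field is χ_Y(y) = (∂_a Y_a)(Y_a^{−1}(y)). The sketch below uses sympy to verify symbolically the relation between χ_Y and the vector field of the inverse family, which is discussed below; the Möbius family chosen here is an arbitrary example with a closed-form inverse.

```python
# Finite-dimensional sanity check (our illustration): for a family Y_a of
# diffeomorphisms of the line, chi_Y(y) = (d/da Y_a)(Y_a^{-1}(y)), and the
# inverse family satisfies chi_{Y^{-1}} = -Y^* chi_Y, where the pullback of
# a vector field is (Y^* chi)(x) = chi(Y(x)) / Y'(x).
import sympy as sp

x, a = sp.symbols('x a')

# A Moebius family of diffeomorphisms with a closed-form inverse.
Y = x / (1 - a * x)          # Y_a(x)
Yinv = x / (1 + a * x)       # Y_a^{-1}(x)
assert sp.simplify(Yinv.subs(x, Y) - x) == 0  # really inverse

def chi(f, finv):
    """chi(y) = (d/da f)(finv(y)) for a one-parameter family f(x; a)."""
    return sp.simplify(sp.diff(f, a).subs(x, finv))

chi_Y = chi(Y, Yinv)         # variational vector field of Y_a
chi_Yinv = chi(Yinv, Y)      # variational vector field of the inverse family

# Pullback of chi_Y along Y: (Y^* chi_Y)(x) = chi_Y(Y(x)) / Y'(x)
pullback = sp.simplify(chi_Y.subs(x, Y) / sp.diff(Y, x))

# chi_{Y^{-1}} = -Y^* chi_Y
assert sp.simplify(chi_Yinv + pullback) == 0
```

For this family the computation gives χ_Y(x) = x² and χ_{Y⁻¹}(x) = −x², so the relation holds exactly rather than just to first order.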
This yields the pullback formula for L_V,

L_V Y^*α = Y^* (L_V α + £_{I_V χ_Y} α). (2.5)

Applying (2.2) to this equation, one can derive the pullback formula for exterior derivatives from [36] (see A.5 for details),

δ Y^*α = Y^* (δα + £_{χ_Y} α). (2.6)

A number of properties of the variational vector field χ^a_Y follow from the formulas above. (In [36], χ^a_Y was denoted δ^a Y; we choose a different notation to emphasize that χ^a_Y is not an exact form, and to avoid confusion with the exterior derivative δ.) First, note χ^a_Y is not an exact form on S; rather, its exterior derivative can be deduced from (2.6): nilpotency of δ requires

0 = δ² Y^*α = Y^* ((δ + £_{χ_Y})² α) = Y^* (£_{δχ_Y + ½[χ_Y, χ_Y]} α), (2.7)

and applying A.7, we conclude

δχ^a_Y = −½ [χ_Y, χ_Y]^a. (2.8)

Another useful formula relates χ^a_Y to the vector χ^a_{Y^{−1}} associated with the inverse of Y. Using that Y^* and (Y^{−1})^* are inverses of each other, we find

δα = δ((Y^{−1})^* Y^*α) = (Y^{−1})^* (Y^* (δα + £_{χ_Y} α) + £_{χ_{Y^{−1}}} Y^*α) = δα + £_{χ_Y + (Y^{−1})^* χ_{Y^{−1}}} α, (2.9)

where the last equality involves the identity A.8. This implies

χ^a_{Y^{−1}} = −Y^* χ^a_Y. (2.10)

Additional identities are derived in appendix A. Finally, as a spacetime vector field, χ^a_Y also defines a vector-valued one-form χ̂_Y on S, which acts as I_{χ̂_Y} δφ = £_{χ_Y} φ. The contraction I_{χ̂_Y} defines a derivation that preserves the degree of the form, in contrast to I_ξ̂, which is an antiderivation that reduces the degree. Similarly, δχ^a_Y defines a vector-valued two-form on S, and produces an antiderivation I_{δχ̂_Y} that increments the degree. Edge mode fields Edge modes appear when a gauge symmetry is broken due to the presence of a boundary ∂Σ of a Cauchy surface Σ. The classical phase space or quantum mechanical Hilbert space associated with Σ transforms nontrivially under gauge transformations that act at the boundary. This can be understood from the perspective of Wilson loops that are cut by the boundary. A closed Wilson loop is gauge-invariant, but the cut Wilson loop becomes a Wilson line in Σ, whose endpoints transform in some representation of the gauge group.
To account for these cut-Wilson-loop degrees of freedom, one can introduce fictitious charged fields at ∂Σ, which can be attached to the ends of the Wilson lines to produce a gauge-invariant object. These new fields are the edge modes of the local subregion. They account for the possibility of charge density existing outside of Σ, which would affect the fields in Σ due to Gauss law constraints. The contribution of the edge modes to the entanglement can therefore be interpreted as parameterizing ignorance of such localized charge densities away from Σ. A similar picture arises in the classical phase space of a diffeomorphism-invariant theory. The edge modes appear when attempting to construct a symplectic structure associated with Σ for the solution space S. Starting with the Lagrangian of the theory, one can construct from its variations a symplectic current ω, a spacetime (d − 1)-form whose integral over a spatial subregion Σ provides a candidate presymplectic form. However, this form fails to be diffeomorphism invariant for two reasons. First, a diffeomorphism moves points on the manifold around, and hence changes the shape and coordinate location of the surface. Second, since solutions related to each other by a diffeomorphism represent the same physical configuration, the true phase space P is obtained by projecting all solutions in a gauge orbit in S down to a single representative. In order for the symplectic form to be compatible with this projection, the infinitesimal diffeomorphisms must be degenerate directions of the presymplectic form [50]. This is equivalent to saying that the Hamiltonian generating the diffeomorphism may be chosen to vanish. While the symplectic form obtained by integrating ω over a surface is degenerate for diffeomorphisms that vanish sufficiently quickly at its boundary, those that do not vanish produce boundary terms that spoil degeneracy.
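The boundary obstruction just described can be made explicit with a standard covariant phase space identity (quoted here from the general Wald formalism [46,47] in the notation of this section, as orientation; it is not a new result of this excerpt). For any vector field ξ^a, on shell,

```latex
I_{\hat\xi}\,\Omega_0 \;=\; \int_\Sigma \omega[\phi;\,\delta\phi,\,\pounds_\xi\phi]
\;=\; \oint_{\partial\Sigma} \big( \delta Q_\xi - i_\xi\,\theta[\phi;\delta\phi] \big),
```

so the degeneracy condition I_ξ̂ Ω₀ = 0 holds precisely when the boundary integral vanishes, which fails for diffeomorphisms with support near ∂Σ.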
The problem of non-invariance due to diffeomorphisms that move the surface is solved by defining the surface's location in a diffeomorphism-invariant manner. There are a variety of ways that this can be done. One example comes from the Ryu-Takayanagi prescription in holography, where the bulk entangling surface ∂Σ is defined as the extremal surface that asymptotes to a given subregion on the boundary of AdS [2]. Another set of techniques are the relational constructions of [53], where one set of fields can be used to define a coordinate system, and subregions can be defined relationally to these coordinate fields. An important point about the edge modes is that they are necessary even after dealing with this first source of non-invariance: the presymplectic form may still not be appropriately degenerate even after specifying the subregion invariantly. The remainder of this work will primarily be focused on how this second issue is resolved, although the extended phase space provides a formal solution to the first issue as well. As demonstrated in [36], both problems can be handled by introducing a collection of additional fields X whose contribution to the symplectic form restores diffeomorphism invariance. These fields are the edge modes of the extended phase space. This section is devoted to describing these fields and their transformation properties under diffeomorphisms; the precise way in which they contribute to the symplectic form is discussed in section 4. The fields X can be defined through a Diff(M )-valued function X : S → Diff(M ). In a given solution s, X is identified with the diffeomorphism in the image of the map, X = X (s). One way to interpret X is as defining a map from (an open subset of) R d into the spacetime manifold M , and hence can be thought of as a choice of coordinate system covering the local subregion Σ. 
The problem of defining the subregion Σ is solved by declaring it to be the image under the X map of some fiducial subregion σ in R^d. (We assume for simplicity that the subregion of interest can be covered by a single coordinate system. For topologically nontrivial subregions, the fields may consist of a collection of maps X_i, one for each coordinate patch needed to cover the region.) A full solution to the field equations now consists of specifying the map X as well as the value of the dynamical fields φ(x) at each point in spacetime. The transformation law for X under a diffeomorphism Y : M → M is given by the pullback along Y^{−1}, X̃ = Y^{−1} ∘ X. (Note that only functions on S that are constant along the gauge orbits descend to well-defined functions on P. Similarly, the only forms that survive the projection must be both constant along gauge orbits and annihilate vectors tangent to the gauge orbits. In particular, the functions φ_x constructed from the dynamical fields do not survive the projection, while diffeomorphism-invariant functionals of φ_x do survive. This is one reason for working with S: it is technically simpler to derive relations involving the local field functions φ_x in S than always working with diffeomorphism-invariant objects in P. Most of the relations in this paper are derived in S, and then are argued to hold in P if they involve diffeomorphism-invariant functionals and are properly degenerate.) Since X defines a diffeomorphism from R^d to M, it can be used to pull back tensor fields on M to R^d. We can argue as before that the Lie derivative L_V and exterior derivative δ satisfy pullback formulas analogous to equations (2.4) and (2.6),

L_V X^*φ = X^* (L_V φ + £_{I_V χ_X} φ), (3.1)

δ X^*φ = X^* (δφ + £_{χ_X} φ), (3.2)

which serve as defining relations for the variational spacetime vector χ^a_X.
The result of contracting χ^a_X with a vector field ξ̂ corresponding to a spacetime diffeomorphism can be deduced by first noting that the pulled back fields X^*φ are invariant under diffeomorphisms, since X̃^*φ̃ = X^* (Y^{−1})^* Y^*φ = X^*φ. In particular, the S Lie derivative L_ξ̂ must annihilate X^*φ for any ξ, so from (3.1),

0 = L_ξ̂ X^*φ = X^* (£_{I_ξ̂ χ_X} φ + £_ξ φ),

and hence

I_ξ̂ χ^a_X = −ξ^a. (3.3)

We can also derive the transformation law for χ^a_X under a diffeomorphism from the pullback formulas (2.6) and (3.2). On the one hand we have

δ X̃^*φ = X̃^* (δφ + £_{χ̃_X} φ),

while on the other hand this can also be computed as

δ X̃^*φ = δ (X^* (Y^{−1})^* φ) = X̃^* (δφ + £_{χ_{Y^{−1}} + Y^* χ_X} φ),

where the last equality employed identity A.8. Comparing these expressions and applying the formula (2.10) for χ^a_{Y^{−1}} gives the transformation law

χ̃^a_X = Y^* (χ^a_X − χ^a_Y).

The X fields lead to an easy prescription for forming diffeomorphism-invariant quantities: simply work with the pulled back fields X^*φ. These are diffeomorphism-invariant due to equation (3.3), and consequently the variation δX^*φ is as well. We can explicitly confirm that δX^*φ is annihilated by infinitesimal diffeomorphisms ξ̂:

I_ξ̂ δX^*φ = X^* (£_{I_ξ̂ χ_X} φ + £_ξ φ) = X^* (£_{−ξ} φ + £_ξ φ) = 0. (3.4)

Note that these relations ensure that X^*φ and δX^*φ descend to functions on the reduced phase space P, after quotienting S by the degenerate directions of the presymplectic form. Another combination of one-forms that appears frequently is α + I_{χ̂_X} α, and it is easily checked that I_ξ̂ annihilates this sum. Finally, we note that when no confusion will arise, we will simply denote χ^a_X by χ^a to avoid excessive clutter. When referring to other diffeomorphisms besides X, we will explicitly include the subscript, as in χ^a_Y. Extended phase space We now turn to the problem of defining a gauge-invariant symplectic form to associate with the local subregion. In this work, the precise meaning of a local subregion is the domain of dependence of some spacelike hypersurface Σ, which serves as a Cauchy surface for the subregion. We further require that Σ have a boundary ∂Σ, so that it may be thought of as a subspace of a larger Cauchy surface for the full spacetime.
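Before setting up the extended symplectic structure, here is a small self-contained check (our illustration, not from the paper) of the scalar-field pullback formula δ(X^*φ) = X^*(δφ + £_χ φ) in one dimension, with the solution space modeled by a single parameter a; the particular field family and dilation map below are arbitrary choices.

```python
# 1D model: spacetime coordinate x, solution-space parameter a.
# phi(x; a) is a one-parameter family of fields, X_a a family of
# diffeomorphisms of the line, and chi(y) = (d/da X_a)(X_a^{-1}(y)).
# The pullback formula delta(X^* phi) = X^*(delta phi + Lie_chi phi)
# reads: d/da[phi(X_a(x); a)] = (dphi/da + chi * dphi/dx) at X_a(x).
import sympy as sp

x, a = sp.symbols('x a')

phi = sp.sin(x) * sp.exp(a)          # toy family of field configurations
X = x * sp.exp(a)                    # dilation family of diffeomorphisms
Xinv = x * sp.exp(-a)                # its inverse

chi = sp.diff(X, a).subs(x, Xinv)    # variational vector field; here chi(y) = y

lhs = sp.diff(phi.subs(x, X), a)                            # delta of X^* phi
rhs = (sp.diff(phi, a) + chi * sp.diff(phi, x)).subs(x, X)  # X^*(delta + Lie_chi) phi

assert sp.simplify(lhs - rhs) == 0
```

The check works for any smooth choice of phi and X; the dilation family is used only because its inverse is available in closed form.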
The standard procedure of [46,47,50] for constructing a symplectic form for a diffeomorphism-invariant field theory begins with a Lagrangian L[φ], a spacetime d-form constructed covariantly from the dynamical fields φ. Its variation takes the form

δL = E δφ + dθ[φ; δφ], (4.1)

where E = 0 are the dynamical field equations, and the exact form dθ, where d denotes the spacetime exterior derivative, defines the symplectic potential current θ, which is a one-form on solution space S. The S-exterior derivative of θ defines the symplectic current (d − 1)-form, ω = δθ, whose integral over Σ normally defines the presymplectic form Ω_0 for the phase space. As a consequence of diffeomorphism invariance, Ω_0 contains degenerate directions: it annihilates any infinitesimal diffeomorphism generated by a vector field ξ^a that vanishes sufficiently quickly near the boundary. This is succinctly expressed for such a vector field by I_ξ̂ Ω_0 = 0. The true phase space P is obtained by quotienting out these degenerate directions by mapping all diffeomorphism-equivalent solutions to a single point in P. Ω_0 then defines a nondegenerate symplectic form on P through the process of phase space reduction [50]. This procedure is deficient for a local subregion because Ω_0 fails to be degenerate for diffeomorphisms that act near the Cauchy surface's boundary ∂Σ. If the boundary were at asymptotic infinity, such diffeomorphisms could be disallowed by imposing boundary conditions on the fields, or could otherwise be regarded as true time evolution with respect to the fixed asymptotic structure, in which case degeneracy would not be expected [40]. For a local subregion, however, neither option is acceptable. Imposing a boundary condition on the fields at ∂Σ has a nontrivial effect on the dynamics [54][55][56], whereas we are interested in a phase space that locally reproduces the same dynamics as the theory defined on the full spacetime manifold M.
Furthermore, the diffeomorphisms acting at ∂Σ cannot be regarded as true time evolution generated by nonvanishing Hamiltonians, because these diffeomorphisms are degenerate directions of a presymplectic form for the entire manifold M. Donnelly and Freidel [36] proposed a resolution to this issue by extending the local phase space to include the X fields described in section 3. The minimal prescription for introducing them into the theory is to simply replace the Lagrangian with its pullback X^*L. Since the Lagrangian is a covariant functional of the fields, X^*L[φ] = L[X^*φ], so that the pulled back Lagrangian depends only on the redefined fields X^*φ, and is otherwise independent of X. (The requirement that Σ be spacelike is necessary in order to interpret the symplectic form constructed on it as characterizing a subset of the theory's degrees of freedom. While the construction would seem to also apply to timelike hypersurfaces, such a hypersurface has an empty domain of dependence, and so there is no sense in which it determines the dynamics in some open subset of the manifold.) The variation of this Lagrangian gives

δ(X^*L) = E[X^*φ] δX^*φ + dθ[X^*φ; δX^*φ]. (4.2)

Thus the redefined fields satisfy the same equations of motion E[X^*φ] = 0 as the original fields, and, due to diffeomorphism invariance, this implies that the original φ fields must satisfy the equations as well. Additionally, the Lagrangian had no further dependence on X, which means the X fields do not satisfy any field equations. If X is understood as defining a coordinate system for the local subregion, the dynamics of the extended (φ, X) system is simply given by the original field equations, expressed in an arbitrary coordinate system determined by X. The symplectic potential current is read off from (4.2),

θ′ = θ[X^*φ; δX^*φ]. (4.3)

This object is manifestly invariant with respect to solution-dependent diffeomorphisms, since both X^*φ and δX^*φ are.
In particular, θ′ annihilates any infinitesimal diffeomorphism I_ξ̂, as a consequence of the fact that I_ξ̂ δX^*φ = 0 (see equation (3.4)). An equivalent expression for θ′ can be obtained by introducing the Noether current for a vector field ξ^a,

J_ξ = θ[φ; £_ξ φ] − i_ξ L, (4.4)

where i_ξ denotes contraction with the spacetime vector ξ^a. Due to diffeomorphism invariance, J_ξ is an exact form when the equations of motion hold [46,47], and may be written

J_ξ = dQ_ξ + C_ξ, (4.5)

where Q_ξ is the Noether charge and C_ξ = 0 are combinations of the field equations that comprise the constraints for the theory [57]. Then θ′ in (4.3) may be expressed on-shell

θ′ = X^* (θ + i_χ L + dQ_χ). (4.6)

As an aside, note that we can vary the Lagrangian with respect to (φ, X) instead of the redefined fields (X^*φ, X), and equivalent dynamics arise. This variation produces

δ(X^*L) = X^* (δL + £_χ L) = X^* (E δφ) + d X^* (θ + i_χ L), (4.7)

where Cartan's magic formula £_χ = i_χ d + d i_χ was used, along with the fact that d commutes with pullbacks. Again, φ satisfies the same field equation E[φ] = 0, and X is subjected to no dynamical equations. This variation suggests a potential current θ̃ = X^* (θ + i_χ L), which differs from (4.6) by the exact form dX^*Q_χ. This difference is simply an ambiguity in the definition of the potential current, since shifting it by an exact form does not affect equation (4.1) [43,47]. However, θ̃ does not annihilate infinitesimal diffeomorphisms I_ξ̂, making θ′ the preferred choice. The degeneracy requirement for the symplectic potential current therefore gives a prescription to partially fix its ambiguities [48], although additional ambiguities remain, and are discussed in section 5. The symplectic potential Θ is now constructed by integrating θ′ over Σ. Since θ′ is defined as a pullback by X^*, its integral must be over the pre-image σ, for which X(σ) = Σ. This gives

Θ = ∫_σ X^* (θ + i_χ L + dQ_χ) (4.8)
  = ∫_Σ (θ + i_χ L) + ∮_{∂Σ} Q_χ. (4.9)
This makes use of the general formula ∫_σ X^*α = ∫_{X(σ)} α, and also applies Stokes' theorem ∫_Σ dα = ∮_{∂Σ} α to write the Noether charge as a boundary integral. Equation (4.9) differs from the symplectic potential for the nonextended phase space, Θ_0 = ∫_Σ θ, by both a boundary term depending on the Noether charge, as well as a bulk term coming from the on-shell value of the Lagrangian. For vacuum general relativity with no cosmological constant, this extra bulk contribution vanishes, being proportional to the Ricci scalar [36]. However, when matter is present or the cosmological constant is nonzero, this extra bulk contribution to Θ can survive. As we discuss below, this bulk term imbues the symplectic form on the reduced phase space P with nontrivial cohomology. Taking an exterior derivative of Θ yields the symplectic form, Ω = δΘ. The expression (4.8) leads straightforwardly to

Ω = ∫_σ ω[X^*φ; δX^*φ, δX^*φ], (4.10)

where we recall the definition of the symplectic current ω = δθ. This expression for Ω makes it clear that it is invariant with respect to all diffeomorphisms, and that infinitesimal diffeomorphisms are degenerate directions, again because I_ξ̂ δX^*φ = 0. The symplectic form can also be expressed as an integral over Σ and its boundary using the original fields φ, by computing the exterior derivative of (4.9). Noting that the integrands implicitly involve a pullback by X^*, we find that the first term reproduces the symplectic form for the nonextended theory, Ω_0 = ∫_Σ ω, while the remaining three terms in the bulk Σ integral simplify to an exact form on-shell, d(i_χ θ + ½ i_χ i_χ L) (see identity A.10), so the final expression is

Ω = Ω_0 + ∮_{∂Σ} (δQ_χ + £_χ Q_χ + i_χ θ + ½ i_χ i_χ L). (4.12)

Hence, we arrive at the important result that the symplectic form differs from Ω_0 by terms localized on the boundary ∂Σ involving χ^a. This immediately implies that Ω has degenerate directions: any phase space vector field V that vanishes on δφ and whose contraction with χ^a vanishes sufficiently quickly near ∂Σ will annihilate Ω.
In fact, only the values of χ^a and ∇_b χ^a at ∂Σ contribute to (4.12); all other freedom in χ^a is pure gauge. To see why these are the only relevant pieces of χ^a for the symplectic form, we can use the explicit expression for the Noether charge given in [47]. Up to ambiguities which are discussed in section 5, the Noether charge is given by

Q_ξ = −E^{abcd} ∇_c ξ_d ε_{ab} + ξ_c W^c[φ], (4.13)

where ε_{ab} is the spacetime volume form with all but the first two indices suppressed, E^{abcd} = δL/δR_{abcd} is the variational derivative of the Lagrangian scalar L = −(*L) with respect to the Riemann tensor, and inherits the index symmetries of the Riemann tensor, and W^c[φ] is a tensor with (d − 2) covariant, antisymmetric indices suppressed, constructed locally from the dynamical fields; its precise form is not needed in this work. The last two terms in (4.12) depend only on the value of χ^a on ∂Σ, while the terms involving Q_χ can depend on derivatives of χ^a. From (4.13), Q_χ involves one derivative of χ^a, and (4.12) has terms involving the derivative of Q_χ, so that up to two derivatives of χ^a could contribute to the symplectic form. To see how these derivatives appear, we decompose δQ_χ as

δQ_χ = Q_{δχ} + ϙ_χ, (4.14)

where ϙ_ξ = ϙ_ξ[φ; δφ] is a variational one-form depending on a vector ξ (which can be a differential form on S), obtained from (4.13) by letting δ act only on the dynamical fields at fixed ξ^a,

ϙ_ξ = −δ(E^{abcd} ε_{ab}) ∇_c ξ_d − E^{abcd} ε_{ab} (δg_{de} ∇_c ξ^e + g_{de} δΓ^e_{cf} ξ^f) + ξ_c δW^c, (4.15)

and δΓ^d_{ce} is the variation of the Christoffel symbol,

δΓ^d_{ce} = ½ g^{df} (∇_c δg_{ef} + ∇_e δg_{cf} − ∇_f δg_{ce}). (4.16)

This decomposition is useful because ϙ_χ contains only first derivatives of χ^a, while Q_{δχ} = −½ Q_{[χ,χ]} involves second derivatives through the derivative of the vector field Lie bracket. In appendix B, it is argued that the second derivatives of χ^a in Q_{δχ} + £_χ Q_χ cancel out, so that the boundary contribution in (4.12) depends only on χ^a and ∇_b χ^a at ∂Σ. This means that Ω has a large number of degenerate directions, corresponding to all values of χ^a on Σ that are not fixed by the values of χ^a and ∇_b χ^a at the boundary.
The true phase space P is then obtained by quotienting out these pure gauge degrees of freedom. In doing so, Ω descends to a nondegenerate, closed two-form on the quotient space [50]. However, the symplectic potential Θ does not survive this projection. It depends nontrivially on the value of χ^a everywhere on Σ through the term involving the Lagrangian in (4.9), which causes it to become a multivalued form on the quotient space. One way to see its multivaluedness is to note that i_χ L is a top rank form on Σ, so, by the Poincaré lemma applied to Σ, it can be expressed as the exterior derivative of a (d − 2)-form,

i_χ L = d(h_X i_χ L). (4.17)

Here, h_X is the homotopy operator that inverts the exterior derivative d on closed forms on Σ [58]. As the notation suggests, it depends explicitly on the value of the X fields throughout Σ, which we recall can be thought of as defining a coordinate system for the subregion. Since h_X i_χ L is a spacetime (d − 2)-form and an S one-form, evaluated at ∂Σ it may be expressed in terms of χ^a and δφ at ∂Σ, which provide a basis for local variational forms. Hence,

∫_Σ i_χ L = ∮_{∂Σ} h_X i_χ L, (4.18)

and we see that this latter expression depends on χ^a at ∂Σ, so therefore will project to the quotient space. However, h_X will be a different operator depending on the values of the X fields on Σ, and hence this boundary integral will give a different form on the reduced phase space for different bulk values of X. This shows that the Lagrangian term in Θ projects to a multivalued form on the quotient space. The failure of Θ to be single-valued implies that the reduced phase space P has nontrivial cohomology. In particular, the projected symplectic form Ω is not exact, despite being closed. For a given choice of the value of Θ, the equation Ω = δΘ still holds locally near a given solution in the reduced phase space, but there can be global obstructions since Θ may not return to the same value after tracing out a closed loop in the solution space.
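For orientation, it may help to record the familiar vacuum general relativity case (standard Wald-formalism expressions [46,47], included here as a worked special case rather than as a result of this excerpt). With

```latex
L = \frac{1}{16\pi G}\,\epsilon\, R, \qquad
E^{abcd} = \frac{\delta L}{\delta R_{abcd}} = \frac{1}{32\pi G}\left(g^{ac}g^{bd} - g^{ad}g^{bc}\right), \qquad
Q_\xi = -\frac{1}{16\pi G}\,\epsilon_{ab}\,\nabla^a \xi^b ,
```

the on-shell Lagrangian is proportional to the Ricci scalar and vanishes in vacuum with Λ = 0, so the i_χ L bulk term in Θ drops out; this is precisely the special case noted above in which Θ remains single-valued.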
It would be interesting to investigate the consequences of this nontrivial topology of the reduced phase space, and in particular whether it has any relation to the appearance of central charges in the surface symmetry algebra. Finally, note that for vacuum general relativity with no cosmological constant, the Lagrangian vanishes on shell, being proportional to the Ricci scalar. In this special case, Θ is not multivalued and descends to a well-defined one-form on the reduced phase space, suggesting that the phase space topology simplifies. However, the inclusion of a cosmological constant or the presence of matter anywhere in the local subregion leads back to the generic case in which Θ is multivalued. JKM ambiguities The constructions of the symplectic potential current θ and Noether charge Q_ξ are subject to a number of ambiguities identified by Jacobson, Kang and Myers (JKM) [43,47]. These ambiguities correspond to the ability to add an exact form to the Lagrangian L, the potential current θ, or the Noether charge Q_ξ without affecting the dynamics or the defining properties of these forms. Normally it is required that the ambiguous terms be locally constructed from the dynamical fields in a spacetime-covariant manner. In the extended phase space, however, there is additional freedom provided by the X fields as well as the surfaces Σ and ∂Σ to construct forms that would otherwise fail to be covariant. The freedom provided by the X fields is considerable, given that they can be used to construct homotopy operators as in (4.17) and (4.18) that mix the local dynamical fields φ at different spacetime points. For this reason, we refrain from using the X fields in such an explicit manner to construct ambiguity terms. However, we allow for ambiguity terms that are constructed using the structures provided by Σ and ∂Σ, such as their induced metrics and extrinsic curvatures.
This allows for a wider class of Noether charges, including those that appear in holographic entropy functionals and the second law of black hole mechanics for higher curvature theories [59][60][61][62]. A simple example of the types of objects permitted in constructing the ambiguity terms is provided by the unit normal u_a to Σ versus the lapse function N. Interpreting X^μ as a coordinate system for the local subregion, we can take Σ to lie at X^0 = 0. Then the lapse and unit normal are related by

u_a = −N ∇_a X^0.

The form ∇_a X^0 depends explicitly on the X field, and hence is not allowed in our constructions. However, the unit normal u_a can be constructed using only the surface Σ and the metric, and hence is independent of the X fields. This then implies that N also depends on the X fields, and so the lapse function cannot explicitly be used in constructing ambiguity terms. L ambiguity The first ambiguity corresponds to adding an exact form dα to the Lagrangian. This does not affect the equations of motion; however, its variation now contributes to θ. The following changes occur from adding this term to the Lagrangian:

L → L + dα, θ → θ + δα, Q_ξ → Q_ξ + i_ξ α.

Note that since θ changes by an S-exact form, the symplectic current ω is unaffected. Incorporating these changes into the definition of the symplectic potential (4.9) changes Θ by

Θ → Θ + ∫_Σ (δα + £_χ α) = Θ + δ ∫_σ X^*α.

We point out that the new term annihilates infinitesimal diffeomorphisms I_ξ̂, so that Θ remains fully diffeomorphism-invariant. Since Θ changes by an S-exact form, the symplectic form Ω = δΘ receives no change from this type of ambiguity, which can also be checked by tracking the changes of all quantities in (4.12). Given that only Ω, and not Θ, is needed in the construction of the phase space, this ambiguity in L has no effect on the phase space. However, it has some relevance to the surface symmetry algebra discussed in section 6.
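The statement that L → L + dα leaves the dynamics untouched is easy to verify mechanically in a toy one-dimensional field theory (our sketch; the Lagrangian and α below are arbitrary choices, not taken from the paper):

```python
# Adding a total derivative d(alpha)/dt to a 1D Lagrangian must leave the
# Euler-Lagrange equations unchanged -- the first JKM ambiguity in miniature.
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
f = sp.Function('f')

L = sp.Rational(1, 2) * f(t).diff(t)**2 - f(t)**4   # toy Lagrangian
alpha = f(t)**3                                      # arbitrary local functional
L_shifted = L + sp.diff(alpha, t)                    # L -> L + d(alpha)

eq = euler_equations(L, f(t), t)[0]
eq_shifted = euler_equations(L_shifted, f(t), t)[0]

# identical field equations
assert sp.simplify(eq.lhs - eq_shifted.lhs) == 0
```

The same check passes for any local α built from f and t, since the Euler-Lagrange operator annihilates total derivatives identically.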
The generators of this algebra are given by the Noether charge, and for surface symmetries that move ∂Σ (the "surface translations"), this ambiguity would appear to have an effect. However, as discussed in subsection 6.1, once the appropriate boundary terms are included in the generators, the result is independent of this ambiguity. The form of the generator does motivate a natural prescription for fixing the ambiguity such that the Lagrangian has a well-defined variational principle, so that it is completely stationary on-shell, as opposed to being stationary up to boundary contributions. θ ambiguity The second ambiguity comes from the freedom to add an exact form dβ to θ, since doing so does not affect its defining equation (4.1). Here, β ≡ β[φ; δφ] is a spacetime (d − 2)-form and a one-form on S. The changes that arise from this addition are given in equations (5.4). Under these transformations, the symplectic potential (4.9) is modified by an arbitrary boundary term β, accompanied by Iχβ, which ensures that Θ retains degenerate directions along linearized diffeomorphisms. Unlike the L ambiguity, this modification is not S-exact, and changes the boundary terms in the symplectic form. Because β can in principle involve arbitrarily many derivatives of δφ, its presence can cause Ω to depend on second or higher derivatives of χ a on the boundary. This affects which parts of χ a correspond to degenerate directions, and will lead to different numbers of boundary degrees of freedom in the reduced phase space. As discussed in section 6, this ambiguity can also be used to reduce the surface symmetry algebra to a subalgebra. Given that β contributes to Θ and Ω only at the boundary, it can involve tensors associated with the surface ∂Σ that do not correspond to spacetime-covariant tensors, such as the extrinsic curvature.
This allows the Dong entropy [59-61], which differs from the Wald entropy [46,47] by extrinsic curvature terms, to be viewed as a Noether charge with a specific choice of ambiguity terms. This is the point of view advocated for in [62], where the ambiguity was resolved by requiring that the entropy functional derived from the resultant Noether charge satisfy a linearized second law. In general, fixing the ambiguity requires some additional input, motivated by the particular application at hand. Q ξ ambiguity The final ambiguity is the ability to shift Q ξ by a closed form γ, with dγ = 0. Since Q ξ depends linearly on ξ a and its derivatives, γ should be chosen to also satisfy this requirement. If γ is identically closed for all ξ a , it then follows that it must be exact, γ = dν [63]. Its integral over the closed surface ∂Σ then vanishes, so that it has no effect on Θ or Ω. Surface symmetry algebra The extended phase space constructed in section 4 contains new edge mode fields χ a on the boundary of the Cauchy surface for the local subregion, whose presence is required in order to have a gauge-invariant symplectic form. Associated with the edge modes are a new class of transformations that leave the symplectic form and the equations of motion invariant. These new transformations comprise the surface symmetry algebra. This algebra plays an important role in the quantum theory when describing the edge mode contribution to the entanglement entropy, thus it is necessary to identify the algebra and its canonical generators. As discussed in [36], the surface symmetries coincide with diffeomorphisms in the preimage space, Z : R d → R d . These leave the spacetime fields φ unchanged, but transform the X fields by X → X ◦ Z. This also transforms the pulled back fields X * φ → Z * X * φ, and due to the diffeomorphism invariance of the field equations, the pulled back fields still define solutions.
These transformations therefore comprise a set of symmetries for the dynamics in the local subregion. Infinitesimally, these transformations are generated by vector fields w a on R d . Analogous to vector fields defined on M , w a defines a vector ŵ on S, whose action on the pulled back fields X * φ is given by the Lie derivative, while its action on φ is trivial, Lŵφ = 0. On the other hand, we may apply the pullback formula (3.1) to this equation to derive Lŵ X * φ = £ W X * φ, where W a = (X −1 ) * w a . The contractions of the vector ŵ with the basic S one-forms are therefore Iŵ χ a = W a , Iŵδφ = 0. (6.3) We will also assume that w a is independent of the solution, so that δw a = 0. Writing this as 0 = δX * W a , and applying the pullback formula (3.1), one finds δW a = −£χW a . (6.4) In order for the transformation to be a symmetry of the phase space, it must generate a Hamiltonian flow. This means that IŵΩ is exact, and determines the Hamiltonian Hŵ for the flow via δHŵ = −IŵΩ. The contraction with the symplectic form can be computed straightforwardly from (4.12) by first using the decomposition (4.14) for δQχ; the result is equation (6.6). The first three terms of the first line combine into the first term in the second line, using formula (6.4) for δW a , formula (4.14) for δQ W , and recalling that the integral involves an implicit pullback by X * , so that δ ∂Σ Q W = ∂Σ (δQ W + £χQ W ). It is immediately apparent that if the second integral in (6.6) vanishes, the flow is Hamiltonian. This occurs if W a is tangent to ∂Σ or vanishing at ∂Σ, and hence defines a mapping of the surface into itself. If W a is tangential, it generates a diffeomorphism of ∂Σ, while vector fields that vanish on ∂Σ generate transformations of the normal bundle to the surface while holding all points on the surface fixed. These transformations were respectively called surface diffeomorphisms and surface boosts in [36].
The remaining transformations consist of the surface translations, where W a has components normal to the surface, and the second integral in (6.6) does not vanish. In general, this term does not give a Hamiltonian flow, except when the fields satisfy certain boundary conditions. We will briefly discuss the surface translations in subsection 6.1, where we show that they can give rise to central charges in the surface symmetry algebra. Returning to the surface-preserving transformations, we find that the Hamiltonian is given by the Noether charge integrated over the boundary, Hŵ = ∂Σ Q W . The surface symmetry algebra is generated through the Poisson bracket of the Hamiltonians for all possible surface-preserving vectors. The Poisson bracket is computed directly, and the last equality in that computation uses equation (6.4) applied to δV a and the fact that ∂Σ £ W Q V = ∂Σ i W dQ V vanishes when integrated over the surface, since W a is parallel to ∂Σ. This shows that the algebra generated by the Poisson bracket is compatible with the Lie algebra of surface-preserving vector fields (6.9), without the appearance of any central charges, i.e. the map w a → Hŵ is a Lie algebra homomorphism. Note that the algebra of surface-preserving vector fields is much larger than the surface symmetry algebra. This is because the generators of surface symmetries depend only on the values of the vector field and its derivative at ∂Σ. Vector fields that die off sufficiently quickly near ∂Σ correspond to vanishing Hamiltonians. The transformations they induce on S are pure gauge, and they drop out after passing to the reduced phase space. To identify the surface symmetry algebra, it is useful to first describe the larger algebra of surface-preserving diffeomorphisms, which contains the surface symmetries as a subalgebra. It takes the form of a semidirect product, Diff(∂Σ) ⋉ Dir ∂Σ , where Diff(∂Σ) is the diffeomorphism group of ∂Σ, and Dir ∂Σ is the normal subgroup of diffeomorphisms that fix all points on ∂Σ.
Dir ∂Σ is generated by vector fields W a that vanish on ∂Σ, and it is a normal subgroup because the vanishing property is preserved under commutation with all surface-preserving vector fields: [W, V ] a = W b ∇ b V a − V b ∇ b W a , where the first term vanishes since W b vanishes at ∂Σ, and the second term vanishes because V b is parallel to ∂Σ, and W a is zero everywhere along the surface. A general surface-preserving vector field can then be expressed as W a = W a 0 + W̄ a , where W a 0 vanishes on ∂Σ and W̄ a is tangent to ∂Σ. Note that this decomposition is not canonical; away from ∂Σ there is some freedom in specifying which components of the vector field correspond to the tangential direction. However, given any such choice, it is clear that if W̄ a is nonvanishing at ∂Σ, then it will be nonzero in a neighborhood of ∂Σ, and hence the parallel vector fields act nontrivially on the V a 0 component of other vector fields. Finally, the commutator of two purely parallel vector fields [W̄ , V̄ ] will remain purely parallel, since they are tangent to an integral submanifold. The map W a → W̄ a is therefore a homomorphism from the surface-preserving diffeomorphisms onto Diff(∂Σ), with kernel Dir ∂Σ . This establishes that the group of surface-preserving diffeomorphisms is Diff(∂Σ) ⋉ Dir ∂Σ . The surface symmetry algebra is represented as a subalgebra of Diff(∂Σ) ⋉ Dir ∂Σ . The Hamiltonian for a surface-preserving vector field is determined by the Noether charge Q W , which depends only on the value of W a and its first derivative at ∂Σ. Hamiltonians for vector fields that are nonvanishing at ∂Σ provide a faithful representation of the Diff(∂Σ) algebra; however, the vanishing vector fields only represent a subalgebra of Dir ∂Σ . To determine it, note that only the first derivative of W a contributes to the Noether charge, and its tangential derivative vanishes.
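The normal-subalgebra argument can be checked in a toy model. The sketch below (our construction, not from the paper) works on R^2 with ∂Σ taken as the line x = 0, using the commutator formula [W, V]^a = W^b ∂_b V^a − V^b ∂_b W^a:

```python
import sympy as sp

x, y = sp.symbols('x y')

def bracket(W, V):
    """Lie bracket [W, V]^a = W^b d_b V^a - V^b d_b W^a of vector fields on R^2."""
    return tuple(
        W[0]*sp.diff(V[i], x) + W[1]*sp.diff(V[i], y)
        - V[0]*sp.diff(W[i], x) - V[1]*sp.diff(W[i], y)
        for i in range(2)
    )

# Boundary dSigma = {x = 0}.  A surface-preserving field has vanishing
# normal (x) component there; a field in Dir_dSigma vanishes entirely there.
W0 = (x*sp.sin(y), x**2)          # in Dir: both components ~ x
V  = (x*sp.cos(y), sp.exp(y))     # surface-preserving: V^x ~ x, V^y free

# [W0, V] vanishes on dSigma, so Dir_dSigma is a normal subalgebra.
C = bracket(W0, V)
at_boundary = [sp.simplify(c.subs(x, 0)) for c in C]
print(at_boundary)   # [0, 0]

# The map W -> W^y|_{x=0} onto vector fields on dSigma is a homomorphism:
# the tangential part of the bracket at x = 0 equals the bracket of the
# restricted tangential fields.
V2 = (x**3, sp.sin(y))
lhs = sp.simplify(bracket(V, V2)[1].subs(x, 0))
rhs = sp.simplify((V[1]*sp.diff(V2[1], y) - V2[1]*sp.diff(V[1], y)).subs(x, 0))
print(sp.simplify(lhs - rhs))    # 0
```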
Letting x i , i = 0, 1, represent coordinates in the normal directions that vanish on ∂Σ, the components of the vector field may be expressed W µ = x i W µ i + O(x 2 ), µ = 0, . . . , d − 1, and the O(x 2 ) terms are determined by the second derivatives, which do not contribute to the Noether charge. Then the commutator of two vectors is [W, V ] µ = x i (W j i V µ j − V j i W µ j ) + O(x 2 ), (6.12) which is seen to be determined by the matrix commutator of W µ i and V ν j , by allowing the i, j indices to run over 0, . . . , d − 1, setting all entries with i, j > 1 to zero. This algebra gives a copy of SL(2, R) ⋉ R 2·(d−2) for each point on ∂Σ. The abelian normal subgroup R 2·(d−2) is generated by vectors for which the µ index in W µ i is tangential, i.e. W j i ≡ W µ i ∇ µ x j = 0. These vectors represent shearing transformations of the normal bundle: they generate flows that vanish on ∂Σ, and are parallel to ∂Σ away from the surface. By specifying a normal direction, one obtains a homomorphism sending W µ i to its purely normal part, W j i . The fact that only the traceless part of ∇ a W b contributes to the Noether charge, which follows from the antisymmetry of E abcd from equation (4.13) in c and d, translates to the requirement that W j i be traceless when W a vanishes on ∂Σ. This means that the 2 × 2 matrices W j i generate an SL(2, R) algebra. The extra factor of R 2·(d−2) is a novel feature of this analysis, appearing for generic higher curvature theories, but not for general relativity [36]. Its presence or absence is explained by the particular structure of E abcd , the variation of the Lagrangian scalar with respect to R abcd . When E abcd is determined by its trace, i.e., equal to (E/(d(d − 1))) (g ac g bd − g ad g bc ) with E a scalar, the R 2·(d−2) transformations are pure gauge.
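The matrix-commutator structure described here is easy to exhibit concretely. A minimal numerical sketch (our own, with d = 4 assumed) embeds the coefficients W^µ_i as d × d matrices and checks the semidirect-product structure SL(2, R) ⋉ R^{2·(d−2)}:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # spacetime dimension; normal indices i = 0, 1, tangential mu = 2, ..., d-1

def embed(block_normal, block_shear):
    """Embed the coefficients W^mu_i of W^mu = x^i W^mu_i as a d x d matrix:
    columns 0, 1 hold the two normal directions, columns >= 2 are zero."""
    M = np.zeros((d, d))
    M[:2, :2] = block_normal   # W^j_i: the sl(2, R) part (traceless 2x2)
    M[2:, :2] = block_shear    # tangential-mu part: the R^{2(d-2)} shears
    return M

def traceless(a):
    return a - np.trace(a) / 2 * np.eye(2)

A = embed(traceless(rng.normal(size=(2, 2))), np.zeros((d - 2, 2)))
B = embed(traceless(rng.normal(size=(2, 2))), np.zeros((d - 2, 2)))
S1 = embed(np.zeros((2, 2)), rng.normal(size=(d - 2, 2)))
S2 = embed(np.zeros((2, 2)), rng.normal(size=(d - 2, 2)))

comm = lambda X, Y: X @ Y - Y @ X

# sl(2, R) closes: the commutator of two traceless blocks is traceless.
assert abs(np.trace(comm(A, B)[:2, :2])) < 1e-12
# The shears commute among themselves: abelian factor R^{2(d-2)}.
assert np.allclose(comm(S1, S2), 0)
# [sl(2, R), shear] is again a shear: R^{2(d-2)} is an ideal (normal subgroup).
C = comm(A, S1)
assert np.allclose(C[:2, :], 0) and np.allclose(C[:, 2:], 0)
```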
The Noether charge for a vector field vanishing at the surface evaluates to an integral over ∂Σ, in which µ is the volume form on ∂Σ and n ab is the binormal; n c d projects out the tangential component in ∇ c W d , leaving only the SL(2, R) transformations as physical symmetries. A particular class of theories in which this occurs are f (R) theories (which include general relativity), where the Lagrangian is a function of the Ricci scalar, and E abcd = (1/2) f ′ (R)(g ac g bd − g ad g bc ). In more general theories, however, n ab E abc d will have a tangential component on the d index, and the algebra enlarges to include the R 2·(d−2) factor. Curiously, there always exists a choice of ambiguity terms, discussed in subsection 5.2, that eliminates the R 2·(d−2) symmetries. Namely, the symplectic potential current θ can be modified as in equation (5.4a), with β chosen to be β = ab E abed s c e δg cd , (6.14) where s c e = −u e u c + n e n c is the projector onto the normal bundle of ∂Σ. Note that the explicit use of normal vectors to ∂Σ makes this β not spacetime-covariant. This is nevertheless in line with the broader set of allowed ambiguity terms discussed above. From equation (5.4d), this term changes the Noether charge of a vector vanishing at ∂Σ accordingly. The additional terms involving s c e drop out when contracted with the normal component on the d index of ∇ c W d ; on the tangential component, however, the additional terms cancel against the first term. This choice of ambiguity thus reduces the surface symmetry algebra to coincide with the algebra for general relativity, Diff(∂Σ) ⋉ SL(2, R) ∂Σ . Whether or not to use this choice of β depends on the application at hand, and it is unclear at the moment how exactly β should be fixed when trying to characterize the edge mode contribution to the entanglement entropy of a subregion. The above choice is natural in the sense that it gives the same surface symmetry algebra for any diffeomorphism-invariant theory.
This would mean that the surface symmetry algebra is determined by the gauge group of the theory, while the Hamiltonians for the symmetry generators change depending on the specific dynamical theory under consideration. Note also that there are additional ambiguity terms that could be added, some of which enlarge the symmetry algebra by introducing dependence on higher derivatives of the vector field. Determining how to fix the ambiguity remains an important open problem for the extended phase space program. Surface translations While the surface-preserving transformations are present for generic surfaces, in situations where the fields satisfy certain boundary conditions at ∂Σ, the surface symmetry algebra can enhance to include surface translations. These are generated by vector fields that contain a component normal to ∂Σ at the surface. For such a vector field, the second integral in (6.6) does not vanish, so for this transformation to be Hamiltonian, this integral must be an exact S-form. To understand when this can occur, it is useful to first rewrite the integral in terms of pulled back fields on ∂σ, the preimage of ∂Σ under the X map, equation (6.16). Since δw a = 0, it is clear from this last expression that the flow will be Hamiltonian only if, at the boundary, θ is exact when contracted with w a , in the sense of equation (6.17), for some functional B[φ] of the fields, possibly involving structures defined only at ∂Σ such as the extrinsic curvature. When this condition is satisfied, the second integral in (6.6) simply becomes δ ∂Σ i W B, and so the full Hamiltonian for an arbitrary vector field w a is Hŵ = ∂Σ (Q W − i W B). (6.18) Next we compute the algebra of the surface symmetry generators under the Poisson bracket.
It is worth noting first that by contracting equation (6.17) with Iv̂, we find that the B functional satisfies a corresponding consistency relation. With this, the Poisson bracket can be evaluated. Hence, the commutator algebra of the vector fields w a is represented by the algebra provided by the Poisson bracket, except when both vector fields have normal components at the surface, in which case the second term in (6.20) gives a modification. In fact, the quantities K[ŵ,v] = ∂Σ i W i V (L − dB) (6.21) provide a central extension of the algebra, which is verified by showing that they are locally constant on the phase space, and hence commute with all generators. Taking the exterior derivative gives δK[ŵ,v] = ∂Σ i W i V (δL − dδB). On shell, we have δL = dθ, and from (6.17) we can argue that the replacement i W i V dδB → i W i V dθ is valid at ∂Σ. Hence, the above variation vanishes, and K[ŵ,v] indeed defines a central extension of the algebra. The modification that B makes to the symmetry generators takes the same form as a Noether charge ambiguity arising from changing the Lagrangian L → L + dα, with α = −B. Using the modified Lagrangian L − dB, the potential current changes to θ − δB. The boundary condition (6.17) then implies that the terms involving θ in (6.6) vanish. The symmetry generators are simply given by the integrated Noether charge, which is modified to Q W → Q W − i W B by the ambiguity. Hence, the generators Hŵ are the same as in (6.18), and their Poisson brackets still involve the central charges K[ŵ,v]. Finally, note that the constancy of the central charges requires that the variation of the modified Lagrangian L − dB be zero when evaluated on ∂Σ. Requiring that variations of the Lagrangian have no boundary term on shell generally determines the boundary conditions for the theory. The same is true here: a choice of B satisfying (6.17) can generally only be found if the fields obey certain boundary conditions, and different boundary conditions lead to different choices for B.
The surface translations can be parameterized by normal vector fields W i defined on ∂Σ. Assuming ∂ i W j = 0 in some coordinate system, where i, j are normal indices, we can work out their commutation relations with the generators of the rest of the algebra; in these relations, A denotes a tangential index. The first relation shows that the new generators commute among themselves (although the corresponding Poisson bracket is equal to the central charge K[ŵ,v]), while the second and third show that W i transforms as a vector under SL(2, R) and as a scalar under Diff(∂Σ). If the Noether charge ambiguity is chosen as in equation (6.14) so that the normal shearing generators x j V A j drop out of the algebra, the resulting surface symmetry algebra is Diff(∂Σ) ⋉ (SL(2, R) ⋉ R 2 ) ∂Σ . However, if the normal shearing transformations are retained, equation (6.26) shows that the surface translations are no longer a normal subgroup, since the commutator gives rise to generators of Diff(∂Σ) and SL(2, R) ∂Σ . In this case, the full surface symmetry algebra is simple. The above analysis was carried out assuming that all normal vectors generate a surface symmetry. In practice, equation (6.17) may only be obeyed for some specifically chosen normal vectors [44]. The resulting algebra will then be a subalgebra of the generic case considered in this section. Discussion Building on the results of [36], this paper has described a general procedure for constructing the extended phase space in a diffeomorphism-invariant theory for a local subregion. The integral of the symplectic current for the unextended theory fails to be degenerate for diffeomorphisms that act at the boundary, and this necessitates the introduction of new fields, X, to ensure degeneracy. These fields can be thought of as defining a coordinate system for the local subregion, and the extended solution space consists of fields satisfying the equations of motion in all possible coordinate systems parameterized by X.
While the X fields do not satisfy dynamical equations themselves, it was shown in section 4 that their variations contribute to the symplectic form through the boundary integral in equation (4.12). There are a few novel features of the extended phase space for arbitrary diffeomorphism-invariant theories that do not arise in vacuum general relativity with zero cosmological constant. First, in any theory whose Lagrangian does not vanish on-shell, the symplectic potential Θ is not a single-valued one-form on the reduced phase space P. This is due to the bulk integral of the Lagrangian that appears in equation (4.9), along with the fact that variations for which χ a has support only away from the boundary ∂Σ are degenerate directions of the extended symplectic form, (4.12). Because of this, Ω fails to be exact, despite satisfying δΩ = 0. Investigating the consequences of this nontrivial cohomology for P remains an interesting topic for future work. Another new result comes from the form of the surface symmetry algebra. As in general relativity, any phase space transformation generated by ŵ for which W a ≡ Iŵ χ a is tangential at ∂Σ is Hamiltonian. These generate the group Diff(∂Σ) ⋉ Dir ∂Σ of surface-preserving diffeomorphisms, but only a subgroup is represented on the phase space. This subgroup was found in section 6 to be Diff(∂Σ) ⋉ (SL(2, R) ⋉ R 2·(d−2) ) ∂Σ , which is larger than the surface symmetry group Diff(∂Σ) ⋉ SL(2, R) ∂Σ found in [36] for general relativity. The additional abelian factor R 2·(d−2) arises generically; however, it is not present in f (R) theories, in which the tensor E abcd is constructed solely from the metric and scalars. We also noted that for any theory, there exists a choice (6.14) of ambiguity terms that can be added to θ, with the effect of eliminating the R 2·(d−2) factor of the surface symmetry algebra. The inclusion of surface translations into the surface symmetry algebra was discussed in section 6.1.
This requires the existence of a (d − 1)-form B satisfying the relation (6.17) for at least some vector fields that are normal to the boundary. If such a form can be found, the surface translations are generated by the Hamiltonians (6.18). Interestingly, the Poisson brackets of these Hamiltonians acquire central charges given by (6.21), which depend on the on-shell value of the modified Lagrangian L − dB at ∂Σ. Such central charges are a common occurrence in surface symmetry algebras that include surface translations [44, 45, 49, 64-67]. In general, the existence of B requires that the fields satisfy boundary conditions at ∂Σ. An important topic for future work would be to classify which boundary conditions the fields must satisfy in order for B to exist. For example, with Dirichlet boundary conditions where the field values are specified at ∂Σ, B is given by the Gibbons-Hawking boundary term, constructed from the trace of the extrinsic curvature in the normal direction [68]. However, such boundary conditions are quite restrictive on the dynamics. For a local subsystem in which ∂Σ simply represents a partition of a spatial slice, one would not expect Dirichlet conditions to be compatible with all solutions of the theory. An alternative approach would be to impose conditions that specify the location of the surface in a diffeomorphism-invariant manner, without placing any restriction on the dynamics. One example is requiring that the surface extremize its area or some other entropy functional, as is common in holographic entropy calculations [2, 59-61, 69, 70]. Since extremal surfaces exist in generic solutions, these boundary conditions put no dynamical restrictions on the theory, but rather restrict where the surface ∂Σ lies. The effects of JKM ambiguity terms in the extended phase space construction were discussed in section 5.
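For the Dirichlet example just mentioned, the form of B can be made explicit; the normalization below is our assumption, schematic rather than fixed by the text:

```latex
% For the Einstein--Hilbert Lagrangian with Dirichlet conditions on the
% induced metric at \partial\Sigma, B is the Gibbons--Hawking form built
% from the trace K of the extrinsic curvature in the normal direction and
% the induced volume form \mu:
B \;=\; \frac{1}{8\pi G}\, K\, \mu\,,
% whose variation cancels the normal-derivative boundary terms in \delta L
% when the induced metric is held fixed, which is the role condition (6.17)
% assigns to B.
```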
It was noted that the B form that appears when analyzing the surface translations could be interpreted as a Lagrangian ambiguity, L → L − dB. Note that this type of ambiguity does not affect the symplectic form (4.12), and, as a consequence, the generators of the surface symmetries do not depend on this replacement. In fact, the generators (6.18) are invariant with respect to additional changes to the Lagrangian L → L + dα, since such a change shifts the Noether charge Q W → Q W + i W α, but also induces the change B → B + α. An ambiguity that does affect the phase space is the shift freedom in the symplectic potential current, θ → θ + dβ. We noted that certain choices of β can change the number of edge mode degrees of freedom, and also can affect the surface symmetry algebra. In the future, we would like to understand how this ambiguity should be fixed. One idea would be to use the ambiguity to ensure some B can be found satisfying equation (6.17). In this case, the ambiguity is fixed as an integrability condition for θ. Such an approach seems related to the ideas of [62] in which the ambiguity was chosen to give an entropy functional satisfying a linearized second law. Another approach discussed in [61, 70-72] fixes the ambiguity through the choice of metric splittings that arise when performing the replica trick in the computation of holographic entanglement entropy. As discussed in the introduction, one of the main motivations for constructing the extended phase space is to understand entanglement entropy in diffeomorphism-invariant theories [36]. The Hilbert space for such a theory does not factorize across an entangling surface due to the constraints. However, one can instead construct an extended Hilbert space for a local subregion as a quantization of the extended phase space constructed above. This extended Hilbert space will contain edge mode degrees of freedom that transform in representations of the surface symmetry algebra.
A similar extended Hilbert space can be constructed for the complementary region with Cauchy surface Σ̄, whose edge modes and surface symmetries will match those associated with Σ. The physical Hilbert space for Σ ∪ Σ̄ is given by the so-called entangling product of the two extended Hilbert spaces, which is the tensor product modded out by the action of the surface symmetry algebra. One then finds that the density matrix associated with Σ splits into a sum over superselection sectors, labelled by the representations of the surface symmetry group. This block diagonal form of the density matrix leads to a von Neumann entropy that is the sum of three types of terms, S = Σ i p i S i − Σ i p i log p i + Σ i p i log dim R i , where the sum is over the representations R i of the surface symmetry group, p i give the probability of being in a given representation, and S i is the von Neumann entropy within each superselection sector. The first term represents the average entropy of the interior degrees of freedom, while the second term is a classical Shannon entropy coming from uncertainty in the surface symmetry representation corresponding to the state. The last term arises from entanglement between the edge modes themselves, and is only present for a nonabelian surface symmetry algebra [73,74]. The dimension of the representation has some expression in terms of the Casimirs of the group, and hence this term will take the form of an expectation value of local operators at the entangling surface. It is conjectured that this term provides a statistical interpretation for the Wald-like contributions in the generalized entropy, S gen = S Wald-like + S out [36]. Put another way, given a UV completion for the quantum gravitational theory, the edge modes keep track of the entanglement between the UV modes that are in a fixed state, corresponding to the low energy "code subspace" [8,75].
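The three-term decomposition can be verified directly on a toy block-diagonal density matrix, with each superselection sector carrying a maximally mixed edge factor of dimension dim R_i (a sketch of ours; the sector data is made up):

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S = -tr(rho log rho), via eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

# Toy superselection structure: sector i has probability p_i, an edge factor
# maximally mixed on a dim-R_i representation space, and an interior state rho_i.
p = np.array([0.5, 0.3, 0.2])
dims = [1, 2, 3]                        # dim R_i for each sector
rhos = []
for d_in, seed in zip([2, 2, 3], range(3)):
    rng = np.random.default_rng(seed)
    M = rng.normal(size=(d_in, d_in)) + 1j * rng.normal(size=(d_in, d_in))
    r = M @ M.conj().T
    rhos.append(r / np.trace(r))        # random interior density matrix

# Assemble the block-diagonal density matrix sum_i p_i (1/dim R_i) x rho_i.
blocks = [p_i * np.kron(np.eye(dR) / dR, r)
          for p_i, dR, r in zip(p, dims, rhos)]
n_tot = sum(b.shape[0] for b in blocks)
rho_full = np.zeros((n_tot, n_tot), dtype=complex)
k = 0
for b in blocks:
    n = b.shape[0]
    rho_full[k:k+n, k:k+n] = b
    k += n

S_total = vn_entropy(rho_full)
three_terms = (np.sum(p * [vn_entropy(r) for r in rhos])   # average interior entropy
               - np.sum(p * np.log(p))                     # classical Shannon term
               + np.sum(p * np.log(dims)))                 # edge term: log dim R_i
print(abs(S_total - three_terms) < 1e-8)   # True
```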
One reason for considering the extended phase space in the context of entanglement entropy comes from issues of divergences in entanglement entropy. These divergences arise generically in quantum field theories, and a regulation prescription is needed in order to get a finite result. A common regulator for Yang-Mills theories is a lattice [1,73,74], which preserves the gauge invariance of the theory. Unfortunately, a lattice breaks diffeomorphism invariance, which can be problematic when using it as a regulator for gravitational theories (see [76] for a review of the lattice approach to quantum gravity). The extended phase space provides a continuum description of the edge modes that respects diffeomorphism invariance. As such, it should be amenable to finding a regulation prescription that does not spoil the gauge invariance of the gravitational theory. Finding such a description is an important next step in defining entanglement entropy for a gravitational theory. There are a number of directions for future work on the extended phase space itself, outside of its application to entanglement entropy. One topic of interest is to clarify the fiber bundle geometry of the solution space S, which arises due to diffeomorphism invariance. A fiber in this space consists of all solutions that are related by diffeomorphism, and the χ a fields define a flat connection on the bundle. Flatness in this case is equivalent to the equation δ χ a + (1/2)[ χ , χ ] a = 0 for the variation of χ a . This fiber bundle description of S will be reported on in a future work [77]. Another technical question that arises is whether S truly carries a smooth manifold structure. One obstruction to smoothness would be if the equations of motion are not well-posed in some coordinate system. In this case, the solutions do not depend smoothly on the initial conditions on the Cauchy slice Σ, calling into question the smooth manifold structure of S.
If X is used to define the coordinate system, this would mean that for some values of X the solution space is not smooth. A possible way around this is to always work in a coordinate system in which the field equations are well-posed, and the gauge transformation to this coordinate system would impose dynamical equations on the X fields. Another obstruction to smoothness comes from issues related to ergodicity and chaos in totally constrained systems [78]. It would be interesting to understand if these issues are problematic for the phase space construction given here, and whether the X fields ameliorate any of these problems. Another interesting application would be to formulate the first law of black hole mechanics and various related ideas in terms of the extended phase space. This could be particularly interesting in clarifying certain gauge dependence that appears when looking at second order perturbative identities, such as described in [79]. The edge modes should characterize all possible gauge choices, and they may inform some of the relations found in [16,80,81] when considering different gauges besides the Gaussian null coordinates used in [79]. They could also be useful in understanding quasilocal gravitational energy, and in particular how to define the gravitational energy inside a small ball. This can generally be determined by integrating a pseudotensor over the ball, but there is no preferred choice for a gravitational pseudotensor, so this procedure is ambiguous. It would be interesting if a preferred choice presented itself by considering second order variations of the first law of causal diamonds [17,20], using the extended phase space. Some ideas in this direction are being considered in [82], but it is difficult to find a quasilocal gravitational energy that satisfies the desirable property of being proportional to the Bel-Robinson energy density in the small ball limit [83,84].
Finally, it would be very useful to recast the extended phase space construction in vielbein variables. Some progress on the vielbein formulation was reported in [48]. Since vielbeins have an additional internal gauge symmetry associated with local Lorentz invariance, care must be taken when applying covariant canonical constructions [85,86]. It would be particularly interesting to analyze the surface symmetry algebra that arises in this case, which could differ from the algebra derived using metric variables because the gauge group is different. Comparing the algebras and edge modes in both cases would weigh on the question of how physically relevant and universal their contribution to entanglement entropy is. Proof. This is simply the derivation property of the Lie derivative applied to all tensor fields on S. I U α is a contraction of the vector U with the one-form α, so the Lie derivative first acts on U to give the vector field commutator L V U = [V, U ], and then acts on α, with the contraction I U now being applied to L V α. Hence, on an arbitrary form, £ V I U α = I [V,U ] α + I U £ V α. Proof. The discussion of section 2 derived equation (2.4), so all that remains is to show that χ a (Y ; V ) is linear in the vector V . This can be demonstrated inductively on the degree of α. For scalars, it is enough to show it holds on the functions φ x . Applying A.1, we evaluate the action on φ x in two ways, using that I V commutes with Y * ; equating the resulting expressions determines χ(Y ; V ). Since the right hand side of the resulting expression is linear in V , χ(Y ; V ) must be as well. Now suppose A.3 holds for all forms of degree n − 1, and take α to be degree n. Then for an arbitrary vector U , I U Y * α is degree n − 1, and we may evaluate both sides on it, where identity A.2 was applied along with the fact that I U commutes with £ ξ . Since U was arbitrary, equating these expressions shows that χ̂ a (Y ; V ) = I V χ a Y , showing that the formula holds for forms of degree n. Proof.
This is essentially the antiderivation property applied to £χ Y . The spacetime Lie derivative £χ Y acting on a tensor can be written in terms of χ a Y and its derivatives contracted with the tensor, where all instances of χ a Y appear to the left. It is straightforward to see that when I V contracts with χ a Y in this expression, the terms combine into £ (I V χ Y ) , and since I V does not change the spacetime tensor structure of the object it contracts, the remaining terms combine into −£χ Y I V , with the minus sign coming from the antiderivation property of I V .

Proof. This may also be demonstrated inductively on the degree of α. For scalars, we simply note that equation (A.3) is valid for arbitrary vectors V , and since χ a (Y ; . Assume now A.5 holds for all (n−1)-forms, and take α an n-form and V an arbitrary vector. Then The first equality applies A.1, the second uses A.3 and the fact that I V Y * α is an (n − 1)-form, and the last equality follows from A.1 and A.4. Since V is arbitrary, this completes the proof.

Proof. This is a consequence of the formula for the commutator of two vectors, [ξ, ζ] a = ξ b ∇ b ζ a − ζ b ∇ b ξ a , along with the fact that since χ a is an S one-form, it anticommutes with itself. Alternatively, the formula may be checked by contracting with arbitrary vectors V and U . Letting I V χ a Y = −ξ a and I U χ a Y = −ζ a , we have

Proof. For ordinary spacetime vectors ξ a and ζ a , the Lie derivative satisfies [58]

£ ξ £ ζ = £ [ξ,ζ] + £ ζ £ ξ . (A.8)

Since the χ a Y are anticommuting, this formula is modified to from which the identity follows. Note that A.6 provides a formula for [ χ Y , χ Y ] a .

Proof. This identity is a standard property of the Lie derivative; see e.g. [87].

A.10 £χθ + δ iχL + £χ iχL = d(iχθ + 1/2 iχiχL)

Proof. The first term in this expression is £χθ = d iχθ + iχdθ, which gives one of the terms on the right-hand side of the identity, along with iχdθ.
Next we have where we applied equation (2.8) for δ χ a , and used that δL = dθ on shell. The −iχdθ term cancels against the similar term appearing in £χθ, so that the remaining pieces are which follows from identity A.9 and dL = 0. Hence, the terms on the left of A.10 combine into the exact form d(iχθ + 1/2 iχiχL).

A.12 Lξ = £ ξ + I δξ

Proof. This formula is meant to apply to local functionals of the fields defined at a single spacetime point. Since I δξ annihilates scalars, it clearly is true for that case. Then assume the formula has been shown for all (n − 1)-forms, and take α to be an n-form. For an arbitrary vector V , since I V α is an (n − 1)-form, we have (A.15) and the last two terms in this expression cancel due to identity A.11. Since V was arbitrary, we conclude that the identity holds for all n-forms, and by induction for all S differential forms.

A.13 Lχ = Iχδ − δIχ

Proof. This is essentially a definition of what is meant by Lχ. The left hand side is the graded commutator of the derivation Iχ and the antiderivation δ, which defines the antiderivation Lχ [87].

A.15 Lχ = £χ − I δχ

Proof. The formalism of graded commutators developed in [87] is a useful tool in proving this identity. Given two graded derivations D 1 and D 2 , their graded commutator D 1 D 2 − (−1)^(k 1 k 2 ) D 2 D 1 is another graded derivation, where k i are the degrees of the respective derivations, i.e. the amount by which the derivation increases or decreases the degree of the form on which it acts. Hence, since I V and Lχ are derivations of degrees −1 and 1, they satisfy where equation (2.8) was used in the last equality. We then prove the identity through induction on the degree of the form on which it acts. It is true for scalars because I δχ φ = 0. Then suppose it is true for all (n − 1)-forms, and take α to be an n-form.
The first line employs equation (A.18), the second line uses identities A.14 and A.12 as well as the fact that I V α is an (n−1)-form, and the third line employs equation (A.19). Since V is arbitrary, we conclude the identity holds for all n-forms, which completes the proof.

B Edge mode derivatives in the symplectic form

In this appendix, we derive the result advertised in section 4, that the symplectic form (4.12) does not depend on second or higher derivatives of χ a . Derivatives of χ a appear in Ω through the terms δQχ + £χQχ. The Lie derivative term may be expressed

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Selection of Aptamers for Mature White Adipocytes by Cell SELEX Using Flow Cytometry

Background
Adipose tissue, mainly composed of adipocytes, plays an important role in metabolism by regulating energy homeostasis. Obesity is primarily caused by an abundance of adipose tissue. Therefore, specific targeting of adipose tissue is critical during the treatment of obesity and plays a major role in overcoming it. However, knowledge of cell-surface markers specific to adipocytes is limited.

Methods and Results
We applied the cell SELEX (Systematic Evolution of Ligands by EXponential enrichment) method using flow cytometry to isolate molecular probes for the specific recognition of adipocytes. The aptamer library, a mixture of FITC-tagged single-stranded random DNAs, was used as a source for acquiring molecular probes. With an increasing number of selection cycles, there was a steady increase in fluorescence intensity toward mature adipocytes. Through 12 rounds of SELEX, enriched aptamers showing specific recognition of mature 3T3-L1 adipocytes were isolated. Among these, two aptamers (MA-33 and MA-91) were able to bind selectively to mature adipocytes with equilibrium dissociation constants (Kd) in the nanomolar range. These aptamers did not bind to preadipocytes or to other cell lines (such as HeLa, HEK-293, or C2C12 cells). Additionally, it was confirmed that MA-33 and MA-91 can distinguish between mature primary white and primary brown adipocytes.

Conclusions
These selected aptamers have the potential to be applied as markers for detecting mature white adipocytes and monitoring adipogenesis, and could emerge as an important tool in the treatment of obesity.

Introduction
In our present-day society, many people suffer from various metabolic disorders [1]. Obesity is associated with many metabolic disorders and is certainly one of society's most controversial contemporary issues.
It leads to type II diabetes, dyslipidemia and hypertension, and confers an increased risk of developing cardiovascular diseases [2;3]. On the basis of multiple studies, scientists have suggested various therapeutic approaches to obesity, such as bariatric surgery, anti-obesity medications (orlistat, rimonabant and sibutramine) and stem cell therapy [4][5][6]. Nevertheless, the development of more effective methods is still required for treating obesity, as all of the existing methods have potential side effects [7;8]. A specific molecular probe against mature adipocyte cells would be valuable for mitigating such side effects. Aptamers have been widely applied in the diagnosis and treatment of various diseases. They are oligonucleic acid molecules that adopt specific three-dimensional structures for recognizing their target molecules, which can be proteins, chemicals or heterogeneous cells, among others [9][10][11]. Aptamers are isolated from a library pool (oligonucleic acids; initial pool size of 10^14–10^15) by the Systematic Evolution of Ligands by EXponential enrichment (SELEX) method. The SELEX technique was developed in 1990 by Dr. Larry Gold and colleagues, who selected RNA ligands against T4 DNA polymerase using repeated rounds of in vitro selection [12]. Since then, the SELEX technique has been extended to variants such as the Spiegelmer, cell SELEX, capillary electrophoresis SELEX (CE-SELEX), Counter-SELEX, and Toggle SELEX [13][14][15][16][17]. Macugen, the first aptamer-based drug approved by the U.S. Food and Drug Administration (FDA), is offered by OSI Pharmaceuticals and used as a therapeutic agent for age-related macular degeneration (AMD) [18]. In addition, NeoVentures Biotechnology Inc. has successfully commercialized the first aptamer-based diagnostic kit for the detection of mycotoxins in grains. At present, several aptamers are being considered as therapeutic or diagnostic agents and are undergoing clinical trials [19].
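The exponential enrichment that gives SELEX its name can be illustrated with a small simulation. The sketch below is purely illustrative, not the paper's protocol: the retention probabilities, pool size, and two-species pool are invented for the example. In each round, sequences are retained with an affinity-dependent probability (mimicking binding and washing) and the survivors are "PCR-amplified" back to the original pool size, so rare high-affinity binders come to dominate within a handful of rounds.

```python
import random

random.seed(0)

# Toy model of SELEX enrichment (illustrative only; parameters are invented).
# Rare high-affinity binders survive a round with p = 0.80, while background
# sequences are mostly washed away (p = 0.05).
POOL_SIZE = 100_000
pool = ["binder"] * 100 + ["background"] * (POOL_SIZE - 100)
RETENTION = {"binder": 0.80, "background": 0.05}

def selex_round(pool):
    # Selection: keep each sequence with its affinity-dependent probability,
    # then "PCR-amplify" the survivors back up to the original pool size.
    survivors = [s for s in pool if random.random() < RETENTION[s]]
    return random.choices(survivors, k=POOL_SIZE)

fractions = []
for _ in range(12):          # the paper performs 12 rounds in total
    pool = selex_round(pool)
    fractions.append(pool.count("binder") / POOL_SIZE)

print(f"binder fraction after round 1:  {fractions[0]:.4f}")
print(f"binder fraction after round 12: {fractions[-1]:.4f}")
```

Because the per-round enrichment factor is roughly the ratio of the retention probabilities (0.80/0.05 = 16 here), the binder fraction grows geometrically until it saturates, which is why comparatively few rounds can suffice when the selection pressure is strong.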
Given this background, the SELEX technique can be applied to the detection of white adipocytes. Adipose tissue, mainly composed of adipocytes, is an important metabolic organ that serves as a modulator of energy homeostasis [20]. Obesity is induced when the body's energy balance is disrupted. In mammals, adipose tissue is typically classified into white adipose tissue (WAT) and brown adipose tissue (BAT) according to its function and morphological appearance [21]. WAT is used as a store of excess energy, and its cells contain a single large lipid droplet. BAT, a specialized form of adipose tissue, can generate heat for energy consumption as a thermogenic organ; brown adipose cells contain multiple smaller lipid droplets. Based on these characteristics, the development of obesity is closely related to the differentiation of white adipocytes [22]. The techniques employed in the diagnosis of obesity include physical examination, blood tests, body mass index (BMI), and skinfold tests, as well as X-ray techniques and body-density measurements. In this study, we attempted to isolate aptamers that specifically recognize mature adipocyte cells using 3T3-L1 cells, a typical white preadipocyte cell line. We then selected two aptamers that specifically bind to mature white adipocytes by the cell SELEX method using FACS. The isolated aptamers were even able to distinguish primary white adipocytes from primary brown adipocytes. These aptamers can be applied as valuable tools for a variety of anti-obesity approaches.

Ethics Statement
All procedures used in animal experiments were performed according to a protocol approved by the Animal Care and Use Committee of the Korea Research Institute of Bioscience and Biotechnology (Permit Number: KRIBB-ACE-13047). All surgery was performed under ether anesthesia, and all efforts were made to minimize suffering.
Aptamer Library and Primers
A single-stranded DNA (ssDNA) library was labeled with fluorescein isothiocyanate (FITC) and synthesized by Integrated DNA Technologies, Inc. (Coralville, IA, USA). The library contained 40 random nucleotides (nt) flanked by two 19-nt primer hybridization sites (5′-FITC-CGCGGAAGCGTGCTGGGCC-N40-CATAACCCAGAGGTCGAT-3′). For the amplification of the selected aptamer pool, a FITC-labeled forward primer (5′-FITC-GGGGAATTCGCGGAAGCGTGCTGGGCC-3′) and a reverse primer (5′-GGGGGGATCCATCGACCTCTGGGTTATG-3′) were used in the PCR process. Another forward primer was utilized for the cloning of the selected DNA pool. Additional nucleotide sequences (containing restriction enzyme sites) were added to the forward and reverse primers for efficient subcloning.

Cell SELEX Using FACS
To start the procedure, we prepared preadipocytes (day 0; negative cells) and mature adipocytes (cells cultured for 12 days in differentiation medium; target cells) from 3T3-L1 cells. In the first SELEX round, the FITC-labeled ssDNA library (10 nmol; initial pool size 10^16) dissolved in 20 µL DW was denatured at 95 °C for 5 min and then cooled on ice for 10 min to allow formation of secondary structure. During this step, salmon sperm DNA (0.1 mg/mL; Sigma, St Louis, MO, USA) dissolved in 300 µL of binding buffer (4.5 g/L glucose, 5 mM MgCl2 and 10% FBS in Dulbecco's PBS [Sigma]) was incubated with 1×10^6 target cells to inhibit non-specific binding. Next, the cells were incubated with the ssDNA library (10 nmol) and bovine serum albumin (1 mg/mL; Thermo Inc., Rockford, IL, USA) at 37 °C for 30 min. After centrifugation, the supernatant was removed and the cells were washed five times with 1 mL of binding buffer. The ssDNA library-bound cells were then enriched using a FACSAria cell sorter (BD Bioscience, San Jose, CA, USA), and the bound ssDNAs were eluted from the sorted cells by heating at 95 °C for 5 min.
The eluted DNAs were purified by phenol-chloroform (Sigma) extraction, a Sephadex G-25 column (Sigma) and ethanol precipitation. The purified ssDNAs were amplified by PCR with FITC-labeled primers. For the next round of selection, the single-stranded DNA population was obtained via strand separation of the PCR products (heat treatment at 95 °C for 10 min). After five rounds of selection, the binding time and the treated concentration of the ssDNA pool were decreased to 15 min and 10 pmol, respectively, while the other conditions were maintained until the final SELEX round (round 12). Negative selection was performed twice, during the third and eighth rounds of SELEX. To ensure the identification of highly specific aptamers for mature 3T3-L1 cells, we performed three additional SELEX rounds using the nine-round SELEX pool. The ssDNA pool of the twelfth round was cloned into DH5α using a TA cloning kit and was sequenced (Enzynomics, Daejeon, Korea). A total of 12 rounds of SELEX were performed, and two aptamers, MA-33 and MA-91, were obtained.

Aptamer Binding Assay by FACS and Confocal Microscopic Imaging
To monitor the enrichment of aptamer pools during SELEX, a 10 pmol ssDNA pool from each round was incubated with 1×10^6 mature adipocyte cells in 200 µL of binding buffer at 37 °C for 7 min. Selected aptamers were incubated with various cell lines under the same conditions. The cells were washed five times with 0.4 mL of binding buffer. The pellets were suspended in 20 µL of binding buffer, and 10 µL of cells were dropped on a glass slide. The FITC signal of the aptamers was detected with an LSM 510 META confocal microscope (Zeiss, Thornwood, NY, USA). The other half of the cells was resuspended in 0.2 mL of binding buffer, and the fluorescence was determined with a FACS Calibur flow cytometer (BD Bioscience).

Competition Binding Assay
Both FITC-labeled and unlabeled aptamers for MA-33 and MA-91 were used for the competition binding assay.
A 10-fold excess of unlabeled aptamer (4.5 µM) was incubated with mature 3T3-L1 cells at 37 °C for 7 min. After preincubation with the unlabeled aptamer, the other, FITC-labeled aptamer (450 nM) was added. After incubation at 37 °C for 7 min, the cells were washed five times with washing buffer, and the fluorescence was determined with a FACS Calibur flow cytometer (BD Bioscience).

Structure Prediction and Kd Determination
Secondary structures of the aptamers were predicted using the mfold program (The RNA Institute) [23]. We chose the aptamers with the most thermodynamically stable predicted structures after sequencing each aptamer. Individual aptamers were incubated with negative or positive cells. As the concentration of treated aptamer increased, the mean fluorescence intensity of the aptamer-coated cells was measured using FACS. Next, the equation Y = Bmax·X/(Kd + X) was used to calculate the equilibrium dissociation constant (Kd) of the aptamer-cell interaction via SigmaPlot (Jandel, San Rafael, CA, USA) (X: ligand concentration; Y: specific binding; Bmax: maximum number of binding sites).

Isolation, Culture, and Differentiation of Primary White and Brown Preadipocytes
White preadipocytes were isolated from subcutaneous areas (inguinal and epididymal fat) and visceral adipose tissue (mesenteric fat) of 8-wk-old male ICR mice (Jackson Laboratory, Bar Harbor, ME, USA). Additionally, brown preadipocytes were extracted from the interscapular brown fat pads of mice. The dissected tissues were digested for 45 min at 37 °C in an isolation buffer at pH 7.4 (123 mM NaCl, 5 mM KCl, 1.3 mM CaCl2, 5 mM glucose, 100 mM HEPES, 4% filtered BSA, and 1 mg/mL collagenase type II [Worthington, Lakewood, NJ, USA]). The undigested tissues were then removed using a 100-µm cell strainer, after which the remaining cells were centrifuged at 1,300 rpm for 3 min to pellet the white or brown preadipocytes.
The white preadipocyte cells were resuspended in DMEM/F12 [1:1; Gibco-Invitrogen] containing a 1% antibiotic-antimycotic solution and 10% BCS. Confluent cells were exposed to MDI containing 1 µM rosiglitazone in DMEM/F12. After 2 days, the cells were maintained in DMEM/F12 with 10 µg/mL insulin and 1 µM rosiglitazone (Sigma), with the medium replaced every 2 days, for 20 days [30]. Isolated brown preadipocytes were induced to differentiate into mature brown adipocytes, as described previously [31;32]. For differentiation, confluent brown preadipocytes were placed in high-glucose DMEM, 20% FBS, and a differentiation cocktail (0.5 mM 3-isobutyl-1-methylxanthine, 125 µM indomethacin, 0.5 µM dexamethasone, 20 nM insulin, and 1 nM 3,3′,5-triiodo-L-thyronine [T3] [Sigma]) for 2 days. Next, the medium was changed to a maintenance medium consisting of DMEM, 20% FBS, 1 nM T3 (Sigma), and 20 nM insulin, which was refreshed every other day.

Oil-Red-O Staining
After washing twice with PBS, the cultured cells were fixed with 10% formalin for 30 min at room temperature. The cells were then washed with distilled water and stained with a 0.3% filtered Oil-Red-O solution in 60% isopropanol (Sigma) for 30 min at room temperature. The stained cells were washed five times with distilled water and then dried. To extract the bound Oil-Red-O dye, isopropanol was added to the stained cells. The extracted samples were measured at 510 nm using a GeneQuant 1300 spectrophotometer (GE HealthCare, Uppsala, Sweden) [26][27][28][29]. Micrographs were obtained from triplicate samples.

Trypsin Treatment of Mature Adipocytes
Mature adipocytes were incubated with trypsin-EDTA (Gibco-Invitrogen) for 5 min. After washing two times with binding buffer, 1×10^6 cells were treated with 400 nM aptamers (library, MA-33 or MA-91) as described in the binding affinity test section.
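The one-site saturation model used above for Kd determination, Y = Bmax·X/(Kd + X), can be fitted without SigmaPlot. The sketch below is a minimal reimplementation of that fit under stated assumptions: the data are synthetic, generated from an assumed Kd of 33.1 nM (matching the value later reported for MA-91), and the concentration series and Bmax are invented for illustration. For a fixed Kd the model is linear in Bmax, so a one-dimensional grid search over Kd with an analytically optimal Bmax at each step suffices.

```python
# One-site specific binding: Y = Bmax * X / (Kd + X).
def one_site(x, bmax, kd):
    return bmax * x / (kd + x)

def fit_kd(xs, ys, kd_grid):
    """Least-squares fit over a Kd grid; for each candidate Kd the
    optimal Bmax is a linear least-squares solution."""
    best = None
    for kd in kd_grid:
        g = [x / (kd + x) for x in xs]
        bmax = sum(y * gi for y, gi in zip(ys, g)) / sum(gi * gi for gi in g)
        sse = sum((y - bmax * gi) ** 2 for y, gi in zip(ys, g))
        if best is None or sse < best[0]:
            best = (sse, kd, bmax)
    return best[1], best[2]

# Synthetic saturation data (aptamer concentration in nM vs. mean
# fluorescence, in arbitrary units) -- NOT measurements from the paper.
true_kd, true_bmax = 33.1, 100.0
xs = [5, 10, 25, 50, 100, 200, 400, 800]
ys = [one_site(x, true_bmax, true_kd) for x in xs]

kd_hat, bmax_hat = fit_kd(xs, ys, kd_grid=[k / 10 for k in range(10, 2000)])
print(f"fitted Kd = {kd_hat:.1f} nM, Bmax = {bmax_hat:.1f}")
```

With noise-free data the grid search recovers the generating parameters; with real FACS data one would instead obtain a least-squares estimate with an uncertainty, as SigmaPlot reports (e.g. the ± values quoted for MA-33 and MA-91).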
Enrichment of Aptamers for Mature Adipocytes by Cell SELEX Using Flow Cytometry
The mature adipocytes and preadipocytes from 3T3-L1, a typical cell line for investigating adipocytes in vitro, were used for positive and negative selection, respectively (Fig. S1). After the induction of adipogenic differentiation, the cell morphological patterns and the expression levels of the adipogenic markers (aP2, adiponectin, PPARγ, resistin and C/EBPα) were examined by FACS, western blotting, and real-time PCR analysis [29;33]. Then, to obtain aptamer candidates for mature adipocyte cells, we applied the cell SELEX method using flow cytometry (we termed this method FACS-SELEX) [5;34], which is a typical cell-based SELEX method. A random 40-mer single-stranded DNA (ssDNA) library was used to select aptamers specific for mature adipocytes. The amplified ssDNA pool after each round of selection was used for the next round of SELEX. Negative selection against preadipocyte cells was attempted twice, during the third and eighth rounds. After 11 rounds of selection, four representative round pools (rounds 3, 5, 9 and 11) were evaluated for their binding affinity towards mature adipocyte cells. In general, increasing the number of SELEX rounds led to a gradual increase in fluorescence intensity towards mature adipocytes, and specific aptamers with better binding affinity for mature adipocyte cells were enriched in each round pool. However, the peak shift displayed a tendency to move back toward the library peak during round 11 (Fig. 1A, lower panel). On the other hand, no peak shift was detected with preadipocyte cells (Fig. 1A, upper panel). To ensure highly specific aptamers for mature 3T3-L1 cells, we performed an additional three SELEX rounds using the nine-round SELEX pool. It is known that the SSC-H (side scatter) value in flow cytometry is well correlated with the degree of adipogenic differentiation [33;35].
Thus, differentiated adipocytes were incubated with the nine-round pool, and the cells were analyzed and sorted by FACS (Fig. S2). The aptamer pool for the next round was obtained only from fully mature adipocytes showing high SSC-H values (see the P3 region of Fig. S2-C). The specificity of the aptamer pool after 12 rounds of SELEX was examined using various cell lines, in this case HEK-293, C2C12, and HeLa cells, by FACS as well as by confocal image analysis (Figs. 1B and 1C). As shown in the left panels of Figs. 1B and 1C, a peak shift of the 12-round aptamer pool relative to the library was detected only in mature 3T3-L1 adipocyte cells. A similar result was observed in the confocal image analysis (see the right panels of Figs. 1B and 1C), implying that the aptamer pool after 12 rounds of SELEX is highly specific toward mature adipocyte cells. To examine whether the targets of the 12-round aptamer pool were membrane proteins on the target cells, we digested the membrane proteins with trypsin before adding the 12-round aptamer pool [36]. After binding of the 12-round aptamer pool to trypsin-treated cells, no peak shift was detected (Fig. 1D). In contrast, the affinity of the 12-round pool for the target cells was retained when mature adipocyte cells were treated with 2% EDTA to preserve the membrane proteins. These results clearly suggested that the majority of the binding partners of the 12-round aptamer pool were membrane proteins on mature adipocyte cells.

Isolation of Aptamers Specific for Mature Adipocytes from the Enriched 12-round Aptamer Pool
Based on these data, we concluded that the 12-round aptamer pool holds a number of potential aptamer candidates with high affinity toward mature adipocytes. Therefore, 91 aptamers were cloned and sequenced from the final-round aptamer pool (Table S1). However, no conserved sequence motif could be obtained from the sequence alignment analysis.
This may be due to the number of targets on the surface of the mature adipocytes. As an initial step, the secondary structures of the 91 aptamers were predicted using the mfold program (http://mfold.rit.albany.edu). Subsequently, their binding affinity for both pre- and mature adipocytes was analyzed using FACS. Finally, three aptamers with high affinity for mature adipocytes were isolated, and their equilibrium dissociation constants (Kd) were calculated (Table 1). These aptamers, named MA-33, MA-64, and MA-91, showed affinity towards mature adipocytes in the nanomolar range.

Verification of the Specificity of Two Aptamers for Mature Adipocytes
We chose the two aptamers with the highest affinity towards mature adipocytes, and binding assays of these aptamers were performed using flow cytometry (Fig. S3). The MA-33 and MA-91 aptamers showed very tight binding towards mature adipocytes, with Kd values of 142.9 ± 3.8 nM and 33.1 ± 2.9 nM, respectively (Table 1 and Fig. 2A). On the other hand, these aptamers did not bind to undifferentiated 3T3-L1 preadipocyte cells (Fig. 2A). These results clearly demonstrate that the two selected aptamers efficiently discriminate between preadipocytes and mature adipocytes and bind specifically only to mature adipocyte cells. Fig. 2B shows the secondary structures of the MA-33 and MA-91 aptamers predicted using a secondary structure prediction program. The specificity of these two aptamers was further confirmed in various cell lines, such as HEK-293, C2C12, and HeLa cells, using confocal microscopic analysis (Fig. 2C). As shown in Fig. 2C, MA-33 and MA-91 bound exclusively to mature adipocytes. Next, we tested whether the binding partners of the selected aptamers were membrane protein(s) of the target cells by FACS (Fig. 2D). The results indicated that the peaks of MA-33 and MA-91 moved to the right only when the aptamers were mixed with 2% EDTA-treated mature adipocyte cells, as compared with control cells and trypsin-treated mature adipocyte cells. These results clearly suggested that the target(s) of the two selected aptamers were proteins expressed on the cell surfaces of mature adipocytes.

Figure 1. Binding affinity test of each round pool with mature adipocytes as the target cell. A. After a total of 11 rounds of SELEX processing, the binding affinities of four representative round pools (rounds 3, 5, 9 and 11) were analyzed for preadipocytes (negative cells) and mature adipocytes (target cells) using flow cytometry. B. The MA-12-round pool, a pool more enriched towards mature adipocytes than the MA-9-round pool, was obtained through three additional SELEX rounds using the MA-9-round pool (see Fig. S2). The binding affinity of the MA-12-round pool was analyzed for preadipocytes (negative cells) and mature adipocytes (target cells) using flow cytometry and confocal imaging. C. The binding specificity of the MA-12-round aptamers was verified using three cell lines (HEK-293, C2C12, and HeLa cells). Each cell line was incubated with the library or with the MA-12-round pool. Next, the cells were monitored by FACS (left) and confocal imaging (right). D. To examine the binding of the MA-12-round aptamers towards trypsin-treated target cells, differentiated 3T3-L1 cells were treated with trypsin or 2% EDTA. Then, the cells were incubated with the MA-12-round pool and analyzed by FACS. doi:10.1371/journal.pone.0097747.g001

Monitoring of MA-33 and MA-91 Binding during Adipogenic Differentiation
Next, aptamer binding during the adipogenic differentiation of 3T3-L1 cells was monitored over time. MA-33 or MA-91 was incubated with cells harvested at 0, 2, 4, 6, 8, 10 or 12 days of adipogenic differentiation. Aptamer binding was then assessed using flow cytometry. As shown in Fig.
3, MA-33 and MA-91 did not bind to the cells until 4 days after the induction of adipogenic differentiation. However, MA-33 and MA-91 bound abruptly to the cells on day 6, after which the binding affinity was maintained, and increased, until the late stages of adipogenic differentiation (Fig. 3). The binding patterns of MA-33 and MA-91 were generally similar to each other, but they differed significantly from that of Adipo-8, a previously reported aptamer specific for mature adipocytes. These results indicated that the binding target(s) of MA-33 and MA-91 might differ from that of Adipo-8. Additionally, we carried out a binding assay of FITC-labeled MA-91 in the presence of excess unlabeled MA-33 (and vice versa). Even when a ten-fold excess of unlabeled MA-33 was preincubated with mature adipocytes, it did not influence the binding of FITC-labeled MA-91 (Fig. S5). Therefore, MA-33 and MA-91 are assumed to have different binding target(s) on the surface of mature white adipocytes. Furthermore, these results clearly imply that the binding of MA-33 and MA-91 is not nonspecific binding to cell surface proteins, but is specific to certain target protein(s).

Selected Aptamers can Distinguish Primary White Adipocytes from Primary Brown Adipocytes
In this study, we obtained aptamers specific for mature adipocytes using the 3T3-L1 preadipocyte cell line. Next, to validate the specific binding of the aptamers toward primary adipocyte cells, we prepared two types of primary adipocytes, white and brown, from white and brown adipose tissues isolated from ICR mice. White adipocyte precursor cells were separated from both subcutaneous areas (inguinal and epididymal fat) and the visceral adipose tissue (mesenteric fat) of 8-week-old male ICR mice. The isolated precursor cells were cultured and then induced to differentiate into mature white adipocytes. Lipid accumulation was assessed by Oil-Red-O staining (Fig.
4A), which indicated that the isolated precursor cells were well differentiated under our experimental conditions. FITC-labeled MA-33 and MA-91 aptamers were incubated with undifferentiated or differentiated cells, and the fluorescence intensity was detected using FACS and confocal image analysis. The results clearly indicated that the MA-33 and MA-91 aptamers bind only to primary mature white adipocyte cells and do not recognize primary white precursor cells (Figs. 4B to 4E). Next, to test the possibility that the selected aptamers can distinguish mature white adipocytes from mature brown adipocytes, the MA-33 and MA-91 aptamers were incubated with primary brown adipocyte cells. Brown precursor cells were isolated from the interscapular brown fat pads of mice one day after birth and were induced to differentiate into mature brown adipocytes by culturing in a brown differentiation medium. The efficient differentiation of the brown adipocytes was confirmed by Oil-Red-O staining (Fig. 5A). Unexpectedly, the selected aptamers did not bind to differentiated brown adipocytes, unlike mature white adipocytes (Figs. 5B to 5E). Based on these results, we concluded that the two selected aptamers can distinguish between white adipocytes and brown adipocytes; in particular, they can specifically detect mature white adipocyte cells. Additionally, the specificity of these aptamers is retained not only in cell lines but also in primary cells.

Discussion and Conclusion
Many studies have suggested the potential use of aptamers in therapeutic approaches, diagnostic methods, and basic science applications. This has been facilitated by the SELEX technique, repeated rounds of in vitro selection, and advancements in the use of RNA and DNA aptamers since the 1990s [12;37]. Since the discovery of aptamers, the SELEX process has been modified in various ways, and the duration of a selection experiment has been reduced from six weeks to three days [38].
In fact, several aptamers are already used as drug delivery systems or as diagnostic tools. Our research goal was to isolate aptamers specific for mature adipocytes. In this study, FITC-labeled ssDNA aptamers were used as the library for the FACS-based cell SELEX process to obtain aptamers specific for mature adipocytes. The 3T3-L1 line, a well-characterized white adipocyte model cell line for studying obesity, was chosen as the target cell line. The transcriptional activation and repression of adipocyte genes have been clearly described during the progression of 3T3-L1 preadipocyte differentiation [39;40]. To obtain aptamers with high specificity for mature adipocytes, negative selection against 3T3-L1 preadipocytes was performed twice during the 12-round selection process. The aptamer population with high fluorescence intensity toward mature adipocytes increased through round nine, but this pattern was not observed with the preadipocytes (Fig. 1A). After adipogenic induction, the cells comprised a heterogeneous population. The granularity of the cells (SSC) gradually increased during differentiation and was well correlated with the degree of differentiation [33;35]. Therefore, we performed additional SELEX rounds, using mature adipocytes with high SSC values, to obtain more specific aptamers towards mature adipocytes (Fig. S2). From a total of 12 rounds of SELEX, a highly specific MA-12-round aptamer pool was obtained, and its specificity was confirmed in various cell lines by FACS and confocal image analysis (Figs. 1B and 1C). One of the most important aspects of this study is the reduction in the number of SELEX cycles achieved by using FACS sorting [34]. On average, more than 20 SELEX rounds are required to enrich an aptamer population with high specificity [41;42]. For example, Tang et al. [43] selected a specific aptamer recognizing Ramos cells, a Burkitt's lymphoma cell line, over 23 SELEX rounds.
After 22 rounds of selection, AptTOV1 was selected using ovarian cancer cells as the target [44]. In contrast, we enriched and isolated specific aptamers in only 12 SELEX rounds. Adipo-8, an aptamer specific for differentiated 3T3-L1 cells identified by Liu et al. [45], was selected via 19 rounds of cell SELEX, even though the target cell is identical. A total of 91 candidates were sequenced after 12 rounds of SELEX, and their binding affinities for preadipocytes and mature adipocytes were investigated. Finally, MA-33 and MA-91 were selected as aptamers specific for mature adipocytes (Fig. S3). The two selected aptamers can distinguish differentiated 3T3-L1 cells from various other cells (Fig. 2C). MA-91, which showed the lowest Kd value among the three selected candidates, was further tested to determine its properties. We compared the binding affinity of MA-91 with that of Adipo-8 in differentiated 3T3-L1 cells (Fig. S4). Although the binding temperature originally used for Adipo-8 differed from our selection condition, the binding affinity of Adipo-8 was retained at our binding temperature. The affinity of MA-91 was slightly stronger than that of Adipo-8. Furthermore, MA-33 and MA-91 could detect mature primary white adipocytes, but not primary brown adipocytes (in either the pre- or mature form). This indicates that our aptamers detect proteins expressed only on the membrane of mature white adipocytes. Additionally, we anticipate that the target protein(s) of these aptamers can serve as biomarker(s) for mature white adipocytes and are therefore valuable as target(s) for monitoring and regulating obesity [46;47]. Until recently, liposuction has been the preferred treatment for removing fat from various parts of the human body [48]. Like any major surgery, this technique carries various risks, such as contour irregularities, fat embolism and bleeding [49].
One of the biggest risk factors is the inability to specifically distinguish fat tissue from other tissues. Many different liposuction techniques have been developed to reduce side effects, but they are not perfect solutions. In this study, we suggest that MA-33 and MA-91 offer a new way to address this problem. Furthermore, we isolated the aptamers at 37°C, indicating high potential for in vivo applications [50]. Therefore, the two aptamers of this study are expected to be applicable in obesity care. Considering the results obtained thus far, MA-33 and MA-91 can both be used as aptabodies (biomarker probes for a target cell that replace an antibody) for mature white adipocytes. They are anticipated to have a wide range of applications in the following areas. (1) Since most membrane proteins are receptor-type, MA-33 and MA-91 may inhibit receptor signaling related to adipogenic differentiation [51]. For example, insulin signaling regulates glucose homeostasis and the energy balance by lipid storage in adipose tissue [52;53]. Signaling occurs via the insulin receptor, and aberrant signaling results in the clinical manifestation of obesity, diabetes, and different cancers. In fact, mice with fat-specific disruption of the insulin receptor gene have low fat mass levels, experience a loss of body weight, and are protected against obesity-related glucose intolerance [54]. (2) Our study indicates that specific delivery of anti-obesity drug(s) to mature white adipocytes by targeted vehicles, via the internalization characteristics of the aptamer(s), is possible [55]. (3) This approach can also be used to obtain homogeneous adipocytes for both basic research and clinical applications [56]. At present, we are examining whether the selected aptamers can bind to a third class of adipocytes called brite adipocytes (also known as beige cells) [57][58][59][60][61]. Brite adipocytes are categorized as brown adipocyte-like cells, which reside in some white adipose tissues.
In conclusion, we isolated two aptamers with highly specific binding activity toward mature white adipocytes. These aptamers can be valuable as potential tools for basic and clinical approaches related to anti-obesity treatment. Figure S1 Schematic presentation of the FACS-Cell SELEX method used to isolate aptamer(s) for mature adipocytes. This method integrates the Cell SELEX technique with a FACS sorting system. To monitor the enrichment of aptamer pools during SELEX, the FITC-labeled ssDNA library (10^14-10^16) was incubated with target cells (preadipocytes or mature adipocytes). Then, the ssDNA library-bound cells were sorted using FACS. For the next round, purified ssDNA was amplified by PCR with FITC-labeled primers. We repeated this process for the enrichment of aptamers. After the final round of SELEX, aptamer candidates were identified by cloning and sequencing. (TIF) Figure S2 Additional SELEX rounds were performed with the gated mature adipocytes. The characteristics of the differentiated 3T3-L1 cells were determined using FITC fluorescence (x-axis) versus side scatter (y-axis) during FACS analysis. The differentiated 3T3-L1 cells (A) were incubated with the library (B) or the MA-9-round pool (C). The cells were divided into three sections according to the side-scatter values (P1, P2, P3; differentiation degree) on dot plots. The aptamers were isolated and eluted from only the P3 region, and this fraction was then amplified by PCR for the next round. We repeated this process three times to create the MA-12-round pool. (TIF) Figure S3 Binding test of the two selected aptamers. Ninety-one candidates in total were sequenced, and the binding affinity levels for preadipocytes and mature adipocytes were analyzed. Among the aptamers tested, MA-33 (A) and MA-91 (B) were selected based on their specificity and affinity characteristics. (TIF) Figure S4 Comparative experimental studies among MA-33, MA-91, and Adipo-8. Mature 3T3-L1 cells were incubated with MA-33, MA-91, or Adipo-8.
The binding affinity levels were then confirmed using FACS analysis. (TIF) Figure S5 Competitive binding assay between MA-33 and MA-91. FITC-labeled MA-91 aptamer was incubated with a 10-fold excess of unlabeled MA-33. Then, the binding of MA-91 toward mature adipocytes was measured using FACS analysis. (TIF)
7,043.4
2014-05-20T00:00:00.000
[ "Biology" ]
The Third Polarization of Light We are all taught that there are only two polarizations of light because Maxwell's equations only support two polarizations. This is mathematically true for the electromagnetic fields, but we have learned since the days of Maxwell that the "real" electromagnetic field is not the electromagnetic field tensor Fμν (composed of electric and magnetic field terms) but rather the electromagnetic vector potential Aμ. When considered carefully, this requires a third polarization of light with very unusual properties. This third polarization of light does not generate electric or magnetic fields but should be detectable by its impact on supercurrents or quantum interference. It is also unavoidable since it automatically appears under Lorentz transformations to different moving frames. Introduction As scientists and engineers, most of us are more familiar with Maxwell's equations than with the underlying vector potential. The vector potential Aμ was first introduced as a mathematical convenience to solve two of Maxwell's four equations automatically. This was helpful algebraically, but since no one knew how to observe Aμ directly, it was taken purely as a solving technique. With the advent of superconductivity, it became clear that the primary field driving supercurrents is the vector potential itself. In the theory of superconductors, the London equation [1] [2] shows the supercurrent proportional to the vector potential: a direct measurement of the vector component of A in the material.
London Equation: J = −(n_s e² / (m c)) A. Here e is the charge of the electron, n_s the density of superconducting electrons, m the mass of the electron, and c the speed of light. The impact of this equation sank in more clearly when the world's most sensitive magnetometer was developed, called a SQUID (Superconducting Quantum Interference Device), shown schematically in Figure 1 [3] [4]. A SQUID goes through a complete signal cycle every magnetic flux quantum h/2e = 2.07 × 10^−15 webers and is sensitive enough to measure the magnetic fields generated in human brains [5]. A SQUID has a loop of superconducting material through which a magnetic field passes to give a total magnetic flux equal to the magnetic field B times the area of the loop. Amazingly, SQUIDs detect a magnetic flux inside the loop even if the magnetic field itself never touches any part of the SQUID circuitry. The only field that interacts with the SQUID circuitry is the vector potential A, which produces a circular current according to the London equation above. Since the device shows large effects without any E or B field touching the circuit, we have a convincing demonstration that the vector potential A is a real field. Since E and B come from derivatives of the vector potential, we conclude that the vector potential A is the fundamental field of all electromagnetic phenomena. Once we have established that the vector potential is the fundamental E&M field, one quickly finds a third polarization of light which has no electric or magnetic fields but which will interact with superconductors and quantum interference detectors. For convenient reference, the derivative relationships between the vector potential A_µ, the E&M field tensor F_µν, and the E and B fields themselves are F_µν = ∂_µ A_ν − ∂_ν A_µ, with E = −∇A_0 − (1/c) ∂A/∂t and B = ∇ × A (Gaussian units).
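As a quick sanity check on the quoted value, the flux quantum h/2e can be computed from the exact SI constants. This short sketch is ours, not part of the original article:

```python
# Compute the magnetic flux quantum Phi_0 = h / (2e) from the exact
# SI values of Planck's constant and the elementary charge.
h = 6.62607015e-34    # Planck constant, J*s
e = 1.602176634e-19   # elementary charge, C

phi_0 = h / (2 * e)   # magnetic flux quantum, webers

# Agrees with the 2.07e-15 Wb quoted in the text to better than 1%.
assert abs(phi_0 - 2.07e-15) / 2.07e-15 < 0.01
```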
The Third Polarization With that background, we consider the form of a plane wave using the vector potential. The most general form any plane wave can have is given by Equation (3), which shows a plane wave of wave vector k_ν:

A_µ(x) = ε_µ e^{i k_τ x^τ},   (3)

where all indices µ and τ run from 0 to 3, with 0 being time. This most general form of a plane wave is usually then constrained by the Lorentz condition of Equation (4),

∂_µ A^µ = 0, i.e., k_µ ε^µ = 0,   (4)

which ensures that there is no radiating charge density in the E&M waves. Putting the two together, the following form is often used for a plane wave of X polarization moving in the Z direction:

A_µ = (0, a, 0, 0) e^{i(kz − ωt)},   (5)

where a is the polarization amplitude. This is an intuitive and convenient form for the vector potential which satisfies the Lorentz condition, but when we transform it to a different Lorentz frame, it does not maintain its form. For instance, if we transform to a coordinate system moving at velocity −βc in the x direction, we know that the direction of light propagation will tilt from the z direction toward the x direction. As the Lorentz transform approaches the speed of light (β → 1), the direction of propagation will approach the pure x direction, and the electric vector will be perpendicular, along the z axis (Figure 2). Equation (6) shows the computation. If the form of the vector potential A_µ stayed constant, we would have almost the entire spatial part of the transformed A_µ vector aligned with the E vector along the z axis as β → 1. However, the Lorentz transformation does quite the opposite; it keeps the z component at zero for all velocities. Thus a standard Lorentz transform takes us immediately to a different form of light.
The New Polarization: The reason this happens while keeping all the proper and familiar transformations in the E and B fields is that there is a new mode in the vector potential which produces only zero E and B fields. When A_µ is a multiple of k_µ, the E&M tensor F_µν vanishes. This is the form of the new polarization:

New Polarization: A_µ(x) = k_µ e^{i k_τ x^τ}.

The familiar X and Y polarization states will still exist, of course, simply by projecting out this third state. Note that this mode satisfies the Lorentz condition in Equation (4) simply because, for light, k_µ k^µ always equals zero. Furthermore, since our standard Lorentz transformations are required for any covariant quantity such as A_µ, this third mode appears automatically when transforming between moving frames. Figure 1. A SQUID (Superconducting Quantum Interference Device) measures the magnetic flux inside the loop using the voltage across a pair of Josephson junctions. The signal goes through one oscillation every magnetic flux quantum (h/2e) and is the most sensitive magnetic field detector in the world. It measures the magnetic flux even when no magnetic or electric field touches any part of the circuit. Figure 2. Here we see how a Lorentz transform changes the form of a plane wave. The original plane wave has the simple popular form where the spatial A vector is parallel to the E vector. A Lorentz transform changes that waveform into a completely different form where, in the limiting case of a new frame moving close to −c in the X direction, the spatial part of the A vector is orthogonal to the E vector.
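The central claim, that a vector potential proportional to k_µ produces zero E and B fields, is easy to verify numerically. The sketch below is our illustration, not the author's code: it builds A_µ(x) = k_µ cos(k_ν x^ν) for a lightlike k in a (+,−,−,−) metric and evaluates F_µν = ∂_µ A_ν − ∂_ν A_µ by central differences:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric (+,-,-,-)
k_up = np.array([1.0, 0.0, 0.0, 1.0])    # lightlike k^mu, wave along z
k_dn = eta @ k_up                        # lowered index: k_mu

def A(x):
    """Third-polarization plane wave: A_mu(x) = k_mu cos(k_nu x^nu)."""
    return k_dn * np.cos(k_dn @ x)       # k_dn @ x = k_mu x^mu

def F(x, h=1e-6):
    """Field tensor F_mu_nu = d_mu A_nu - d_nu A_mu, central differences."""
    dA = np.zeros((4, 4))
    for mu in range(4):
        step = np.zeros(4)
        step[mu] = h
        dA[mu] = (A(x + step) - A(x - step)) / (2 * h)
    return dA - dA.T

x = np.array([0.3, 0.1, -0.7, 2.0])      # arbitrary spacetime point
assert abs(k_dn @ k_up) < 1e-12          # Lorentz condition: k_mu k^mu = 0
assert np.max(np.abs(F(x))) < 1e-6       # every E and B component vanishes
```

Swapping in an ordinary transverse mode, e.g. A_µ = (0, 1, 0, 0) cos(kz − ωt), yields nonzero F_µν components, so this check distinguishes the third polarization from the familiar two.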
1,603
2015-02-27T00:00:00.000
[ "Physics" ]
Relationships Between Copper-Related Proteomes and Lifestyles in β Proteobacteria Copper is an essential transition metal whose redox properties are used for a variety of enzymatic oxido-reductions and in electron transfer chains. It is also toxic to living beings, and therefore its cellular concentration must be strictly controlled. We have performed in silico analyses of the predicted proteomes of more than one hundred species of β proteobacteria to characterize their copper-related proteomes, including cuproproteins, i.e., proteins with active-site copper ions, copper chaperones, and copper-homeostasis systems. Copper-related proteomes represent between 0 and 1.48% of the total proteomes of β proteobacteria. The numbers of cuproproteins are globally proportional to the proteome sizes in all phylogenetic groups and strongly linked to aerobic respiration. In contrast, environmental bacteria have considerably larger proportions of copper-homeostasis systems than the other groups of bacteria, irrespective of their proteome sizes. Evolution toward commensalism, obligate, host-restricted pathogenesis or symbiosis is globally reflected in the loss of copper-homeostasis systems. In endosymbionts, defense systems and copper chaperones have disappeared, whereas residual cuproenzymes are electron transfer proteins for aerobic respiration. Lifestyle is thus a major determinant of the size and composition of the copper-related proteome, and it is particularly reflected in systems involved in copper homeostasis. Analyses of the copper-related proteomes of a number of species belonging to the Burkholderia, Bordetella, and Neisseria genera indicate that commensals are in the process of shedding their copper-homeostasis systems and chaperones to even greater extents than pathogens.
INTRODUCTION β proteobacteria form a large phylogenetic group mainly composed of environmental species and a few important pathogens, notably of the Burkholderia, Bordetella, and Neisseria genera. β proteobacteria also comprise a few known phytopathogens, commensals, endophytes, symbionts, and endosymbionts.
Thus, members of this phylogenetic group represent a broad range of lifestyles, though information is scarce for most identified species. Copper is a transition metal whose redox properties are widely used notably in electron transfer chains for respiration and photosynthesis, and in enzymes involved in oxidoreduction and hydrolytic reactions. Thus, bacteria need to acquire copper ions from the milieu (Stewart et al., 2019). However, Cu(I), which can cross the cytoplasmic membrane, is toxic at high concentrations (Fu et al., 2014;Djoko et al., 2015). Toxicity is thought to be caused by the reactive hydroxyl radical generated in Fenton and Haber-Weiss type reactions, and by copper displacing iron from its sites in metallo-proteins (Solioz, 2018). In particular, the biogenesis of 4Fe-4S centers is vulnerable to copper (Macomber and Imlay, 2009), and copper can also displace iron from assembled 4Fe-4S clusters (Azzouzi et al., 2013;Djoko and McEwan, 2013). As a consequence of this toxicity, bacteria have developed ways to strictly control intracellular copper concentrations (Nies, 2003;Ladomersky and Petris, 2015). A number of systems that ensure copper homeostasis, including extrusion of copper from the cytoplasm or from the periplasm and oxidation of Cu(I) into less toxic Cu(II), have been described in a few model bacteria including Escherichia coli, Salmonella typhimurium, Pseudomonas aeruginosa, Enterococcus hirae, and Mycobacterium tuberculosis (Wolschendorf et al., 2011;Fu et al., 2014). In contrast, little is known for most other bacteria (Solioz, 2018). Expression of copper-related proteins is controlled by specific two-component signal transduction systems and transcriptional regulators.
Typically, transcription of genes or operons involved in the export of copper or its detoxification is activated when excess copper is detected in the periplasm using two-component systems such as CusRS or CopRS, or in the cytoplasm using regulators of the MerR, ArsR, or CsoR families (Ma et al., 2009;Capdevila et al., 2017;Chandrangsu et al., 2017). Acquisition systems in case of copper starvation are less well-known. Few specific import systems of copper have been described, and they are mainly dedicated to the assembly of respiration and denitrification complexes (Ekici et al., 2012;Khalfaoui-Hassani et al., 2018). β proteobacteria include Cupriavidus metallidurans, aptly named from its capacity to survive in environments heavily contaminated with transition metals, as it was first isolated from the sludge of a decantation tank in a zinc factory (Mergeay et al., 1985). This organism is extremely well equipped to deal with excessive concentrations of those elements including copper, either by efflux, complexation or reducing precipitation (Von Rozycki and Nies, 2009;Grosse et al., 2016;Herzberg et al., 2016). Rather far from this type of niche is the obligate, host-restricted pathogen Bordetella pertussis, the whooping cough agent (Melvin et al., 2014;Linz et al., 2019). B. pertussis lives in the respiratory mucosa of humans, mainly as an extracellular pathogen. It has no known environmental reservoir and is believed to be transmitted directly between humans. We have initiated the study of copper homeostasis in B. pertussis and discovered that it has considerably streamlined its defense against copper relative to model pathogenic bacteria. Bordetella bronchiseptica, a close relative of B. pertussis with a larger genome and a more promiscuous lifestyle that allows it to survive in the environment in addition to infecting mammals (Taylor-Mulneix et al., 2017), has more copper-regulated defense systems against excess of this metal than B.
pertussis (our unpublished observations). This finding prompted us to analyze the predicted copper-related proteomes of a large range of β proteobacteria to investigate more broadly the links between their lifestyles and the homeostasis of this metal. Retrieval of β Proteobacterial Proteomes All β proteobacterial species with genomic sequences in the NCBI database (release Nov 2018) were collected, resulting in 465 distinct species. Among them, those for which the genomes are completely sequenced were selected. A single bacterial species was selected for most genera, based on the numbers of publications available in the PubMed database on each species. However, we selected one representative isolate of all the species of the three β proteobacterial genera (Bordetella, Burkholderia, and Neisseria) that include important pathogens. The RefSeq genome files of the selected species were retrieved whenever available, and assembled genome files were used in the other cases. We generated the predicted proteomes of the selected species by translating all their annotated open reading frames. The proteins thus obtained were analyzed in the CLC main package to predict protein domains according to the Pfam nomenclature. In silico Searches for Cu-Related Domains An exhaustive search was conducted to identify all types of known copper-related protein domains that can be found in β proteobacteria. Instead of using a BLAST approach to retrieve putative members of known copper-related protein families, we used family signatures as found in Pfam. Firstly, all proteins whose three-dimensional structures contain a copper ion were retrieved from the metal-specific MetalPDB database (Putignano et al., 2018) as described previously (Sharma et al., 2018). The 1397 distinct 3-dimensional structures of proteins with bound Cu were found to correspond to 4391 protein sequences, as some entries include several polypeptide chains.
Pfam predictions were performed for all sequences, and 233 distinct Pfam domains were identified in that set. Among those, we determined which domains provide amino acyl residues that coordinate copper. The binding sites of Cu are mainly formed by the side chains of His, Cys, and Met residues (Rubino et al., 2011), as well as Asp and Glu according to MetalPDB. A metal binding site was considered plausible if all coordinating residues belong to a single predicted Pfam domain, yielding 27 potential Cu-binding domains. Mismetallated domains and eukaryote-specific domains were discarded. Secondly, we used the BACMET database to identify additional Cu-related Pfam domains absent from MetalPDB as described (Li et al., 2019). In BACMET, 97 proteins are reported to be involved in copper resistance in eubacteria, but this list is highly redundant. This additional search identified two new domains, CopD and CopB. BACMET also led to the identification of domains involved in metal sensing, in particular histidine sensor-kinases of two-component systems and cytoplasmic transcriptional regulators of the MerR family. However, we chose to disregard metal-sensing domains of regulation systems in our analyses, because of the difficulty of determining their metal specificity based on sequence alone (Ibanez et al., 2015). Functional studies and three-dimensional structures are generally required to establish selectivity (Pennella et al., 2003;Ma et al., 2009). Thirdly, we searched the Pfam database by text mining for copper-related domains that might have been missed in the first two approaches. A few additional domains were found, namely CutC, NosL, Cu-oxidase_4, and NnrS. After having identified all known copper-related domains, we searched for their presence in each of our predicted β proteobacterial proteomes using hmmsearch (hmmer.org). For most domains, there is no ambiguity regarding their specificity for copper. However, for others, further analyses were necessary.
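In practice, a per-proteome domain census of this kind can be scripted around HMMER's tabular output. The sketch below is illustrative only: the column layout follows the standard hmmsearch --tblout format, but the E-value cutoff is our assumption, not a setting reported by the authors.

```python
from collections import Counter
from pathlib import Path

def count_domain_hits(tblout_path, evalue_cutoff=1e-5):
    """Count significant hits per Pfam domain in one hmmsearch --tblout file.

    Whitespace-delimited HMMER3 columns: target name, target accession,
    query (domain) name, query accession, full-sequence E-value, ...
    The cutoff is an assumed illustrative threshold.
    """
    counts = Counter()
    for line in Path(tblout_path).read_text().splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip comment lines and blanks
        fields = line.split()
        domain_name = fields[2]               # Pfam domain used as query
        full_seq_evalue = float(fields[4])
        if full_seq_evalue <= evalue_cutoff:
            counts[domain_name] += 1
    return counts
```

Running such a scan for every copper-related profile over every predicted proteome yields a per-species copy-number table of the kind the analyses rely on.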
To identify metal-specific exporters, we retrieved all the putative proteins identified as P1-type ATPases (domain E1-E2_ATPases) and RND (domain ACR-tran) transporters according to Pfam in our set of proteins. We performed BlastP analyses against the 17,545 proteins of the TCDB database (http://www.tcdb.org/download.php; TCDB FastA sequences), which provides subcategories of transporters according to their substrates. Only hits with E values of 0.0 were selected, namely the copper resistance ATPases (TCDB category 3.A.3.5) and the metal-specific RND HME transporters (heavy metal efflux; TCDB 2.A.6.1). Note that ATPases providing copper to respiratory complexes (Gonzalez-Guerrero et al., 2010;Hassani et al., 2010) are part of a distinct TCDB category (3.A.3.27) that we did not consider in our analysis. Multicopper oxidases (MCO) harbor at least two copper-containing domains that can also be found in other types of proteins of various functions. X-ray structures of bona fide MCOs were analyzed to determine their specific domain organizations, which are Cu-oxidase_3/Cu-oxidase/Cu-oxidase_2; Cu-oxidase_3/Cu-oxidase_2; and Cu-oxidase_3//Cu-oxidase_2. The same approach was used for copper-containing nitrite reductases, yielding the following domain organizations: Cu-oxidase_3/Cu-oxidase; Cu-oxidase_3//Cytochrome_CBB3; Copper-bind/Cu-oxidase_3; Copper-bind/Cu-oxidase_3/Cu-oxidase_2. For nitrous oxide reductases (N2OR), the nos_propeller domain was used as the defining signature according to MetalPDB. Three complexes of aerobic respiration, cytochrome bo ubiquinol oxidases, cytochrome C oxidases aa3, and cytochrome C oxidases cbb3, use copper and contain a COX1 domain. Cytochrome C oxidases cbb3 can be defined by the presence of a FixO Pfam domain in one of their components. TIGRFAM (https://www.jcvi.org/tigrfams) was used to determine the identities of the other two complexes. Cytochrome bo ubiquinol oxidases have a component with a CyoB domain signature (TIGR02843).
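The listed domain organizations translate directly into a lookup table. This sketch is our illustration: it classifies the ordered Pfam domain content of a single polypeptide, so the organizations split across two chains (marked with '//' above) are left out.

```python
# Single-chain domain organizations taken from the text above.
MCO_ORGANIZATIONS = {
    ("Cu-oxidase_3", "Cu-oxidase", "Cu-oxidase_2"),
    ("Cu-oxidase_3", "Cu-oxidase_2"),
}
NITRITE_REDUCTASE_ORGANIZATIONS = {
    ("Cu-oxidase_3", "Cu-oxidase"),
    ("Copper-bind", "Cu-oxidase_3"),
    ("Copper-bind", "Cu-oxidase_3", "Cu-oxidase_2"),
}

def classify_cu_oxidase_domains(domains):
    """Return 'MCO', 'nitrite reductase', or None for an ordered domain list."""
    key = tuple(domains)
    if key in MCO_ORGANIZATIONS:
        return "MCO"
    if key in NITRITE_REDUCTASE_ORGANIZATIONS:
        return "nitrite reductase"
    return None
```

Matching the full ordered tuple, rather than testing each domain independently, is what lets the same Cu-oxidase domains be assigned to different protein families.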
For cytochrome oxidase aa3 identification, the TIGRFAM signature QoxB (TIGR02882) was used. As some nitric oxide reductases also harbor this domain, we used the KEGG oxidative phosphorylation and nitrogen metabolism reference pathways to assign unambiguously the cytochrome C oxidases aa3 in our set of proteomes. For Copper Storage Proteins (CSPs) (Dennison et al., 2018), the only available signature is a domain of unknown function (DUF326; PF02860), and therefore we created a new, unique CSP signature. A DUF326 domain is found in 18 proteins of our set that are annotated in GenBank RefSeq files as "four-helix bundle copper-binding protein." Using those keywords, 22 additional proteins were identified. A sequence alignment of the 40 proteins generated with ClustalW was used to create a new CSP profile, called CSP.hmm (Supplementary File S1), using hmmbuild. Searching for this profile in our proteomes retrieved seven additional proteins. As CSPs can be periplasmic or cytoplasmic, they were distinguished based on the prediction of a Tat signal peptide using the TatFind server. Eight proteins belong to the extracytoplasmic group (Csp1/2_Ecsp), and the other 39 are cytoplasmic (Csp3_Ccsp). Hierarchical Clustering Analyses Hierarchical clustering was performed using the Cluster 3.0 software to group bacteria based on the occurrence and abundance of the various types of copper-related proteins. We used medians of copy numbers of each type of proteins in each species and dispersion of the values around the medians for hierarchical clustering, with the Correlation (uncentered) similarity metric parameter. All other parameters were kept at their default values. Hierarchical clustering was also used to identify co-occurrences of the various proteins in our bacterial set without centering around medians.
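The same grouping can be reproduced with standard scientific Python: Cluster 3.0's "correlation (uncentered)" similarity corresponds to cosine distance in SciPy. The copy-number matrix below is made up for illustration; only the workflow mirrors the one described.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical copy numbers: rows = species, columns = protein types
# (e.g., Cu-ATPases, RND HME exporters, MCOs, cytochrome oxidases).
counts = np.array([
    [2, 1, 0, 3],   # environmental-like profile
    [2, 1, 1, 3],   # similar environmental-like profile
    [0, 0, 0, 1],   # reduced profile, respiration only
    [0, 1, 0, 0],   # reduced profile, a single homeostasis system
])

# Uncentered correlation similarity corresponds to cosine distance.
distances = pdist(counts, metric="cosine")
tree = linkage(distances, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
```

With these toy profiles the first three species fall into one cluster and the last into its own, because the dominant respiration column makes the first three count vectors point in similar directions.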
Note that those analyses were performed on 86 species, as only one species was selected for the Burkholderia, Bordetella, and Neisseria genera to avoid their overrepresentation in our bacterial set. The dendrograms were exported to FigTree for visualization. Phylogenetic Analyses A phylogenetic tree was built based on 16S rRNA sequences using one species for each genus (86 sequences). They were aligned with cmalign of the Infernal package using the bacterial small subunit ribosomal RNA profile (SSU_rRNA_bacteria, RF00177). The MEGA X software (Kumar et al., 2018) was used to build a Neighbor-Joining tree with maximum likelihood distances based on the Tamura-Nei nucleotide substitution model (Tamura and Nei, 1993) and 1000 bootstrap replicates (Felsenstein, 1985). The phylogenetic tree was visualized with FigTree. Diversity of β Proteobacteria After collecting all β proteobacterial species whose genomes are completely sequenced, we selected one species for each genus in order to obtain as wide a range of lineages as possible while keeping the analyses to a manageable level. For the Bordetella, Neisseria, and Burkholderia genera, we selected one isolate as representative of each species. Altogether, this yielded 119 distinct species of β proteobacteria, with the largest possible variety of genera and lifestyles, including a few unclassified species (Supplementary Table S1). As no completely sequenced genomes from the Ferritrophicales, Ferrovales, and Procabacteriales orders were available at the time of our analyses, no representatives of those phylogenetic groups were included. Many species in our set are described to live in natural milieus such as water and soils and will hereafter be called environmental bacteria, although limited information is available in many cases. The environmental species reported to frequently cause infections in specific conditions were placed in a category called environmental/opportunists.
Bona fide pathogens were sorted according to their types of hosts, yielding animal pathogens and phytopathogens. The single species of fungus pathogen was included in the group of environmental bacteria. We also identified a few endophytes, symbionts, endosymbionts, and commensal species. Commensals responsible for opportunistic infections were placed in a separate category. We generated the complete predicted proteomes of all selected isolates and determined the Pfam domain(s) present in each protein. The largest proteome sizes in the various categories of bacteria are those of opportunists, environmental/opportunists and pathogens, although there are considerable variations within each of those groups (Figure 1A). Endosymbionts are at the other end of the spectrum, as expected. Strikingly, the proteome size of Verminephrobacter eiseniae, a heritable extracellular earthworm symbiont, is similar to those of environmental β proteobacteria (Pinel et al., 2008;Lund et al., 2014). Identification of Copper-Related Proteins in the Selected Set of β Proteobacteria We inventoried all predicted proteins that harbor Cu-related domain(s). As several of those domains are frequently associated with one another, some proteins or protein complexes in our set include more than one Cu-related domain (Table 1). Copper-related proteins were sorted into several categories. The first encompasses cuproproteins, i.e., proteins that harbor copper ion(s) necessary for their activity. Fourteen distinct families of cuproproteins are present in β proteobacteria. The second group corresponds to proteins mediating copper homeostasis, in particular defense against copper. Resistance to excess copper mainly consists in transporting it from the cytoplasm to the periplasm via P1B-type copper ATPases (Migocka, 2015) and from the periplasm to the external milieu via heavy metal exporters (HME) of the resistance/nodulation/division (RND) superfamily of transporters (Kim et al., 2011).
An additional line of defense is to detoxify Cu(I) by oxidation to Cu(II) using multicopper oxidases (MCOs) (Singh et al., 2004;Kaur et al., 2019). Note that in addition to mediating Cu homeostasis, MCOs are bona fide cuproproteins. Regarding the other facet of copper homeostasis, i.e., import of copper as a micronutrient, few systems have been described. Reports that specific P1B-type copper ATPases might import rather than expel copper have thus far not been convincingly substantiated (Solioz, 2018). Specific major facilitator superfamily (MFS) importers take up copper across the cytoplasmic membrane, notably for cytochrome C oxidase assembly (Ekici et al., 2012;Khalfaoui-Hassani et al., 2018). TonB-dependent receptors (TBDR) that transport copper across the outer membrane have also been reported, notably in Pseudomonas (Yoneyama and Nakae, 1996;Wunsch et al., 2003). However, as the MFS and TBDR families include many paralogs but no clear signatures specify copper transporters, we chose not to include those two classes of transporters in this analysis. Other proteins potentially involved in copper homeostasis include the periplasmic protein CopC and the inner membrane protein CopD, reported to form a system that sequesters copper in the periplasm (Cooksey, 1994), and other proteins of ill-defined functions, the outer membrane protein CopB (Lawaree et al., 2016) and the periplasmic protein CopK (Monchy et al., 2006). The third category includes copper-specific chaperones (Robinson and Winge, 2010;Capdevila et al., 2017), most of which play more than one role. Cu chaperones transfer copper for cuproprotein assembly or participate in copper extrusion by handing copper over to exporters. They also buffer copper, which may contribute to limiting its toxicity (Corbett et al., 2011;Osman et al., 2013;Vita et al., 2016;Solioz, 2018;Novoa-Aponte et al., 2019;Utz et al., 2019).
We chose to place Csp3_Ccsp (cytosolic copper storage proteins) in both the homeostasis and copper chaperone categories, based on its putative functions (Dennison et al., 2018). A first observation is that the proportion of copper-related proteins in a given species is not necessarily related to its proteome size (Figures 1A,B). Among those, cuproproteins account for 40% of copper-related proteins, homeostasis proteins for 28%, and copper chaperones for 32%, using medians. However, those proportions vary depending on the lifestyles. Thus, pathogens, commensals, and symbionts have greater proportions of cuproproteins and chaperones and lower proportions of homeostasis systems than environmental bacteria (Figures 1C-E, Supplementary Figure S1 and Supplementary Table S2). Endophytes and symbionts have few copper-related proteins, which are mostly cuproproteins, and hardly any homeostasis systems (Supplementary Figure S1 and Supplementary Table S2). For phytopathogens, the dispersion of the values is too large and the size of the sample is too small to identify trends. In environmental bacteria, by far our largest sample, the situation is rather contrasted, likely depending on their respective niches. Using medians, copper-related proteins make up 0.71% of their proteomes, of which 34, 33, and 33% are cuproproteins, copper-homeostasis proteins, and copper chaperones, respectively (Figures 1B,C).

FIGURE 1 | Profiles of copper-related proteomes as a function of the lifestyles of β proteobacteria. The bacteria were grouped according to their lifestyles, and various data were extracted from their predicted proteomes. E, OE, AP, PP, OC, C, S, EnP, and EnS represent environmental, opportunistic/environmental, animal pathogen, phytopathogen, opportunistic/commensal, commensal, symbiotic, endophytic, and endosymbiotic bacteria, respectively. (A) Sizes (in numbers of proteins) of the total proteomes as a function of the lifestyles of β proteobacteria. (B) Proportions (in %) of the copper-related proteomes relative to the entire proteomes. The copper-related proteomes include all predicted proteins of the three categories described in the text (cuproproteins, copper-homeostasis proteins, and copper chaperones). (C-E) Proportions of cuproproteins, copper-homeostasis proteins, and copper chaperones relative to the copper-related proteomes in selected bacterial groups, namely environmental bacteria (C), animal pathogens (D), and commensals (E). Note that in panel E both commensals and commensals/opportunists were included in order to increase the number of species in that group. The red symbols in panel E correspond to O. formigenes. In each panel, the different species are each represented by one symbol. The orange horizontal lines represent the medians of the values for the various groups.

Nitrosomonadales have particularly high proportions of proteins of all three categories relative to the sizes of their proteomes (Supplementary Table S2). Some Burkholderiales, including C. metallidurans, are well equipped to deal with excess copper. The champion is Herminiimonas arsenicoxydans, with 1.48% of its medium-size proteome made of copper-related proteins (Figure 2).

Analysis by Types of Proteins

Use of copper in β proteobacteria is largely linked to aerobic respiration, as the most represented cuproproteins are cytochrome C oxidases aa3 and cbb3 (Figure 2, Supplementary Table S2 and Supplementary Figure S1). Very few species in our set lack both types of cytochrome C oxidases, including one uncharacterized Bordetella species. Cytochrome bo ubiquinol oxidase complexes are present in half of our bacterial set. The only bacteria totally devoid of genes for aerobic respiration are Oxalobacter formigenes, a commensal of the human digestive tract, and two endosymbiont candidates with minute proteomes, Tremblaya princeps and Vidania fulgoroideae.
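The per-species percentages and group medians reported above (e.g., copper-related proteins as a fraction of the total proteome) can be computed along these lines. The species names and counts below are invented for illustration; only the arithmetic mirrors the text.

```python
# Sketch of the proportion/median summaries used in the text.
# All species records here are made up, not the study's values.

from statistics import median

# Per-species counts of cuproproteins, homeostasis proteins, chaperones,
# plus total predicted proteome size (hypothetical numbers).
species = {
    "env_sp1":  {"cupro": 20, "homeo": 19, "chap": 19, "proteome": 5600},
    "env_sp2":  {"cupro": 15, "homeo": 16, "chap": 14, "proteome": 4800},
    "pathogen": {"cupro": 12, "homeo": 4,  "chap": 9,  "proteome": 3900},
}

def cu_fraction(rec):
    """Percentage of the total proteome that is copper-related."""
    total_cu = rec["cupro"] + rec["homeo"] + rec["chap"]
    return 100.0 * total_cu / rec["proteome"]

fractions = {name: round(cu_fraction(rec), 2) for name, rec in species.items()}
med = median(fractions.values())
print(fractions, med)
```

The same pattern (per-species ratio, then median across a lifestyle group) applies to the within-category proportions such as the 34/33/33% split quoted for environmental bacteria.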
Neisseriales have few cytochrome C oxidases or ubiquinol oxidases, but they possess complexes for respiration on oxidized nitrogen species, two of which, nitrite reductase (Mellies et al., 1997) and N2OR, are cuproproteins. Genes for those proteins are also found in some environmental bacteria, including ammonium-oxidizing bacteria (Casciotti and Ward, 2001), some Rhodocyclales and some Burkholderiales (Figure 2 and Supplementary Figure S1). NnrS is a cuproprotein linked to nitrosative stress (Patra et al., 2019). It is present in most β proteobacteria, in several copies in Rhodocyclales, Nitrosomonadales, Neisseriales, and some Burkholderiales. Small copper-containing plastocyanin-like electron transfer proteins and cupredoxins are also broadly present in β Proteobacteria, with genes found at 1-3 copies in more than half of the genomes. The largest numbers are in environmental bacteria (Supplementary Table S2). Periplasmic MCOs are both cuproproteins and proteins involved in copper homeostasis, using O2 to mediate oxidation of Cu(I) to Cu(II) as a means to reduce its toxicity. The numbers of MCO-coding genes vary from zero in symbionts, some animal pathogens and anaerobes, to four or five in C. metallidurans, H. arsenicoxydans, Ralstonia solanacearum, and Nitrosomonas europaea (Supplementary Table S2).

TABLE 1 | The proteins were sorted by functional categories: cuproproteins, copper-homeostasis proteins, and copper chaperones. The Pfam names and identity numbers of the domains that interact with copper in each protein are provided when available. Note that multicopper oxidases are both cuproproteins and copper homeostasis proteins, and Csp3_Ccsp were placed in both homeostasis proteins and copper chaperones. Asterisks indicate proteins whose definition required additional in silico analyses (see section "Materials and Methods").
Cu, Zn superoxide dismutases (SodC) protect the cell from superoxide stress, either exogenous or endogenous (Battistoni, 2003; Broxton and Culotta, 2016; Larosa and Remacle, 2018). sodC genes are found in half of the species, with two copies in a few environmental species. Most endosymbionts and a majority of Neisseriales are devoid of sodC, consistent with micro-aerophilic environments (Figure 2 and Supplementary Figure S1). A single gene copy for laccase (multicopper polyphenol oxidase) is found in most β proteobacteria. Finally, other categories of cuproproteins are present in more limited numbers of species, most likely for specific metabolisms (Figure 2 and Supplementary Figure S1).

Cu Homeostasis in β Proteobacteria

Resistance to copper is mediated by several mechanisms involving Cu-specific ATPases, RND transporters, and MCOs, in some cases coupled with export across the outer membrane (Lawaree et al., 2016), and by sequestration by copper-binding proteins, including copper chaperones, in the two cellular compartments. As chaperones also participate in Cu traffic within cells, they are considered separately. Genes for copper-specific P1B-type ATPases are found in 106 species from our set, making them the most widespread mechanism of defense against copper in β proteobacteria. The largest numbers, up to five, are found in specific environmental bacteria (Figure 3B). Genes for RND-mediated export by heavy metal exporters (HME) are present in 62 species. Up to 10 RND HME-coding genes are present in some environmental bacteria, irrespective of the sizes of their proteomes, a strong indication that those bacteria are exposed to transition metals in their environment (Figure 3A). It is, however, difficult to determine which of those might be involved in copper efflux. In contrast, other environmental bacteria, symbionts, and animal pathogens do not possess such genes at all, indicating that this defense feature is strongly correlated with lifestyle.
For instance, Neisseriales and most animal pathogens have no HME genes but have at least one Cu-ATPase gene (Figure 2 and Supplementary Figure S1). As the two types of systems expel copper from different cellular compartments, its removal from the cytoplasm might be sufficient for bacteria that have no environmental phase in their life cycles. Among other proteins reported to mediate Cu homeostasis, several copies of CopC and CopD are found in bacteria that live in metal-rich environments and in some Burkholderiae, whereas symbionts and most animal pathogens and phytopathogens are devoid of them (Figure 2). This suggests a role in defense against copper, in line with early reports (Silver and Ji, 1994). Other poorly characterized proteins, including CopB, CopK, CutA1, and CutC, are absent from the genomes of most β Proteobacteria or present in low copy numbers, indicating that they likely represent specific adaptations of a limited number of genera.

Cu Metallochaperones

The isolated HMA domain, called CopZ in model bacteria, has several roles: it sequesters Cu in the cytoplasm and also hands Cu(I) to Cu-ATPases, either as a means of defense or to supply copper for cytochrome C oxidase assembly (Corbett et al., 2011; Utz et al., 2019). Among the single-domain HMA proteins in our set, the greatest numbers are found in environmental bacteria, including C. metallidurans, H. arsenicoxydans, and Polaromonas naphtalenivorans. Very few β proteobacteria do not have any, supporting its role in copper homeostasis in that phylogenetic group. CusF is a periplasmic Cu-binding protein that transfers copper to the RND HME transporter CusABC for extrusion of Cu to the external milieu in model bacteria. The largest numbers of cusF genes are found in B. multivorans, Alicycliphilus denitrificans, H. arsenicoxydans, Dechloromonas suillum, and Polaromonas naphtalenivorans, i.e., opportunists or environmental bacteria that possess multiple RND HME systems (Figure 3C).
However, cusF is present in a number of species devoid of RND HMEs, in particular in animal pathogens including all pathogenic Bordetellae (Figure 2 and Supplementary Figure S1). It must thus fulfill another role in the absence of RND HME systems. The PCuAC and ScoI-SenC chaperones are involved in the assembly of respiration or photosynthetic complexes or of nitrite reductase (Jen et al., 2015), and they have also been reported to participate in copper homeostasis (Trasnea et al., 2016). They are found in most β proteobacterial proteomes, except for O. formigenes, Candidatus Symbiobacter mobilis, endosymbionts and others that have few or no genes for aerobic respiration (Supplementary Figure S1). NosL is a Cu chaperone involved in the assembly of N2OR (Zumft, 2005), and accordingly it is mostly found in species with that enzyme. Finally, genes for copper storage proteins (Dennison et al., 2018) Csp3_Ccsp (cytoplasmic Cu storage) are present in a number of environmental bacteria as well as in several Bordetellae. As for Csp1/2_Ecsp (periplasmic Cu storage) proteins, they are only found in a few species, including several Neisseriales (Supplementary Figure S1).

Co-occurring Copper-Related Proteins in β Proteobacteria

Hierarchical clustering was performed on 86 species with a single representative of each bacterial genus, including Burkholderia cepacia, B. bronchiseptica, and N. gonorrhoeae. Most of the co-occurrences of copper-related proteins in that bacterial set revealed by those analyses were expected based on the functions of the respective proteins (Figure 4). Thus, RND HME exporters and CusF, which form export systems across the outer membrane, are often found together.

FIGURE 4 | Clustering of copper-related proteins in β proteobacteria. Hierarchical clustering was performed for the 31 protein types and 86 species (including one species of each of the Neisseria, Burkholderia, and Bordetella genera) to identify co-occurrences of proteins in β proteobacteria. FigTree was used to visualize the results. The number of nodes between two proteins is negatively correlated with their co-occurrence.

Similarly, isolated HMA domains have been described to transfer copper to Cu-ATPases for export from the cytoplasm, and accordingly, the two proteins cluster in our set. We also observed co-occurrence of RND HME and Cu-ATPases. This is in good agreement with the report that the two systems can synergize in the defense against copper (Padilla-Benavides et al., 2014). The chaperone NosL clusters with N2OR, as expected from its role in N2OR assembly. Other associations revealed by our analyses include MCO with the OMP CopB, cupredoxin with SenC, Csp3 with Cyto_bo, PcuAC and Cu-oxidase_4, and Csp1/2 with DUF386. Some of those may provide indications on the putative functions of little-characterized copper-related proteins, e.g., for the assembly of specific complexes.

Classification of β Proteobacteria According to Their Copper-Related Profiles

We also performed clustering of the bacterial species based on their respective copper-related proteomes and compared this classification with a 16S-RNA-based phylogenetic tree (Figures 5A,B). This analysis was performed on 86 species as above to avoid overrepresentation of the Burkholderia, Neisseria, and Bordetella genera. As our results have indicated that the copper homeostasis protein complements found in β proteobacteria appear to correlate with lifestyles better than cuproproteins or chaperones, we first used the homeostasis subset of copper-related proteins to perform hierarchical clustering. Interestingly, those analyses yielded a tree in which bacteria with a host-associated lifestyle form a separate cluster (top branch in Figure 5B) from environmental bacteria, which form several other large clusters, most likely related to their niches. Outliers include O.
formigenes, a commensal with a very different set of copper-related proteins from those of other commensals, and two other bacteria. Hierarchical clustering gives rather different results from the 16S-RNA-based phylogenetic tree built with the same 86 species (Figure 5A). In an attempt to refine the sorting of bacteria with host-associated lifestyles, we performed a second round of hierarchical clustering with the subset of bacteria (21 species) found in the upper branch of the tree shown in Figure 5B, this time based on their entire complements of copper-related proteins. This analysis revealed distinct groups (Figure 5C). Those found in group 1 are totally dependent on a host cell. They are all endosymbionts, except for Candidatus S. mobilis, which forms a consortium by maintaining cell-to-cell contact with Chlorobium chlorochromatii, a non-motile photolithoautotrophic green sulfur bacterium (see Supplementary Table S1). Unlike C. chlorochromatii, Candidatus S. mobilis fully depends on this symbiosis. Bacteria found in group 2 are extracellular but host-dependent, i.e., symbionts, commensals, and pathogens. One exception is Polynucleobacter necessarius, an endosymbiont of a protist. Its large proteome and its free-living relative Polynucleobacter asymbioticus suggest that this symbiosis evolved recently. Bacteria found in group 3 live in the environment, even if one of them, B. bronchiseptica, is also an animal pathogen. Altogether, thus, the known copper-related proteomes of β proteobacteria correlate reasonably well with their lifestyles and niches.

Copper-Related Proteins in Specific β Proteobacterial Genera

We took advantage of the availability of the full genomic sequences of large numbers of species of Bordetella (15), Burkholderia (11), and Neisseria (10) to perform more detailed analyses of the relationship between lifestyle and copper-related proteomes. All Neisseriae have small-size proteomes and are mostly commensals (Liu et al., 2015). However, N.
meningitidis, a commensal of the human nasopharynx, is responsible for life-threatening meningitis or sepsis when it breaches the epithelial barrier, and N. gonorrhoeae, an obligate human-restricted pathogen, infects the genital tract. In contrast, the 11 species of Burkholderia all have large proteomes. They are environmental species, phytopathogens, or patho-opportunists that can cause serious infections (Mahenthiralingam et al., 2005; Cui et al., 2016). With 15 representatives, the genus Bordetella displays more varied lifestyles, including obligate host-restricted pathogens, environmental species, wide-host-range pathogens, commensals, opportunists, and uncharacterized species (Linz et al., 2019) (Supplementary Table S1). The proportions of copper-related proteins relative to the total proteomes vary more widely among Burkholderiae and Bordetellae than among Neisseriae (Figure 6). Environmental and opportunistic species of the three genera generally have more cuproenzymes, in particular those involved in aerobic respiration in most Bordetella and Burkholderia species, than obligate pathogens or commensals. In contrast, Neisseriae have few Cu-containing subunits of aerobic respiratory chains, but they harbor other cuproenzymes, i.e., nitrite reductase and/or N2OR, that are absent from most Bordetellae and Burkholderiae. Neisseriae are devoid of MCOs, unlike the other two genera. Interestingly, plastocyanin-like proteins are found in all Neisseriae and a majority of Bordetellae, but they are absent from all but one Burkholderia. Major differences between lifestyles are again reflected in the sets of copper homeostasis proteins. Neisseriae have shed most of their Cu homeostasis systems, which represent only between 7 and 18% of their copper-related proteomes, much lower than the β-proteobacterial median (27%). This contrasts starkly with Burkholderiae, which are replete with copper homeostasis proteins and chaperones.
In that group, Burkholderia multivorans has the largest proportions of total copper-related proteins and of copper homeostasis systems, while the phytopathogens Burkholderia glumae and Burkholderia plantarii have fewer such proteins. Bordetellae present more varied patterns of copper-related proteins than the other two genera. The obligate, host-restricted pathogens Bordetella holmesii and Bordetella avium have the smallest proteomes and very few defense systems, in contrast with the environmental Bordetella petrii, which has the largest proportions of cuproproteins, copper homeostasis systems, and copper chaperones. Interestingly, differences are conspicuous among environmental species in their complements and copy numbers of copper-homeostasis proteins, suggesting specific adaptations to distinct niches by horizontal gene transfers or gene duplications. For instance, Bordetella flabilis has one of the largest proteomes among environmental Bordetellae but fewer copper-homeostasis systems than B. petrii or Bordetella bronchialis, possibly indicating that it is in the process of adaptation to a more restricted niche. Bordetella sp. N has the largest proteome of Bordetellae so far, which predicts an environmental niche. However, it has hardly any homeostasis systems or chaperones, at odds with a bona fide environmental lifestyle. Another intriguing isolate is Bordetella sp. J329, isolated from a patient. It is devoid of Cyt C oxidases and SodC, which are rare features among Bordetellae.

Copper-Related Proteomes in Other Proteobacteria

Finally, to determine whether our findings with β Proteobacteria could be generalized to other phylogenetic groups, we selected 30 species of α and γ Proteobacteria of various lifestyles (pathogens, symbionts, and environmental species), with fully sequenced and assembled genomes, and we analyzed their copper-related proteomes as above (Supplementary Table S3).
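The hierarchical clustering used in these analyses would normally be run with dedicated statistical or phylogenetics tools. As a rough illustration of the underlying idea only, a minimal single-linkage agglomeration over presence/absence profiles of copper-related protein families might look like the following; the species names and profiles are invented, and the Jaccard distance is one of several reasonable choices, not necessarily the study's metric.

```python
# Minimal single-linkage clustering sketch over presence/absence profiles.
# Species, profiles, and the distance metric are illustrative assumptions.

def jaccard_distance(a, b):
    """Distance between two sets of copper-related protein families."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def single_linkage(profiles, n_clusters):
    """Greedy agglomeration: repeatedly merge the two closest clusters."""
    clusters = [{name} for name in profiles]

    def dist(c1, c2):
        # Single linkage: distance between clusters = closest pair of members.
        return min(jaccard_distance(profiles[x], profiles[y])
                   for x in c1 for y in c2)

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: dist(clusters[p[0]], clusters[p[1]]))
        clusters[i] |= clusters.pop(j)
    return clusters

# Invented presence/absence profiles of copper-related families.
profiles = {
    "env1":  {"CuATPase", "RND_HME", "MCO", "HMA"},
    "env2":  {"CuATPase", "RND_HME", "HMA"},
    "path1": {"CuATPase", "HMA"},
    "endo1": {"COX2"},
}
print(single_linkage(profiles, 2))
```

On this toy input the two homeostasis-rich species group with the pathogen, while the endosymbiont-like profile is left in its own cluster, mirroring the qualitative separation described in the text.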
Similar to β Proteobacteria, environmental species have larger proportions of their copper-related proteomes dedicated to homeostasis than the other groups, and symbionts have hardly any copper homeostasis systems and chaperones (Figure 7). However, differences between pathogens and environmental species are less marked in this analysis, most likely because of the limited sample size and because several species defined as pathogens are also environmental, such as Legionella pneumophila and Vibrio cholerae. Altogether, thus, the trends observed in this smaller set confirm the correlation between lifestyles and copper homeostasis.

DISCUSSION

In the evolution of life on Earth, the use of copper has been linked to the appearance of molecular oxygen, as oxidation of insoluble Cu(I) to soluble Cu(II) made copper bio-available for enzymatic oxido-reductions, hydrolysis reactions, and electron transfer chains (Solioz, 2018). Most aerobic or facultative aerobic bacteria use copper, as several protein complexes involved in aerobic respiration, i.e., cytochrome bo ubiquinol oxidases, type cbb3 cytochrome C oxidases, and type aa3 cytochrome C oxidases, include copper-containing subunits. In contrast, many anaerobes do not use copper at all, and thus in general aerobic bacteria have larger cuproproteomes than anaerobic ones (Ridge et al., 2008). The link between oxygen and copper utilization is a reason why the overwhelming majority of β proteobacteria have cuproproteins, with the exception of endosymbionts, which have shed most of their metabolic capacities. Indeed, most β proteobacteria in our set are aerobes or facultative aerobes. Median numbers show that the proportions of cuproproteins relative to the sizes of the predicted proteomes are rather similar between pathogens, commensals, and environmental bacteria, but smaller in symbionts and endosymbionts. This is also broadly true for copper chaperones, many of which are involved in the assembly of energy-generating complexes.
FIGURE 7 | ... Supplementary Table S3. Three groups (panel A: environmental species, panel B: pathogens, and panel C: symbionts) were made according to lifestyles, and the proportions (in %) of cuproproteins, copper-homeostasis proteins, and copper chaperones relative to the complete copper-related proteomes were determined for each species. The orange horizontal lines represent the medians of the values for the various groups.

In β Proteobacteria, two factors appear to determine the proportions of proteins involved in copper homeostasis: the size of the global proteome of each species and its lifestyle. Unsurprisingly, bacteria with larger proteomes generally have more defense systems than smaller-proteome species, as they probably live in less constant environments and thus need to adapt to a variety of stressful conditions, possibly including high concentrations of toxic metals. The sizes of copper-homeostasis proteomes are mainly determined by the lifestyles of the species. Thus, as a general rule, environmental bacteria have relatively large proportions of their proteomes dedicated to copper homeostasis, irrespective of the sizes of their total proteomes. Copper-homeostasis proteins are more abundant relative to the total proteomes, by a factor of three, in environmental bacteria than in the other bacterial groups. The evolution from environmental bacteria to pathogens, commensals, and symbionts in β proteobacteria is globally characterized by the shedding of copper homeostasis systems. In other words, natural selection specifically favors the elimination of copper defense genes from bacteria in the course of their adaptation to eukaryotic host niches. This appears to be the case in other Proteobacteria as well. However, the absence of known copper homeostasis genes does not preclude the existence of other, yet unidentified homeostasis systems. Non-specific systems might also be involved, as exemplified by yersiniabactin in E.
coli, a siderophore that also participates in copper homeostasis (Chaturvedi et al., 2012; Koh et al., 2017). Other small molecules such as glutathione, bacillithiol, and mycothiol may also play important roles in metal tolerance (Helbig et al., 2008). Among β Proteobacteria, two groups of bacteria harbor greater than average numbers of copper-homeostasis genes: those that live in environments polluted by metals, and Nitrosomonadales. The first group is exemplified by extremely resistant organisms such as C. metallidurans or H. arsenicoxydans, which thrive in soils heavily contaminated with transition metals thanks to large numbers of export, storage, and detoxification systems. Overrepresentation of copper-related proteins involved in homeostasis relative to cuproproteins and copper chaperones in environmental bacteria corroborates the idea that the use of copper as a cofactor and the defense against copper are not correlated but evolved independently, as in other phylogenetic groups (Solioz, 2018). Note that C. metallidurans and H. arsenicoxydans also have larger proportions of cuproproteins than most β proteobacteria, in part because they have four or five MCOs, cuproproteins that also contribute to protection against copper. Another factor that is likely to strongly contribute to the abundance of copper defense systems in environmental bacteria is the necessity to survive predation by protozoa in soils and water. The relationship between protozoa and bacteria is a longstanding one, and copper is part of the arsenal used by the former to poison and kill their bacterial prey (Hao et al., 2016). In consequence, there is a strong correlation between the presence of copper efflux systems in bacteria and their ability to survive in amoebae and other protozoa. Those systems in turn may contribute to the ability of environmental bacteria to cause opportunistic infections in specific settings, such as in immunocompromised hosts.
Indeed, phagocytes, which use transition metals to kill invading microbes as part of the innate immune response (Fu et al., 2014; Djoko et al., 2015), share pathways of intracellular copper trafficking with protozoa (Hao et al., 2016). It is thus very likely that the abundance of copper homeostasis systems in many environmental β proteobacteria has been selected for by the need to resist killing by protozoa, and those defenses then enabled opportunists to resist killing by phagocytic cells in their occasional mammalian hosts. Pathogenic bacteria of mammals represent a special case with respect to copper. Copper is sequestered away from the pathogens by host proteins in mucosa or body fluids, and therefore bacteria need to specifically acquire it for assembly of their own cuproproteins. On the other hand, as outlined above, the innate immune system uses copper as a line of defense, with macrophages importing copper into the phagolysosome compartment as a means to destroy bacteria, together with oxidative and nitrosative stress and antimicrobial peptides. Animal pathogens must thus strike a fine balance to deal with copper starvation or excess depending on the specific environments encountered in their hosts. We have started to address this issue with the host-restricted, obligate pathogen B. pertussis and discovered that it has lost a number of genes coding for defense systems present in model bacterial pathogens. According to phylogenetic classifications, Bordetellae are close to an environmental/opportunist species, Achromobacter xylosoxidans, and to an endosymbiont, Candidatus Kinetoplastibacterium blastocrithidii. Comparisons between A. xylosoxidans (genome: 7.3 Mb), K. blastocrithidii (0.86 Mb), B. pertussis (4.086 Mb), and B. bronchiseptica (5.34 Mb), the latter of which can both infect mammals and survive in the environment, show that A.
xylosoxidans has considerably more Cu-ATPases, HME transporters, MCOs, and other homeostasis systems than the two Bordetella species, but only slightly more cuproproteins and chaperones. B. bronchiseptica is in an intermediate situation regarding its defenses against copper (see below). Unlike B. pertussis, B. bronchiseptica has two distinct but interconnected cycles, in mammalian hosts and in amoebae (Taylor-Mulneix et al., 2017). Protozoan predation may thus have positively selected for the copper homeostasis systems that remain in B. bronchiseptica but are no longer functional in B. pertussis. At the other end of this spectrum, the endosymbiont K. blastocrithidii has no homeostasis systems. Such a scenario fully supports the global trends observed in our large sample of β proteobacteria. Interestingly, however, most differences between B. pertussis and B. bronchiseptica with respect to copper-related systems are not found in their genomes but at the level of transcription (our unpublished observations). Thus, B. pertussis possesses genes coding for a Cu-ATPase and an MCO that are no longer expressed and regulated by copper, while in B. bronchiseptica both are strongly upregulated. B. pertussis is thought to have derived by genomic reduction from a common ancestor close to current-day B. bronchiseptica (Diavatopoulos et al., 2005), and our observations support the idea that streamlining of the B. pertussis genome is an on-going process. Thus, a caveat of in silico genomic analyses is that they can only reveal which genes are present or absent, but not whether they are functional. With very small total proteomes and few copper-related proteins, endosymbionts represent one extreme of our bacterial set. The proportions of cuproproteins relative to their total proteomes are hardly lower than in other β proteobacterial groups, and thus those genes appear to follow the general genomic reduction toward a symbiotic lifestyle.
The proportions of their copper-homeostasis proteins relative to their total proteomes are markedly lower than in other bacterial groups, indicating that those genes are shed faster than others. Genes coding for chaperones tend to follow the same route. If chaperones only provided copper for cuproenzyme assembly, one would expect them to decrease in similar proportions as genes for cuproproteins. That they are lost in greater proportions suggests that some of them also served defense purposes, likely by sequestration, in the bacterial ancestors of those symbionts. Copper is increasingly used as an antibacterial agent, notably in hospitals, in agriculture, and in animal breeding (Grass et al., 2011; Lemire et al., 2013; Schmidt et al., 2016). As shown here, environmental bacteria are well equipped to deal with such aggressions. Alarmingly, a growing number of opportunistic infections are caused by such bacteria, including C. metallidurans and other β proteobacteria that we have classified as "environmental opportunists" based on our review of the literature (Langevin et al., 2011; Bayhan et al., 2013; Bilgin et al., 2015; Jardine et al., 2017). Finding ways to limit the emergence of new opportunists favored by the widespread use of copper as an antibacterial agent will thus become increasingly important. Finally, our study includes a number of bacterial species with bioremediation potential. Deciphering the interplay between metallic stress and stress caused by toxic chemicals might help make the best use of such organisms.

CONCLUSION

In silico analyses of the predicted copper-related proteomes of a large panel of β proteobacteria have indicated that lifestyle shapes the copper-related proteome, and this is particularly reflected in systems involved in copper homeostasis. Evolution from environmental niches toward commensalism, obligate pathogenesis, or symbiosis parallels the loss of copper-homeostasis systems.
Endosymbionts represent an extreme situation with respect to copper, having lost almost all copper-related genes with the exception of a few cuproproteins involved in electron transfer. The correlation between lifestyle and copper homeostasis appears to hold true in other groups of Proteobacteria as well.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the manuscript/Supplementary Files.

AUTHOR CONTRIBUTIONS

RA and FJ-D conceived the study. RA, AR-M, and GR gathered the data and prepared the figures and tables. FJ-D wrote the manuscript. All authors analyzed the data and reviewed the manuscript.

ACKNOWLEDGMENTS

AR-M and GR acknowledge the support of doctoral fellowships from the University of Lille-Région Nord-Pas-de-Calais and from the University of Lille, respectively.

SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmicb.2019.02217/full#supplementary-material

FIGURE S1 | Full complements of copper-related proteins (cuproproteins, copper-homeostasis proteins, and copper chaperones) in the entire set of β proteobacteria analyzed in this study. The sizes of the proteomes (numbers of proteins) and the lifestyles are given in the second vertical panel. E, OE, AP, PP, OC, C, S, EnP, and EnS represent environmental, opportunistic/environmental, animal pathogen, phytopathogen, opportunistic/commensal, commensal, symbiotic, endophytic, and endosymbiotic bacteria, respectively. In the third vertical panel are shown the proportions (in %) of the copper-related proteomes (CuRP) relative to the full proteomes for each species, and the proportions (in %) of the proteins in the three functional categories (cuproproteins, copper-homeostasis proteins, and copper chaperones) relative to the complete proteomes.
In that panel, the white, green, and red background colors represent average values, above-average values, and below-average values, respectively, relative to the 119 species, with the intensities of the colors relating to the distances to the average values. The abundances of each type of protein in each bacterial species are represented in the last panel. Note that MCOs are both cuproproteins and copper-homeostasis proteins, and Csp3s (cytoplasmic copper storage proteins) are both copper-homeostasis proteins and copper chaperones. The intensities of the colors increase with the numbers of paralogs in each bacterial species, with white indicating the absence of the protein. The absolute numbers of proteins of each type in each bacterial species can be found in Supplementary Table S2. TABLE S1 | β proteobacterial species analyzed in this study. The species are sorted according to phylogenetic analyses, and the references used to determine their lifestyles are indicated along with the links to the corresponding publications. The different lifestyles are represented by colors: dark green for environmental species (E), green for opportunist/environmental species (OE), red for animal pathogens (AP), yellow for phytopathogens (PP), light beige for commensal species (C), orange for opportunist/commensal species (OC), beige for symbionts (S), light green for endophytes (EnP), and pink for endosymbionts (EnS). The sizes of the predicted proteomes are also provided. TABLE S2 | Copper-related proteomes of β proteobacteria. The list of the 119 species analyzed in this study is shown. The species are sorted according to phylogenetic analyses. The GenBank accession numbers of the replicons are given as well as their total sizes in megabases (Mb). The GC content (% GC) and the sizes of the proteomes (numbers of predicted proteins) are also reported.
For each Pfam domain, the number of occurrences in each species is given as well as the locus_tag name (ID xxxx) found in the corresponding GenBank file. For respiratory components no locus_tag is given but only the number of occurrences of these multi-component systems. TABLE S3 | Copper-related proteomes of α and γ proteobacteria. The list of the 30 species analyzed in this study is shown. The GenBank accession numbers of the replicons are given as well as their total sizes in megabases (Mb). The GC content (% GC) and the sizes of the proteomes (numbers of predicted proteins) are also reported. For each Pfam domain, the number of occurrences in each species is given as well as the locus_tag name (ID xxxx) found in the corresponding GenBank file. For respiratory components no locus_tag is given but only the number of occurrences of these multi-component systems. FILE S1 | HMM profile for the copper storage protein domain, CSP.hmm.
Enhanced Solar Photocatalytic Reduction of Cr(VI) Using a (ZnO/CuO) Nanocomposite Grafted onto a Polyester Membrane for Wastewater Treatment Among chemical water pollutants, Cr(VI) is a highly toxic heavy metal; solar photocatalysis is a cost-effective method to reduce Cr(VI) to innocuous Cr(III). In this research work, an efficient and economically feasible ZnO/CuO nanocomposite was grafted onto polyester fabric (ZnO/CuO/PF) through the SILAR method. Characterization by SEM, EDX, XRD, and DRS confirmed the successful grafting of highly crystalline, solar-active nanoflakes of the ZnO/CuO nanocomposite onto the polyester fabric. The grafting of the ZnO/CuO nanocomposite was further confirmed by FTIR analysis of the ZnO/CuO/PF membrane. A solar photocatalytic reduction reaction of Cr(VI) was carried out with ZnO/CuO/PF under natural sunlight (solar flux 5-6 kW h/m²). Response surface methodology was employed to determine the interactive effect of three reaction variables: initial concentration of Cr(VI), pH, and solar irradiation time. According to UV/Vis spectrophotometry, 97% of the chromium was removed from wastewater in acidic conditions after four hours of sunlight irradiation. ZnO/CuO/PF demonstrated reusability over 11 batches of wastewater under natural sunlight. Evaluation of the Cr(VI) reduction was also carried out by complexation of Cr(VI) and Cr(III) with 1,5-diphenylcarbazide. The total percentage removal of Cr after solar photocatalysis was determined by AAS of the wastewater sample. The ZnO/CuO/PF enhanced the reduction of Cr(VI) from wastewater remarkably. Introduction Water is a basic requirement for all life forms on earth, but the availability of fresh water is limited due to the poor management of industrial wastewater, which is directly discharged into waste streams without any treatment [1]. This contaminated water can be reused for cultivation and drinking purposes after passing through effective treatment [2].
In industrial wastewater, the major environmental pollutants that adversely affect the ecological food chain are heavy metals [3]. Heavy metals such as chromium (Cr), nickel (Ni), lead (Pb), cadmium (Cd), mercury (Hg), and zinc (Zn) are among these pollutants. Mercerization of Polyester Fabric The procured untreated polyester fabric was washed with distilled water and ethanol to remove any impurities. The binding forces of the polyester fabric were enhanced by rendering it hydrophilic through the surface deposition of hydroxyl groups (-OH); for this, a chemical treatment known as mercerization was employed. A one-square-meter piece of polyester fabric (PF) was boiled in 40 g/L NaOH with continuous stirring for 30 min while the temperature was maintained at 60 °C. After cooling the solution, the polyester fabric was removed and the excess NaOH was washed out with distilled water. The untreated polyester fabric was functionalized so that the nano-photocatalyst could graft evenly and densely onto its surface [26]. Fabrication of ZnO/CuO Nanocomposite on Polyester Fabric (ZnO/CuO/PF Membrane) For the grafting of the ZnO/CuO nanocomposite onto the polyester fabric (by the SILAR method), in the first step a cationic complex of copper sodium zincate was formed by adding 0.05 M Zn(CH₃COO)₂·2H₂O, 0.05 M Cu(CH₃COO)₂·2H₂O, and 0.2 M NaOH in a 1:10 ratio. One piece (10 × 10 inch) of polyester fabric (PF) was first dipped in the cationic solution for 30 s and then in the anionic solution for 30 s to graft the ZnO/CuO nanocomposite onto the PF membrane. This cycle was repeated 30 times for better growth of the ZnO/CuO nanocomposite on the PF membrane. In the second step, the functionalized polyester fabric was cut into pieces of 10 cm² and each piece was sequentially dipped in the above prepared cationic solution and then in distilled water as the anionic solution for 30 s, air-dried, and the cycle was repeated 30 times.
The surface grafted (-OH) groups react with the cationic solution in order to form the ZnO/CuO nanocomposite (Figure 1) [27]. Finally, unreacted ions were removed from the ZnO/CuO coated polyester fabric by washing it with distilled water. After air drying all the fabricated PMRs, each side was exposed to UV light (intensity 44 W) for 30 min to bind the nanocomposite firmly onto the surface of the functionalized polyester fabric. The photocatalyst load on the PMR was measured to be 58 ± 2 µg/cm³ by comparing the weight of the untreated polyester fabric and the PF loaded with the ZnO/CuO nanocomposite. Characterization of Fabricated ZnO/CuO/PF Membrane The ZnO/CuO nanocomposite was scraped from the surface of the functionalized polyester fabric and characterized to analyze its crystallinity, purity, and crystallite size by an X-ray diffractometer (Jeol JDX-3532, UK) using CuKα irradiation (λ = 1.54 Å). The morphology, surface texture, and elemental composition of the ZnO/CuO nanocomposite were examined by using a scanning electron microscope (Quanta 2500, FEG, USA) and energy dispersive X-ray analysis (Oxford Instruments, Abingdon, UK). The optical properties of the nanocomposite were determined by DRS (Perkin Elmer Lambda 1050, Buckinghamshire, UK). The FTIR analysis of the untreated polyester fabric and of the solar photocatalytic membrane loaded with the ZnO/CuO nanocomposite was done by an IFS 125HR FTIR spectrometer (Bruker, Yokohama, Japan). For the assessment of thermal stability, TGA of the ZnO/CuO nanocomposite was carried out by using a PerkinElmer Thermal Analyzer. Evaluation of Concentration of Cr(VI) and Cr(III) by Complexation The effluent containing hexavalent chromium formed a complex with 1,5-diphenylcarbazide used as a complexing agent. First, 0.5 g of 1,5-DPC in 100 mL of acetone was dissolved completely and then diluted with distilled water. Samples of Cr(VI) standard solution were prepared in concentrations from 10 to 50 ppm.
The extraction reagents, NaOH (2%) and Na₂CO₃ (3%), were prepared in distilled water and dissolved in 0.1% potassium permanganate solution until a pink color was obtained.
After mixing these reagents with the Cr(VI) standards, 4 mL of sulfuric acid was added to obtain a pH of 2 (solution A). Then, 1,5-DPC solution (2 mL) was mixed with solution A and a red-violet color of the Cr(VI) complex formed rapidly under the acidic conditions [28][29][30][31]. The concentration of Cr(III) was tested; [Cr(H₂O)₆]³⁺ was produced upon the addition of a small quantity of Na₂CO₃ to the Cr(III), forming grey-green precipitates of the triaqua-trihydroxy chromium(III) complex. The precipitates were filtered to obtain chromium-free water [32]. The total percentage removal of chromium after complexation was determined by ICP-MS (Agilent 7700×, Santa Clara, CA, USA). Photocatalytic Reduction of Cr(VI) in Wastewater Photocatalytic reduction was performed for standard solutions of Cr(VI) through a series of experiments in which the pH was adjusted with acidic and basic solutions and the irradiation time of natural sunlight was varied using the ZnO/CuO/PF-based PMR. The PMRs were cut into 5 × 5 cm pieces and immersed in glass containers of 10 × 10 × 4 cm³ with a working volume of 50 mL of standard Cr(VI) solution and exposed to natural direct sunlight during 11 a.m.-3 p.m. in the month of May. The average temperature was about 35-40 °C and the solar flux was 5-6 kW h/m². The extent of reduction of Cr(VI) in the treated samples was measured by using a UV/Vis spectrophotometer, while the percentage reduction was measured in terms of the absorbance of Cr(VI) before and after photocatalytic reduction using Equation (1) [31]. The equipment used for the measurement of absorption was a UV/Vis spectroscope (CE Cecil 7200, Isernhagen, Germany). Reduction (%) = ((A₀ − A_f)/A₀) × 100, (1) where A₀ is the initial absorbance and A_f is the final absorbance. The calculated values were further used for the application of RSM to determine the results and the relationship between the independent variables.
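The percentage reduction of Equation (1) is simple arithmetic on the two absorbance readings. A minimal Python sketch, assuming absorbance is proportional to Cr(VI) concentration (Beer-Lambert); the absorbance values in the example are hypothetical, chosen to reproduce the 97% figure reported in the abstract:

```python
def percent_reduction(a0: float, af: float) -> float:
    """Percentage reduction of Cr(VI) per Equation (1):
    %R = (A0 - Af) / A0 * 100."""
    if a0 <= 0:
        raise ValueError("initial absorbance must be positive")
    return (a0 - af) / a0 * 100.0

# Hypothetical readings: absorbance drops from 1.20 to 0.036 after treatment
print(round(percent_reduction(1.20, 0.036), 1))  # -> 97.0
```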
Y = β0 + Σi βiXi + Σi βiiXi² + Σi<j βijXiXj (2) The model suggested by the central composite design of RSM was applied to the abovementioned values of the variables, and 20 runs were conducted by the software with different values of all three variables. The response (Y) was taken as the percentage reduction of Cr(VI). The ANOVA table was obtained by inserting the responses into the software. Solar Photocatalytic Treatment of Real Wastewater Under the optimized reaction conditions, the real wastewater containing Cr(VI) was treated with the ZnO/CuO/PF-based PMR. The working volume was 50 mL and the solar flux was 5-6 kW h/m². The extent of reduction was measured by UV/Vis spectroscopy (CECIL CE 7200, Germany) and by the complexation of Cr(VI) and Cr(III) before and after solar photocatalytic reduction. Characterization of the ZnO/CuO/PF-Based PMR The results of the structural, morphological, optical, and thermal characterization of ZnO/CuO collected from the surface of the PMR and of the ZnO/CuO/PF PMR are discussed below. The polyester fabric grafted with the ZnO/CuO composite was observed to be densely covered with flake-like nanostructures (Figure 2a,b). The average dimensions of the structures calculated with ImageJ software were 67 × 56 × 12 nm³. The magnified micrograph of a few strands of fabric showed nanoflakes grafted densely onto the strands of polyester fabric (Figure 2c). The even and dense covering of the functionalized polyester fabric can be attributed to the alkali treatment, as pores had appeared on the surface of the fabric due to etching. The alkali treatment of the fabric resulted in surface roughness, which might have caused the generation of sites for the easy penetration of nanoparticles. The pointed tapering ends of the thin flakes consist of the ZnO/CuO composite.
Morphological and surface analyses of the ZnO/CuO/PF nanocomposite were studied through SEM images. Flake-like monomorphous nanostructures were obtained (Figure 2d), showing a geometry suitable for photocatalytic applications as it provides a high surface area with a large number of active sites, offering channels for electron movement during photocatalytic reactions [33].
The elemental analysis and purity of the ZnO/CuO/PF nanocomposite were studied through the EDX spectrum, giving peaks of Zn, Cu, and O with Zn and Cu intensities in a 1:1 ratio, as both participate equally in the synthesis of the composite; a greater oxygen intensity was also revealed in the ZnO/CuO/PF composite fabrication. The EDX results confirmed the authenticity of the composite formation and the high purity of the nanocomposite, shown in Figure 1e. According to the atomic percentage and weight percentage data obtained from EDX, almost the same concentrations of Cu²⁺ and Zn²⁺ were present owing to their similar cationic sizes. The higher concentration of oxygen compared to the two component cations revealed that an oxide of both zinc and copper was formed as a composite. Moreover, a high concentration of surface-adhered oxygen as hydroxyl groups was indicated. Structural Characterization of ZnO/CuO Grafted on PMR The crystal structure of the ZnO/CuO nanocomposite was studied through XRD and the observed diffraction pattern was consistent with the information available in the JCPDS cards (JCPDS 36-1451 for ZnO, JCPDS 05-0661 for CuO), demonstrating that the fabricated sample is crystalline; the phase purity and phase separation between ZnO and CuO are clearly visible (Figure 2f), proving the presence of both ZnO and CuO. The diffractogram shows peaks at the 33.96° (002), 31.31° (100), and 35.79° (101) planes, revealing the presence of ZnO, and peaks at the 38.29° (111) and 35.05° (111) planes due to CuO, consistent with the standard values [34]. The average crystallite size (L) of the ZnO/CuO/PF nanocomposite was calculated numerically by using the Debye-Scherrer formula, L = Kλ/(β cos θ) [35,36]. Taking the value of β = FWHM from the graph, the calculated crystallite size of ZnO/CuO/PF was found to be 13.5 nm.
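The Scherrer estimate can be reproduced numerically. A sketch assuming Cu Kα radiation (λ ≈ 0.154 nm, consistent with the 1.54 Å stated earlier), the ZnO (101) reflection at 2θ = 35.79°, K = 0.9, and an illustrative FWHM of 0.62° (the FWHM value itself is not given in the text):

```python
import math

def scherrer_size_nm(wavelength_nm: float, fwhm_deg: float,
                     two_theta_deg: float, k: float = 0.9) -> float:
    """Debye-Scherrer crystallite size L = K*lambda / (beta * cos(theta)),
    with the FWHM beta converted to radians and theta = (2*theta) / 2."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative FWHM of 0.62 deg reproduces a size close to the reported 13.5 nm:
print(round(scherrer_size_nm(0.154, 0.62, 35.79), 1))  # -> 13.5
```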
Optical Properties of ZnO/CuO Grafted onto the PMR The diffused reflectance spectrum indicated that almost 50% of the sunlight was reflected, confirming the high absorption of solar radiation by the composite of the fabricated material (Figure 3a). The band gap edge was observed to be at 365 nm according to the DRS spectrum of the ZnO/CuO composite. Intrinsic ZnO, having a band gap of 3.2 eV, cannot absorb visible radiation [37,38]. On the other hand, ZnO/CuO/PF exhibited an enhanced light-harvesting capability in the solar spectrum. The band gap energy of ZnO/CuO was calculated by a Kubelka-Munk plot using the relation F(R∞) = (1 − R∞)²/(2R∞) [39], where R∞ is the diffuse reflectance, F(R∞) is the Kubelka-Munk function, the absorption coefficient is related through the Tauc relation, and Eg represents the band gap energy. Using this Kubelka-Munk relation, the band gap energy for ZnO/CuO/PF was found to be 2.9 eV (Figure 3b). The decrease in the band gap can be attributed to the induction of inter-band energy states below the conduction band (CB). Consequently, an electron as a charge carrier requires less energy to be excited from the valence band (VB) to the CB.
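The Kubelka-Munk transform maps diffuse reflectance into an absorption-like quantity whose Tauc plot yields the band gap. A minimal sketch; the 50% reflectance input matches the value reported above, and the Tauc exponent of 2 assumes a direct allowed transition:

```python
def kubelka_munk(r_inf: float) -> float:
    """Kubelka-Munk function F(R_inf) = (1 - R_inf)^2 / (2 * R_inf)."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

def tauc_direct(r_inf: float, photon_ev: float) -> float:
    """Ordinate of the Tauc plot, (F(R) * h*nu)^2; the linear region of
    this quantity vs. photon energy extrapolates to Eg on the energy axis."""
    return (kubelka_munk(r_inf) * photon_ev) ** 2

# ~50% diffuse reflectance, as reported for the composite:
print(round(kubelka_munk(0.50), 2))  # -> 0.25
```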
The thermal stability of the ZnO/CuO nanocomposite was measured by TGA, and the curve obtained indicated that the initial weight loss for the nanocomposite occurred between 100 and 200 °C due to the evaporation of adsorbed water molecules from the surface of the composite (Figure 4). Moreover, dehydration of Cu(OH)₂ to CuO was found to occur between 190 and 210 °C. The second, gradual weight loss of 1.27-1.61% took place from 200 to 410 °C; then a greater weight loss of 2.00% appeared in the temperature range of 410-480 °C due to the evaporation of chemically bound water, indicating the decomposition of Zn(OH)₂. The third and most prominent weight loss was observed at 610-700 °C (weight loss of 4.38%) due to the complete loss of water from the sample. The stability in the overall weight loss up to 800 °C can be deduced from only a 4.38% total weight loss. The thermal stability is attributed to the efficient catalytic activity of the composite grafted on the ZnO/CuO/PF PMR. Previous studies have reported similar results [40].
Characterization of the ZnO/CuO/PF-Based PMR for Surface Functionalization The untreated polyester fabric, the mercerized polyester, and the ZnO/CuO/PF-based PMR were subjected to FTIR analysis. According to the spectra obtained, the band observed at 1940 cm⁻¹ is due to carbonyl groups. The bands at 1575 cm⁻¹ (Figure 5) correspond to the C=N and C-N groups present in benzenoid and quinoid structures and to OH bending in the COOH group. The peaks at 964, 941, and 943 cm⁻¹ exhibited in all spectra are due to the vinyl (C-H) group. No peak appeared at 630 cm⁻¹ for the untreated and mercerized polyester, but one did for ZnO/CuO grafted on the mercerized polyester. This small, broad peak indicated the grafting of a small amount of the nanocomposite onto the surface of the mercerized polyester, appearing due to stretching of the Zn-O and Cu-O bonds. Similar results have been reported for polymers [41].
Solar Photocatalytic Reduction of Chromium by RSM The response of the solar photocatalytic reduction of Cr(VI) was collected by executing the experimental runs provided by a complete polynomial quadratic model as the central composite design (CCD). Design Expert V.7.0.0 (Stat-Ease Inc., Minneapolis, MN, USA) was used to calculate the interactive effect of all three variables while using ZnO/CuO/PF as a PMR under natural sunlight. After the solar photocatalytic treatment, all 20 samples were collected and their absorbance was measured by UV/Vis spectroscopy (Table 1) [42]. Optimization of Operational Parameters by Response Surface Methodology While keeping the photocatalyst load of the solar photocatalytic membrane reactor constant (58 µg/cm³) along with the solar flux (5-6 kW h/m²), the variable operational parameters, i.e., pH (5-9), initial Cr(VI) concentration (10-50 ppm), and solar irradiation time (2-6 h), were statistically optimized through response surface methodology (RSM), a statistical and mathematical tool for achieving the maximum reduction of Cr(VI). The basic equation for the selected quadratic model that represents the relationship between the variables and their interactive effect on the percentage reduction of Cr(VI) is given in Equation (2). Furthermore, the percentage reduction was calculated from the absorbance to get the maximum Cr(VI) percentage reduction of the standard solutions (Table 1). The ANOVA table shows a low Prob > F value and an F value of 20.67 with only a 0.01% chance of noise, ensuring the significant effect of the variables on the response.
Moreover, the value of p < 0.0001 together with the agreement of Pred. R² = 0.7537 and Adj. R² = 0.9031 ensures that the model is highly significant, and the best predictability was shown by the insignificant lack-of-fit test (Table 2). The parameter optimization expression for response surface methodology is given in Equation (5), where Y represents the percentage reduction and X1, X2, and X3 are the variables, i.e., initial Cr(VI) concentration, pH, and irradiation time. Interactive Effects of Operational Variables of the Solar Photocatalytic Reaction Three-dimensional graphics represent the regression equation for optimizing the reaction conditions. The 3D response surface interaction plots along with the interactive effects of the variables are represented below (Figure 6). Interactive Effect of Initial Concentration of Cr(VI) and Irradiation Time The initial concentration of Cr(VI) and the solar irradiation time showed an interactive effect, proving the linear relationship between these variables in the case of the percentage reduction of Cr(VI), as shown in Figure 6a. An increase in the initial concentration offered a higher percentage reduction up to a concentration of 50 ppm due to the rapid coverage of the available active sites by ions and the enhanced rate of photocatalysis. Furthermore, the photocatalyst surface initially had a larger number of active sites for Cr(VI) adsorption followed by reduction, but the reduction efficiency decreased slightly as increasing amounts of metal ions adsorbed on the vacant sites; these all became occupied within 2-4 h according to the 3D surface plot [43].
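The second-order model that Design Expert fits has the generic CCD form Y = β0 + Σβi Xi + Σβii Xi² + Σβij Xi Xj. A sketch of evaluating such a model in Python; the coefficients below are placeholders purely to show the structure, since the fitted values of the paper's Equation (5) are not reproduced in the text:

```python
import itertools

def rsm_quadratic(x, b0, b_lin, b_quad, b_int):
    """Generic second-order RSM model:
    Y = b0 + sum(bi*Xi) + sum(bii*Xi^2) + sum(bij*Xi*Xj)."""
    y = b0
    y += sum(b * xi for b, xi in zip(b_lin, x))
    y += sum(b * xi ** 2 for b, xi in zip(b_quad, x))
    pairs = itertools.combinations(range(len(x)), 2)
    y += sum(b * x[i] * x[j] for b, (i, j) in zip(b_int, pairs))
    return y

# Hypothetical coefficients; X1 = initial Cr(VI) conc. (ppm),
# X2 = pH, X3 = irradiation time (h):
y = rsm_quadratic((30.0, 7.0, 4.0), 50.0,
                  (0.5, 2.0, 4.0), (-0.01, -0.1, -0.2), (0.0, 0.0, 0.0))
print(round(y, 1))  # -> 77.9
```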
Interactive Effect of Initial Concentration of Cr(VI) and pH The interactive effect of the initial concentration of Cr(VI) and pH exhibited an indirect relationship in the case of the percentage reduction of Cr(VI), as represented in Figure 6b.
The estimated Cr(VI) reduction demonstrated a lesser reduction with an increasing concentration of Cr(VI), as exceeding the optimum Cr(VI) concentration prevented more Cr(VI) ions from reaching the active sites of the photocatalyst. Another important variable is pH, which affects the reduction rate of Cr(VI) by controlling the surface charge of the photocatalyst during the photocatalytic reaction. The rate of photocatalysis decreased with an increase in pH. Furthermore, the Cr(VI) to Cr(III) reduction generated hydroxyls in the alkaline medium and utilized protons. It can be observed that in the acidic medium the best results were obtained in the pH range of 5-7 [44]. Interactive Effect of pH and Irradiation Time The interactive effect of pH and irradiation time on the Cr(VI) percentage reduction is represented in Figure 6c, depicting a linear relationship between these independent variables. In an alkaline medium, the surface of ZnO/CuO holds additional negative charges, so it electrostatically repels the negatively charged species, such as Cr₂O₇²⁻, HCrO₄⁻, and CrO₄²⁻, as well as neutral H₂CrO₄; this lowers the extent of Cr(VI) adsorption on the ZnO/CuO/PF surface in the higher pH range, whereas the positively charged photocatalyst at neutral pH acts as a Lewis acid, thus promoting the rate of photocatalytic reduction of Cr(VI) to Cr(III), as also confirmed by the 3D RSM plot. The reduction efficiency of Cr(VI) was enhanced by a longer irradiation time, but after a certain limit all active sites were blocked, decreasing the reduction [45]. Since the best results were obtained at pH 5-7 and a sunlight irradiation time of 6 h, it can be concluded that in a slightly acidic medium up to 95% reduction of Cr(VI) can be obtained.
Complexation of Cr(VI) and Cr(III)
Standard solutions of Cr(VI) in the concentration range of 10-50 ppm and the wastewater were subjected to complexation, and the variation in the intensity of the reddish violet color indicated the differences in the concentration of Cr(VI). Similarly, Cr(VI) in the wastewater was also subjected to a complexation reaction. The steps given in Figure 7a show the formation of the hexavalent chromium complex with 1,5-diphenylcarbazide. After executing the solar photocatalytic reduction reaction using the ZnO/CuO/PF PMR, Cr(VI) was reduced to trivalent chromium, which was further reacted with the complexing reagents, as given in Figure 7b. The end product was a hexahydroxy chromium(III) complex of greenish yellow color, which revealed the reduced concentration of Cr(III) in the standard solutions of Cr(VI) and the treated wastewater. UV/Vis spectroscopy was used for the determination of the hexavalent and trivalent chromium complexes. Cr(VI) formed a red-violet complex with 1,5-diphenylcarbazide, showing a maximum absorption at a wavelength of 547 nm [46][47][48][49]. The high peak of the Cr(VI) complex delineated the large concentration of Cr(VI) in the wastewater sample. After the solar photocatalytic reduction reaction, no reddish violet color appeared upon complexation, indicating an almost complete conversion of Cr(VI) to Cr(III). On executing the complexation reaction of Cr(III) in the treated wastewater, trivalent chromium formed an octahedral complex of a greenish yellow color and showed its maximum absorbance at 320 nm. The very low peak of Cr(III) indicated the reduced concentration of trivalent chromium in the treated wastewater [50]. Figure 8a,b shows the UV/Vis spectra of the Cr(VI) and Cr(III) complexes, respectively. The photocatalytic reduction of Cr(VI) to Cr(III) was confirmed by the formation of the complexes of chromium. Hexavalent chromium exists in chromate and dichromate forms.
The potassium dichromate (orange color) was first reduced to Cr3+. Then, after the addition of Na2CO3 and NaOH, a greenish yellow colored hexa-aqua chromium(III) ion, the complex formed by Cr3+ in aqueous solution, was obtained [51]. Thus, it is clear from the results that Cr3+ forms {Cr(H2O)6} 3+, {Cr(H2O)3(OH)3}, and {Cr(OH)6} 3− octahedral complexes in the aqueous solution [52,53].
Photocatalytic Reduction of Cr(VI) to Cr(III) in Real Wastewater
The concentration of Cr(VI) in the real wastewater sample was measured by UV/Vis spectroscopy to be 200 ppm, taking the average.
Under the optimized conditions, a solar photocatalytic reaction was carried out using the ZnO/CuO/PF PMR. The real wastewater was yellow in color due to Cr(VI); after the photocatalytic reduction reaction under natural sunlight, the color of the wastewater changed, indicating the decrease in the concentration of Cr(VI). The continuous exposure of the real wastewater to the solar flux (5-6 kW h/m2) for six hours (10 a.m. to 4 p.m.) with a photocatalyst load of 58 ± 2 µg/cm3 in the solar photocatalytic membrane reactor showed an impressive decrease in Cr(VI), whose concentration was observed to be 24 ppm. The large decrease in the concentration of Cr(VI) rendered the water reusable for irrigation and industrial processes (Figure 9). The percentage removal of chromium from the real wastewater was estimated by AAS from the decrease in the concentration of Cr(VI) after the solar photocatalytic reduction reaction. The concentration of Cr as Cr(VI) and Cr(III) decreased with respect to time. The samples collected from the reactor mixture exposed to natural sunlight at regular intervals were aspirated in the AAS for determination of the Cr concentration (Table 3). It is obvious from the results obtained that the Cr concentration declined sharply after 3 h and became almost constant after 5 h. The increase in the concentration of H+ as the reaction proceeded caused an increase in the reduction of Cr(VI). Furthermore, Cr(III), as a product of the solar photocatalytic reaction, adsorbed onto the surface of ZnO/CuO/PF, decreasing the overall concentration of Cr in the wastewater. The maximum removal of Cr corresponded to a final concentration of about 25 ppm.
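As a quick arithmetic check of the figures reported above (200 ppm initial Cr, with roughly 24 ppm residual by UV/Vis and 25 ppm by AAS), the percentage removal can be sketched as follows; this is our illustration of the standard removal formula, not a calculation from the paper:

```python
def percent_removal(c0_ppm: float, ct_ppm: float) -> float:
    """Percentage removal relative to the initial concentration."""
    return 100.0 * (c0_ppm - ct_ppm) / c0_ppm

# values reported in the text: 200 ppm initial; ~24 ppm (UV/Vis) and
# ~25 ppm (AAS) residual chromium after the solar photocatalytic run
print(f"UV/Vis: {percent_removal(200.0, 24.0):.1f}% removed")
print(f"AAS:    {percent_removal(200.0, 25.0):.1f}% removed")
```

Both readings correspond to roughly 87-88% removal of chromium from the real wastewater.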
Reusability of ZnO/CuO/PF
The basic purpose of the immobilization of the photocatalyst (ZnO/CuO) onto the polyester substrate is to make it easily reusable and cost-effective. The reusability of ZnO/CuO/PF was determined as shown in Figure 8. The efficiency of ZnO/CuO/PF was retained for up to seven cycles, after which a gradual decrease in the performance of the photocatalyst occurred (Figure 10). Other researchers have reported that, in the case of using a PMR consisting of pure ZnO, only an 81% reduction of Cr(VI) to Cr(III) could be obtained [54]. Moreover, even on doping ZnO with tin (Sn/ZnO), no measurable increase (80% removal) in the photocatalytic activity of the material was observed [55]. Conclusively, in comparison with the results of other research, it is obvious that ZnO/CuO/PF is a more effective PMR, as a 95% reduction of Cr(VI) was obtained, which decreased to 78% after 15 cycles.
Conclusions
ZnO/CuO/PF was successfully designed by the SILAR method, as the ZnO/CuO nanocomposite was grafted onto polyester fabric as a highly durable and chemically resistant photocatalytic membrane reactor. The surface binding forces of polyester with the nanocomposite were enhanced by mercerization using 40 g/L of NaOH. The fabricated PMR is a novel material to enhance the photocatalytic reduction application, suitable for the reuse of wastewater after the reduction of Cr(VI) to Cr(III). The characterization of the synthesized CuO/ZnO nanocomposite proved its high purity and crystallinity, whereas its flake-like morphology and rough texture show a great potential for photocatalytic applications. The optical properties of the CuO/ZnO nanocomposite indicated a high harvesting power of solar radiation due to the optimized low band gap energy of 2.9 eV.
The thermal stability of ZnO/CuO was also determined to be high, with only a 4.38% weight loss. The surface characterization of the ZnO/CuO/PF PMR was carried out to confirm the grafting of the CuO/ZnO nanocomposite onto the surface of the functionalized polyester fabric. The extent of Cr(VI) reduction was at its maximum at pH 6, when the initial concentration of Cr(VI) was 30 ppm and the solution was irradiated for 4 h in natural sunlight. The real wastewater was treated under the optimized conditions, and up to a 97% reduction in the concentration of Cr(VI) was observed. The extent of the reduction of Cr(VI) to Cr(III) was evaluated by UV/Vis spectrophotometry. The complexation of Cr(VI) and Cr(III) and the estimation of their concentrations at 547 nm and 320 nm, respectively, confirmed the reduction of Cr(VI) to Cr(III). A further decrease in the concentration of Cr(III) in the treated wastewater
Integrated Double-Sided Random Microlens Array Used for Laser Beam Homogenization
Double microlens arrays (MLAs) in series can be used to divide and superpose a laser beam so as to achieve a homogenized spot. However, when homogenizing a laser beam with high coherence, a periodic lattice distribution will be generated in the homogenized spot due to the periodicity of the traditional MLA, which greatly reduces the uniformity of the homogenized spot. To solve this problem, a monolithic and highly integrated double-sided random microlens array (D-rMLA) is proposed for the purpose of achieving laser beam homogenization. The periodicity of the MLA is disturbed by the closely arranged microlens structures with random apertures, and a random speckle field is achieved to improve the uniformity of the homogenized spot by the superposition of the divided sub-beams. In addition, a double-sided exposure technique is proposed to prepare the rMLA on both sides of the same substrate with high-precision alignment to form an integrated D-rMLA structure, which avoids the strict alignment problem in the installation process of traditional discrete MLAs. Laser beam homogenization experiments were then carried out using the prepared D-rMLA structure. The laser beam homogenized spots of different wavelengths were tested, including the wavelengths of 650 nm (R), 532 nm (G), and 405 nm (B). The experimental results show that the uniformity of the RGB homogenized spots is about 91%, 89%, and 90%, and the energy utilization rate is about 89%, 87%, and 86%, respectively. Hence, the prepared structure has a high laser beam homogenization ability and energy utilization rate, and it is suitable for a wide wavelength regime.
Introduction
The Gaussian laser beam has been widely used in the fields of lighting [1], detection [2], and satellite communication [3].
However, for applications such as optical therapy, laser projection [4], and lithography [5], it is necessary to homogenize the Gaussian beam into a flat-topped beam. The aspheric lens group method [6][7][8], the free-form lens method [9][10][11][12], diffractive optical elements (DOEs) [13][14][15], and microlens arrays (MLAs) [16][17][18][19] are usually used to achieve laser beam homogenization. For the aspheric lens group method, the energy distribution of the Gaussian beam is spatially modulated to form a uniform distribution at a specific position. The energy utilization rate and spot uniformity are both as high as 90%, so the method can be applied to lasers with high power [8]. In 2008, Oliker, V. [9] proposed a design method for beam shaping with a double free-form surface lens, which shapes the collimated incident laser beam into a collimated outgoing beam with the required energy distribution. In 2013, Feng, Z. proposed that the inhomogeneity of the laser beam can be eliminated by energy mesh dividing, expansion, and superposition [12]. This method has a high degree of design freedom and can effectively realize a high uniformity of the laser beam. However, the aspheric lens group is composed of two traditional aspheric lenses, a concave lens and a convex lens. Due to the large overall size of the homogenization structure, the miniaturization and integration of the optical system cannot be realized. For the free-form lens, the mesh division of the lens surface is fine and each mesh surface must be designed separately, which demands high machining accuracy and increases the machining difficulty. Diffractive optical elements (DOEs) with high integration can control the intensity distribution accurately with a high diffraction efficiency. When DOEs are used to homogenize the laser beam, the more steps in the structure, the better the homogenization effect will be, while the processing difficulty will increase as well.
Meanwhile, the energy utilization rate, that is, the diffraction efficiency, depends on the number of steps. For example, when the number of steps is two, four, eight, or sixteen, the energy utilization rate will be 40.5%, 81%, 94.9%, and 98.6%, respectively [20]. In addition, DOEs operate in a very narrow wavelength band and are sensitive to changes in the wavelength, which limits their applicability to lasers with different wavelengths. A binary DOE is always designed for a single wavelength; when another wavelength is used to irradiate the DOE, a strong central zero-order intensity will be produced, which greatly reduces the uniformity of the homogenized spot. The MLA method has the advantages of a high energy utilization rate, small volume, and high integration. In addition, it is not sensitive to the intensity distribution of the incident light. The incident laser beam is divided into a series of sub-beams, which are superimposed on each other in the far field to eliminate the inhomogeneity between the different sub-beams and form a homogenized spot [16,17]. The MLA is a refractive continuous-surface structure with little stray light, and it is suitable for beam homogenization of different lasers with a high energy utilization rate. In recent years, a lot of related research work on laser beam homogenization by means of refractive MLAs has been carried out. An imaging MLA (double periodic MLAs) beam integrator system was designed by Dickey, F. M. for fiber injection to obtain a uniform speckle pattern at the output end of the fiber [21], and the research results have been applied in industry [22]. The higher the Fresnel number of the MLA, the sharper the edge of the homogenized spot, which is diffracted by the MLA. The uniformity of the homogenized spot is related to the number of sub-beams into which it is divided by the MLA. However, a periodic MLA is only suitable for laser beams with poor coherence.
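The quoted step-count efficiencies follow the standard theoretical sinc-squared law for an N-level quantized phase profile; a quick check (our illustration, not code from the paper):

```python
import math

def doe_efficiency(levels: int) -> float:
    """First-order diffraction efficiency of an N-level (staircase) DOE:
    eta = [sin(pi/N) / (pi/N)]**2."""
    x = math.pi / levels
    return (math.sin(x) / x) ** 2

for n in (2, 4, 8, 16):
    print(f"{n:2d} levels -> {100 * doe_efficiency(n):.1f}%")
```

This reproduces the 40.5%, 81%, and 94.9% figures cited above; for sixteen levels the formula gives about 98.7%, which rounds slightly differently from the 98.6% quoted in the text.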
For a laser beam with high coherence, interference will occur between the sub-beams due to the periodic structure of the MLA, resulting in interference fringes in the obtained homogenized spots. Therefore, a periodic lattice phenomenon will appear, greatly reducing the homogeneity of the spot. In order to eliminate the influence of interference on the homogenized spot, researchers have proposed employing a random phase plate (RPP) or a multifocal MLA to modulate the laser beam. Both of these methods disturb the coherence between sub-beams by modulating the phase, but they are implemented in different ways. The RPP is a typical optical birefringent material made by randomly etching unit structures of specific depths into the surface. Thus, a random phase shift is introduced to modulate the polarization direction of the incident beam and disturb the coherence condition of the beam, and a smooth uniform spot can be obtained [23]. The implementation scheme of the multifocal MLA is the focus of this paper. Jin et al. [24] proposed in 2016 to replace the first MLA in the homogenizing system with a free-form MLA to improve the homogeneity. Each free-form surface in the MLA introduces an appropriate aberration in the wavefront to redistribute the irradiance of the beam. Compared with the traditional MLA homogenization system, this structure greatly reduces the diffraction effect and achieves a highly uniform beam profile. However, this method is only applicable to laser beams with a larger diameter, since the aperture of the sub-lens is on the order of millimeters. To achieve the same uniformity for laser beams with a small diameter, it is necessary to reduce the aperture of the MLA to the micron scale and increase the number of MLAs, which brings huge manufacturing difficulty for an MLA with a free-form surface. Our research group has studied laser beam homogenization using MLAs. In 2016, Cao et al.
[25] proposed a laser beam homogenization method using a central off-axis random MLA. The periodicity of the MLA was broken by the off-axis amount of the center to eliminate the periodic lattice effect at the target surface. However, fabrication becomes difficult due to the sharp change of the surface between the sub-lens units. Meanwhile, it is hard to install and align the two microlens arrays during practical application. In 2020, Xue et al. [26] proposed a monolithic random MLA to homogenize the laser beam. In the process of beam homogenization, the coherence between sub-beams was completely broken, and a homogenized spot with a high energy utilization rate was obtained. However, there is still a problem of adjustment and installation between the two plates when the double MLA is used for beam homogenization. Considering the problems and shortcomings of the previous research, a highly integrated D-rMLA used for laser beam homogenization is proposed, which can be used to homogenize laser beams with a small diameter. The D-rMLA is fabricated on a single substrate by double-exposure and chemical etching techniques. In the process of beam homogenization, the interference fringes between sub-beams are disturbed to obtain homogenized spots with high uniformity and a high energy utilization rate. In this paper, the method is used to carry out laser beam homogenization experiments and its feasibility is verified. The main arrangements of this paper are as follows: Section 2 describes the principle and simulation analysis of the beam homogenization based on the proposed D-rMLA, while Section 3 demonstrates the fabrication process of the D-rMLA. Section 4 shows the experimental results and verifies the validity of our method, and Section 5 is a summary of the whole paper.
Structure Design of D-rMLA
2.1. Principle of Beam Homogenization
The principle of laser beam homogenization based on the D-rMLA is shown in Figure 1a. The collimated laser beam is incident on the front surface of the D-rMLA.
Then the laser beam is divided into several sub-beams by the sub-lenses in the D-rMLA. After that, it is modulated by the structure of the back surface of the D-rMLA and emitted to the target surface. The distance between the back side of the D-rMLA and the target surface is Z. Due to the randomness of the aperture size, radius of curvature, and arrangement of each sub-lens of the D-rMLA, the interference conditions between the sub-beams are disturbed to improve the uniformity of the homogenized laser spot.
The beam propagation is shown in Figure 1b, and the difference in the transmission optical paths of the sub-beams can be analyzed. The optical path is calculated by Equation (1):

l_i = n_1(s_i1 + s_i2) + n_2 s_i3 (1)

where n_1 and n_2 are the refractive indices of the different transmission media, and s_i1, s_i2, and s_i3 are the transmission distances along one arbitrary light path. Assume the refractive index is n_1 when the laser beam propagates in air and n_2 when it propagates in the glass medium. Because of the different structure parameters of the different sub-lenses, sub-beams transmitted through different sub-lens units at the same location (such as on the optical axis) will have different optical paths. Therefore, different optical path differences ∆l_n will be generated between the sub-beams after the beam passes through the D-rMLA. The corresponding phase difference can be expressed as Equation (2):

∆Φ_n = 2π ∆l_n / λ (2)
where ∆Φ_n is the phase difference and λ is the wavelength of the incident laser beam. Due to the different ∆l_n, different ∆Φ_n will exist between the sub-beams after they are emitted through the D-rMLA. The conditions for coherent light are the same frequency, parallel vibration directions, and the same phase or a constant phase difference. The coherence conditions of the incident beam are disturbed by these random phase differences. Thus, a spot with a uniform energy distribution on the target surface can be obtained.
Simulation
The beam homogenization effects of the double periodic MLA and the D-rMLA were simulated and calculated with the numerical analysis software Matlab (Version 7.1, MathWorks, Natick, MA, USA), and the beam homogenization abilities of the different MLA structures were compared. The aperture, radius of curvature, and array number of the sub-lenses in the periodic MLA are designed as 30 µm × 30 µm, 28 µm, and 8 × 8, respectively. The diffraction distance is designed as 20 mm. According to the design parameters, the surface function, Formula (3), is used to establish the MLA model, so that each microlens can be expressed numerically. Meanwhile, the transfer function of the MLA can be calculated by Formulas (4) and (5). The phase component of the MLA can be decomposed from the transfer function to calculate the phase distribution of each microlens. According to Fresnel diffraction theory, the homogenized spot at a diffraction distance of 20 mm can be calculated. The three-dimensional (3D) structure of the MLA is shown in Figure 2a. Figure 2b is the phase distribution of the periodic MLA. The obtained homogenized spot is shown in Figure 2c. It can be seen that the homogenized spot of the periodic MLA consists of a periodic dot matrix.
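A minimal 1D analogue of this periodic-versus-random comparison can be sketched in Python. This is our toy illustration with dimensionless lenslet phases, not the paper's 30 µm / 28 µm Matlab model: a strictly periodic phase screen concentrates the far field into discrete diffraction orders, while randomized apertures and focal strengths spread the energy over many bins.

```python
import numpy as np

rng = np.random.default_rng(0)

def lenslet_phase(width, strength):
    # quadratic (lens-like) phase across one sub-aperture, in radians
    x = np.linspace(-1.0, 1.0, width, endpoint=False)
    return strength * x ** 2

def build_screen(n_samples, widths, strengths):
    # tile lenslet phase segments until the 1D screen is filled
    phase = np.zeros(n_samples)
    i, k = 0, 0
    while i < n_samples:
        seg = lenslet_phase(widths[k % len(widths)],
                            strengths[k % len(strengths)])
        seg = seg[: n_samples - i]
        phase[i:i + len(seg)] = seg
        i += len(seg)
        k += 1
    return phase

def top_bin_energy_fraction(phase, top=0.01):
    # far-field intensity via FFT of the unit-amplitude field; return the
    # energy fraction carried by the brightest `top` share of the bins
    inten = np.abs(np.fft.fft(np.exp(1j * phase))) ** 2
    inten = np.sort(inten)[::-1]
    n_top = max(1, int(top * inten.size))
    return inten[:n_top].sum() / inten.sum()

N = 4800  # samples; 160 exact periods of the 30-sample periodic lenslet
f_periodic = top_bin_energy_fraction(build_screen(N, [30], [20.0]))
widths = list(rng.integers(25, 46, size=200))     # random aperture widths
strengths = list(8.0 * (0.5 + rng.random(200)))   # random focal strengths
f_random = top_bin_energy_fraction(build_screen(N, widths, strengths))

print(f"top-1% energy fraction, periodic screen: {f_periodic:.3f}")
print(f"top-1% energy fraction, random screen:   {f_random:.3f}")
```

The periodic screen puts essentially all of its energy into a few diffraction-order bins (the lattice of bright dots), while the randomized screen spreads it far more evenly, mirroring the 2D Matlab result described above.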
In order to compare with the periodic MLA, the side lengths of the quadrilateral apertures of the sub-lenses in the D-rMLA are designed to vary randomly from 25 µm to 45 µm, that is, (30 − 5 × rand(0, 1), 30 + 15 × rand(0, 1)) µm. The array number and the average radius of curvature of the D-rMLA are 8 × 8 and 25 µm, respectively. The 3D structure of the rMLA is shown in Figure 2d. Figure 2e is the phase distribution of the rMLA. The homogenized spot of the laser beam transmitted through the D-rMLA at a distance of 20 mm is shown in Figure 2f. The simulation results show that the D-rMLA can eliminate the interference phenomenon caused by the periodicity of the traditional MLA, with a better homogenization effect. The homogenized spot of the periodic MLA presents a periodic lattice distribution, and the energy lies only in strong diffraction orders. The energy of the homogenized spot of the D-rMLA is more discrete and uniform.
Fabrication of D-rMLA
The fabrication methods for MLAs mainly include the thermal reflow technique [27], laser direct writing (LDW) [28], 3D printing [29], and gray-scale lithography [30]. The thermal reflow technique is an efficient way to produce MLAs by preparing an array of photoresist polymer cylinders regularly distributed on a substrate and melting the cylinders into a hemispherical shape [27]. However, the filling factor of this preparation method cannot reach 100%. When it is used in beam homogenization, some light will not be modulated, resulting in a central bright spot, which reduces the quality of beam homogenization. LDW uses a laser beam with variable intensity to expose the resist material on the surface of the substrate. After development, the required relief profile is formed on the surface of the resist layer [28]. Two-photon polymerization (TPP) 3D printing can also achieve the fabrication of MLAs with hundred-nanometer-scale or sub-micrometer-scale resolution [29].
Nevertheless, the LDW and TPP 3D printing techniques are based on point-by-point structural modification and require a long fabrication time for producing large-sized components. Thus, it is time-consuming and costly to fabricate large-scale components. Furthermore, gray-scale lithography has been proposed to fabricate MLAs [30]. In this exposure process, an MLA with a continuous surface is fabricated by moving the mask dynamically. The fabrication method is flexible and inexpensive. However, for the fabrication of MLAs with small apertures, the number of sampling and quantization levels is limited, which reduces the surface quality of the profile.
Although a lot of research work on the fabrication of MLAs has been demonstrated, only a few multi-focus MLAs, which consist of lenslets of different focal lengths, have been presented in the literature. In 2012, the group of Ferraro et al. proposed the pyroelectric continuous printing method to fabricate graded-size MLAs, which have different focal lengths [31,32]. The limitation of this technology strongly depends on the viscosity of the polymeric material: the higher the viscosity, the larger the diameter of the lenses will be, and it is difficult to achieve a high fill factor. Moreover, a fabrication method combining lithography and chemical etching was proposed to prepare the rMLA in a previous study [26].
It can realize the fabrication of a randomly distributed MLA with small apertures and multiple focal lengths at low cost. The pattern on the mask was transferred to the glass substrate by lithography, and then the random microlens structure was etched into the glass substrate by chemical etching. In this paper, a double-sided exposure technology is proposed on the basis of the rMLA fabrication to realize the high-precision alignment of the sub-lens unit structures on both sides of the substrate and thus obtain the integrated D-rMLA. In order to achieve the spatial alignment of the microstructures on both sides of the glass substrate, it is necessary to prepare two masks, M1 and M2, with random hole arrays distributed in mirror image, as shown in Figure 3. The diameter of the holes is 3.6 µm. The intervals of the central positions of the holes vary from 25 µm to 45 µm, that is, (30 − 5 × rand(0, 1), 30 + 15 × rand(0, 1)) µm. Moreover, an alignment mark, which is a crosshair, was designed on each mask to align the two sides of the sub-lenses during exposure. The crosshair on the mask M1 for the first exposure is a wider crosshair with a line width of 12 µm, while the crosshair on the mask M2 for the second exposure is a thinner crosshair with a line width of 4 µm.
Firstly, the single-sided rMLA is prepared on one side of the substrate. The process flow is shown in Figure 4, including exposure (using M1), development, dechroming, and etching. At this stage, the crosshair with a line width of 12 µm is etched on the edge region near the rMLA structure.
Micromachines 2021, 12, x FOR PEER REVIEW 7 of 12
Secondly, another rMLA structure is fabricated on the other side of the substrate. The preparation method is described in detail below.
The first step is the pretreatment of the substrate, including the protection of the prepared single-sided rMLA and the preparation of the masking layer, namely the chromium film used for chemical etching, as shown in Figure 5a. The fabricated rMLA is filled with photoresist and covered with an acid-fast protective layer to protect the fabricated structure from corrosion by HF. At the same time, a chromium film with a thickness of 100 nm is plated on the polished surface of the substrate, which serves as the masking layer for chemical etching.
The second step is photoresist coating and exposure. The photoresist, AZ MIR-703 (14 cp) (AZ Electronic Materials, Somerville, MA, USA), was spin-coated onto the chromium-coated glass substrate at 4000 r/min for 30 s, giving a photoresist thickness of about 700 nm. In order to ensure that there is no rotation error between the rMLAs on the two sides of the substrate after etching, alignment marks were used during the exposure process. The crosshair from mask M1, which had been etched onto the substrate, was aligned with the crosshair on mask M2. During exposure, the alignment marks on mask M2 were captured with the microscope. The prepared rMLA was placed on the wafer support with the rMLA structure face down and the photoresist face up, as shown in Figure 5b. By translating or rotating the wafer support, the crosshair with 4 µm line width on the mask was nested in the crosshair with 12 µm line width prepared on the back of the substrate, as shown in Figure 6a, so as to achieve strict alignment of the rMLAs on both sides. The wafer support was then locked for exposure. The exposure power density was set to 3 mW/cm² at a center wavelength of 365 nm, and the exposure time was 20 s. During alignment exposure, the alignment deviation of the crosshair mark is within ±1 µm.
The third step is development. After development with AZ 300MIF DEVELOPER (AZ Electronic Materials, Somerville, MA, USA) for 35 s, the microporous structure on the mask was transferred to the photoresist, as shown in Figure 5c.
The fourth step is chromium removal. The developed substrate is put into chromium-removing solution for about 40 s to transfer the microporous structure from the photoresist to the chromium film, as shown in Figure 5d.
The last step is etching. The protected substrate is etched in etching solution (H2O:HF:HNO3 = 5:2:2). During etching, the glass substrate is taken out every 3 min and put into an ultrasonic oscillator to clean off the chemical reaction residue. After the same etching time of 30 min, the rMLAs on both sides of the substrate have the same distribution of diameters and sag heights, as shown in Figure 5e.
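As a quick sanity check on the exposure parameters above, the implied UV dose is the product of power density and exposure time (the dose value itself is not quoted in the text):

```python
# Exposure dose implied by the stated UV exposure parameters.
power_density_mw_cm2 = 3.0   # mW/cm^2 at a 365 nm center wavelength
exposure_time_s = 20.0       # s
dose_mj_cm2 = power_density_mw_cm2 * exposure_time_s
print(f"exposure dose = {dose_mj_cm2:.0f} mJ/cm^2")  # exposure dose = 60 mJ/cm^2
```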
The microscope pattern of the rMLA was obtained after etching. The structures on the front and back sides are mirror-symmetrical, with filling factors of 100%, as shown in Figure 6b,c. The roughness and apertures of the prepared D-rMLA were then measured by a step profilometer (Stylus Profiler System, Dektak XT, Bruker, Karlsruhe, Germany) with a resolution of 1 Å. The roughness is about 9 nm on average, and the apertures vary from 25 µm to 45 µm, consistent with the designed parameters of the masks. The profile of the sub-lens was also fitted, as shown in Figure 7. The profile of the fabricated sub-lens unit almost coincides with the ideal sphere, showing high profile accuracy. The depths of the sub-lenses vary from 2 µm to 4 µm.
D-rMLA Testing
The experimental optical paths were constructed on the basis of the fabricated D-rMLA to test laser beam homogenization and to verify applicability to lasers of different wavelengths. Firstly, the divergence angles of the homogenized spots generated by the D-rMLA at different wavelengths were measured. In the experiment, laser beams with wavelengths of 650 nm (R), 532 nm (G), and 405 nm (B) were directed onto the D-rMLA. The powers of the RGB laser sources are 100 mW, 100 mW, and 30 mW, respectively. The transverse mode of the RGB laser radiation is near TEM00, with high coherence. Homogenized spots were obtained through the phase modulation of the D-rMLA. The propagation distance Z between the Charge-Coupled Device (CCD; resolution of 4896 × 3248 and pixel size of 7.4 µm × 7.4 µm) and the D-rMLA was 20 mm, and the incident spot size T0 was 4 mm.
Therefore, if the size T of the homogenized spot can be measured, the half divergence angle θ of the homogenized spot at each wavelength can be calculated by Equation (6). The homogenized spots of the different wavelengths were captured by the CCD, and their sizes were calculated with the numerical analysis software MATLAB. T is taken where the energy of the homogenized spot attenuates to 1/e², and is obtained by multiplying the number of pixels across the spot by the pixel size. For the laser beam with a wavelength of 650 nm (R), the pixel count across the homogenized spot is 2093, and T and the full divergence angle were calculated as 15.5 mm and 32°, respectively. For 532 nm (G), the pixel count was 1929, giving 14.3 mm and 29°; for 405 nm (B), the pixel count was 1799, giving 13.3 mm and 26°. For the fabricated D-rMLA, no strong central zero-order intensity is produced by incident beams of different wavelengths. The divergence angle of the homogenized spot changes with the incident wavelength: according to the grating equation (d × sin θ = λ), the longer the wavelength, the larger the diffraction angle. Secondly, the uniformity of the homogenized spots was tested. The homogenized spots of the lasers with different wavelengths were captured after homogenization, as shown in Figure 8, and the uniformity was calculated by Equation (7). The uniformity at 650 nm (R), 532 nm (G), and 405 nm (B) was 91%, 89%, and 90%, respectively. The uniformity of the homogenized spots is thus very high for all wavelengths, with only small differences between them.
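The spot-size and divergence-angle arithmetic above can be reproduced in a short Python sketch. The geometric form tan θ = (T − T0)/(2Z) assumed here for Equation (6) is an inference, but it matches all three reported results:

```python
import math

PIXEL_MM = 7.4e-3   # CCD pixel size (7.4 um) expressed in mm
T0_MM = 4.0         # incident spot size, mm
Z_MM = 20.0         # propagation distance to the CCD, mm

for band, pixels in [("650 nm (R)", 2093), ("532 nm (G)", 1929), ("405 nm (B)", 1799)]:
    t_mm = pixels * PIXEL_MM                                   # 1/e^2 spot size
    half_deg = math.degrees(math.atan((t_mm - T0_MM) / (2 * Z_MM)))
    print(f"{band}: T = {t_mm:.1f} mm, full divergence angle = {2 * half_deg:.0f} deg")
```

Running this reproduces the reported values (15.5 mm/32°, 14.3 mm/29°, 13.3 mm/26°), which supports the assumed form of Equation (6).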
Comparing the homogenization results of simulation and experiment, it is found that the homogenized spot is no longer composed of strong diffraction orders; the energy of the homogenized spot of the D-rMLA is more discrete and uniform. The experimental and simulation results are consistent.
Finally, the energy utilization rate is tested. The powers of the incident and outgoing laser beams are measured by a power meter (accuracy of 1 µW), and the energy utilization rate is calculated by Equation (8). The power Pin of the incident beam of 650 nm (R) is 0.98 mW and the outgoing power Pout is 0.87 mW; the energy utilization rate is calculated as 89%. For 532 nm (G), Pin is 2.05 mW and Pout is 1.8 mW, giving 87%.
The power Pin of the incident beam of 405 nm (B) is 4 mW and the outgoing power Pout is 3.44 mW; the energy utilization rate is calculated as 86%. Therefore, it can be concluded that the D-rMLA structure has a high energy utilization rate. The energy loss mainly occurs through reflection at the substrate surfaces and internal absorption or scattering, since the fabricated D-rMLA is a refractive element with a continuous surface. These three types of energy loss are very small, so the energy utilization rate of the prepared structure is very high.
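The utilization figures can be cross-checked with a short script. This is a sketch; η = Pout/Pin × 100% is inferred from the reported powers and percentages, and the paper's own rounding varies by up to one percentage point:

```python
# Cross-check the reported energy utilization rates of the D-rMLA.
measurements = {                 # wavelength: (P_in, P_out) in mW
    "650 nm (R)": (0.98, 0.87),
    "532 nm (G)": (2.05, 1.80),
    "405 nm (B)": (4.00, 3.44),
}
reported_pct = {"650 nm (R)": 89, "532 nm (G)": 87, "405 nm (B)": 86}

for band, (p_in, p_out) in measurements.items():
    eta = 100.0 * p_out / p_in   # assumed form of Equation (8)
    assert abs(eta - reported_pct[band]) <= 1.0  # within the paper's rounding
    print(f"{band}: eta = {eta:.1f}%")
```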
RMSE = √( (1/N) ∑_{j=1}^{N} (I_j − Ī)² ) (7)
η = (P_out / P_in) × 100% (8)
Conclusions
This paper has proposed an integrated D-rMLA fabricated by a double-sided exposure technique for laser beam homogenization. The experimental results show that the uniformity of the homogenized spot for laser beams with wavelengths of 650 nm (R), 532 nm (G), and 405 nm (B) is 91%, 89%, and 90%, respectively, and the energy utilization rate is 89%, 87%, and 86%, respectively. It is verified that the integrated D-rMLA can break the interference lattice phenomenon caused by periodic MLAs and improve the homogenization quality. Compared with traditional double MLAs in series, the homogenizer greatly improves system integration and can be used in laser projection, laser backlighting, and other fields.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data will be provided on request through the corresponding author (Axiu Cao) of this article.
Conflicts of Interest: The authors declare no conflict of interest.
10,186.4
2021-06-01T00:00:00.000
[ "Engineering", "Physics" ]
Carbonic Anhydrase Inhibitors Targeting Metabolism and Tumor Microenvironment
The tumor microenvironment is crucial for the growth of cancer cells, triggering particular biochemical and physiological changes, which frequently influence the outcome of anticancer therapies. The biochemical rationale behind many of these phenomena resides in the activation of transcription factors such as hypoxia-inducible factor 1 and 2 (HIF-1/2). In turn, the HIF pathway activates a number of genes including those involved in glucose metabolism, angiogenesis, and pH regulation. Several carbonic anhydrase (CA, EC 4.2.1.1) isoforms, such as CA IX and XII, actively participate in these processes and were validated as antitumor/antimetastatic drug targets. Here, we review the field of CA inhibitors (CAIs), which selectively inhibit the cancer-associated CA isoforms. Particular focus was on the identification of lead compounds and various inhibitor classes, and the measurement of CA inhibitory on-/off-target effects. In addition, the preclinical data that resulted in the identification of SLC-0111, a sulfonamide in Phase Ib/II clinical trials for the treatment of hypoxic, advanced solid tumors, are detailed.
Introduction
The microenvironment of tumor cells is different from that of normal cells and plays a crucial role in shaping the behavior of tumors, which, in turn, frequently influences treatment outcomes as well as treatment strategies [1,2]. Many decades ago, Warburg [3] recognized this phenomenon, which later became known as the "Warburg effect" and constitutes a hallmark of many cancers: these cells are hypoxic, more acidic than normal, and possess a dysregulated glucose (and not only glucose) metabolism [3,4]. In the last 20 years, it became obvious that the orchestrators of all these phenomena are transcription factors called hypoxia-inducible factor 1 and 2 (HIF-1/2).
The understanding of the intricate biochemical and physiological processes by which HIF-1/2 sense tumor oxygen levels and regulate genes involved in metabolism (such as the glucose transporters), pH regulation (such as the monocarboxylate transporters, MCTs, and carbonic anhydrases, CAs), and angiogenesis (vascular endothelial growth factor, VEGF) led to the award of the 2019 Nobel Prize in Medicine to three scientists who contributed significantly to the field: Kaelin [5], Ratcliffe [6], and Semenza [7,8]. Their crucial discoveries and those from many other laboratories [9-17] established the basis for exploiting tumor microenvironment abnormalities to develop a new generation of antitumor therapies and drugs, which should specifically target tumor cells without relevant toxicity to normal cells and tissues [18,19]. As a result of HIF-1/2 activation, two of the proteins identified as highly overexpressed in many tumor types were the CA isoforms CA IX and XII [9-19]. CAs are a superfamily of metalloenzymes, present in all kingdoms of life, that catalyze the conversion of CO2 to bicarbonate and protons [20-26]. There are 15 CA isoforms known in humans (h), i.e., hCA I-hCA XIV, including two V-type isoforms (hCA VA and hCA VB) [27-30]. The field of CAs and their inhibitors was recently reviewed and will not be discussed in detail here [19-24]. Briefly, X-ray crystal structures are available for the two tumor-associated isoforms hCA IX and XII [25,26], as well as for many other members of the human CA family [27-30].
Such structures of the enzymes, alone and in complex with many types of inhibitors, have been highly relevant for the design of compounds with a range of applications, not only in the anticancer field but also in the renal, central nervous system, ophthalmologic, obesity, and other medical fields [31-39]. Furthermore, CA inhibitors were recently shown to be of potential use in the management of cerebral ischemia, neuropathic pain, and arthritis [40-44], conditions for which this class of pharmacologic agents was previously considered inappropriate. However, the attentive search for novel classes of compounds with efficacy and selectivity for the different isoforms involved in these quite diverse conditions resulted in proof-of-concept studies, which suggest that all catalytically active hCA isoforms may be considered interesting drug targets.
Sulfonamides and Other Classes of CA Inhibitors (CAIs): Selectivity for Tumor-Associated vs. Cytosolic Isoforms
hCA IX was discovered in 1994 by Pastorekova's group [45] and hCA XII in 1998 by Türeci et al. [46]. Both isoforms were shown to be extracellular, multi-domain proteins, with the enzyme active site situated outside the cell [45,46]. Years later, with the demonstration that both enzymes are activated through the HIF-1/2 pathway, a reason for this localization became obvious: both enzymes, together with a range of other proteins that will not be discussed here, are involved in the pH regulation and metabolism of the cancer cell [1,2,47-53]. Both enzymes possess significant catalytic activity for the CO2 hydration reaction [19-21].
The hypothesis that interfering with the activity of such proteins may have anticancer effects was proposed by Pouysségur in 2006 [18], but no selective inhibitors for the cancer-associated CAs or for other proteins involved in the regulation of the tumor microenvironment (MCTs, bicarbonate transporters, Na+/H+ exchangers, etc.) were available at that time. Thus, a program for developing CA IX/XII-selective compounds was initiated in some of our (CT Supuran and S Dedhar) laboratories. Acetazolamide 1 (Figure 1), the classical sulfonamide inhibitor, was the starting point [54,55]. Indeed, primary sulfonamides [56-58] such as acetazolamide were known for decades to potently inhibit CAs, as they bind as anions (RSO2NH−) to the metal ion in the enzyme active site, as shown in a classical crystallographic study from Liljas' group [59]. However, acetazolamide is a non-selective CAI [20], which potently inhibits most human CA isoforms and is thus not an appropriate starting point for designing isoform-selective compounds. The tail approach was therefore employed to develop CA IX/XII-selective inhibitors [60]. The idea is very simple: attach moieties that may induce the desired properties (e.g., enhanced hydrosolubility) to scaffolds of simple sulfonamides (e.g., sulfanilamide, metanilamide, and their derivatives; 5-amino-1,3,4-thiadiazole-2-sulfonamide; etc.) so that they interact with the external part of the CA active site, the region at the entrance of the cavity, which varies more between the 12 catalytically active human isoforms than the residues deeper in the active site [60,61]. Detailed kinetic and crystallographic studies were performed on a large number of derivatives (more than 10,000 sulfonamides were synthesized and investigated in our laboratories).
This research demonstrated that it was possible to design CAIs selective for the various isoforms by tailoring the dimensions (length, bulkiness, etc.) as well as the chemical nature of the various tails [60-69]. For example, the fluorescein-derivatized sulfonamides 2 and 3 were only mildly selective for CA IX vs. CA II [47,48], but they were highly useful for understanding the role of CA IX in tumor pH regulation [47-50]. In contrast, the ureido-substituted sulfonamides 4 and 5 were members of a congeneric series in which many CA IX/XII-selective inhibitors were discovered [51,52], and their selectivity for the tumor-associated vs. the cytosolic isoforms was explained by detailed X-ray crystallographic data [70,71]. Another highly interesting approach was reported by Neri's group [72-74], who designed DNA-encoded libraries of sulfonamides and identified highly potent in vitro CA IX inhibitors in a series of very interesting papers [75-77]. Carbonic anhydrase inhibitors (CAIs) that were crucial for the validation of CA IX/XII as antitumor targets: acetazolamide 1 is the classical inhibitor, in clinical use for decades; the fluorescein-derivatized sulfonamides 2 and 3 were used in proof-of-concept studies to demonstrate the involvement of CA IX in the acidification of the external pH of the tumor cell [47-50]; the ureido-substituted sulfonamides 4 and 5 were among the first CAIs to show significant antitumor effects in animal models of hypoxic tumors [51,52], together with the coumarin inhibitors 6 and 7 [53]. Thus, by 2009-2010, sulfonamide CAIs targeting CA IX with high specificity were available [78], which fostered renewed interest in targeting not only primary tumors but also metastases [79-81]. However, in parallel with the studies mentioned above, the last decade also brought important developments in the discovery of non-sulfonamide CAIs.
The first highly relevant new class of CAIs is represented by the coumarins [82-85]. The coumarins were shown to act as "prodrug inhibitors": they undergo CA-mediated hydrolysis of the lactone ring (as in compounds 6 and 7 of Figure 1), which generates the de facto inhibitor, belonging to the 2-hydroxycinnamic acid class of compounds. Extensive kinetic, crystallographic, and synthetic efforts led to a thorough understanding of this innovative inhibition mechanism [82-85]. In fact, the 2-hydroxycinnamic acids formed by coumarin hydrolysis do not interact with the catalytic metal ion but instead bind at the entrance of the active site cavity, in a highly variable region of the different CA isoforms, which explains the very isoform-selective inhibitory effects of many representatives of this class of CAIs [82-85].
Furthermore, this new inhibitory class inspired the discovery of many other chemotypes, such as the sulfocoumarins and homosulfocoumarins by the Zalubovskis group [86-88], the homocoumarins [89], the thiocoumarins by Ferraroni et al. [90], etc.
Measurement of Inhibition Efficacy
All the inhibitors discussed above, and many others not mentioned here, have been profiled for their inhibition of many CA isoforms using a stopped-flow CO2 hydrase assay, originally reported in 1971 by Khalifah [112] and validated by others as a rapid method for determining the enzyme kinetics and inhibition constants of various classes of compounds against different CAs [113-118]. In a recent paper, Jonsson and Liljas [119] considered some inhibition data from several papers from the Supuran group and queried the enzyme and substrate concentrations used in some experiments. Many of the early stopped-flow assay papers cited above [113-118], which are not from Supuran's group but from several well-established laboratories, provided exactly the same level of information; that is, the range of CO2 concentrations at which the experiments were performed was reported, without detailing enzyme concentrations. On the contrary, in many of our papers, the enzyme concentrations are reported for either the CO2 hydrase or esterase assays [60,62,68,69,120-127], and they range between 3.5 and 14 nM. The vast majority of the analyses performed by Jonsson and Liljas [119] are based on supplementary information reported in six of our papers (two of the other papers they considered contained CA inhibition data obtained from a completely different laboratory). The presumed lack of precision results from erroneously typed enzyme concentrations in the supplementary information of one of the analyzed papers [128].
That is, a concentration of 10−7 M was erroneously written, which is, in fact, the stock solution of the enzyme and not the enzyme concentration at which the measurements were performed. As mentioned above, in most of our experiments, we work at enzyme concentrations ranging from 3.5 to 14 nM, and sometimes even lower when exceedingly strong inhibitors are analyzed. Jonsson and Liljas [119] also raised a query about the enzyme inhibition curves from the supplementary information of another paper [129]. For these figures, the uncatalyzed reaction was not subtracted in the curves plotted in the supplementary figures, but those values were automatically subtracted when the IC50 was calculated, by the algorithm in the software of the stopped-flow instrument. Despite these minor errors (now corrected through submission of errata), the inhibitory constants (Ki values) determined by the University of Florence stopped-flow kinetic measurements of CA enzyme inhibition have been validated by native mass spectrometry (MS) measurements performed in another laboratory on a number of sulfonamides, which gave results in excellent agreement with the kinetic inhibition data [130,131]. Dissociation constants (Kd values) were obtained from native mass spectrometry measurements of buffered aqueous solutions containing hCA I and hCA II with acetazolamide, ethoxzolamide, brinzolamide, furosemide, dichlorophenamide, and indapamide, which are well-known CAIs [20]. Kd values were obtained by either measuring individual ligand-protein interactions in single experiments or simultaneously in competition experiments (Table 1) [130]. Kd values are equivalent to Ki values for inhibitors that bind at the site of the substrate, which is the case for sulfonamide CAIs. The agreement between the native MS measurements and those from the stopped-flow kinetic assay was excellent.
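As a concrete reading of this agreement, a Kd measurement lies "within X%" of the corresponding Ki when the relative deviation |Kd − Ki|/Ki does not exceed X/100. A minimal sketch of that check, with invented example values rather than data from the cited studies:

```python
# Illustrative Kd-vs-Ki agreement check; the numbers below are invented
# for demonstration and are not taken from the cited studies.

def within_percent(kd: float, ki: float, percent: float) -> bool:
    """True when the relative deviation |Kd - Ki| / Ki <= percent / 100."""
    return abs(kd - ki) / ki <= percent / 100.0

# A hypothetical nanomolar inhibitor: Ki = 5.0 nM from stopped-flow kinetics
# and Kd = 6.2 nM from native MS give a 24% relative deviation,
# i.e., the two measurements agree to within 30%.
print(within_percent(kd=6.2e-9, ki=5.0e-9, percent=30))  # True
```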
For example, the measured Kd values of all nanomolar inhibitors were within 30% of the Ki values obtained by stopped-flow kinetic experiments. Moreover, Kd values obtained by native MS for the micromolar inhibitors were within 50% of the Ki values. More recently, native MS was used to measure the Kd values of 15 perfluoroalkyl substances with either carboxylate, sulfate, or sulfonamide zinc-binding groups to hCA I and hCA II [131]. The 30 measured Kd values obtained from native MS measurements were also in excellent agreement with the corresponding Ki measurements (data not shown here) [131]. In fact, the Kd values were within 37% of the Ki values. The native MS approach to measuring Kd values has also been validated for ligand-DNA interactions compared to other well-established biophysical methods (isothermal titration calorimetry and differential scanning calorimetry) [132]. Table 1. Measured dissociation constants (µM) of six sulfonamides to human (h)CA I and hCA II using native mass spectrometry with nanoscale ion emitter tips in individual ligand-protein binding experiments (Single) and simultaneously in a competition experiment (Competition). Data adapted from Reference [130]. Overall, these native MS data strongly support that Ki values can be obtained using the stopped-flow kinetic method for a range of different types of CAIs with sufficient accuracy for drug discovery and development applications.

Drug Design Studies Using SLC-0111 as Lead Compound Compound 5, SLC-0111, possesses a variable affinity for the diverse hCA isoforms and selective inhibitory action toward the tumor-expressed CA isoforms IX and XII over the off-target ubiquitous hCA I and hCA II [51,52,70,81].
Compound 5 is characterized by the presence of the ureido functionality as a linker between the benzenesulfonamide fragment and the tail of the inhibitor. By means of X-ray crystallography on SLC-0111, it was demonstrated that the reason for the isoform selectivity is the ureido linker, which allows great flexibility to the CAI "tails" that may adopt a range of conformations and participate in different interactions within the enzyme active site [52,70]. Such different favorable/unfavorable contacts between the inhibitor tail and the enzyme active site lead to different inhibition profiles for the entire class of sulfonamides to which SLC-0111 belongs [52,70]. The different tail orientations allow specific interactions between the inhibitor tail and amino acid residues at the entrance of the active site cavity, which is the most variable region in the various α-CA isoforms with medicinal chemistry applications, such as, for example, CA I, II, IX, and XII [19][20][21][138][139][140][141][142]. SLC-0111 binds selectively to hCA IX, coordinating through the SO2NH− moiety (the deprotonated form of the sulfonamide group) to the positively charged Zn(II) ion in the CA active site. In addition, a second strong contact with the Zn(II) ion involves an oxygen of the sulfonamide. In contrast, these interactions are either weak or absent in the case of the hCA II isoform. The large difference in the electrostatic interactions accounts for the selectivity of the ligand for hCA IX. By means of computational approaches, it was suggested that the potency of SLC-0111 against isoform IX is due to the hydrophobic contacts, whereas the selectivity is due to the electrostatic interactions [143].
As a front-runner selective inhibitor for the tumor-associated isoform CA IX, and being currently in Phase Ib/II clinical trials, SLC-0111 has been utilized as a lead CAI for the development of novel promising small molecules with selective inhibitory activity toward CA IX, and with good druggability and lead-likeness characters. Several drug design approaches have been utilized to develop a range of new SLC-0111 analogs. The following subsections present an overview of most of these drug design approaches.
Modification of the SLC-0111 Benzenesulfonamide Moiety In 2012, Gieling et al. developed new sulfamate-based SLC-0111 analogs via replacement of the sulfamoyl zinc-binding group (ZBG) in SLC-0111 with the sulfamate functionality (Compound 8, Figure 2). The reported sulfamate derivatives in this study displayed selective CA IX/XII inhibition and also inhibited the migration and spreading of breast cancer MDA-MB-231 cells. One of these sulfamate-based CAIs efficiently inhibited the development of MDA-MB-231 metastases in the lung without signs of toxicity [144]. In another study, Carta et al. reported new SLC-0111 regioisomers (Compound 9, Figure 2).
The obtained results revealed that shifting of the sulfamoyl ZBG in SLC-0111 from the para- to the meta-position elicited a decrease in the effectiveness toward hCA IX with a significant improvement of the inhibitory action against hCA II, which was detrimental to the selectivity profile for this regioisomer [145]. On the other hand, Bozdag et al. reported the design and synthesis of new SLC-0111 congeners incorporating a 2-aminophenol-4-sulfonamide moiety to control the tail flexibility (Compound 10, Figure 2). The phenolic OH was able to establish an intra-molecular five-membered ring with the ureido NH group, which may provoke a C2-N' rotational restriction leading to a 20-fold enhanced hCA II/hCA IX selectivity ratio in comparison to SLC-0111 [146].
The discovery of other SLC-0111 analogs was continued through two new studies that described the synthesis of novel sets of thioureido and selenoureido CAIs. In these studies, the SLC-0111 ureido oxygen was replaced with a sulfur or selenium atom (Compound 13, Figure 3) [150,151]. The inhibition profile for both thioureido and selenoureido SLC-0111 congeners showed a loss of selectivity for the inhibition of the cancer-associated cytosolic hCA isoforms [150,151]. Moreover, Eldehna et al., in 2019, developed a series of 3/4-(3-aryl-3-oxopropenyl)aminobenzenesulfonamide derivatives as novel SLC-0111 enaminone analogs (Compound 14, Figure 3). All the reported enaminones exhibited good selectivity toward hCA IX over hCA I and II. The structure-activity relationship (SAR) outcomes highlighted the significance of the incorporation of a bulkier aryl tail such as the 2-naphthyl ring [151]. In order to manipulate the flexibility of the SLC-0111 ureido linker, the urea linker outer nitrogen atom was incorporated into a piperazine ring to produce rigidified SLC-0111 analogs (Compound 15, Figure 3). The rigid congeners displayed a reduction of CA IX/CA II selectivity, although some nanomolar CA IX inhibitors were obtained [152]. Furthermore, to better understand the importance of a rigid heterocyclic scaffold, the piperazine ring was substituted with piperidine (Compound 16, Figure 3). In this compound series, a hydrazinocarbonyl-ureido moiety was introduced for the tail of the inhibitors. The NH group of the hydrazide moiety may provide a supplementary H-bond donor, which can better interact with the amino acid residues in the hydrophilic region of the active site. Depending on the substitution pattern at the piperidino ring, several hydrazidoureidobenzenesulfonamides inhibited CA IX at low nanomolar concentrations [153].
Modification of the SLC-0111 Tail The replacement of the 4-fluorophenyl tail with a 3-nitrophenyl group afforded a low nanomolar CA IX/XII inhibitor with good selectivity for the transmembrane over the cytosolic isoforms (Compound 4, Figure 1). The same SLC-0111 analog significantly inhibited the formation of metastases by the highly aggressive 4T1 mammary tumor cells [52]. Furthermore, novel SLC-0111 congeners were synthesized either via grafting different substituents within the phenyl tail, rather than p-fluoro (Compound 17, Figure 4), or via replacement of the 4-fluorophenyl moiety with polycyclic tails (Compound 18, Figure 4) [52]. On the other hand, replacement of the 4-fluorophenyl tail of SLC-0111 with 4-arylthiazole (Compound 19, Figure 4) and 5-arylthiadiazole (Compound 20, Figure 4) successfully improved the inhibitory activity toward hCA IX. Unfortunately, the most active thiadiazole analogs showed a decrease of the hCA IX/II selectivity as compared to SLC-0111 [154]. Another recent study has utilized the bioisosteric replacement approach to design and synthesize new SLC-0111 analogs featuring a 3-methylthiazolo[3,2-a]benzimidazole moiety as a tail connected to the zinc-anchoring benzenesulfonamide moiety via a ureido linker (Compound 21, Figure 4). Thereafter, the ureido linker was either elongated (Compound 22, Figure 4) or replaced by an enaminone linker (Compound 23, Figure 4) [155]. The results obtained from the stopped-flow CO2 hydrase assay elucidated that three compounds possessed single-digit nanomolar CA IX inhibitory action with a good selectivity profile toward hCA IX over hCA I and II. Moreover, these compounds exerted effective anti-proliferative and pro-apoptotic activities toward breast cancer MCF-7 and MDA-MB-231 cell lines.
It is worth noting that the molecular docking analysis disclosed that the thiazolo[3,2-a]benzimidazole moiety is a good bioisostere for the SLC-0111 phenyl tail due to its ability to establish many hydrophobic interactions within the hCA IX and XII active sites, as well as to involve the sp2 nitrogen of the tricyclic ring in hydrogen bonding. The fluorine atom from SLC-0111 was also replaced by metal-complexing polycyclic amines, which coordinate positron-emitting (PET) metal ions (e.g., 111In and 90Y) for PET imaging [156].

Development of SLC-0111 Hybrids In 2016, Eldehna et al.
utilized a hybrid pharmacophore approach to merge the pharmacophoric elements of the isatin, a privileged scaffold in cancer drug discovery, and SLC-0111 in a single chemical framework to develop a new series of novel isatin-SLC-0111 hybrids (Compound 24, Figure 5) [157]. Whilst most of the prepared hybrids exerted excellent inhibition of hCA XII in the sub- to low-nanomolar range, they weakly inhibited hCA IX.
Thereafter, a structural extension approach has been exploited through N-alkylation and N-benzylation of the isatin moiety in order to enhance the hydrophobic interactions of the tail within the CA IX binding site, with the prime goal of improving the activity and selectivity toward the CA IX isoform (Compound 25, Figure 5). As planned, an improvement of the hCA IX inhibitory activity of the hybrids, in comparison to the N-unsubstituted counterparts (Compound 24), was achieved. Furthermore, one of the developed N-substituted isatin-SLC-0111 hybrids showed potent VEGFR-2 inhibitory activity and good anti-proliferative action toward breast cancer MDA-MB-231 and MCF-7 cell lines under hypoxic conditions. Furthermore, it disrupted the MDA-MB-231 cell cycle via alteration of the Sub-G1 phase and arrest of the G2-M stage, as well as resulting in a significant increase in the percentage of Annexin V-positive apoptotic cells [158].

In Vivo Studies, Preclinical and Clinical Trials of CA IX/XII Inhibitors Overall, the methodology used to determine the inhibitory constants for the various compounds in the papers discussed above, validated by independent technologies (see above), represents bona fide, accurate data, which in no way compromise the development of CA IX- and CA XII-specific inhibitors for validation in preclinical cancer models and in clinical trials, based on the vast biological literature demonstrating CA IX/CA XII as promising cancer therapeutic targets in hypoxic solid tumors. The compounds from which SLC-0111, the lead clinical CA IX/CA XII inhibitor, was derived (5 in Figure 1) were identified by the Supuran group [51,52].
These ureidobenzene sulfonamide compounds were then assessed for "druggability" criteria, including ADME (Absorption, Distribution, Metabolism, Excretion) analysis, from which SLC-0111 was selected for further in vitro and in vivo analysis in appropriate cancer models. The evaluation of CA IX/CA XII inhibitors as anticancer compounds takes a different path from the development of other cytotoxic anticancer drugs. This is because the targets, CA IX/CA XII, are only expressed within the hypoxic niches of solid tumors and may represent a minor portion of the total tumor cell population. However, these hypoxic cells have the properties for self-renewal [159][160][161], migration/invasion [162][163][164], and survival in an acidic tumor microenvironment [163][164][165][166][167][168] and significantly contribute to resistance to chemo-, radiation, and immunotherapies. Thus, CA IX/CA XII inhibitors are not likely to have a major effect on tumor growth and metastasis by themselves as mono-therapeutics but need to be used in combination with chemo-, radiation-, and immunotherapies to eliminate resistant populations and for maximum durable suppression of tumor growth and metastasis.
Indeed, the extensive preclinical models carried out by several independent groups with the lead CA IX/XII inhibitors, including SLC-0111, have demonstrated that the use of such inhibitors in combination with chemotherapy agents [51,[169][170][171], immunotherapy [172][173][174], and radiotherapy [173][174][175][176] is highly important and desirable for sustained therapeutic response.
The extensive studies reported in these and other papers, utilizing multiple in-depth in vivo models, provided solid positive preclinical data to warrant the initiation of Phase 1 clinical trials in 2014, of which a Phase 1 safety trial with SLC-0111 (as a monotherapeutic agent) has been completed [177], and a Phase 1b trial is currently underway to evaluate SLC-0111 in combination with gemcitabine in metastatic pancreatic cancer patients whose tumors are CA IX positive (ClinicalTrials.gov Identifier: NCT03450018). A multitude of other combination therapy studies has been performed with CA IX inhibitors, including SLC-0111, in combination with proton pump inhibitors [178], antimetabolites [179], cisplatin [180], APE1-Ref-1 inhibitors [181], and histone deacetylase (HDAC) inhibitors [182]. All these studies showed a synergistic effect between the CAI and the second antitumor agent, as well as a lack of endothelial toxicity [183]. Other groups also used SLC-0111 in various biomedical studies in which selective inhibition of some CA isoforms was needed. These include the effects on prostate cancer cells of SLC-0111 alone or in combination with daunorubicin [184], radiobiological effects of CA IX inhibition in human breast cancer cells [185], microvascular endothelial cell pH regulation [186], glycolysis and migration suppression in pulmonary microvascular endothelial cells [187], and the involvement of CA isoforms in mitochondrial biogenesis and in the control of lactate production in human Sertoli cells [188]. All these studies confirm the usefulness of this clinical candidate in tumors and other biomedical conditions.

Conclusions The tumor microenvironment is critical in cancer cell growth and can substantially influence the outcome of anticancer interventions. Transcription factors, such as HIF-1/2, can activate a number of key genes, including those involved in tumor pH regulation.
The human carbonic anhydrase isoforms CA IX and XII have active roles in regulating the extracellular pH in cancers, including in advanced solid metastatic tumors. As a result of careful preclinical studies involving X-ray crystallography [189] and the screening of tens of thousands of potential inhibitors [190][191][192][193] by use of a functional kinetic enzyme inhibition assay based on stopped-flow kinetic measurements, key lead compounds for CA IX and XII, including the sulfonamide CA inhibitor SLC-0111, have been discovered, developed, and validated. The results from the stopped-flow kinetic enzyme inhibition assay are in excellent agreement with: (i) orthogonal native mass spectrometry measurements that can be used to directly measure protein-ligand interactions and (ii) kinetic inhibition measurements that do not involve a stopped-flow apparatus. Thus, the stopped-flow kinetic method has been used to screen tens of thousands of potential CA inhibitors with sufficiently high accuracy for drug discovery and development applications. Based on the vast literature demonstrating the development and validation of specific CA IX and CA XII inhibitors, including in preclinical cancer models and in clinical trials, these two enzymes are promising cancer therapeutic targets in advanced, hypoxic solid tumors. We anticipate that the use of selective CA IX/XII inhibitors, such as SLC-0111, will be beneficial in combination with chemo-, radiation-, and immunotherapies to eliminate resistant cancer cell populations and for maximum durable suppression of tumor growth and metastasis.

Author Contributions: All authors contributed to the original draft preparation, review, and editing of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Conflicts of Interest: All authors except F.C., S.D. and C.T.S. declare no conflict of interest. F.C., S.D. and C.T.S. are inventors of SLC-0111.
The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
A Comparison Analysis of BLE-Based Algorithms for Localization in Industrial Environments: Proximity beacons are small, low-power devices capable of transmitting information at a limited distance via the Bluetooth low energy protocol. These beacons are typically used to broadcast small amounts of location-dependent data (e

Introduction In recent years, similarly to what is happening in cities and public spaces, smart and connected objects are increasingly being deployed also in industrial environments. One of the effects of the development of this so-called smart industry (also known as Industry 4.0) is an increase in the number and type of automated vehicles that move in industrial environments. In this context, indoor localization systems become important to improve logistics effectiveness and productiveness. In many industrial sectors, the quick and automatic localization of goods (components, instruments, partial assemblies, etc.) in large environments would provide a critical production speed-up. For instance, a critical problem in the automotive industry is the quick localization of partially assembled vehicles which are temporarily removed from the pipeline and placed in large warehouses, e.g., while waiting for the availability of some of the parts required for their completion. In the literature, several existing technologies like the global positioning system (GPS) [1], Wi-Fi [2], and Bluetooth [3] have been adapted to provide indoor localization services in domestic environments. While GPS performance indoors is poor [1], Wi-Fi and Bluetooth show promising results, with the latter being particularly interesting due to its low cost and energy consumption features [4].
The use of Wi-Fi for indoor positioning is well established and is known to yield errors in the order of a few meters [5]. However, the requirements in terms of topological distribution and number of Wi-Fi access points, together with the associated costs and power consumption, make this solution unfeasible without consistent retrofitting, which is often undesirable in existing industrial plants (e.g., making power outlets available at deployment points). Bluetooth low energy (BLE) uses the same frequency as Wi-Fi (2.4 GHz) but is designed as a short-range, energy-efficient communication protocol, allowing compatible devices to communicate through short messages with minimal overhead [6]. Thanks to these features, BLE-compatible devices can transmit periodic messages for years within a single cycle of a small form-factor battery [7], thus being perfectly suited for easy installation in any environment. BLE-based localization is typically performed by installing a set of proximity beacons (i.e., BLE transmitters) at known locations. Receivers extract the RSSI (which is a proxy of the distance from the sender) from the nearest beacons and use these values to predict their own position [8]. In general, BLE-based localization algorithms can be split into two categories: distance-based and fingerprinting-based [9]. Distance-based algorithms directly translate RSSI values into position coordinates for the object being localized. These methods require that at least three RSSI measurements from nearby beacons are available, and apply either trilateration or min-max to transform these measurements into position estimates [10]. In contrast, fingerprinting-based algorithms exploit a vector of RSSI measurements in known fingerprint positions to create a so-called reference fingerprint map (RFM). Then, a machine-learning regressor is fed with the RFM data to build an association rule between new RSSI measurements and their corresponding position estimates [11].
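The distance-based pipeline just described can be sketched as follows (an illustration, not the implementation evaluated in this paper): RSSI is converted to a distance with a log-distance path-loss model, and at least three distances are combined with the min-max method. The reference RSSI at 1 m and the path-loss exponent n are assumed calibration constants, and the beacon positions and RSSI values below are made up.

```python
# Sketch of distance-based BLE localization: path-loss inversion + min-max.

def rssi_to_distance(rssi, rssi_at_1m=-60.0, n=2.0):
    """Invert the log-distance model rssi = rssi_at_1m - 10 * n * log10(d)."""
    return 10.0 ** ((rssi_at_1m - rssi) / (10.0 * n))

def min_max(beacons, distances):
    """Intersect the axis-aligned boxes of half-side d centered on each
    beacon and return the center of the intersection as the estimate."""
    left = max(x - d for (x, _), d in zip(beacons, distances))
    right = min(x + d for (x, _), d in zip(beacons, distances))
    bottom = max(y - d for (_, y), d in zip(beacons, distances))
    top = min(y + d for (_, y), d in zip(beacons, distances))
    return ((left + right) / 2.0, (bottom + top) / 2.0)

# Three beacons at known positions and their (invented) RSSI readings.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
distances = [rssi_to_distance(r) for r in (-60.0, -80.0, -80.0)]
x, y = min_max(beacons, distances)
```

Trilateration would instead solve a least-squares circle-intersection problem; min-max is shown here because it needs only comparisons and averages.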
Although these techniques have been proven effective in domestic settings, smart industry environments pose several new challenges that need to be addressed: the areas where objects have to be localized are much larger than in the domestic case, and they constitute a harsh environment for BLE-based localization, due to the presence of large metal objects (robots, carts, etc.) causing signal interference, multi-path propagation and shadowing. Despite the vast literature on BLE-based localization, very few works perform detailed comparisons between different algorithms under the same experimental conditions (i.e., target environment, type and number of beacons and receivers, etc.). In particular, to the best of our knowledge, no one focuses on applying these methods in real industrial environments. In this work, we perform such a comparison considering two real industrial settings (i.e., a shed and a workshop). We evaluate the effectiveness of several state-of-the-art algorithms (both distance-based and fingerprinting-based) under varying experimental parameters, changing the number of installed beacons and the density of the RFM. Moreover, we assess the robustness of fingerprint-based algorithms to variations in the environment and to random noise. Our goal is to assess the feasibility of indoor localization in industrial environments using commercial off-the-shelf BLE transmitters and receivers, due to their low cost, ready availability and ease of installation and usage. The experiments performed are consequently tailored for this target, and are based on commercial devices and protocols currently available on the market. Similarly, the goal of this paper is to test the effectiveness of state-of-the-art localization algorithms for BLE localization in an industrial scenario. Therefore, we do not aim at designing a new specialized algorithm for this task, and we just optimize the hyper-parameters of existing solutions to the best values for our target.
Our results confirm that fingerprinting-based methods reduce the positioning error compared to distance-based methods, even in an industrial setting. However, we could not identify a clear winner among the three fingerprinting-based algorithms considered in our experiments, as they all perform similarly both in terms of positioning error and of computational complexity. Therefore, we base our final selection on simplicity and flexibility, which leads us to choose k-NN, as explained below. The rest of this paper is organized as follows. Section 2 reviews relevant literature solutions. Section 3 presents the localization algorithms analyzed in this study and Section 4 discusses the experimental results. Finally, Section 5 offers our concluding remarks. Background and Related Work Indoor localization methods based on the received signal strength indication (RSSI) from BLE devices have become mainstream thanks to their low cost, wide coverage, and the simple hardware required. In comparison, more accurate alternatives such as those based on the angle of arrival (AOA) or on the time of arrival (TOA) require significantly more expensive hardware, since they rely either on an array of BLE antennas or on a very accurate synchronization between transmitter and receiver [12]. The RSSI is a measurement of the power present in a received radio signal. It is typically computed as the ratio between the received signal power and the transmission power, which is a known value set by the manufacturer and attached to each packet sent by the beacons [13]. As mentioned above, RSSI is generally used for two types of localization methods: distance-based methods [10] and fingerprint-based methods [14]. While the former try to directly convert RSSI values to distances, the latter use an RFM to build a model that associates new RSSI measurements to a position.
The two main distance-based methods are trilateration [11] and min-max [15]. Both convert RSSI values to distances between the beacon and the localized device using the quadratic decay law of power with respect to distance. Trilateration determines the location of an object combining measurements from three spatially-separated known locations (beacons). The position of the device is obtained as the intersection of the three circumferences having the location of one beacon as the center and the computed distance as radius [10]. The min-max algorithm constructs a square bounding box around each beacon using its known coordinates as center and two times the distance from the target as side length of the box. Next, the intersection of the three boxes is computed selecting the min-max coordinates. This approach is computationally simpler than trilateration, but it is also more prone to errors. In both cases, the error obtained by distance-based methods is influenced by the number of installed beacons and in particular by their density (number of beacons per square meter). Moreover, trilateration and min-max are influenced by multi-paths and fading. Multi-paths are propagation phenomena that occur when radio signals reach the receiving antenna by two or more paths. Multi-path propagation causes constructive or destructive interference and phase shifting of the signal, which in turn can vary the measured RSSI. Fading is a more generic variation of the attenuation rate of a signal due to obstacles or frequency disturbances, which can result in a loss of signal power.
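As a concrete illustration, the RSSI-to-distance conversion and the two distance-based methods described above can be sketched as follows. This is a minimal sketch, not the paper's implementation: the beacon coordinates, the path-loss exponent n = 2 (the quadratic decay law), and the calibrated 1 m transmit power are illustrative assumptions.

```python
import math

def rssi_to_distance(rssi, tx_power, n=2.0):
    """Convert an RSSI reading (dBm) to a distance estimate (m) with the
    log-distance path-loss model; n = 2 is the quadratic (free-space)
    decay law. tx_power is the calibrated RSSI at 1 m from the beacon."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilateration(beacons, dists):
    """Closed-form trilateration, taking B1 as the origin and placing B2
    on the x-axis (the U / Vx / Vy notation used in Section 3)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = dists
    U = x2 - x1                      # distance between B1 and B2
    Vx, Vy = x3 - x1, y3 - y1        # coordinates of B3 relative to B1
    x = (r1**2 - r2**2 + U**2) / (2 * U)
    y = (r1**2 - r3**2 + Vx**2 + Vy**2 - 2 * Vx * x) / (2 * Vy)
    return x1 + x, y1 + y

def min_max(beacons, dists):
    """Min-max: intersect one bounding box per beacon (side 2*distance)
    and return the center of the intersection."""
    lo_x = max(bx - d for (bx, _), d in zip(beacons, dists))
    hi_x = min(bx + d for (bx, _), d in zip(beacons, dists))
    lo_y = max(by - d for (_, by), d in zip(beacons, dists))
    hi_y = min(by + d for (_, by), d in zip(beacons, dists))
    return (lo_x + hi_x) / 2, (lo_y + hi_y) / 2
```

With exact distances, trilateration recovers the position exactly, while min-max returns the slightly biased center of the intersection box, which illustrates why it is more error-prone.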
Several works use distance-based positioning with BLE beacons in various domestic or public environments. In [16], the authors use trilateration in a hall of 7 m × 5 m where the maximum distance from the beacons to the target is less than 9 m. Despite the small size of the environment, their experiments using three beacons produce an average error of 2.766 m. In [17], the authors optimize the trilateration algorithm applying a Kalman filter to reduce noise in the RSSI. Park et al. [18] calculate the distance using the RSSI and then refine the result applying two Kalman filters to reduce the error. Using six beacons in a polygonal living room (8.5 m × 13 m), they obtain an average error of 1.77 m. In [19], the authors use the min-max algorithm to obtain the object position in three environments: a classroom and two construction sites. The classroom has size 7 m × 6.4 m and the distance between beacons varies from 3 m to 4.8 m; the average error obtained is 1.55 m. The two construction sites have the same size (6.3 m × 5.1 m) and are tested with six and three beacons respectively, positioned with a spacing ranging from 1 m to 3.4 m; the average errors obtained are 1.22 m and 2.58 m.
Fingerprinting refers to the type of algorithms that estimate the location of an object exploiting previously recorded data at known positions. Therefore, RSSI-based location fingerprinting requires a preliminary set-up stage. An inspection of the environment is performed to select reasonable data collection points to create the RFM. The coordinates of these points and the corresponding RSSI values from nearby beacons are collected. Then, these data are used to build a model that associates a new RSSI measurement to its corresponding location. Several types of models, typically based on machine learning, are considered in the literature. Therefore, fingerprinting allows for partially accounting for multi-paths and fading due to fixed (not moving) elements of the environment, as these effects are measured also in the "training" RFM data. In [20], the authors use the k-nearest neighbors (k-NN) algorithm with k = 8 and different numbers of fingerprint points in a living room of 6.66 m × 5.36 m. With an RFM of 50 points, they obtain an average localization error of 1.03 m, while with an RFM of just eight points they obtain an average error of 1.91 m. Dierna and Machì [21] describe an experiment with 31 beacons in a museum of 670 m² using two different sampling densities to create the RFM and a k-NN model with k = 3. In a first experiment, the density is varied from 0.7 reference points per square meter in museum halls to one point every four square meters in the courtyard (224 total RFM points). The authors also experiment with averaging RSSI measurements over multiple samples, obtaining an error that varies from 1.72 m (1 sample) to 1.28 m (10 samples). Then, the authors try to increase the RFM sampling density in proximity of structural points, obtaining 239 total points and reducing the error to 1.70 m (without averaging). In [22], the authors develop an Iterative Weighted k-NN (IW-kNN) for fingerprinting-based localization. They select k = 4 and use 10 beacons in a test room of 18.8 m × 12.6
m, obtaining an average error of 2.52 m. In [23], the authors propose a k-NN with k = 1 in a small office room. Using 20 beacons and an RFM of 120 points, the average error obtained is 1.9 m. Other studies map the localization problem to a regression or classification task using support vector machines (SVM) or neural networks (multi-layer perceptrons, MLPs). In [24], the authors test different configurations of a multi-layer perceptron in two different locations, a room of 4 m × 4 m and a 36 m-long corridor. In the test room, they use four beacons and 16 RFM points, obtaining an average error of 2.48 m. In the test corridor, nine beacons and 36 RFM points yield an average error of 1.25 m. In [25], the authors use an SVM in a living room of 3.53 m × 8.09 m divided in six zones, with two beacons placed in the middle of the room, equidistant from the walls. A set of RSSI values is taken in different zones of the room and out of it to test the algorithm. The SVM shows an accuracy of 88.54% in classifying the zone. Both distance-based and fingerprint-based localization algorithms can be applied either directly on the raw RSSI values measured at the receiver, or after extracting some higher-level features by aggregating or processing those raw values. Table 1 summarizes the feature extraction methods adopted in the relevant literature works on this problem. As shown in the table, many works [4,5,9,12,17,19,25–29] do not perform any feature extraction and work directly on raw RSSI data. Another common solution [10,11,14,18,21,23] is to use as feature the average of RSSI measures over a short interval of time (from a few seconds to some minutes) for noise rejection. Other authors [15,22] also use the average RSSI over a time window, but remove outliers before computing it. Mazan et al. [24] use the RSSI sample maximum rather than the average, while Grzechca et al.
[16] consider several different summary statistics as possible features for localization (sample average, median, etc.). Eventually, however, they determine that the average/moving average is the feature that yields the best results. Lastly, Daniş et al. [20] store the entire histogram of RSSI measurements over a time interval, where each bin of the histogram could be considered as a different feature. However, they do not use these features to perform localization. Rather, they use them to generate a dense RFM from a sparse one. According to the literature [27], there are many factors affecting the accuracy of BLE-based localization. When a receiver moves through the environment, the RSSI signal can vary abruptly due to the presence of walls and objects. The signal can penetrate differently through different materials and interact with objects as it moves along different paths through the environment. Furthermore, fingerprinting schemes require regular review to ensure the accuracy of the map. If environmental changes (e.g., movements of large objects) occur, the performance of fingerprinting methods can degrade drastically. More specifically, the positions of furniture elements, the density of people and even the positions of walls and partitions can influence the measurements. In [28], the authors analyze the problem of multi-paths and possible mitigation schemes when the beacon signals are transmitted using different frequencies (2402 MHz, 2426 MHz, and 2480 MHz). They also verify that positioning accuracy increases with the number of beacons measured in each fingerprint point, up to a threshold of around 6-8, beyond which there is no quantifiable improvement. As mentioned in Section 1, while all these factors are relevant even in the case of a domestic or public space, their impact is expected to increase in an industrial scenario, due to the larger size of environments and the abundance of reflecting materials such as metal. However, none of the aforementioned papers compared
different BLE-based localization algorithms in the context of a Smart Industry application, which is exactly the main contribution of this work. Localization Algorithms In this work, we considered one distance-based algorithm (trilateration) and three fingerprint-based algorithms (k-NN, SVM, MLP). These algorithms have been selected as those that provide the most consistent results across different types of environments in the literature (see Section 2). All considered fingerprint-based algorithms could be used for either classification or regression. The former solution could be used when the goal is simply to identify the area (e.g., room) where the target object is located, whereas the latter approach is needed to obtain precise coordinates. Consequently, we use all three fingerprint-based algorithms in regression mode. In the following, we describe each algorithm in detail, focusing in particular on the hyper-parameter settings used in our experiments. The trilateration algorithm (see Figure 1) is a sophisticated version of triangulation which calculates the distance from the target object directly, rather than indirectly through the angles. Data from a single beacon provide a general location of a point within a circle area. Adding data from a second beacon allows for reducing the plausible area to the overlap region of two circles, while adding data from a third beacon reduces it to a point. In the figure, U is the known distance between beacon 1 (B1) and beacon 2 (B2), while Vx and Vy are the coordinates of beacon 3 (B3) with respect to B1. The radii of the three circles, obtained from RSSI measurements, are r1, r2, and r3. With this notation, taking B1 as the origin and placing B2 on the x-axis, the coordinates of the object P in a 2D space are calculated using the following equations:

x = (r1² − r2² + U²) / (2U)
y = (r1² − r3² + Vx² + Vy² − 2 Vx x) / (2 Vy)

The k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method that can be used for both regression and classification [29]. When k-NN is used for regression, the output of the algorithm is the property
value of the object (i.e., its position), calculated as a weighted average of the values of its k nearest neighbors, where k is the main hyper-parameter of the algorithm. k-NN does not require a proper training phase, as it just requires storing the RSSI values and corresponding coordinates of the training data. At inference time, the k nearest neighbors of a new sample are simply identified evaluating a distance metric between its RSSI measurements and those of all training data. In our work, we tested different values of k and selected the one that yielded the lowest median error on the test data set. We found that the optimal value of k did not depend on the density of beacons present in the environment. However, our tests only included either three or four beacons in two different environments, hence the optimal value may change in a different scenario. Nonetheless, finding the best value of k is a straightforward process if the number of test points is not extremely large, since this is the only hyper-parameter of the algorithm. The value of k used in our experiments is reported in Section 4.1.
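The weighted k-NN regression described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code: the inverse-distance weighting scheme is a common choice assumed here, since the exact weights are not specified in the text.

```python
import numpy as np

def knn_localize(rfm_rssi, rfm_pos, query_rssi, k=3):
    """Weighted k-NN regression over the RFM.
    rfm_rssi:   (n_points, n_beacons) RSSI fingerprints,
    rfm_pos:    (n_points, 2) ground-truth (x, y) coordinates,
    query_rssi: (n_beacons,) new measurement.
    Neighbors are weighted by inverse distance in RSSI space."""
    d = np.linalg.norm(rfm_rssi - query_rssi, axis=1)
    idx = np.argsort(d)[:k]              # indices of the k nearest fingerprints
    w = 1.0 / (d[idx] + 1e-9)            # inverse-distance weights
    return (rfm_pos[idx] * w[:, None]).sum(axis=0) / w.sum()
```

With k = 3 (the value selected in Section 4.1), a query that exactly matches a stored fingerprint is dominated by that point's weight, so the estimate collapses onto its known position.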
The multi-layer perceptron (MLP) is a type of feed-forward artificial neural network used in supervised regression and classification tasks [30]. As shown in Figure 2, an MLP typically consists of three or more layers of neurons: (i) an input layer, (ii) an output layer, and (iii) one or more hidden layers. MLPs are trained using backpropagation and exploit nonlinear activation functions (rectified linear unit, ReLU, or sigmoid functions) in the hidden and output layers to learn complex input/output relationships. In our work, the number of neurons in the input layer is equal to the number of beacons whose RSSI measurements are recorded (which varies among different environments and experiments), while the number of output neurons is fixed to 2, i.e., the predicted (x, y) coordinates of the target object. Concerning the hidden layers, we tried different combinations of number of layers and neurons. The exact type and number of layers used in our experiments are reported in Section 4.1. Lastly, support-vector machines (SVM) can also be used for supervised classification and regression [30]. When used for regression, SVMs are more properly called support-vector regressors (SVRs) and are trained to predict the output variables with a given margin of tolerance, called ε. In its basic form, the SVR training algorithm finds a hyperplane that maximizes the number of samples that fall within the margin (see Figure 3). In practice, this is a transformation of the type:

p = w r + b

where r is the vector of RSSI values measured from the beacons, p is the corresponding vector of coordinates (x, y) of the object, and w and b are a matrix and a vector of weights, which are optimized to maximize the number of training samples that fall within the margin. Nonlinear relationships are dealt with by projecting the input variables using a kernel function. For instance, the common radial basis function (RBF) kernel transforms the input r into:

K(r, r′) = exp(−γ ‖r − r′‖²)

where γ is a hyper-parameter used to optimize the hyperplane and ‖r − r′‖² is the squared Euclidean distance between two input samples (r and r′). For a sufficiently complex problem, some data will inevitably fall outside of the tolerance margin (see Figure 3). SVR training tries to minimize the total distance of these points from the margin (ξ) by including a penalization term C ∑ ξ in the objective function. The hyper-parameter C is used to weigh the importance of these outliers. In our work, we used a grid search to find the best combination of hyper-parameters for an SVR. Specifically, we varied the kernel function, the kernel coefficient γ (only when using an RBF kernel), the boundary limit ε and the penalty parameter C. We considered linear and RBF kernels, whereas γ was varied in the interval [0.01–0.08], ε in [0.02–0.08], and C over a set of candidate values. The exact kernel functions and the optimal values of the parameters found in our experiments are reported in Section 4.1. Experimental Setup In our experimental setup, we used up to four beacons as BLE transmitters. As mentioned in Section 1, we wanted to focus on off-the-shelf BLE-based devices that could be bought on the market today at a low cost to implement indoor localization in industrial environments. Specifically, we selected Estimote beacons [31] because of their low cost and long battery lifetime. These devices adopt the Apple proprietary "iBeacon" protocol [32] to broadcast information packets over BLE. Each packet includes a universally unique identifier (UUID) of the beacon and an indication of the nominal transmitted power at 1 m from the device, which is needed to convert RSSI measurements into distances.
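The SVR-based regression described in Section 3 can be sketched with sklearn, the package used in this work. The RFM data below are synthetic stand-ins generated from a path-loss model (an assumption for illustration), while the hyper-parameters are those selected by the grid search in Section 4.1; wrapping the single-output SVR in a multi-output regressor is one possible way to predict both coordinates.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

# Synthetic stand-in for the RFM: fingerprints from 3 beacons and the
# corresponding (x, y) coordinates (real data would come from the site
# survey described in Section 4.1).
rng = np.random.default_rng(0)
positions = rng.uniform(0, 10, size=(100, 2))
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dists = np.linalg.norm(positions[:, None, :] - beacons[None, :, :], axis=2)
rssi = -59.0 - 20.0 * np.log10(np.maximum(dists, 0.1))  # path-loss model

# Hyper-parameters found by the grid search (RBF kernel, gamma=0.01,
# epsilon=0.02, C=1). SVR predicts one output, so two regressors are
# fitted, one per coordinate.
model = MultiOutputRegressor(SVR(kernel="rbf", gamma=0.01, epsilon=0.02, C=1.0))
model.fit(rssi, positions)
pred = model.predict(rssi[:1])  # localize one fingerprint, shape (1, 2)
```

An MLPRegressor from the same package could be substituted for the SVR in this pipeline with no other change, which is how the three fingerprint models can share one evaluation harness.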
As a target device to be localized, we used a Raspberry Pi 3 B (quad-core ARM Cortex-A53 @ 1.2 GHz) with its on-board Bluetooth receiver. All localization algorithms have been implemented in Python using the sklearn package [33]. In order to improve the correlation between the measurements collected when building the RFM and those exploited to localize the device, the Raspberry Pi has been used for both tasks. The localization algorithms described in Section 3 have been compared in two industrial environments: • Environment A is an industrial shed of 100 m², with a 9 m high ceiling. The walls are partly masonry, partly metal, with a doorway of 4 m² on a side. The area is partially occupied by pallets and building materials of different types. • Environment B is an industrial workshop of 288 m² with a 10 m high ceiling. The area contains electronic and mechanical equipment, computers, and metal structures and was continuously in use by workers during our experiments. The walls are partly masonry, partly glass. Beacons have been positioned as equidistant as possible from each other, given the restrictions caused by obstacles in the two environments. When using four beacons, the minimum distance between them was ≈3.9 m and ≈7.7 m for Environment A and Environment B, respectively. Notice that a simple localization method based on proximity (i.e., one that simply detects the area of the environment based on the beacon that generates the strongest RSSI) would incur an error that is always greater than half of these distances (i.e., >1.85 m and >3.85 m respectively for the two environments). Therefore, these half-distances represent a sort of lower bound on the acceptable performance of any localization algorithm that exploits more than one beacon, such as trilateration and fingerprint-based ones.
Fingerprint methods have been tested with different densities of points in the RFM. Specifically, for both environments, we considered three grids of equidistant points with a distance between adjacent points of 0.5 m, 1 m and 2 m, respectively. RFM points have been collected whenever the corresponding location was not obstructed by furniture or other obstacles. This gave us a total of 192 RSSI values for training and 31 for testing in Environment A, while, in Environment B, we measured 293 RSSI points for training and 50 for testing. In accordance with many literature papers on indoor localization using BLE RSSI [10,11,14–16,18,21–23], we used a simple yet effective "feature extraction" method. Specifically, rather than using raw RSSI measurements to perform localization, we averaged the measurements over an interval of time to partially filter noise. For training measurements, we averaged the RSSI over intervals of 1 min. Testing measurements have been performed over intervals of 20 s, to avoid increasing too much the time required to obtain an estimate of the position. In both training and testing measurements, the Raspberry Pi was randomly oriented, to simulate a realistic localization scenario. The ground truth positions of training and testing points have been measured using a professional meter. For the k-NN algorithm, we tested different values of k and we obtained the best performance using k = 3. For the MLP, the lowest error was obtained with a model including three hidden layers composed of 8, 8, and 6 neurons, respectively, and using ReLU activation functions. For the SVR, the lowest median localization error was obtained with the RBF kernel, γ = 0.01, ε = 0.02 and C = 1.
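The time-window averaging used above as a feature extractor can be sketched as follows. The outlier-rejection variant (discarding samples far from the mean before averaging, as in some of the works cited in Section 2) is included as an option; the 2-sigma threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def rssi_feature(samples, trim_std=2.0):
    """Average RSSI samples collected over a time window, optionally
    discarding outliers more than trim_std standard deviations from
    the mean. trim_std = np.inf reduces to plain averaging."""
    samples = np.asarray(samples, dtype=float)
    mu, sigma = samples.mean(), samples.std()
    if sigma > 0:
        samples = samples[np.abs(samples - mu) <= trim_std * sigma]
    return samples.mean()
```

On a window of stable readings corrupted by a single deep fade, the trimmed average recovers the nominal level while the plain mean is pulled towards the outlier.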
Trilateration The error box plots for the trilateration algorithm, computed on all test points, are reported in Figure 4 for both environments. As shown in the graph, we performed measurements with either three or four beacons installed in each of the two environments, resulting in different beacon densities. In the case with four beacons, trilateration has been performed using the three strongest RSSI measurements at a given point. As expected, the localization error is influenced by the density of beacons in the area, but the impact is minimal and the trends are not straightforward. In Environment A, with four beacons covering an area of 25 m² each, we obtain a median error of 3.3 m and an interquartile range (IQR) of 2.0 m. With only three beacons covering an area of 33 m² each, the error median and IQR are practically unchanged, but the two extremes of the IQR are slightly shifted towards larger errors. For Environment B, the error clearly increases due to its wider area. For this environment, 4 beacons cover an area of 77 m² each, obtaining a median error of 5.4 m and an IQR of 8.0 m. As for Environment A, when using only three beacons (each covering an area of 103 m²), the error median is practically identical, and surprisingly the IQR decreases to 6.7 m. However, the difference is not so significant, so also in this case the two results can be considered substantially equivalent.
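The statistics reported throughout this section (median error and IQR over the test points) can be computed as in the following sketch, where the predicted and ground-truth coordinate arrays are placeholders for the actual experimental data.

```python
import numpy as np

def error_stats(pred, truth):
    """Median localization error and interquartile range (IQR) given
    (n, 2) arrays of estimated and ground-truth (x, y) coordinates."""
    err = np.linalg.norm(np.asarray(pred) - np.asarray(truth), axis=1)
    q25, q50, q75 = np.percentile(err, [25, 50, 75])
    return q50, q75 - q25
```

These are exactly the quantities summarized by the central line and the box of the box plots in Figures 4 and 7–11.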
Such large errors are clearly unacceptable for most indoor localization tasks, and clearly show the limits of simple RSSI-based distance algorithms. In fact, the median errors obtained with trilateration are significantly larger than the minimum half-distance between beacons (reported in Section 4.1), showing that this method does not reduce the error compared to a simple proximity-based localization. This is probably due to the fact that RSSI measurement precision is strongly affected by external noise sources, shadowing effects and multi-paths, which are particularly relevant in industrial environments. The fact that the RSSI is measured with a random device orientation also worsens the results. Moreover, the results in Figure 4 show that increasing the density of beacons does not yield significant improvements to the localization accuracy. Of course, if these experiments were to be extended to a much higher number of beacons (e.g., >10), trilateration could indeed perform better. However, having such a high density of BLE devices would be unlikely in an industrial context, which is characterized by large environments and where the positioning of beacons is not always easy due to the obstructions caused by industrial machinery, etc. Moreover, the experiments of Section 4.3 show that fingerprint methods can significantly reduce the median error without resorting to a larger number of beacons, hence providing a lower cost solution. For all these reasons, we conclude that the trilateration algorithm is not suitable for this type of environment and that a fingerprint method is indeed necessary. Fingerprint Methods Table 2 reports the median error and the corresponding IQR for the three fingerprinting algorithms considered in this work. Errors are reported for both environments and refer to the case where 4 beacons are installed and the RFM measurements are collected every 0.5 m. Table 2.
Median error and interquartile range (IQR) of fingerprint-based methods in Environment A and Environment B using four beacons and a fingerprint grid with one measurement every 0.5 m. These results clearly show that fingerprinting helps in improving localization accuracy significantly, both in terms of median error and in terms of localization stability (i.e., IQR). SVM and k-NN show similar performance, with k-NN achieving the lowest median error in both environments. The MLP obtains the worst performance among the three tested algorithms. This is due to the low amount of training data available, which is typical for this application, where the amount of data is limited by the number of on-site RSSI characterization recordings. In general, the error increases for Environment B due to the lower density of beacons and to the presence of metal objects causing disturbances in the signals. Error Maps Figures 5 and 6 show the maps of the localization error of the fingerprint algorithms for both environments. The positions of the four beacons are represented by white diamonds in the figures. The plots refer to the same experiment described in the previous section (four beacons, one RFM measurement every 0.5 m).
Interestingly, Figure 5 shows that the area with the largest error (>3 m) for Environment A is towards the left side of the room, where the wall is made of metal. Similarly, Figure 6 shows that SVM and MLP obtain a larger error (>10 m) in the bottom-right area, where there is a doorway made of metal and glass. k-NN seems to be less affected by the disturbances caused by these materials. In general, the map highlights that k-NN obtains the most stable estimate across the area of the two environments. Table 3 reports the percentage of testing points whose error is lower than the minimum half-distance between beacons (reported in Section 4.1). As shown, for both environments, more than half of the testing points achieve a localization error that is smaller than the best result achievable with proximity-based localization, thus proving the effectiveness of fingerprint-based methods. Similarly to trilateration, we evaluated the impact of beacon density on the localization error of the fingerprinting algorithms. As before, we tried decreasing the number of beacons in each environment from 4 to 3, leaving the density of the RFM grid unchanged (one measurement every 0.5 m). The results of this experiment are reported in Figure 7. As shown, the error trends for SVM and k-NN are similar, and it is difficult to identify a clear winner. With either three or four beacons, SVM is slightly better than k-NN for Environment A, and vice versa for Environment B. For all three algorithms, the impact of having a larger number of beacons on the results is minimal. In general, the boxes identified by the IQR, as well as the "whiskers" of the box plots, tend to shift towards lower errors when the number of beacons increases from three to four. However, the medians are often very similar and sometimes even increase. For instance, the median error of the SVM reduces by 0.4 m when going from three to four beacons in Environment A, but increases by 0.3 m in Environment B.
Impact of Fingerprint Grid Density In fingerprint-based methods, the density of the grid map influences the performance of localization. In this section, we analyze the impact of this design parameter, keeping the number of beacons fixed at four and considering three fingerprint grid densities (one measurement every 0.5 m, 1 m, and 2 m). Figures 8 and 9 show the results of this analysis for the two environments. Reducing the number of fingerprint points tends to cause a slight increase in the median error, in the values of the first and third quartiles, and in the length of the box plot whiskers. However, this is not always true and in general the impact is not dramatic, showing that even a quite "sparse" RFM is sufficient to obtain a decent localization accuracy. Once again, it is not easy to identify a clear winner among the three fingerprint algorithms. For example, in the smallest environment, SVM shows the most stable results with a dense grid (IQR = 0.8 m), but when the distance between the points increases, it is outperformed by k-NN (e.g., IQR = 1.1 m versus the 1.9 m of the SVM with one fingerprint point every 2 m). Impact of Environmental Perturbations In an industrial environment, movements of large objects and machines may occur. Thus, a good localization algorithm should be resilient to environmental changes. This is clearly relevant especially for fingerprint-based algorithms, since when something changes within the environment the fingerprint data collected previously become obsolete and could lead to an incorrect positioning. Therefore, we test the effect of changes in the environment on the three fingerprint algorithms, first by simulation and then in the field.
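A noise-injection test of this kind can be sketched as follows. This is a simplified stand-in for the actual experiment: the RFM is synthetic (generated from a path-loss model), the localizer is a plain 1-NN instead of the three fingerprint models, and the 6 dB noise level is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic RFM on a 1 m grid, with RSSI generated from a path-loss model.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
xs, ys = np.meshgrid(np.arange(0, 11, 1.0), np.arange(0, 11, 1.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])

def rssi_of(points):
    """Path-loss RSSI of each point as seen from every beacon."""
    d = np.linalg.norm(points[:, None, :] - beacons[None, :, :], axis=2)
    return -59.0 - 20.0 * np.log10(np.maximum(d, 0.1))

def nn_errors(rfm_rssi):
    """Localize every grid point by 1-NN against the (possibly
    perturbed) fingerprints and return the per-point position errors."""
    q = rssi_of(grid)
    dist = np.linalg.norm(rfm_rssi[None, :, :] - q[:, None, :], axis=2)
    idx = np.argmin(dist, axis=1)
    return np.linalg.norm(grid[idx] - grid, axis=1)

clean = rssi_of(grid)
noisy = clean + rng.normal(0.0, 6.0, size=clean.shape)  # obsolete RFM
err_clean = np.median(nn_errors(clean))
err_noisy = np.median(nn_errors(noisy))
```

Sweeping the noise standard deviation in place of the fixed 6 dB reproduces the kind of error-versus-sigma curves reported in the figures of this section.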
The simulation is performed adding a zero-mean Gaussian noise to the fingerprint RSSI values, to mimic the scenario in which a variation in the environment makes them obsolete. Figures 10 and 11 report the trends of the error box plots of the three algorithms as a function of the Gaussian standard deviation σ. The three algorithms show a similar trend when Gaussian noise is introduced, and once again it is hard to state which algorithm performs better, since the differences in the box plots for a given value of σ do not justify the choice of one algorithm over the other. As a field experiment to test the influence of environmental perturbations, we introduce a large item (a 4 m × 4 m vehicle emulating industrial operations) in Environment A, then perform the localization without updating the RFM. The results obtained with the three algorithms in this scenario are reported in Table 4. As expected, the median error in the test points increases for all three algorithms compared to the results of Table 2, but k-NN is slightly less affected by the perturbation than SVM and MLP, since its median error only increases by 0.5 m. Conclusions In this paper, we have presented a study that compares state-of-the-art indoor localization methods based on BLE in an industrial setting. Our results show that fingerprinting-based methods are clearly better than distance-based ones. This is motivated by the fact that the latter are affected by shadowing effects and reflections, which are particularly relevant in industrial environments, due to the presence of large moving metal objects. Vice versa, the three considered fingerprint algorithms achieve very similar localization errors and show similar resilience to environmental variations and noise. Therefore, we argue that the selection of the best algorithm should be based on aspects other than sheer localization accuracy.
Even computational complexity considerations do not help the selection, as all three algorithms have negligible time and memory complexity on the selected target device (the Raspberry Pi). Specifically, given the relatively small size of the training data (and consequently of the prediction models), we obtain an inference execution time of <2 ms and a memory occupation of <40 MB for all three algorithms.

Therefore, the final selection of an algorithm should be based on training complexity and ease of adaptability, which directs the choice towards k-NN. In fact, this algorithm has a much simpler set of hyper-parameters (consisting solely of the number of neighbors k) compared to both SVM and MLP. This eases the deployment of a k-NN-based localization system in a new environment, as the only tests that need to be performed are those to identify the optimal value of k. Both MLP and SVM require a much more time-consuming parameter tuning: the number of layers, the number of neurons in each layer and the activation functions in the former case, or the basis function and the values of γ, C and ε in the latter. Moreover, k-NN does not require an actual training phase, which makes it more flexible for online adaptation. Specifically, whenever new or updated fingerprinting measurements become available, it is sufficient to update the database of reference points used by k-NN in order to obtain an updated location estimate. In contrast, such an update would require re-training in the case of SVM and MLP. Therefore, we conclude that k-NN is the most suitable algorithm for industrial scenarios.

Figure 3. Support-vector regressor (SVR) example for a two-dimensional problem, using a linear kernel.

Figure 4. Error box plots for the trilateration algorithm. The x-axis reports the number and the corresponding density of the beacons in the environment.

Figure 5.
Error map of the three fingerprint-based methods in Environment A.

Figure 7. Error box plots for the three fingerprinting algorithms with different beacon densities. Results for a fingerprint grid with one measurement every 0.5 m.

Figure 8. Error box plots for the three fingerprinting algorithms with different fingerprint grid densities in Environment A.

Figure 10. Error box plots for the three fingerprinting algorithms as a function of the standard deviation of the Gaussian noise in Environment A.

Table 1. Summary table including relevant works on indoor localization based on Bluetooth low-energy (BLE) received signal strength indication (RSSI) and the corresponding feature extraction methods.

Table 3. Percentage of testing points lower than the minimum half-distance between beacons.

Table 4. Error median and IQR for the three fingerprinting-based algorithms in Environment A after adding a large object not present during the collection of the RFM.
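The adaptability argument made above for k-NN (update the reference map, no retraining) can be illustrated with a minimal sketch of k-NN fingerprint localization. The RSSI values, grid coordinates, and the value of k below are hypothetical placeholders, not the paper's data, and plain NumPy is used rather than the authors' implementation.

```python
import numpy as np

def knn_locate(rfm_rssi, rfm_xy, query_rssi, k=3):
    """Estimate a position by averaging the coordinates of the k
    reference-map (RFM) points whose RSSI vectors are closest to the query."""
    dists = np.linalg.norm(rfm_rssi - query_rssi, axis=1)  # distance in RSSI space
    nearest = np.argsort(dists)[:k]                        # indices of k closest points
    return rfm_xy[nearest].mean(axis=0)

# Hypothetical RFM: 4 beacons, fingerprint points every 1 m along a corridor.
rfm_rssi = np.array([[-50, -60, -70, -80],
                     [-55, -58, -68, -78],
                     [-60, -55, -65, -75],
                     [-70, -50, -60, -72]], dtype=float)
rfm_xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])

estimate = knn_locate(rfm_rssi, rfm_xy, np.array([-56, -57, -67, -77]), k=2)
```

Adapting to an environmental change then amounts to appending rows to `rfm_rssi` / `rfm_xy`; a model like SVM or MLP would instead have to be retrained on the updated data.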
Multistable mechanosensitive behavior of cell adhesion driven by actomyosin contractility and elastic properties of force-transmitting linkages

The ability of cells to sense the mechanical properties of their microenvironment is essential to many physiological processes. The molecular clutch theory has played an important role in explaining many mechanosensitive cell behaviors. However, its current implementations have limited ability to explain how molecular heterogeneity, such as adhesion molecules with different elasticities, regulates the mechanical response of cell adhesion. In this study, we developed a model incorporating the experimentally measured elastic properties of such proteins to investigate their influence on cell adhesion. It was found that the model not only accurately fits previous experimental measurements of cell traction force and retrograde actin flow, but also predicts multistability of cell adhesion as well as a feedback loop between the densities of the extracellular matrix proteins and contractile myosin II motors in living cells. The existence of such a feedback loop was successfully confirmed in experiments. Taken together, our study provides a theoretical framework for understanding how the mechanical properties of adaptor proteins, local substrate deformations and myosin II contractility affect cell adhesion across different cell types and physiological conditions.
INTRODUCTION

The ability to perceive and adapt to the surrounding environment is a hallmark of living organisms. At the cellular level, a prime example of this is the mechanosensitivity of cells, which can detect and respond to a variety of mechanical signals, such as external force, substrate stiffness and topography [1,2]. Mechanotransduction is crucial for the regulation of various physiological processes, including cell migration, proliferation, differentiation, and apoptosis [3]. Additionally, aberrant mechanosensing has been implicated in a number of pathological conditions, such as fibrosis, cancer, and cardiovascular diseases [4]. Therefore, a better understanding of cell mechanosensing mechanisms is important for gaining deeper insights into both physiological and pathological processes.

Since the discovery of the mechanosensitive behavior of cell adhesion complexes [1], several potential molecular mechanisms have been proposed to explain this phenomenon. First, it has been shown that formins may play an important role in the force-induced growth of focal adhesions (FAs) by activating the polymerization of actin filaments [1,5], a hypothesis supported by single-molecule studies [6,7]. Furthermore, it was found that applied mechanical load promotes talin-vinculin and α-catenin-vinculin interactions, enhancing the linking of integrins and cadherins to the actin cytoskeleton, which is important for cell mechanotransduction [8][9][10][11][12][13]. In addition, LIM domain proteins, such as zyxin, were also found to be recruited to FAs in response to mechanical stress, further reinforcing and stabilizing them [14]. Finally, it has been proposed that ion channels, such as Piezo 1/2 and L-type calcium channels, may contribute to cell mechanosensing by allowing ion flux in response to force application to a cell, activating downstream signalling pathways [15][16][17]. Although single-molecule force spectroscopy techniques have greatly improved our understanding of the
mechanosensing mechanisms of such proteins [18], much less is known about how their force-dependent response is integrated at the mesoscale level, ultimately directing cell behaviour.

Despite its strengths, current implementations of the molecular clutch theory are based on a phenomenological representation of molecular clutches that does not adequately reflect the mechanical and biochemical properties of the underlying molecules. For example, existing models often assume high stiffness of molecular clutches (∼ 1 − 1000 pN/nm [20,[24][25][26][27]), resulting in the prediction that the mechanical stress of molecular clutches is primarily determined by substrate deformations. Yet, such an assumption is inconsistent with single-molecule measurements showing that key elements of molecular clutches, such as talin, are much more elastic due to the presence of long flexible peptide linkers in them, as well as due to the force-induced unfolding of protein domains, being able to extend to more than 200 nm when subjected to a force of several pN [13]. Therefore, although classical models recapitulate mechanosensitive behavior at the cellular level very well, it is difficult to obtain accurate physiological insights at the molecular level with their help, such as estimations of the average tension and force loading rates of molecular clutches and the potential role of the molecular composition and/or mechanical properties in mechanosensitive cell adhesion behaviour, which are the key factors behind cell-type specific mechanical responses of cells.
Furthermore, the ECM is often composed of heterogeneous fibrous materials, such as fibronectin and collagen networks, which are easily deformed at nanometer scales. These types of local deformations have not been taken into account in many previous models, which often treat the substrate as a rigid block on the FA length scale [20,[24][25][26][27]. As a result, such studies often predict large fluctuations in retrograde actin flow [27], which is contrary to the steady retrograde actin flow experimentally observed in filopodia and FAs [20,26,31,32].

These inconsistencies make it difficult to obtain precise molecular insights by comparing experimental data with existing models, hindering understanding of how the mechanical properties of various molecular clutch components, such as talin, kindlin or α-actinin, influence the mechanosensitive properties of cell adhesions, and how molecular clutch tension dynamics, which have recently been mapped using DNA-based tension sensors [33], relate to the cellular response. Moreover, previous theoretical studies have mainly relied on numerical simulations, which has made detailed analysis of the dynamic behavior of cell adhesion very difficult. Addressing these important questions requires an improved theoretical framework based on a more realistic description of the key molecular clutch components.
In this work, we developed a semi-analytical model of cell adhesion based on the molecular clutch theory, incorporating local substrate deformations at the adhesion sites of molecular clutches as well as the experimentally measured force-response of the main molecular clutch component, talin. We demonstrate that the developed model is able to fit experimental data on the cellular mechanical response equally well or better than previous molecular clutch models, while providing greater physiological insight into the mechanical responses of individual molecular clutches. Model predictions suggest that the elastic properties of molecular clutch components are one of the key factors affecting the mechanical response of living cells, and that molecular clutch dynamics are dominated by steady-state behavior that can exhibit bifurcations in certain scenarios. Furthermore, it is shown that the model can potentially help in understanding more complex scenarios, such as the maturation of FAs, as well as the different mechanical responses of molecular clutches with various component compositions.

A. Linear molecular clutch model

To address the above issues, we developed a semi-analytical approach based on consideration of the long-term behaviour of molecular clutches, incorporating into the model the experimentally measured force-response of adaptor proteins and local substrate deformations at the adhesion sites of molecular clutches. Specifically, we used the model schematically shown in Figure 1(b), in which molecular clutches were represented by springs consisting of two parts: 1) a cellular part, corresponding to adaptor proteins, and 2) an extracellular part, describing local substrate deformations at the adhesion sites of molecular clutches. In the simplest case, these two parts can be represented by linear springs with spring constants k_c and k′_s, respectively.
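When both parts of such a clutch are linear springs carrying the same tension, the assembled clutch acts as a single spring with the series stiffness k_c k′_s / (k_c + k′_s). A minimal numerical sketch of this composition (the stiffness and extension values below are hypothetical, not fitted model parameters):

```python
def effective_stiffness(k_c, k_s_local):
    """Series combination of the intracellular spring (k_c) and the local
    substrate spring (k'_s): the same tension runs through both parts."""
    return k_c * k_s_local / (k_c + k_s_local)

def clutch_tension(l, k_c, k_s_local):
    """Tension (pN) of a linear two-part clutch at total extension l (nm)."""
    return effective_stiffness(k_c, k_s_local) * l

# Hypothetical values: a soft talin-like part (0.05 pN/nm) in series with a
# stiffer local substrate contact (1.0 pN/nm); the soft element dominates
# the overall compliance, so the effective stiffness stays below 0.05 pN/nm.
F = clutch_tension(20.0, k_c=0.05, k_s_local=1.0)
```

Note that the series stiffness is always smaller than either spring constant alone, which is why a compliant adaptor protein can screen the clutch from substrate rigidity.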
In addition to local substrate deformations at the adhesion sites of molecular clutches, the model also took into account the average substrate deformation over a much larger cell adhesion area of radius R (R ≈ 1.7 µm [26]), encompassing many molecular clutches, which was described by the spring constant k_s shown in Figure 1(b). Using the theory of elasticity, it can be shown that both k′_s and k_s are proportional to the Young's modulus of the substrate (E) and to the radius of the corresponding contact area [34], where r is the characteristic radius of the adhesion site of a single molecular clutch [Eq. (1)].

To describe the dynamic state of the system, all sites available for molecular clutch formation were divided into two groups in the model: those that are occupied by molecular clutches ('on' state) and those that are not ('off' state). In addition, each molecular clutch was characterized by its own deformation, resulting from the mechanical load F_m generated by myosin II motors, see Figure 1(b). Super-resolution studies of the nanoscale architecture of FAs show that the average contact angle of the intracellular part of molecular clutches with the cell membrane / substrate surface, θ [Figure 1(b)], is small [35]. Thus, molecular clutches experience much greater deformation along the x-axis parallel to the substrate surface compared to the y-axis perpendicular to it. As a result, to a first approximation, the physical state of each molecular clutch can be described by its extension, l, along the x-axis parallel to the substrate surface, see Figure 1(b). Accordingly, the state of the entire system can be represented by a time-dependent probability distribution, p_on(l, t), of observing a molecular clutch with the extension l at time t at each site available for molecular clutch formation. This distribution satisfies the following normalization condition:

∫_{−∞}^{+∞} p_on(l, t) dl + p_off(t) = 1,   (2)

where p_off(t) is the time-dependent probability of the off state. Molecular clutch extension can be positive or negative depending on the direction of the molecular clutch
along the x-axis.

In the model, the rate of molecular clutch formation was described by the first-order rate constant k⁰_on. Furthermore, it was assumed that, immediately after assembly, molecular clutches are in a mechanically relaxed state with zero extension along the x-axis (l = 0). As a result, the extension-dependent assembly rate of molecular clutches, k_on(l), was expressed in terms of the Dirac δ-function, δ(l), as:

k_on(l) = k⁰_on δ(l),   (3)

where k⁰_on is a constant. Retrograde flow of actin filaments created by myosin II motors causes gradual stretching of molecular clutches. This, in turn, leads to an increase in their deformation, causing subsequent dissociation. It was assumed in this study that dissociation of molecular clutches is mainly driven by detachment of integrins from their ECM ligands, such as fibronectin. Indeed, previous studies suggest that this may be the weakest point of molecular clutches, as bonds between other molecular clutch components appear to be greatly strengthened by applied mechanical load [36,37]. Since integrin-ECM bonds have previously been shown to possess catch-bond behaviour [38], the dissociation rate of molecular clutches (k_off) was approximated by the following two-term exponential formula, which was found to fit experimental data quite well [Figure S1, SI]:

k_off(F) = k⁰_off e^{β x_t |F|} + k⁰′_off e^{−β x′_t |F|},   (4)

where x_t and x′_t are two transition state distances characterizing the force-dependent behaviour of the integrin-ECM catch-bond, and k⁰_off and k⁰′_off are the corresponding dissociation rates at zero mechanical load. β = 1/k_B T is the reciprocal of the thermodynamic temperature. |F| is the absolute value of the tension of the bond. In the case of molecular clutches represented by two-part linear springs, this tension is given by the series combination of the two springs:

F(l) = [k_c k′_s / (k_c + k′_s)] l.   (5)

Given the probability distribution p_on(l, t), the total tension of molecular clutches, or equivalently the traction force created by molecular clutches on the substrate, F_c(t) [Figure 1(b)], can be found as:

F_c(t) = N_c ∫_{−∞}^{+∞} F(l) p_on(l, t) dl,   (6)

where N_c is the total number of sites available for the
formation of molecular clutches in a cell adhesion area. Recent experimental studies show that the average density of talin, and therefore likely the density of sites available for molecular clutch formation, is very similar in both small and large FAs [39], suggesting that it is independent of the size of FAs. As a result, it can be concluded that N_c should be proportional to the cell adhesion area: N_c = πR²σ_c, where σ_c is the average density of sites available for the formation of molecular clutches in the cell adhesion area with a characteristic radius R.

According to Newton's third law, the total tension of engaged molecular clutches is equal to the force generated by myosin II motor proteins, F_m(t) = F_c(t). Following previous studies [20,[24][25][26][27], the resisting effect of such a load on the movement rate of myosin II motors, v(t), was approximated by a linear force-velocity relation:

v(t) = v_0 [1 − F_m(t) / (N_m F_st)],   (7)

where F_st is the stalling force of a single myosin II motor, v_0 is the movement rate of myosin II motors at zero mechanical load, and N_m is the total number of myosin motors pulling actin filaments in a cell adhesion area. Since previous studies suggest that myosin II motors exert a local pulling effect on the actin cytoskeleton in the cell cortex [32], it was assumed in the model that N_m is proportional to the cell adhesion area: N_m = πR²σ_m, where σ_m is the average surface density of myosin II motors near the cell-substrate interface.

Experimental studies show that actin filaments exhibit retrograde movement at nearly constant rates (v(t) ≈ const) [20,26,31,32]. Thus, it can be concluded that myosin II motors and the entire ensemble of molecular clutches function close to a steady state in cells. This makes it possible to derive a system of simple equations that describe the behaviour of the molecular clutch system. Namely, by combining the above formulas, it can be shown that the dynamics of the entire molecular clutch system in the general case is given by Eq.
(A10) from Appendix A, SI, which has the following stationary solution [Eq. (8)] describing the long-term behaviour of the entire ensemble of molecular clutches and myosin II motor proteins [Appendix A, SI]. Here p_on(l) = ⟨p_on(l, t)⟩_T is the long time-averaged probability distribution, k_off(l) = k_off(F(l)), and x_s and k_s are the average mechanical deformation of the substrate and a constant describing the elastic properties of the substrate over the cell adhesion area [Figure 1(b)].

To find out the long-term behaviour of the molecular clutch system, one needs to solve Eq. (8) in terms of p_on, F_c and v. From Eq. (8) it is clear that it can be solved by finding the intersections of the F_c(v) and F_m(v) graphs, which allows not only a simple geometric interpretation of the solutions of Eq. (8) to be constructed, but also their stability to be assessed, see Figure 2. While F_m(v) can be directly calculated from the third line of Eq. (8), computation of F_c(v) requires knowledge of the p_on(l) probability distribution, which can be found from the first three lines of Eq. (8) [Appendix B, SI], given by Eq. (9) for l > 0. Knowing the p_on(l) distribution, one can easily obtain from Eq. (8) the total tension of the engaged molecular clutches (F_c) as a function of the retrograde actin flow speed (v), and the traction stress (P) exerted by a cell on the substrate [Eq. (10)].

In the special case of slip-bonds (k⁰′_off = 0), Eq. (9) reduces to a simple analytic formula [Appendix B, SI] involving the function ϕ(a, ε) = a e^{ε+a} E_1(a), where a = k⁰_off / (β k v x_t) and ε = ln(k⁰_on / k⁰_off); here E_1(a) = ∫_a^{+∞} t^{−1} e^{−t} dt is the exponential integral function. This result is in good agreement with previous studies [21][22][23]. Substituting Eq. (11) into Eq. (6), we get an expression in terms of Meijer's G-function, G^{m,n}_{p,q}, see Figure S3, SI.

Finally, it should be noted that although above we considered only molecular clutches represented by linear springs, the resulting Eq.
(8)-Eq. (10) and Eq. (A10) are very general in nature, allowing an arbitrary force-response of molecular clutches to be used in the model. For example, to describe plastic deformations of molecular clutches caused by force-induced unfolding of their globular protein domains, it is sufficient to introduce one additional parameter into the model, the yield strength of molecular clutches, changing Eq. (5) to a force-extension relation that saturates beyond the yield point, where l_y is the molecular clutch extension corresponding to the yield strength. This coarse-grained approach is most suitable for the description of molecular clutches with unknown architecture / protein structure.

On the other hand, if the structure and mechanical stability of the main molecular clutch components are known, more detailed information can be obtained compared to the approach described above by considering all possible transitions between the different folded / unfolded states of molecular clutches. We utilized this method to account for the highly nonlinear behaviour of talin molecules and the force-induced unfolding of the mechanosensitive talin domain (R3) observed in single-molecule experiments [13], see Appendix C, SI.

It is important to note that almost all model parameters can be measured experimentally, see Table T1, SI. As for the remaining parameters, by simultaneously fitting experimental data obtained under different experimental conditions (wild-type or talin 1,2 knockdown cells treated with myosin II inhibitor or grown on substrates coated with different concentrations of fibronectin), the average number of fitting parameters per curve can be reduced to one or two, making the model very robust. A list of key model fitting parameters used in this study can be found in Table 1 in the main text.
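The two-term catch/slip dissociation rate used above for the integrin-ECM bond can be sketched numerically. The parameter values below are hypothetical placeholders, not the fitted values from Table 1; β ≈ 0.24 (pN·nm)⁻¹ is the room-temperature value of 1/k_B T, and which transition distance belongs to the catch versus the slip pathway is our assumption.

```python
import math

def k_off(F, k0_slip, x_slip, k0_catch, x_catch, beta=0.24):
    """Two-term catch/slip dissociation rate (1/s) at bond tension F (pN).
    One pathway weakens under load (slip), the other strengthens (catch)."""
    slip = k0_slip * math.exp(beta * x_slip * abs(F))     # rate grows with force
    catch = k0_catch * math.exp(-beta * x_catch * abs(F)) # rate drops with force
    return slip + catch

# Hypothetical parameters: the catch term dominates at zero load, so the
# total off-rate first decreases with tension and then rises again.
rates = [k_off(F, k0_slip=0.1, x_slip=0.5, k0_catch=5.0, x_catch=2.0)
         for F in (0.0, 10.0, 30.0)]
```

The non-monotonic `rates` list reproduces the defining signature of a catch bond: an intermediate force at which the bond is longest-lived.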
Image analysis and statistics. Collected images of RLC-GFP labelled myosin II stacks were analysed with the help of the ImageJ-win64 software. To this aim, for each cell, several random regions near the cell corners were selected, and automatic thresholding was applied using the built-in 'Auto Threshold' function of ImageJ in order to map the contours of myosin II stacks. The resulting images were further processed with the 'Analyze Particles' function, which provided the total number and sizes of the detected myosin II stacks. After discarding all particles with an area less than 0.01 µm², the average density and the average size of myosin II stacks, as well as the corresponding standard errors of the mean, were calculated. All experimental results presented in the article are representative of three independent experiments. Statistical analysis of the data was carried out using the non-parametric Mann-Whitney test.

Semi-analytical model based on the physiological molecular mechanics of cell adhesion complexes

To cope with the limitations of existing models based on the molecular clutch theory, we developed a semi-analytical theoretical framework consistent with the experimentally observed force-response of molecular clutch components at the single-molecule level [12,13] and with the experimentally measured tension [39,40] and loading rates of molecular clutches [33]. This allows for a comprehensive study of the dynamic aspects of cell adhesion mechanics, providing new insight into the role of different cellular elements in shaping mechanosensitive cell adhesion behaviour.
In the model, individual molecular clutches are represented by two-part springs: one part describes the cell adaptor proteins connecting adhesion receptors to the cytoskeleton, and the other part depicts local deformations of the substrate at the attachment sites of the adhesion receptors, see the Methods section and Figure 1. The latter part of molecular clutches can be related to the substrate rigidity through a spring constant, the value of which can be obtained from the theory of elasticity [Eq. (1)].

In addition, the model distinguishes between 'occupied' (on) and 'free' (off) states of sites available for the formation of molecular clutches, and quantifies the deformation of each molecular clutch under the influence of the mechanical load created by the contractile activity of myosin II motors. This allows the probability distribution of molecular clutch extension to be described over time, making it possible to calculate the total traction force exerted by molecular clutches on the substrate, which results from the molecular clutch-mediated transmission of the pulling force generated by myosin II motors, see Figure 1(b) and Eq. (6). The efficiency of this force transmission, which is determined by the dynamics of molecular clutch formation and dissociation, is a key element for understanding mechanosensitive cell adhesion behaviour, as can be seen from the master equation describing the time evolution of the molecular clutch system, see Eq. (A10) in Appendix A, SI.
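The steady-state operating point described above, where clutch traction F_c(v) balances the motor force F_m(v), can be located numerically by bisection once the two curves bracket a crossing. The curves below are toy stand-ins with hypothetical magnitudes, not the model's actual F_c(v); only the qualitative shapes (traction falling with flow speed, motor force falling with speed per the linear force-velocity relation) follow the text.

```python
def steady_state_speed(F_c, F_m, v_lo=1e-6, v_hi=120.0, tol=1e-9):
    """Bisect for the retrograde flow speed v at which clutch traction
    balances the myosin force: F_m(v) - F_c(v) = 0."""
    f = lambda v: F_m(v) - F_c(v)
    assert f(v_lo) * f(v_hi) <= 0, "no sign change: bracket the crossing first"
    while v_hi - v_lo > tol:
        mid = 0.5 * (v_lo + v_hi)
        if f(v_lo) * f(mid) <= 0:
            v_hi = mid
        else:
            v_lo = mid
    return 0.5 * (v_lo + v_hi)

# Toy curves (hypothetical numbers): traction transmitted by clutches decays
# as flow speeds up, while the motors follow v = v_0 (1 - F / (N_m F_st)),
# i.e. F_m drops linearly with v.
F_c = lambda v: 200.0 / (1.0 + v / 10.0)   # pN
F_m = lambda v: 300.0 * (1.0 - v / 100.0)  # pN

v_star = steady_state_speed(F_c, F_m)      # speed where the curves intersect
```

When F_c(v) is non-monotonic, several such intersections can exist in different brackets, which is the geometric picture behind the multistability discussed in this work.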
Utilizing a few physiologically relevant fitting parameters (Table 1), our model fits well previously published experimental data, such as retrograde actin flow and cell traction measured as a function of the substrate Young's modulus [26], see the results below. This makes it possible for the model to be used to plot cell adhesion stability diagrams and to extract detailed information about molecular clutch dynamics at the single-molecule level from experimental data obtained on cells, including the average tension and conformational changes of molecular clutches, providing new insight into the key factors regulating FA behavior.

Potential effect of talin force-response on cell adhesion

Talin was shown to be a major component of molecular clutches in FAs, and its deletion was found to cause dramatic changes in mechanosensitive cell adhesion behavior, which was previously explained by the force-induced enhancement of FAs through talin-vinculin interactions [26]. However, this view conflicts with multiple experimental studies showing that vinculin is not an essential component of the force-transmission pathways mediated by molecular clutches [39][40][41], suggesting that other factors, such as the talin force-response, may play a role in regulating the mechanosensing behavior of molecular clutches.

To this aim, in our model we treated the intracellular part of molecular clutches as a nonlinear spring with a force-response corresponding to that of talin measured in single-molecule studies [13], including the force-dependent unfolding of the talin mechanosensitive domain (R3) [talin WT model, see Appendix C, SI]. As for the extracellular part of clutches, it was represented in the model by a linear spring according to the theory of elasticity, see Methods.

Fitting experimental data on the mechanosensitive behaviour of MEF WT cells from ref. [26] to the stationary solution of the model given by Eq.
(8)-Eq. (9), using the three model fitting parameters listed in Table 1, revealed that even at the highest substrate stiffness, most talin-based molecular clutches experience relatively low tension in the range of 0.5 − 1.3 pN [see Figures 3(a,b) and S4(b), SI]. This tension is below the force level at which the nonlinear elastic properties of talin become significant [Figure 3(c)], suggesting that unfolding of the mechanosensitive talin domain is potentially not a major factor affecting the mechanical response of adhesion complexes under normal experimental conditions. Consistent with these results, our model predicts that on rigid substrates, only a small percentage (1% − 2%) of talin-based molecular clutches are subject to forces sufficient to substantially stretch talin molecules and trigger the unfolding of their mechanosensitive domain, see Figure S4(a), SI. This finding is in good agreement with a recent experimental study performed on Xenopus laevis cells grown on a fibronectin-coated glass substrate, in which only about ∼ 4% of talin molecules underwent significant conformational changes upon molecular clutch stretching [42].

These results suggest that the mechanical properties of talin-based molecular clutches in living cells are predominantly determined by their passive elasticity rather than by force-dependent conformational changes, while the latter likely contribute to downstream signal mechanotransduction.
Notably, although under normal conditions molecular clutches experience the low average tension mentioned above, variation of the model parameters showed that a reduction in the dissociation rate of molecular clutches (k_off) and in the density of sites available for the formation of molecular clutches (σ_c) can lead to a significant increase in the average tension of molecular clutches on rigid substrates (up to 30 pN), resulting in the force-induced unfolding of the mechanosensitive talin domain in ∼ 70% of the molecules, see Figure S5, SI. This finding suggests that the type of ECM ligands (collagen / fibronectin / RGD) and the integrins interacting with them, as well as the density of the ligands, can strongly influence experimental data and therefore should be precisely controlled in experiments.

Molecular clutch elasticity is an important factor for adhesion mechanosensing

The talin WT model shows that most molecular clutches experience low tension at which they can be well approximated by linear springs. To test whether this type of passive elasticity is one of the main factors determining mechanosensitive cell adhesion behaviour, we fitted experimental data collected on MEF WT cells and MEF Talin 1 KO, Talin 2 shRNA cells from ref. [26] to a simplified model, in which both the intracellular and extracellular parts of molecular clutches are represented by linear springs. To this aim, we used a few physiologically relevant fitting parameters, including the elasticity of the intracellular part of molecular clutches, listed in Table 1 (see the linear WT and linear KD models). From Figures 3(a,b) it can be seen that the linear molecular clutch model was able to fit the experimental data from ref. [26] as accurately as the talin WT model.
The values of the fitted model parameters were found to be in good agreement with previous experimental studies, suggesting that the linear molecular clutch model is able to correctly capture the main physicochemical properties of molecular clutches. For example, fitting the experimental data to the linear molecular clutch model indicates that in the case of MEF WT cells, the elasticity of the intracellular parts of molecular clutches (k_c = 0.05 pN/nm) is close to the elasticity of talin proteins estimated from previous single-molecule measurements (0.09 pN/nm, see Table T3, SI). Interestingly, the intracellular parts of molecular clutches in MEF Talin 1 KO, Talin 2 shRNA cells are predicted to be more than ten times stiffer (k_c = 1.6 pN/nm) compared to the case of MEF WT cells, suggesting that cells potentially use at least two different types of molecular clutches to establish adhesion bonds with the substrate: soft (talin-based) and rigid. Moreover, the latter have a more than ten times larger characteristic size of the adhesion site (r = 26 nm) compared to talin-based molecular clutches (r = 0.6 − 0.8 nm), see Table 1. Because previous single-molecule studies have shown that typical cytoskeletal and FA proteins have elasticities of less than 0.3 pN/nm (Table T3, SI), these data suggest that rigid molecular clutches may be formed by clusters of adaptor proteins. It has been shown that more rigid FA proteins, such as kindlin and α-actinin, bind to FAs before talin, and that α-actinin is able to form clusters / condensates [43]; these proteins may therefore be prime candidates for the main components of molecular clutches in the talin knockout cells.
Importantly, to fit the experimental data, we did not need to assume force-dependent enhancement of molecular clutches by vinculin, suggesting that vinculin might be more of a downstream factor in cell adhesion mechanics, playing a more prominent role in mechanotransduction rather than force-transmission, in good agreement with previous findings [39][40][41].

Because both soft talin-based and rigid molecular clutches must be present in MEF WT cells, we next fitted experimental data obtained from such cells to a model describing the competitive formation of the two types of molecular clutches (WT-2 model): rigid linear molecular clutches corresponding to those found in MEF Talin 1 KO, Talin 2 shRNA cells, and soft nonlinear talin-based molecular clutches. This model was found to match the experimental data obtained on MEF WT cells more closely [Figures 3(d,e)], with the values of most model fitting parameters being either identical or very close to those obtained when the experimental data were fitted to a single type of molecular clutches, see the linear KD and talin WT models in Table T1 and the WT-2 model in Table T2, SI. Moreover, the value of the spring constant of rigid molecular clutches (k_c = 0.9 pN/nm) turned out to be very close to estimates of the rigidity of α-actinin clusters (0.8 pN/nm), since each such cluster contains on average four molecules of α-actinin [43], each of which has a spring constant of 0.2 pN/nm, see Table T3, SI. This result further supports the model prediction regarding the two types of molecular clutches in MEF WT cells.

Finally, by varying the spring constant of the intracellular part of molecular clutches (k_c) in the linear WT model, it was found that the cell traction curve and the retrograde actin flow curve can undergo significant changes on rigid substrates, see Figure 4.
Thus, although most molecular clutches experience low tension, their linear elastic response plays an important role in determining cell adhesion behaviour on rigid substrates.

Local substrate deformations are important for the mechanosensitive behaviour of cell adhesion

In addition to molecular clutch elasticity, we also examined the potential role in cell mechanosensing of local substrate deformations at the adhesion sites of molecular clutches. Indeed, experimental data show that mechanical forces can be effectively transmitted from a cell to the substrate by individual molecular clutches [44]. Furthermore, from the theory of elasticity it is known that elastic substrate deformations are inversely proportional to the linear size of the adhesion area to which mechanical stress is applied [34] [Eq. (1)]. Taking into account the experimentally measured characteristic size of FAs (∼ 1.7 µm [26]) and the typical size of the binding sites of adhesion receptors (up to several nanometres), local substrate deformations at the adhesion sites of molecular clutches should be significantly greater than the substrate deformation averaged over the FA scale, and should therefore have a much stronger effect on the dissociation kinetics of molecular clutches. Indeed, from Eq. (4), Eq. (5) and Eq. (9) it follows that when local substrate deformations are completely neglected (i.e., the k′s model parameter is set to infinity), the sensitivity of the molecular clutch system to the elastic properties of the substrate is significantly reduced: the probability distribution p_on(l) of observing a given site occupied by a molecular clutch with extension l becomes independent of the Young's modulus of the substrate, E.
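The inverse scaling of deformation with contact size can be illustrated with a back-of-the-envelope sketch. It assumes the classical flat-punch result for a circular contact on an elastic half-space, k ≈ 2Er/(1 − ν²) (a textbook elasticity formula, cf. Eq. (1); the force and substrate values below are illustrative, not fitted model parameters):

```python
def contact_stiffness(E_pa, r_m, nu=0.5):
    """Flat circular punch on an elastic half-space: k = 2*E*r/(1 - nu**2)."""
    return 2.0 * E_pa * r_m / (1.0 - nu**2)

def deformation(F_n, E_pa, r_m):
    """Indentation depth under a force F for a contact of radius r."""
    return F_n / contact_stiffness(E_pa, r_m)

E = 10e3          # 10 kPa substrate (soft regime), illustrative
F = 5e-12         # 5 pN load, illustrative
x_local = deformation(F, E, 5e-9)    # nanometre-scale clutch adhesion site
x_fa = deformation(F, E, 1.7e-6)     # FA-scale (~1.7 um) contact
print(x_local / x_fa)                # ratio equals r_fa / r_local = 340
```

For the same transmitted force, the local deformation at a nanometre-scale adhesion site exceeds the FA-scale average by the ratio of the contact radii, here over two orders of magnitude, consistent with the argument above.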
Thus, in the absence of local substrate deformations at the adhesion sites of molecular clutches, the molecular clutch system loses its mechanosensitive properties after relaxation to the stationary state. This finding is in good agreement with previous experimental studies showing that cells do indeed apply local contractions to the substrate, which serve as a critical step in rigidity sensing [45], highlighting the importance of local substrate deformations by molecular clutches. Moreover, it is consistent with the nature of many biological ECMs, which are heterogeneous in both composition and topology. Furthermore, comparison of the force-extension curves of linear and nonlinear talin-based molecular clutches from the linear WT model and the talin WT model at different values of the substrate Young's modulus (E) shows that on soft substrates (E ≤ 10 kPa) the mechanosensitive behaviour of cell adhesion is determined primarily by the elastic characteristics of the extracellular part of molecular clutches, while on rigid substrates (E > 10 kPa) the elasticity of the intracellular part begins to play a more important role, see Figure 3(c). This result is in good agreement with the graphs shown in Figure 4, which indicate that the spring constant of the intracellular part of molecular clutches (kc) affects cell behaviour mainly on rigid substrates, and much less so on soft ones. Thus, the elasticity of both the intracellular and extracellular parts of molecular clutches is a key factor influencing mechanosensitive cell adhesion behaviour.
Bifurcation in the molecular clutch system causes bistability of cell adhesion on rigid substrates

The semi-analytical nature of the models developed in our study also allowed for a detailed analysis of the stability of the molecular clutch system and the bifurcations it experiences. It was found that at high values of the spring constant of the intracellular part of molecular clutches (kc), the cell traction curve and the retrograde actin flow curve often undergo a bifurcation, which leads to the appearance of two additional stationary branches, one stable and one unstable, see, for example, Figure 4. To map the bistability region in the space of model parameters, we varied the values of kc and the myosin II density (σm) in the linear KD and linear WT models, see Figure 5. As a result, it was found that in the case of MEF Talin 1 KO, Talin 2 shRNA cells, the bistability region is localized at moderate values of the myosin II density, close to the parameter range of the linear KD model corresponding to these cells [Figure 5(a,e)]. This finding suggests that molecular clutches likely experience bistability in myosin II-depleted regions of such cells, which can often be found at the cell periphery [32]. Indeed, fitting the previously published experimental data collected on MEF Talin 1 KO, Talin 2 shRNA cells treated with the myosin II inhibitor blebbistatin [26] to the linear KD model predicted that these cells begin to develop bistable cell adhesion behaviour when the density of myosin II motors in them decreases by ≳ 1.6 times [15 µM blebbistatin case, Figure 6(a)], with a further decrease in the myosin II density leading to a more pronounced bistability of cell adhesion [50 µM blebbistatin case, Figure 6(a)]. On the other hand, blebbistatin-treated MEF WT cells do not appear to exhibit bistable cell adhesion behaviour [Figure 6(b)], mainly due to the very high elasticity of talin-based molecular clutches, which places MEF WT cells far from the corresponding bistability region in the model parameter space, see Figure 5(b,e). This finding suggests that talin molecules could potentially play an important role in stabilizing nascent adhesions (NAs) during their maturation. To better understand the origin of the bistability of cell adhesion and the potential role of the myosin II density (σm), we varied the latter in the linear KD model, plotting cell traction (P) as a function of retrograde actin flow (v) and the Young's modulus of the substrate (E), see Figure 5(c). It was found that a decrease in the myosin II density to 44 µm−2 leads to the appearance of an unstable branch with two saddle-node bifurcation points at its ends, where the unstable branch merges with the stable ones, a typical signature of a cusp catastrophe [Figure 5(c)]. A further decrease in the myosin II density in the linear KD model led to the displacement of the two saddle-node points along the surface describing cell traction (P), outlining a region of instability that contains only points corresponding to the unstable branches of the cell traction curves shown in Figure 5(c). Projected onto the horizontal Young's modulus axis, the trajectories of the two saddle-node points form the boundary of a section of the bistability domain [Figures 5(a,d)]. Thus, it can be concluded that the bistability region arises in the model parameter space due to a cusp-like catastrophe experienced by the molecular clutch system when the intracellular part of molecular clutches is sufficiently rigid. Finally, it should be noted that the catch-bond behaviour of adhesion receptors (integrins) turned out to be dispensable for the occurrence of all the above bifurcations, since very similar behaviour of the molecular clutch system was also observed in the case of slip bonds [Eq. (11) and Eq. (12)], suggesting that the bistable behaviour is an intrinsic property of the system.
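The cusp-catastrophe scenario described above can be illustrated with its textbook normal form, x³ + ax + b = 0, whose real roots play the role of stationary branches (two stable, one unstable inside the fold); this toy sketch is independent of the clutch model itself and is not taken from the paper's equations:

```python
import numpy as np

def n_stationary_states(a, b):
    """Count real roots of the cusp normal form x**3 + a*x + b = 0.

    Inside the cusp region the cubic has three real roots (two stable
    branches plus one unstable branch); outside it has a single root.
    """
    roots = np.roots([1.0, 0.0, a, b])
    return int(np.sum(np.abs(roots.imag) < 1e-9))

# Two saddle-node points at |b| = 2*(-a/3)**1.5 bound the bistable region:
print(n_stationary_states(a=-3.0, b=0.0))  # inside the fold: 3 states
print(n_stationary_states(a=-3.0, b=5.0))  # outside the fold: 1 state
```

Sweeping (a, b) and recording where the root count jumps from 1 to 3 traces out exactly the kind of cusp-shaped bistability boundary mapped in Figure 5.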
Cell adhesion complexes function close to a stationary / quasi-stationary state

The existence of stable stationary states at each value of the substrate Young's modulus suggests that the molecular clutch system should converge over time to one of these states. To find out how quickly the molecular clutch system reaches these stationary states, starting from an initial configuration with zero adhesion bonds between a cell and the substrate, we used the finite-difference method to numerically solve the master equation describing the time evolution of the molecular clutch system [Eq. (A10), SI]. As a result, both the linear WT and linear KD models were found to predict rapid convergence of key experimental observables, such as cell traction and retrograde actin flow, to their stationary values within 10 − 20 s, see Figures 7(a,b) and S7(a,b), SI. The probability distribution of molecular clutch extension also demonstrated rapid convergence to the stationary distribution given by Eq. (9) in as little as 20 s [Figures 7(c) and S7(c), SI].
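As an illustration of the finite-difference approach (a one-site caricature with constant on/off rates, not the full master equation of Eq. (A10), SI; all rate values are illustrative), forward-Euler integration already shows the rapid relaxation to a stationary bond occupancy from a bond-free initial state:

```python
def relax_occupancy(k_on, k_off, dt=0.01, t_end=20.0, p0=0.0):
    """Forward-Euler integration of dp/dt = k_on*(1 - p) - k_off*p,
    a one-site caricature of the clutch master equation."""
    p, steps = p0, int(t_end / dt)
    for _ in range(steps):
        p += dt * (k_on * (1.0 - p) - k_off * p)
    return p

p_final = relax_occupancy(k_on=0.3, k_off=0.1)  # start from zero bonds
p_stat = 0.3 / (0.3 + 0.1)                      # analytic stationary value
print(abs(p_final - p_stat) < 1e-3)             # converged within 20 s
```

The relaxation time of this caricature is 1/(k_on + k_off) = 2.5 s, so by 20 s the occupancy is within a fraction of a percent of its stationary value, mirroring the fast convergence reported above.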
These results are in good agreement with previous experimental observations indicating that retrograde flow of actin filaments in filopodia or near the edges of living cells occurs at an almost constant rate (v(t) ≈ const) [20,26,31,32], suggesting that cell adhesion complexes should function close to a steady state. On the other hand, they are in sharp contrast to the predictions of the conventional molecular clutch theory, which suggests that retrograde actin flow should experience strong fluctuations under similar conditions [27], see also Figure S6(a), SI. Thus, our model better reflects existing experimental data. Moreover, the average density of engaged molecular clutches predicted by the linear WT, talin WT and WT-2 models in the case of MEF WT cells was found to be relatively high [50 − 150 µm−2, Figure S4(a,c), SI] compared to previous models (≲ 10 µm−2 [20,24,27]), which is also in better agreement with experimental estimates of the talin density in FAs (∼ 80 − 500 µm−2 [43]). Furthermore, the loading rates of molecular clutches in the stationary state predicted by the linear WT, talin WT and WT-2 models in the case of rigid substrates (0.2 − 5.5 pN/s, E = 1 MPa, Figure S8, SI) are also in better agreement with recent experimental studies (0.5 − 4 pN/s [33]), in sharp contrast to the predictions made by previous molecular clutch models (> 100 pN/s [20,[24][25][26][27]).
Interestingly, by fitting the calculated retrograde actin flow curves to an exponential decay function, it was found that the average value of the characteristic relaxation time of the molecular clutch system (τrelax = 0 − 4 s) is practically independent of the viscous properties of the substrate, although it still depends on its Young's modulus, see Figures 7(e) and S7(e), SI. In addition, it turned out that in the case of viscous substrates (ξs ≥ 100 pN·s/nm), the cell traction and retrograde actin flow curves reach their quasi-stationary values within the same 10 − 20 s, despite the fact that the substrate deformation continues to change slowly with time [Figures 7(d) and S7(d)]. This result suggests that in the case of viscoelastic materials described by the Kelvin-Voigt model, which was used in our model to represent large-scale substrate deformations [Figure 1(b)], the viscous properties of the substrate have little effect on the rapid establishment of a stationary / quasi-stationary state by cell adhesion complexes. Furthermore, it can be seen from Table T4, SI, that the predicted relaxation time of the molecular clutch system is close to the characteristic dissociation time of talin-based molecular clutches measured in experiments, suggesting that our model provides physically reasonable estimates. Notably, the relaxation time predicted by our model is 2 − 4 orders of magnitude smaller than the average lifetime of FAs [Table T4, SI], indicating that FAs spend more than 98% of their time in a stationary / quasi-stationary state. This finding is in good agreement with the above conclusions regarding the steady-state behaviour of cell adhesion complexes, based on experimental observations of the nearly constant rate of retrograde flow of the actin filaments associated with these complexes.
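The relaxation-time extraction can be sketched as a log-linear least-squares fit of an exponential decay, v(t) = v∞ + (v0 − v∞)·exp(−t/τ), to a retrograde flow curve; the curve below is synthetic and its parameter values are purely illustrative:

```python
import numpy as np

def relaxation_time(t, v, v_inf):
    """Extract tau from v(t) = v_inf + (v0 - v_inf)*exp(-t/tau)
    via a log-linear least-squares fit of the decaying part."""
    y = np.log(np.abs(v - v_inf))   # linear in t with slope -1/tau
    slope, _ = np.polyfit(t, y, 1)
    return -1.0 / slope

t = np.linspace(0.0, 10.0, 200)
v = 20.0 + 80.0 * np.exp(-t / 2.5)  # synthetic flow curve, tau = 2.5 s
tau = relaxation_time(t, v, v_inf=20.0)
print(round(tau, 3))                 # -> 2.5
```

Applying such a fit to the computed flow curves at different ξs and E values is what reveals that τrelax tracks the Young's modulus but not the substrate viscosity.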
In addition, Table T4, SI, shows that the experimentally measured characteristic binding and dissociation times of vinculin and paxillin to / from FAs are an order of magnitude greater than the characteristic relaxation time of the molecular clutch system. This result suggests that talin may serve as a rapid-response protein that helps establish the initial landscape of molecular clutches in a substrate-mechanosensitive manner, which can subsequently be reinforced and shaped by vinculin / paxillin into mature FAs, in good agreement with previous experimental data [10,26,[39][40][41]].

Ligand concentration regulates cell adhesion via a myosin II feedback loop in fibroblast cells

Finally, to investigate the effect of the density of molecular clutches on cell adhesion, we fitted previously published experimental data on MEF WT cells and MEF Talin 1 KO, Talin 2 shRNA cells plated on elastic substrates coated with different fibronectin concentrations [26]. Since the density of the fibronectin network affects the rate of molecular clutch formation, as well as the distribution of mechanical load near the adhesion sites of molecular clutches, the data fitting was performed by varying the clutch formation rate and the characteristic radius of the adhesion sites of molecular clutches. In addition, the density of myosin II motors was also varied. Using this approach, the experimental data could be fitted fairly well to the linear KD, linear WT and talin WT models, see Figure 8(a,b).
Interestingly, the data fitting revealed a strong correlation between the myosin II motor density (σm) and the fibronectin concentration, as the fitting did not converge when the myosin II density was fixed. According to the model, the density of myosin II motors pulling on actin filaments decreases by ∼ 25% on substrates coated with a 1 µg/ml fibronectin solution compared to a 100 µg/ml fibronectin solution, see Figure 8(c). To test this model prediction, we transfected NIH-3T3 cells with myosin regulatory light chain (RLC)-GFP [46] and measured the density and the average size of myosin II stacks labelled with this construct as a function of the fibronectin concentration, see Figures 8(d-f). As a control, we also performed similar measurements in HeLa cells. It was found that while in HeLa cells myosin II motors formed more or less uniform assemblies resembling beads on a string [Figure S9, SI], in NIH-3T3 cells they formed more irregular stacks, see Figure 8(d). More importantly, although the fibronectin concentration was found to have little effect on the density of myosin II stacks in both NIH-3T3 and HeLa cells [Figure 8(e)], it had a strong effect on their average size in NIH-3T3 cells (a ∼ 35% drop for the 1 µg/ml fibronectin solution compared to the 100 µg/ml solution), but not in HeLa cells [Figure 8(f)], suggesting that this effect may be cell-type dependent. Thus, our model can quite accurately capture the mechanical states of fibroblast cells and make physiologically relevant predictions.
Previous studies have shown that fibronectin density can influence the activation of Rho kinase (ROCK) in endothelial cells, which controls the assembly of myosin II stacks [47]. Thus, MEF cells could potentially use a similar mechanism. However, the precise molecular pathways responsible for fibronectin-dependent activation of ROCK remain unknown. Since our observations indicate that such a feedback loop is cell-type dependent, it may be possible in the future to gain detailed insight into the origin of this phenomenon by comparing normal and transformed cells.

DISCUSSION

In this work, we developed a semi-analytical model of cell adhesion based on the molecular clutch theory to examine the potential roles of adaptor protein elasticity, local substrate deformations, and myosin II and ligand densities in mechanosensitive cell adhesion behaviour. The main difference between our model and previous studies is that we took a more explicit approach to modeling the elastic properties of adaptor proteins and substrate deformations. Then, by deriving a semi-analytical solution to the differential master equation of the model, we were able to evaluate experimental observables characterizing the mechanosensitive behaviour of cell adhesion complexes, such as the cell traction force and the rate of retrograde actin flow, using experimentally measured parameters such as the stiffness of adaptor proteins and the dissociation rate of molecular clutches. Based on experimental data fitting, performed by varying a few key model parameters (Table 1), our model was able to accurately reproduce all previous experimental observations and to make predictions regarding the correlation between ECM ligand and myosin II densities that were successfully tested in experiments. Importantly, the physical picture offered by our model differs from the conventional molecular clutch theory in several essential aspects.
First, our model predicts that the molecular clutch system quickly reaches a stationary state at any substrate rigidity, in less than 10 − 15 s. This prediction better reflects the experimentally observed steady retrograde actin flow in living cells under both low and high cell traction conditions [20,26,31,32], in sharp contrast to the strong fluctuations predicted by previous models [27]. This means that the mechanodependent dynamics of cell adhesion complexes is a fast and robust process that allows easy differentiation of soft and rigid substrates without having to go through multiple stochastic load-and-fail cycles, which could increase the number of mis-sensing events. The predicted characteristic relaxation time of the molecular clutch system to a stationary state (0.1 − 4 s) turned out to be an order of magnitude smaller than the average turnover time of talin and integrin in FAs (15 − 50 s) and the characteristic binding time of vinculin and paxillin to FAs (30 − 60 s) measured in experiments (Table T4, SI). This result suggests that the dynamic evolution of cell adhesion is a quasi-stationary process mainly driven by talin-based molecular clutches, which are shaped by vinculin and paxillin into higher-order structures such as FAs, in good agreement with previous experimental studies [10].
Our model further highlights the role of the elastic properties of the adaptor proteins that constitute molecular clutches, such as α-actinin and talin. Namely, the model predicts that the high elasticity of long talin molecules allows them to robustly form molecular clutches on both soft and rigid substrates, in contrast to the previously described load-and-fail mechanism of molecular clutch functioning on rigid substrates [20,24,27]. Upon an increase in the stiffness of adaptor proteins, a situation reminiscent of the talin knockout / knockdown condition, our model predicts a strong decrease in the density of molecular clutches and in the cell traction force. This results in altered cell behaviour, in good agreement with experimental studies [26], highlighting the importance of molecular clutch elasticity in governing cell adhesion behaviour. It should be noted that in our study we considered only the unfolding of the mechanosensitive R3 domain of talin. Yet, it has previously been suggested that mechanical unfolding of other domains may help stabilize the tension experienced by talin, which in turn influences the stability of the corresponding molecular clutches, an effect that has also been proposed to contribute to the function of many adaptor proteins found in various cells [48,49]. Moreover, FAs contain many different types of adaptor proteins with elastic properties different from those of talin. How the unfolding of multiple talin domains, as well as mixtures of different molecular clutch species, regulates molecular clutch behaviour is an important open question. In the future, combining our model with atomic-level simulations of the stability of the globular domains of adaptor proteins [50] may lead to a better understanding of how the protein composition of molecular clutches influences mechanosensitive cell adhesion behaviour in a context-dependent manner.
Notably, several physiologically relevant parameters emerging from our model predictions also appeared to be more in line with experimentally measured values than those of previous theoretical studies. For example, the force loading rates of clutch molecules predicted by our models in the case of rigid substrates (0.2 − 5.5 pN/s, E = 1 MPa) are in better agreement with recent direct experimental measurements using single-molecule DNA tension sensors (0.5 − 4 pN/s [33]) than the predictions made by previous molecular clutch models (> 100 pN/s [20,[24][25][26][27]). Furthermore, the models developed in our work also suggest that talin-based molecular clutches transmit most of the cell traction force on rigid substrates, in good agreement with recent experimental measurements based on FRET talin tension sensors showing that talin is the main force-bearing molecule in cell adhesion complexes [39,40], and in sharp contrast to previous studies predicting that talin-based molecular clutches transmit only ∼ 7% of the cell traction force [26]. These results demonstrate that the theoretical framework developed in our study is capable of accurately relating the force response of molecular clutches at the single-molecule level to the large-scale ensemble behaviour of cell adhesions.
Our model also predicts that at high adaptor protein and substrate stiffness (kc ≳ 0.5 − 0.8 pN/nm, E ≳ 100 kPa), the molecular clutch system exhibits bistable behaviour that, from a physical perspective, resembles the cytoadhesion properties of malaria-infected red blood cells [51,52]. This bistability comprises two stable states corresponding to weak and strong cell adhesion, with the former characterized by a lower density of molecular clutches, smaller cell traction and faster retrograde actin flow than the latter. It was found that the global bifurcations responsible for the emergence of such bistable behaviour determine the shape of the cell traction and retrograde actin flow curves not only within the bistability region but also far beyond it, resulting in the very distinct mechanosensitive responses of MEF WT and MEF Talin 1 KO, Talin 2 shRNA cells to the substrate stiffness [26]. In the future, detailed observations of retrograde actin flow / traction force in talin-depleted cells may shed further light on the occupancy of, and the transition dynamics between, the two stable states of cell adhesion.
The bistability of the molecular clutch system may potentially occur at the single-cell level or at the level of individual FAs, depending on the heterogeneity of the density and topography of the ECM ligand network with which a cell interacts. Indeed, both can influence the local density of sites available for molecular clutch formation and hence the rate of molecular clutch assembly, which was found in our study to have a significant effect on the location of the bistability region of cell adhesion in the model parameter space and, consequently, on the shape of the cell traction and retrograde actin flow curves. This, in turn, can strongly influence the tension experienced by molecular clutches and actin filaments in different parts of the cell, altering the mechanodependent cooperative assembly of myosin IIA and myosin IIB stacks that has previously been observed in experiments [53][54][55]. The assembly of such stacks is regulated by the ROCK, MLCK [55] and PKCz [54] kinases, which are responsible for phosphorylation of the regulatory light chain and of the heavy chain of myosin IIB, respectively. And while the accumulation kinetics of myosin IIA,C were found to be nearly identical across cell types, myosin IIB exhibited highly cell-type- and cell-cycle-stage-specific behaviour due to PKCz activity [54]. Thus, taken together, the above results suggest that the heterogeneity and topography of the ECM ligand network should have a significant impact on cell adhesion behaviour, which may be highly cell-type dependent, in good agreement with previous experimental studies [2].
Our model calculations indicate that an increase in myosin II density, which can be caused by the mechanodependent cooperative assembly of myosin IIA and myosin IIB stacks around NAs, can trigger the transition of such NAs from a weak to a strong adhesion state. This, in turn, may lead to talin recruitment and further maturation of NAs, as talin binding to actin has previously been shown to be enhanced by applied mechanical load [36]. Thus, the fibronectin-myosin II feedback loop observed in our study may play an essential role in the maturation of cell adhesion complexes. Indeed, it was previously shown that inhibition of myosin II in HeLa cells by treatment with blebbistatin or Y-compound, as well as knockdown of the myosin IIA heavy chain, leads to a significant weakening of filopodia adhesion to fibronectin-coated substrates [32]. Very similar behaviour was also observed in filopodia that lack myosin II at their base region [32]. Since filopodia, which guide cell migration, are often used by cells to probe the local microenvironment and form NAs, these results suggest that the bistable behaviour of molecular clutches could potentially be employed by living cells to move through the ECM or on flat substrates. Furthermore, our results indicate that differences in the elasticity of fibronectin and RGD ligands, which are commonly used in mechanobiological studies, and in the way these ligands are attached to the underlying substrate (non-specific adsorption or covalent binding) can have a strong effect on the ability of the ligand network to transfer force from cell adhesion complexes to the substrate. This, in turn, will result in very distinct cell adhesion behaviour.
For example, our model predicts that to probe the elastic properties of the substrate, cells locally apply a significant mechanical load to the ECM, pinching it at the adhesion sites of molecular clutches. This result is in good agreement with previous experimental studies performed using elastic micropillar arrays, which showed that cells indeed apply local contractions to the substrate to sense its mechanical properties [45]. On the other hand, it suggests that cell-generated forces can significantly perturb the local organization of the ECM network. In the case of ECM ligands adsorbed to the substrate via non-specific interactions, these local perturbations may eventually accumulate into large-scale changes in the ECM network at the cell site. Indeed, it was experimentally discovered that in this case cells apply a force strong enough to remove surface-adsorbed fibronectin, reorganizing it into fibrils [56]. This, in turn, leads to a transition of cells from FAs to elongated fibrillar adhesions, radically changing the cell adhesion pattern [56]. These differences may be important for understanding the microenvironment-sensing behaviour of cells beyond substrate stiffness. Namely, although various materials, such as a collagen network or PDMS and glass coated with ECM ligands, may have similar bulk elasticity, the behaviour of cells on such substrates may differ significantly due to their different elastic responses at the molecular level. In the future, increasingly realistic models of the substrate-molecular clutch coupling may help resolve numerous experimental discrepancies arising from such differences and provide valuable information for understanding cell-ECM interactions, which is important for tissue engineering.
It should be noted that one critical missing link in the development of molecular clutch models is how the dynamic behaviour of cell adhesion complexes could potentially be involved in the regulation of downstream cell fate decisions. Many mechanosensitive signaling events involve force-dependent conformational changes of molecular clutch components such as talin, p130Cas and FAK, which can alter molecular interaction patterns, enzyme kinetics and calcium influx. With more accurate and realistic modeling of the conformational states of molecular clutch proteins, which is possible within our model, one can begin to quantitatively investigate the complex interactions between mechanical and biochemical processes at the molecular level that have important consequences for many physiological processes, from morphogenesis to cancer progression. For example, the quasi-static nature of molecular clutch kinetics and the appearance of bistable cell adhesion behaviour in our model suggest that, for a given substrate rigidity, multiple stable states of molecular clutches can coexist. This means that our model can, in principle, be used as a theoretical basis for describing the maturation process of FAs, in which small NAs gradually transform into mature FAs with a different molecular composition, molecular clutch density and traction force, despite the same extracellular environment. In this context, the maturation of FAs can be viewed as a series of quasi-stationary states, each of which represents a snapshot of maturing cell adhesion. Changes in the physiological parameters describing mechanical or biochemical processes, such as molecular clutch density / elasticity and myosin II contractility, would then collectively model the evolution of the quasi-stationary behaviour of FAs towards more mature states. In addition, the bistability of cell adhesion found in our work could also explain why only a small fraction of NAs mature on rigid substrates while the rest disassemble, warranting future studies.

Finally, in addition to substrate stiffness, rheological properties of the ECM, such as viscoelasticity, have also been shown to be an important regulator of cellular functions and are related to pathological processes of tumor growth and metastasis. There have been several interesting attempts to model the mechanosensitive behaviour of cells on viscoelastic substrates based on the conventional molecular clutch theory, which have successfully reproduced cellular behaviours such as periodic oscillations in cell spreading and cell migration [23,[27][28][29][30]]. In the future, it will be interesting to combine our semi-analytical model with these previous studies to explore how physiological factors, such as the stiffness of molecular clutch molecules, regulate these important cell behaviours.

AUTHOR CONTRIBUTIONS

P.L. performed experiments. P.L., Q.W. and A.K.E. analysed the data. M.Y. and A.K.E. designed the research and wrote the paper. A.K.E. supervised the study, derived formulas and carried out computations.

ACKNOWLEDGEMENT

We greatly appreciate the encouraging and insightful discussions with Dr. Jie Yan (MBI, Singapore) and Dr. Alexander D. Bershadsky (MBI, Singapore) about the results of our study. We are grateful to Dr. Alexander D. Bershadsky for providing the genetic constructs used in the study. We would like to thank Dr.
Qin Peng (Shenzhen Bay Laboratory, China) for NIH-3T3 and HeLa cell lines.Also, we would like to thank Shenzhen Bay Laboratory Supercomputing Center for assistance in carrying out the calculations and Bioimaging Core of Shenzhen Bay Laboratory for providing cell imaging support.We are also grateful to Bioimaging Core engineer Yu Mei for assistance with SIM microscopy (Elyra7).This work was supported by the start-up funds from Shenzhen Bay Laboratory (A.K.E.).).While the latter is assumed to have a linear response to the applied load, the former is considered as a linear spring in the linear clutch model and a nonlinear spring with a talin-like force-response in the talin clutch model.The pulling force (Fm) generated by myosin II motors causes movement of actin filaments at a speed of v.This leads to a gradual extension of molecular clutches predominantly along the x-axis parallel to the substrate surface (the contact angle θ ≈ 15 • [35]), which acquire extension l.This, in turn, allows transmission of the myosin II-generated force (Fm) to the substrate, resulting in the cell traction force (Fc), which causes deformation of the substrate (xs) over a large cell adhesion area.The efficiency of the force transmission is mainly determined by the kinetic behaviour of molecular clutches under mechanical load, which is described by their formation and dissociate rates, kon and k off , respectively.Viscoelastic behaviour of the substrate over a large cell adhesion area is described in the model by a linear spring with the stiffness ks and ξs constant, which denotes the viscous properties of the substrate.As can be seen from the graph, the characteristic relaxation time decreases with increasing Young's modulus of the substrate.However, it is practically independent of the viscous properties of the substrate.TABLE 1. 
Values of fitting parameters in the linear KD, linear WT and talin WT models. Recoverable row fragments: k0_on = 0.005 s−1 (linear KD); the resulting spring constant of molecular clutches consists of the two parts, extracellular (k′s) and intracellular (kc) [Figure 1(b)].
FIG. 1. Molecular clutch mechanism. (a) Integrins and talins are the main components of adhesion links that form between the actin cytoskeleton and the substrate. Myosin II-generated retrograde flow of actin filaments leads to mechanical stretching of these links, causing local substrate deformations (shown in red). Force-transmitting units, each including an intracellular part (integrin and talin) and an extracellular part (locally deformed substrate), are referred to in this study as 'molecular clutches'. (b) In the model, molecular clutches are represented by composite springs consisting of two parts: a cellular one (kc) and an extracellular one (k′s). While the latter is assumed to have a linear response to the applied load, the former is considered as a linear spring in the linear clutch model and a nonlinear spring with a talin-like force response in the talin clutch model. The pulling force (Fm) generated by myosin II motors causes movement of actin filaments at a speed of v. This leads to a gradual extension of molecular clutches predominantly along the x-axis parallel to the substrate surface (the contact angle θ ≈ 15° [35]), which acquire extension l. This, in turn, allows transmission of the myosin II-generated force (Fm) to the substrate, resulting in the cell traction force (Fc), which causes deformation of the substrate (xs) over a large cell adhesion area. The efficiency of the force transmission is mainly determined by the kinetic behaviour of molecular clutches under mechanical load, which is described by their formation and dissociation rates, kon and koff, respectively. Viscoelastic behaviour of the substrate over a large cell adhesion area is
described in the model by a linear spring with stiffness ks and a constant ξs, which denotes the viscous properties of the substrate.
FIG. 2. FIG. 3.
FIG. 2. Geometric interpretation of the force balance in the molecular clutch system. (a, b) Intersections of the traction force curve, Fc(v), and the force-velocity curve of myosin II, Fm(v), correspond to the stationary points (S1 and S2) described by Eq. (8), at which the molecular clutch system reaches mechanical equilibrium. Stability of trajectories near such points is largely determined by the slopes of the cell traction and myosin II force-velocity curves. Namely, if the latter is greater than the former [panel (a)], trajectories in the neighbourhood of the corresponding stationary point are stable (point S1); that is, small perturbations do not cause them to move far from the stationary point. On the other hand, if the slope of the myosin II force-velocity curve is smaller than that of the traction force, trajectories near the stationary point will exhibit unstable behaviour [panel (b)]. As an example, consider point B in panel (a). The initial deviation of the system away from the stationary point S1 (for example, due to dissociation of several molecular clutches) will first lead to an increase in the retrograde actin flow, since myosin II motors have to work against a smaller resisting force. However, to maintain a steady flow of actin at a rate corresponding to the new state (point B′), myosin II motors must, in the long run, balance the resisting traction force created by molecular clutches, which is determined by point B′′. Since the pulling force generated by myosin II motors is less than the stationary traction force (i.e., point B′ is below point B′′), this will cause the retrograde actin flow to slow down, which will eventually bring the system closer to the stationary point S1. Similar considerations apply to the rest of the points shown in panels (a) and (b), demonstrating the long-term stability of
trajectories near the stationary point S1 and the instability of trajectories near the stationary point S2. Numerical simulations of the dynamic behaviour of the molecular clutch system show good agreement with the above physical considerations; see Figure 7 and Figure S6(b), SI. (c) Example of 3D plots of the cell traction force (Fc, coloured surface) and myosin II-pulling force (Fm, gray surface) as a function of the retrograde actin flow (v) and Young's modulus of the substrate (E). Black solid curves formed by the intersection of the two surfaces on the left panel indicate stationary points of the system that attract trajectories to their neighbourhood (stable branches), whereas the black dashed curve indicates unstable stationary points (unstable branch). The white dot designates a saddle-node bifurcation point where the stable and unstable branches merge together. All cell traction and retrograde actin flow curves shown in the Results section were obtained by projecting the stable and unstable branches of 3D graphs like those displayed in panel (c) onto the vertical and horizontal planes, respectively.
FIG. 4. FIG. 5.
FIG. 4.
Effect of the elasticity of the intracellular part of molecular clutches on cell adhesion. (a, b) Graphs of the cell traction and retrograde actin flow curves calculated using the linear WT model for various values of the spring constant (kc) describing the elastic properties of the intracellular part of molecular clutches. Solid curves denote stable branches of the graphs, and dashed curves indicate unstable branches; see Figure 2 for details. It can be seen from the plots that the spring constant kc has a strong influence on the cell adhesion behaviour in the case of rigid substrates (E > 10 kPa), while there is practically no change on soft substrates (E ≤ 10 kPa). In addition, the figure shows that in the case of stiff molecular clutches (kc ≳ 2.6 pN/nm), the molecular clutch system undergoes bifurcation, leading to bistability of the cell traction and retrograde actin flow curves on rigid substrates. For points A, B and C shown in panel (b), which correspond to the case of kc = 5 pN/nm, we performed stochastic simulations and finite-difference calculations of the time evolution of the molecular clutch system by solving the master equation [Eq. (A10), SI], confirming that the molecular clutch system exhibits bistable behaviour on rigid substrates but not on soft ones, see Figure S6(b), SI. To demonstrate the bifurcation, the model parameter values used in the calculations were the same as for the case of low fibronectin density (1 µg/ml) shown in Figure 8(a).
FIG. 6. FIG. 7.
FIG. 6. The role of myosin II density in mechanosensitive cell adhesion. (a, b) Fitting of experimental data collected on MEF Talin 1 KO, Talin 2 shRNA cells (a) and MEF WT cells (b) treated with different concentrations of the myosin II inhibitor, blebbistatin [26]. The data were fitted to the linear KD model [solid curves, panel (a)], linear WT model [solid curves, panel (b)] and talin WT model [dotted curves, panel (b)], respectively, by varying only the density of myosin II motors. The remaining model parameters were fixed at the values shown in Table T1, SI. The dashed segments of the cell traction curves in panel (a) indicate unstable branches of the corresponding graphs. Panels (a, b) show that while MEF Talin 1 KO, Talin 2 shRNA cells are predicted to have bistable behaviour on rigid substrates at low myosin II densities, MEF WT cells lack this type of behaviour due to the high elasticity of talin-based molecular clutches; see Figure 5 for more details.
Recoverable fragments of Table 1 (parameter values and descriptions):
- 0.23 s−1 / 0.3 s−1: assembly rate of molecular clutches in a mechanically relaxed state. The estimated values are in fairly good agreement with the experimentally measured rates of formation of talin-based molecular clutches in Xenopus laevis cells (0.03-0.06 s−1 [42]).
- Density of myosin II motor proteins near the cell-substrate interface. The estimated values are close to the experimentally measured density of myosin II filaments (~2-3 µm−2 [32,46]) multiplied by the average number of individual myosin molecules in each myosin II filament (~30 [57]).
- kc = 1.6 pN/nm / 0.05 pN/nm: spring constant of the intracellular part of molecular clutches (linear KD and linear WT models only). The experimentally measured spring constant of talin in the physiologically relevant range of 0-10 pN is ~0.1 pN/nm; see Table T3, SI.
- Molecular clutch extension corresponding to the yield strength (linear KD and linear WT models only). The value of this model parameter did not have a strong effect on the results in the case of the linear WT model.
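The graphical stability analysis around Fig. 2, where stationary points are intersections of the traction curve Fc(v) with the myosin II force-velocity curve Fm(v) and stability is set by the relative slopes, can be sketched numerically. The functional forms, parameter values, and helper names (`Fm`, `Fc`, `stationary_points`) below are illustrative assumptions, not the paper's fitted model:

```python
# Numerical sketch of the Fig. 2-style stability analysis: stationary points
# are intersections of the traction curve Fc(v) with the myosin II
# force-velocity curve Fm(v); a point is classified stable when the net
# force Fm - Fc decreases through zero, i.e. d(Fm - Fc)/dv < 0 there.
# Functional forms and parameter values are illustrative assumptions only.

def Fm(v, F0=100.0, v0=120.0):
    """Assumed linear myosin II force-velocity relation (pN)."""
    return F0 * (1.0 - v / v0)

def Fc(v, c=30.0, vc=10.0):
    """Assumed load-and-fail traction curve: rises, peaks near vc, decays."""
    return c * v / (1.0 + (v / vc) ** 3)

def stationary_points(vmax=119.0, n=2000):
    """Bracket sign changes of Fm - Fc on a grid, bisect, classify by slope."""
    g = lambda v: Fm(v) - Fc(v)
    vs = [vmax * i / n for i in range(n + 1)]
    pts = []
    for a, b in zip(vs, vs[1:]):
        if g(a) * g(b) < 0:                 # root bracketed in [a, b]
            lo, hi = a, b
            for _ in range(60):             # plain bisection
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            v_star = 0.5 * (lo + hi)
            h = 1e-4
            slope = (g(v_star + h) - g(v_star - h)) / (2.0 * h)
            pts.append((v_star, "stable" if slope < 0 else "unstable"))
    return pts

# With these illustrative curves the system is bistable: a low-flow adhered
# state and a high-flow sliding state, separated by an unstable point.
points = stationary_points()
```

With these assumed curves the sketch reproduces the qualitative picture described for stiff clutches: two stable stationary points separated by an unstable one.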
Evaluation of water structures in cotton cloth by fractal analysis with broadband dielectric spectroscopy

Broadband dielectric spectroscopy measurements were performed on naturally dried cotton cloth, and a recently developed analytical technique for fractal analysis of water structures was applied to obtain the existential states and locations of water molecules in the material. Three relaxation processes observed in the GHz, MHz, and kHz frequency regions were attributed to dynamic behaviors of hydrogen bonding networks (HBNs) of water and interacting molecules, polymer chains with interacting ions and water molecules, and ions restricted on the interfaces of larger structures, respectively. Water molecules were heterogeneously distributed in the cotton cloth, and the HBNs remained as a broad GHz frequency process. Fractal analysis suggested that the water molecules distributed in the material were characterized by a small value (0.55) of the Cole-Cole relaxation time distribution parameter, indicating a spatial distribution of HBN fragments with various sizes in the cotton cloth. This result was also supported by the T2 relaxation time obtained from nuclear magnetic resonance for naturally dried cotton yarn. Comparing previous results of dielectric relaxation measurements and fractal analysis with the τ–β diagram for various aqueous systems, it was determined that water molecules cannot exist inside cellulose microfibrils. The fractal analysis employed in this work can be applied to dynamic water structures in any material. The presented analytical technique with a universal τ–β diagram is expected to be an effective tool to clarify water structures in detail, even for the heterogeneous hydration of low water content substances.

Introduction

Cotton materials are naturally derived substances used in our daily lives.
Since the structure and physical properties are affected by the water content, the hydration structure and mechanism in cotton [1-3] and in the main component, cellulose [4-9], and related materials [10,11] have been extensively studied. However, since water molecules form various water structures by interacting with surrounding molecules, there exist difficulties in observing and analyzing those behaviors in detail, and various approaches are still required. One of the authors (T. I.) has recently presented a hydration model of cotton cloth to explain the hardening effect of dry cotton rag from the wet state [1-3]. The hardening effect disappears with mechanical stimulus, and it is not shown in the completely dry state. These results suggest that the hardening effect brought about by water molecules is explained not only by the average water content but also by the spatial distribution of water molecules in the dry sample. The model shows that the stiffness of cotton cloth is caused by a cross-linked network between single fibers mediated by capillary adhesion of bound water on the surface of cellulose. However, it is difficult to evaluate the spatial distribution of water molecules in a low water content sample, especially for such heterogeneous materials. Broadband dielectric spectroscopy (BDS) [12] is recognized as one of the most effective tools to study hydration structures of various moist materials, including biological systems [13,14]. The GHz relaxation process, generally observed for water [15-17] and aqueous polymer systems [18-21], exhibits a decrease in the relaxation frequency with decreasing water content. In other words, water molecules involved in moist materials, such as polymer-water systems, show a plasticizer effect in which water molecules increase the mobility of both water molecules and local polymer chains with increasing water content [21,22].
J Mater Sci (2021) 56:17844-17859
Decreases in molecular mobility, typically observed with reduced water content and temperature, are often accompanied by structural formations called slow dynamics. Dielectric measurements for the GHz process of aqueous materials provide meaningful information about the dynamic behaviors of water structures. Dielectric spectroscopy has been applied to hydration studies on cellulose [9] and related materials [10,11]. Zhao et al. reported that BDS and nuclear magnetic resonance (NMR) measurements over wide ranges of temperature and water content suggested various states of restricted water molecules [9]. In the low water content region, however, relaxation processes with a low dielectric constant become broader, and their overlap tends to induce ambiguities and makes systematic and universal interpretation difficult, especially for heterogeneous materials like cotton cloth. The problem is even more common for low water content substances. Therefore, it is desirable to apply an effective approach that can be utilized in combination to clarify various water structures for any material. There are few universal and accurate methodologies for comparing the hydration processes of materials with different chemical structures other than fractal analysis applied to the GHz frequency process observed by dielectric spectroscopy measurements for moist substances [23-27]. The relaxation mechanism of the GHz frequency process has been explained by cooperative exchanges in hydrogen bonding networks (HBNs) for hydrogen bonding liquids and mixtures [21,28]. Fractal behaviors characterized by a lower fractal dimension were usually shown for more heterogeneous systems [14,29,30]. This result means that HBNs tend to remain even in the lower water content region for more heterogeneous systems, and the analysis suggests an application to evaluate the spatial distribution of water molecules in HBN fragments.
Details of the physical meaning and application of fractal analysis are described in the following section. In the present study, we examined a practical methodology of fractal analysis with the BDS measuring technique to clarify spatial distributions and fluctuations of water molecules in HBN fragments for cotton cloth. The water structures thus obtained by fractal analysis were also confirmed by the transverse relaxation time, T2, obtained by NMR measurements for cotton yarn. The consistency of the hydration model is also considered. Combinations of these evaluation techniques at different scales of observation are also expected to offer additional information for obtaining detailed water structures [14,29-31].

Fractal analysis of water structures

Conventional fractal analysis of water structures with dielectric spectroscopy

Mixtures of polymers and hydrogen bonding solvents, like water, exhibit a GHz frequency process and another process, observed at frequencies around 100 times lower at typical temperatures, due to chain dynamics [21]. The GHz frequency process is often described by symmetric relaxation curves given by the Cole-Cole relaxation function [32],

ε*(ω) = ε∞ + (εl - ε∞) / [1 + (jωτ)^β],   (1)

where εl and ε∞ are the low- and high-frequency limits of the dielectric constant, respectively; j is the imaginary unit; ω is the angular frequency; τ is the relaxation time; and β is the Cole-Cole symmetric relaxation time distribution parameter. When the β value is unity, the relaxation process is described by the Debye equation [33]. By contrast, the lower-frequency process due to local chain dynamics of polymers often exhibits asymmetric curves described using the Kohlrausch-Williams-Watts function [34], Havriliak-Negami function [35], etc. However, both processes are often affected by the other component because of inter-molecular interactions.
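The Cole-Cole function above can be evaluated directly, since it reduces to the Debye equation at β = 1. The sketch below uses illustrative parameter values, and `cole_cole` is a hypothetical helper, not code from this work:

```python
# Minimal sketch of the Cole-Cole relaxation function,
#   eps*(w) = eps_inf + d_eps / (1 + (j*w*tau)**beta),
# which reduces to the Debye equation at beta = 1. The parameter values
# below (d_eps, tau, eps_inf) are illustrative assumptions, not fits.

def cole_cole(omega, d_eps, tau, beta, eps_inf=3.0):
    """Complex permittivity of one symmetric Cole-Cole process."""
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** beta)

tau = 8e-12                       # ~8 ps, a bulk-water-like relaxation time
omega = 1.0 / tau                 # angular frequency of the Debye loss peak
debye = cole_cole(omega, 72.0, tau, 1.0)    # beta = 1: loss peak = d_eps/2
broad = cole_cole(omega, 72.0, tau, 0.55)   # beta < 1: lower, broader loss
```

At the peak frequency the Debye process has loss Δε/2, while the β = 0.55 process (the value reported for cotton cloth) shows the lower, broader loss typical of a wide relaxation time distribution.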
Thus, the high-frequency process also reflects the local parts of the chain, and the low-frequency process also contains solvent molecules attached to the polymer chains. Since the degree of these interactions depends on the respective mixture, liquid structures are still actively investigated. The GHz frequency process can sometimes be treated as two or three Debye-type processes for precise fitting procedures. In the present work, we characterize the fluctuation of the dynamic behaviors of HBNs and the dispersion of their locations as a hydration model of cotton cloth. In this case, it is not suitable to allocate many Debye processes to express a broad relaxation process, even for materials with low water content. The explanation of liquid structures by several Debye processes with similar relaxation times is not suitable for the heterogeneous and fluctuating structures of liquids treated in the present work. Generally, conventional analysis of water structures using dielectric spectroscopy for aqueous polymer solutions examines three relaxation parameters (τ, β, and Δε) for the GHz relaxation process as independent variables. Figure 1a and b shows typical model behaviors of the water content dependence of the relaxation time, τ, and its distribution parameter, β, respectively. However, the common solvent, water, in the mixtures should exhibit characteristic trajectories of slow dynamics with decrease in water content, as shown in Fig. 1c [14,19]. Therefore, the water content dependencies of the relaxation parameters, shown in Fig. 1a and b, cannot sufficiently characterize the water structures. The relationship between the parameters should be determined to characterize the water structures shown in Fig. 1c and their physical meaning.
The relationship between the parameters can be derived from a simple ergodic hypothesis for the time response function reflecting geometric fractal structures [23,24], Eq. (2), where τ0 is the cutoff time of the scaling in the time domain and dG is the fractal dimension of the point set where relaxing units interact with the statistical reservoir. In Eq. (2), the characteristic frequency of the self-diffusion process also appears, where dE is the Euclidean dimension, Ds is the self-diffusion coefficient, R0 is the cut-off size of the scaling in space, and G is a geometrical coefficient approximately equal to unity. Equation (2) indicates a hyperbolic relation between the average relaxation time and its distribution parameter, as shown in Fig. 1c. The τ–β diagram of the fractal analysis allows a reasonable comparison of the relaxation time distribution parameter at the same average relaxation time for any aqueous system. The intercept on the vertical axis of the asymptote of the hyperbola parallel to the horizontal axis in the figure corresponds to the fractal dimension. Curve C shows a higher value of the fractal dimension than do curves A and B. It is clearly indicated that the result of fractal analysis in Fig. 1c may be quite different from the conventional water content dependencies discussed using Fig. 1a and b. Although the physical meanings obtained from fractal analysis are important, these treatments are not easy, since accurate dielectric measurements over a wide frequency range, from higher to lower frequencies, are difficult, especially in the case of materials with low water content. In addition, the fractal dimension obtained by the analysis often appears to be a nonsensical result, such as the small value (less than unity) usually shown for protein solutions.
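Since Eq. (2) itself is not reproduced in this excerpt, the sketch below only assumes a generic hyperbolic τ–β trajectory of the kind described here, β(τ) = β∞ + C/ln(τ/τ0), and reads off the horizontal asymptote β∞, the quantity the text associates with the fractal dimension. The functional form, τ0, and the synthetic data are all illustrative assumptions:

```python
# Sketch of reading the horizontal asymptote from a tau-beta trajectory,
# assuming the generic hyperbolic form
#   beta(tau) = b_inf + C / ln(tau / tau0).
# The asymptote b_inf is the quantity associated with the fractal dimension.
import math

def fit_asymptote(taus, betas, tau0=1e-12):
    """Least-squares fit of beta = b_inf + C*x with x = 1/ln(tau/tau0)."""
    xs = [1.0 / math.log(t / tau0) for t in taus]
    n = len(xs)
    mx, mb = sum(xs) / n, sum(betas) / n
    C = (sum((x - mx) * (b - mb) for x, b in zip(xs, betas))
         / sum((x - mx) ** 2 for x in xs))
    return mb - C * mx, C                    # (asymptote b_inf, slope C)

# Synthetic trajectory generated from the assumed form:
taus = [10.0 ** (-11 + 0.2 * k) for k in range(10)]    # 10 ps .. ~0.6 ns
betas = [0.9 - 0.8 / math.log(t / 1e-12) for t in taus]
b_inf, C = fit_asymptote(taus, betas)        # recovers 0.9 and -0.8
```

Because the model is linear in 1/ln(τ/τ0), a plain least-squares line suffices; on noisy data the same fit gives the asymptote within the regression uncertainty.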
However, recent applications of fractal analysis to various moist materials, including biological materials and cement materials, have solved this problem and made fractal analysis a practical analytical technique for water structures [14,30,36].

Recent developments of fractal analysis

A physical picture of HBN dynamics as the molecular mechanism of the GHz frequency process was supported by dielectric studies of hydrogen bonding materials [21,28]. The concept of HBNs also clarifies how a water structure should be considered in fractal analysis [30]. When fractal analysis was first considered, two-dimensional networks of hydrogen bonds among water molecules were discussed, and Eqs. (2) and (3) were derived with a maximum fractal dimension of 2 [23,24]. Though this value can easily be modified by a correction parameter, we maintain the original form here. In fact, the various fractal analyses extensively developed since the 1980s have generally been considered less suitable for characterization after detailed evaluation of the physical properties of substances. In actual analyses, therefore, the relative values and their changes are usually much more important than the absolute value of the fractal dimension. The HBN dynamics model suggests that the molecular mechanism of the GHz process is a rate process of exchanges in hydrogen bonds, and the relaxation time becomes larger with a decrease in the number density of hydrogen bonds. The physical picture of restricted dynamics of bound water molecules, often invoked in hydration studies, then suggests that the water molecules are not restricted by the hydrogen bond itself but by the decrease in the number density of hydrogen bonding sites [21,28]. This property also corresponds to the plasticizer effect of water molecules on interacting polymer chains [21,22]. It has recently been confirmed that the value of the fractal dimension of HBNs can be determined by the box-counting method [14].
In general, the value of the fractal dimension determined by observation depends on the length scale used in the observation technique. Using observation techniques with small length scales, any structure should have a maximum fractal dimension of 3. In that sense, there exist no absolute values of the fractal dimension, and characterization of structures is always obtained as relative values of the fractal dimension that depend on the observation technique. This universal property of fractal analysis is also available even in cases that do not deal directly with length scales, such as dielectric spectroscopy [30]. Since length and time scales are indirectly interrelated through the dynamic nature of complex systems, no observation method can be completely independent of both length and time scales. Therefore, fractal analysis by dielectric spectroscopy suggests that the trajectories shown in Fig. 1c reflect how the HBN is fragmented and how water molecules are dispersed in materials. As a result of the above confirmation of the characteristic features of fractal analysis, an advanced analytical technique can be conducted by using the results we have obtained thus far. Even if the absolute value of the fractal dimension is not determined from analysis of the hyperbola in Fig. 1c, the possible value of the fractal dimension can be estimated from the location of the plots, or even a single plot, on the τ–β diagram. The existence state of water molecules can then be discussed as the degree of HBN fragmentation. Using previous fractal analyses obtained for various aqueous systems, even a single plot on the τ–β diagram can characterize the water structure through comparison with various trajectories. The methodology of fractal analysis is also available for other symmetric relaxation processes with relaxation mechanisms observed at lower frequencies, such as structured water molecules [36] and ion dynamics on interfaces [37].
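The box-counting estimate mentioned above, and its dependence on the observation scale, can be sketched directly: count the boxes N(s) occupied by a point set at shrinking box sizes s and take the slope of log N versus log(1/s). The diagonal-line test set below is an illustrative check (its dimension should come out near 1), not data from this work:

```python
# Sketch of the box-counting estimate of a fractal dimension: count
# occupied boxes N(s) at shrinking box sizes s and take the slope of
# log N versus log(1/s). The test set is an illustrative assumption.
import math

def box_counting_dimension(points, levels=range(2, 8)):
    """Estimate the box-counting dimension of 2D points in [0, 1)^2."""
    log_inv_s, log_n = [], []
    for k in levels:
        s = 2.0 ** -k                                   # box edge length
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        log_inv_s.append(math.log(1.0 / s))
        log_n.append(math.log(len(boxes)))
    n = len(log_inv_s)
    mx, my = sum(log_inv_s) / n, sum(log_n) / n
    return (sum((x - mx) * (y - my) for x, y in zip(log_inv_s, log_n))
            / sum((x - mx) ** 2 for x in log_inv_s))    # regression slope

line = [(i / 5000.0, i / 5000.0) for i in range(5000)]  # a 1D test set
d_line = box_counting_dimension(line)                   # close to 1
```

Restricting `levels` mimics the scale-dependence noted in the text: the estimated dimension is only meaningful relative to the range of box sizes actually probed.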
These examinations make fractal analysis a more universal and effective analytical technique.

Preparation of cotton cloth and yarn

Plain-woven parts of cotton towels (TW220, Takei Corp., Japan) were used as sample cloth. Before the experiments, these fibers were pre-washed completely using two methods (referred to as Methods A and B) to remove all fiber-treatment agents used in manufacturing the fibers. After pre-washing, the cotton cloths were purified with a mixed-bed ion-exchange resin (501-X8, Bio-Rad, USA) to remove ionic components, such as Ca2+, Mg2+, Na+, and K+, in the cloth. Elemental analysis after purification showed contents below 10 mg/kg. In Method A, samples were pre-washed using a fully automatic washing machine (NA-F702P, Panasonic Corp., Japan). Twenty-four cotton towels and 52.22 g of nonionic detergent (Emulgen 108, Kao Corp., Japan, 10% aqueous solution) were put into the washing machine with 47 L of water and washed according to the following two steps: (1) samples were washed for 9 min (with water containing the above-mentioned nonionic detergent), rinsed twice with water, and spin-dried for 3 min (this step was repeated three times); and (2) samples were then washed for 9 min (with water only), rinsed twice with water, and spin-dried for 3 min (this step was repeated twice). In Method B, samples were pre-washed with organic solvents. Cotton towels were first cut into pieces (8 cm × 8 cm) and pre-washed using Method A. Then, the samples were washed again and stirred in 300 mL of CHCl3/MeOH (1:1 wt. ratio) for 5 min in a beaker; this process was repeated 5 times. For polyester faille and cotton yarns, only solvent washing was applied. For the desalination procedure, the same weight of ion-exchange resin (AG 501-X8, Bio-Rad) as the cotton samples was added to 2 L of ultrapure water (Milli-Q water, Millipore) and stirred for 1 or 10 days.
The obtained samples were then air-dried on a clean bench at 45% relative humidity (RH) for about 5 days. Samples were kept in the laboratory at a constant temperature and humidity of 23 ± 1 °C and 50 ± 1% RH before all measurements.

Broadband dielectric relaxation measurements for cotton cloth

High-frequency dielectric spectroscopy measurements were performed by our original time domain reflectometry (TDR) method with a digitizing oscilloscope mainframe (86100C, Agilent Technologies) for wet samples in the frequency range of 100 MHz up to 30 GHz. For dried samples, another mainframe (54120B, Hewlett-Packard) with a Four Channel Test Set (HP 54124A) was used in the frequency range of 100 MHz up to 10 GHz. The electrodes used in the present work were handmade open-ended coaxial electrodes with an outer conductor of 2.2 mm outer diameter. After calibration procedures, measurements were performed with ultrapure water (Milli-Q, Millipore) and 1,4-dioxane for wet and dried reference samples, respectively. The flat end of a semi-rigid coaxial electrode was set up to have good contact with the sample surface, and dielectric measurements were performed three times for each sample to confirm the reproducibility of the contact at 21 ± 1 °C and 50 ± 1% RH. Details of dielectric relaxation measurements with TDR systems were described in our previous papers [14,30,38]. To examine the effect of deionization, an Alpha-A Analyzer with handmade three-terminal parallel-plate electrodes (diameter: 10 mm) and an LCR meter with three-terminal parallel-plate electrodes (A-type electrode with a diameter of 38 mm, 16451B, Hewlett-Packard) were used. Cotton cloth samples set into the electrodes were placed in a chamber regulated to constant humidity and temperature, as shown in Fig. 2. The C0 values were approximately 1.4 and 20 pF for the Alpha-A Analyzer and the LCR meter, respectively.
For other measurements, IA and IMA measurements were performed with handmade two-terminal parallel-plate electrodes inside an SMA-type connector, as shown in Fig. 3. The C0 value was about 0.45 pF.

T2 relaxation measurements by nuclear magnetic resonance for cotton yarns

NMR measurements were performed with a 400-MHz NMR spectrometer (Ascend 400WB, Bruker) with a 5-mm-diameter NMR tube to obtain the T2 relaxation time for cotton yarn samples. The Carr-Purcell-Meiboom-Gill (CPMG) method [39] was used to determine the T2 relaxation time with 32 integrations. After acquisition of the background signals, the difference from the obtained sample signal was analyzed. The cotton yarns were kept in a constant temperature and humidity chamber at 50 ± 5% RH and 23 ± 2 °C for about 1 week in advance. For preparation of humidified samples, one end of the sample tube was cut and both ends were opened; the sample tube was then kept in a desiccator at 98% RH for 1 week. The tube was sealed again with silicone putty before the NMR measurements. Averaged results were obtained by using three to eight samples under each condition. In addition, some samples were vacuum-dried at room temperature for about 1 week and then allowed to stand in the constant temperature and humidity chamber again to confirm reproducibility. In the case of a small number of sample sheets, a smaller permittivity was observed because the electric field pattern passed through the sample and reached the air. This result suggested that at least five sheets are necessary for TDR measurements. Therefore, eight sheets obtained by folding three times were used in the following measurements.

Results and discussion

As shown in Fig. 4, a GHz frequency process at around 10 GHz and a higher-frequency tail of the MHz frequency process were observed.
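The T2 extraction from a CPMG echo train, as described in the NMR section above, can be sketched as a log-linear fit: the echo amplitudes decay as M(t) = M0·exp(-t/T2), so a least-squares line through ln M versus t has slope -1/T2. The synthetic echo data and the T2 = 5 ms value below are illustrative assumptions, not measured results:

```python
# Sketch of T2 extraction from a CPMG echo train: amplitudes decay as
#   M(t) = M0 * exp(-t / T2),
# so a least-squares line through ln M versus t has slope -1/T2.
# The synthetic echo data below are illustrative assumptions.
import math

def fit_t2(times, amplitudes):
    """Log-linear fit; returns (M0, T2) in the units of `times`."""
    ys = [math.log(a) for a in amplitudes]
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times, ys))
             / sum((t - mt) ** 2 for t in times))
    return math.exp(my - slope * mt), -1.0 / slope

times = [0.5 * k for k in range(1, 20)]              # echo times, ms (assumed)
echoes = [100.0 * math.exp(-t / 5.0) for t in times]  # assumed T2 = 5 ms
M0, T2 = fit_t2(times, echoes)                        # recovers 100 and 5 ms
```

On real, background-subtracted data the same fit can be applied after discarding echoes near the noise floor, where ln M is no longer linear.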
Considering the typical experimental results obtained for moist materials in this frequency range, the GHz frequency process is attributed to cooperative dynamics of water molecules. In the present paper, the molecular mechanism of HBN dynamics is considered for the GHz frequency process. Figure 5 shows relaxation curves obtained by lower-frequency measurements for naturally dried samples of cotton cloths with and without the deionization procedure. The relaxation curves clearly show that ion dynamics enlarge the contributions of dc conductivity and all relaxation processes, including the higher-frequency processes, and the respective processes would be obscured by the ionic process without deionization. Thus, the samples of cotton cloth used in the present study were sufficiently deionized before the measurements. Figures 4 and 5 show three relaxation processes observed in the GHz, MHz, and kHz frequency regions and another process due to electrode polarization in the lowest frequency region, even for dry cotton cloth samples. These results are supported by a recent work by Zhao et al. [9], although they performed a more detailed fitting analysis for each process over wide temperature and water content regions. Part of their results for cellulose samples seems to reflect some ion dynamics. Our recent work also indicated that such ionic behaviors on interfaces can be observed even in the MHz frequency region for small-scale heterogeneities [37]. Measurement error is inevitable, especially around 1 MHz in BDS measurements, since the measurement principles of higher-frequency transmission-line and lower-frequency electric-circuit measurements are essentially different and switch around 1 MHz. In addition, hard materials with low dielectric loss and wide relaxation time distributions in the dry state make measurement and analysis difficult because of contact problems between the electrodes and samples with the large-scale heterogeneities of cotton cloth. Therefore, in Fig.
6, the value of the dielectric constant, ε′(10 MHz), on the highest-frequency side (~10 MHz) obtained in Fig. 5 was plotted against the water content. The high-frequency limit of the dielectric constant in the lower-frequency measurements describes the remaining relaxation strength of the high-frequency processes in the higher-frequency region. Compared with the proportional relationship indicated by the dashed line for the high water content samples, the solid line obtained for plots of the low water content samples clearly shows a smaller slope. Even considering the presence of air in the low water content samples, the dielectric constant still seems small. This result indicates that there is another relaxation process on the high-frequency side, in addition to the relaxation process observed in the frequency domain up to the MHz region. As is often seen in dispersion systems, the relaxation process due to the dynamic behaviors of HBNs of water with cellulose molecules, observed in the GHz frequency region for the cotton cloth sample, widens its tail on the lower-frequency side while maintaining the average relaxation time. The recent understanding of the relaxation mechanism of the GHz frequency process of water with HBNs can be associated with a model wherein water molecules heterogeneously distributed in the cotton cloth exhibit a broad relaxation process. However, the dielectric properties of dry cotton cloths have not yet been clarified in detail, especially between the 10 GHz range and 10 MHz range processes. This is because of the difficulty observation techniques have in switching measuring principles and because of ambiguities in heterogeneous hydration. It is difficult to expect an effective improvement in measurement accuracy at present, and it is difficult to capture changes in the hydration state as clear relaxation peaks, especially for low water content samples.
Therefore, it is effective to confirm how the GHz frequency data continue into the lower-frequency region and to perform data analysis that makes full use of the characteristics of broadband data.

[Figure 5 caption: Lower-frequency processes observed for dried samples of cotton cloths with and without deionization. The dotted lines show the respective processes due to dc conductivity, electrode polarization, and interfacial polarization from the lower-frequency side.]

[Figure 6 caption: Water content dependency of the dielectric constant at 10 MHz, ε′ (~ 10 MHz), on the higher-frequency side of the lower-frequency relaxation processes. Dashed and solid lines were obtained for humid and dried samples, respectively.]

Another series of dielectric measurements, in a middle frequency region from 1 MHz up to 1 GHz, was added to complement part of the spectra in Fig. 7, which shows a BDS spectrum for a dry sample of cotton cloth. Plots for three independent observations are shown for the GHz frequency process obtained from TDR measurements. Though the data scattering and systematic errors appearing between the two different series of measuring systems were eliminated at ~ 1 MHz, experimental errors at ~ 100 MHz with large scattering from the TDR measurements could not be sufficiently reduced. Since a low-loss sample of 1,4-dioxane is used as the standard sample for calibration in dielectric measurements, the complex dielectric constant thus obtained indicated the existence of the GHz frequency process, although large scattering of the data was observed. The error in the measurement data is mainly due to the low quality of the contact between the small electrodes required for high-frequency measurements and the cotton cloth sample. This is unavoidable, especially for a fiber sample with low water content. Furthermore, the use of logarithmic-scale graphs to show the entire data set also makes the error conspicuous. The validity of the small permittivity values obtained is discussed further in subsequent paragraphs.
Figure 7 shows that the BDS spectra could be obtained from suitable fitting procedures, with only a small contribution to the entire BDS spectra from the frequency region from 10 MHz up to 1 GHz. This procedure combines several measuring sub-systems, which is certainly an advantage of BDS measurements. Figure 7 also clearly shows that the decreasing behavior of the dielectric constant continues over the broad frequency region from the kHz to the GHz range. Fitting procedures were applied with high-frequency data obtained from the average of three TDR measurements, as shown in Fig. 8. Though the averaging procedures were not sufficiently effective in reducing the experimental error by simple statistical treatment of the three measurements, the GHz, MHz, and kHz processes, with broad distributions of relaxation time, overlap each other over the entire frequency region together with the low-frequency process due to electrode polarization. The relaxation curves obtained by BDS measurements were analyzed by fitting procedures using Eq. (4), where σ is the direct current (dc) conductivity and ε₀ is the permittivity of vacuum. The subscripts 1, 2, and 3 in Eq. (4) indicate the relaxation processes around 10 kHz, 10 MHz, and 10 GHz, respectively, and the subscript E indicates the process due to electrode polarization. The relaxation parameters for the GHz process are listed, with their average values, in Table 1. In the present paper, we focus on the discussion of water structures from the relaxation parameters obtained for the GHz process using the fractal concept. Table 1 indicates a relaxation time value (3 ps) that is smaller than that of usual bulk water (8 ps). It is considered that this small value was easily induced by the fitting procedures for the broadened GHz frequency process without a clear loss peak in the high-frequency region.
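The explicit form of Eq. (4) is not reproduced above. As a hedged illustration only: BDS fits of this kind are commonly built from a sum of Cole-Cole relaxation terms plus a dc-conductivity contribution, which the sketch below assumes; all numerical parameters are placeholders except β = 0.55 and τ ≈ 3 ps for the GHz process, which are quoted in the text.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cole_cole(freq, d_eps, tau, beta):
    """One Cole-Cole term: d_eps / (1 + (i*omega*tau)**beta)."""
    omega = 2.0 * np.pi * freq
    return d_eps / (1.0 + (1j * omega * tau) ** beta)

def eps_model(freq, eps_inf, processes, sigma):
    """Complex permittivity: eps_inf + Cole-Cole terms + dc conductivity.
    The dc term contributes only to the imaginary (loss) part."""
    omega = 2.0 * np.pi * freq
    eps = eps_inf + sigma / (1j * omega * EPS0)
    for d_eps, tau, beta in processes:
        eps = eps + cole_cole(freq, d_eps, tau, beta)
    return eps

# Placeholder processes standing in for the kHz, MHz, and GHz terms of Eq. (4):
processes = [
    (5.0, 1.6e-5, 0.9),     # ~10 kHz process (illustrative)
    (1.5, 1.6e-8, 0.7),     # ~10 MHz process (illustrative)
    (1.25, 3.0e-12, 0.55),  # GHz process: tau ~ 3 ps, beta = 0.55 (from the text)
]
freq = np.logspace(3, 11, 400)  # 1 kHz to 100 GHz
eps = eps_model(freq, 2.2, processes, 1e-9)
```

For a Cole-Cole term the loss peak sits at ωτ = 1 regardless of β, so the GHz term above peaks near 1/(2π · 3 ps) ≈ 50 GHz; the broadening parameter β only widens the peak.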
Here, we did not impose any restriction to maintain the relaxation time of 3 ps, since it was important to confirm the relative change in the dielectric constant and loss on the lower-frequency side of the GHz process. The β value is determined from the slope of the lower-frequency side of the GHz process in the absorption curve [40]. Considering that the water content of dry cotton cloths kept under typical laboratory conditions is about 5%, the relaxation strength values of the GHz frequency process, around 1.2-1.3, were still smaller than those expected from a simple proportional relationship with the water content. It is reasonable to consider that these smaller values reflect the air included in the cotton cloths and the lower-frequency shift of the dynamic behavior of water molecules restricted by cellulose chains. In our present model of hydration, we do not need to distinguish the two kinds of dynamic behavior of water molecules, one from the lower-frequency tail of the GHz frequency process and another from the MHz processes [9,41]. This is because the two broadened relaxation time distributions sufficiently overlap [42]. Though the β value of 0.55 is smaller than usual for moist materials, it is difficult to make exact and reasonable comparisons with the hydration structures of other materials, as explained for Fig. 1. Therefore, a more detailed discussion of hydration is expected to be obtained from the τ-β diagram in Fig. 9 for the GHz frequency process of the cotton cloth. Though the β value itself expresses the time-correlation behavior of HBN dynamics, its combination with the relaxation time suggests the spatial fractal structure of HBNs, as shown by Eq. (2). Abbreviations for the various aqueous materials used in Fig. 9 and related references are given in the figure caption.
A β value of 0.55 has been obtained for conventional dispersion systems reported thus far; however, the combination of such a small value, 0.55, with the small relaxation time obtained for the GHz process has not been reported before. Even if we consider the error, indicated by an arrow, in the relaxation time, the plot for the cotton cloth shown in Fig. 9 still clearly indicates a β value smaller than that of typical materials. The smaller fractal dimension means a broad spatial distribution of HBN fragments with various sizes, which is generally shown as typical behavior of dispersion systems [14]. It is also emphasized that cotton cloth does not show the usual lower-frequency shift of the GHz frequency process that appears in typical solution systems. The characteristic behavior of cotton cloth is the retention of HBNs with an aggregation structure of water molecules, even in the dry state, together with fragmentation of the HBNs. The size of the aggregations is probably more than 1 nm, corresponding to five or six water molecules. The existence of water molecules with different mobilities was also observed by NMR T2 relaxation measurements. Figure 10 shows plots of the intensity, M(t), against the echo time, t, obtained from NMR T2 measurements with the CPMG method for the humid sample kept under 98% RH. The echo time dependence of the decay could not be explained by a single exponential function. Thus, the decay curve was expressed by the bi-exponential function of Eq. (5), in which the first and second terms indicate the fast and slow components of the decay, respectively. Figure 10 shows that the bi-exponential function describes the decay behavior well, except in the larger echo time region, where the signal decays and is hidden in the noise. The bi-exponential analysis applied to all samples with Eq. (5) gave T2s and T2f values for the humid samples of 16.4 and 2.4 ms, respectively.
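Equation (5) itself is not reproduced above; the standard bi-exponential CPMG model is M(t) = A_f exp(−t/T2f) + A_s exp(−t/T2s), which the following sketch assumes. The T2 values (2.4 ms and 16.4 ms) are those quoted for the humid sample; the amplitudes and noise level are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a_f, t2f, a_s, t2s):
    """Bi-exponential echo decay: fast + slow T2 components (Eq. (5))."""
    return a_f * np.exp(-t / t2f) + a_s * np.exp(-t / t2s)

# Synthetic CPMG decay with the humid-sample values T2f = 2.4 ms, T2s = 16.4 ms;
# amplitudes and noise are placeholders, not measured values.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 80.0, 200)  # echo time, ms
m = biexp(t, 0.6, 2.4, 0.4, 16.4) + rng.normal(0.0, 2e-3, t.size)

# Recover the two T2 values from the noisy decay.
popt, _ = curve_fit(biexp, t, m, p0=(0.5, 1.0, 0.5, 10.0))
a_f, t2f, a_s, t2s = popt
```

A decade of separation between the two T2 values, as here, is what makes the fast/slow decomposition well conditioned; with closer time constants the fit becomes degenerate.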
These values decreased to 9.5 and 0.72 ms for the dry sample. It is reasonable to consider that the mobility of water molecules interacting with cellulose molecules increases with increasing water content, as shown in typical aqueous systems.

[Figure 10 caption: Echo time dependency of the intensity obtained by the CPMG method of NMR T2 relaxation time measurements for the cotton yarn sample kept under 98% RH. Lines were obtained from the fitting procedure for the fast process with a single exponential function (blue) and for the entire decay with a bi-exponential function (green).]

The results obtained from these two different observation techniques are consistent, and this consistency is significant. As described above, the present analysis with the τ-β diagram focuses on the plot area rather than on values of the fractal dimension, since conventional analysis has often shown large fluctuations of the fractal dimension due to mathematical fitting procedures applied to scattered data. With respect to the database of τ-β diagrams [14,30], the cotton data were more prominent than those of any dispersion systems previously observed. By contrast, even in the same category of protein-water systems, plots for collagen were obtained in the area for solution systems, while plots for gelatin solutions were located in the area for dispersion systems. These results may seem contradictory, since the more structured collagen shows a more solution-like result and the coiled state of gelatin indicates a dispersion-like result. However, these results can be interpreted as follows: HBNs in gelatin are excluded from the globular protein structure, whereas HBNs remain around collagen molecules and tropocollagen. In the case of collagen, the HBNs of water molecules and the collagen chains mutually penetrate [30]. When this model is applied to cotton cloth, it is presumed that HBNs of water molecules cannot penetrate the 1-nm-scale microfibrils of cellulose, i.e., HBNs of water molecules and microfibrils of cellulose cannot penetrate each other, unlike collagen.
This model corresponds well to the NMR studies reported by Salmén et al. [8]. The hydration properties of the cotton cloth sample obtained in this study were similar to those of conventional dispersion systems; however, the relaxation time distribution parameter showed an even smaller value. This low fractal dimension indicates heterogeneous hydration in the cotton cloth sample and dynamic behavior similar to that of bulk water, even at a low average moisture content. The cotton cloth sample is more inhomogeneous than usual dispersion systems, since various voids filled with air exist, as well as heterogeneous cellulosic fibers. Locally high water content regions exist in the sample, even at low average water content, and the HBN fragments exhibiting bulk-like water contribute to the GHz frequency process. By contrast, HBN fragments smaller than the correlation length of the HBN in bulk water have a low hydrogen bond density, contribute to the lower-frequency component of the relaxation time distribution, and overlap with water molecules bound to cellulose in the same relaxation frequency range. Even when the cotton cloth is naturally dried from a high water content, water molecules are removed heterogeneously. The remaining water molecules aggregate to retain HBN fragments, which contribute to the GHz process, and some water molecules form bound structures with cellulose. However, the GHz process has not been treated in conventional investigations of low water content substances, and the water structure related to the GHz process extending to such a low-frequency region was evaluated here for the first time. Consistent with the hydration model proposed by Igarashi et al. [1-3], the drying process of cotton cloth broadens the GHz process, and the size distribution of HBN fragments also broadens. The smaller HBN fragments contribute to the lower-frequency components of the GHz process.
Water molecules interacting with the cellulose chains via hydrogen bonds, observed at ~ 1 MHz, and those included in the low-frequency tail of the GHz process cannot be distinguished because of the overlap of the two relaxation processes. However, the model of the stiffening mechanism of cotton fabrics suggests that these restricted water molecules are closely related to the stiffening of the cotton fabric. When water molecules are removed by mechanical stimulus or by further drying from the stiff fabrics in the dry state, the cotton fabric is rejuvenated and the stiffening effect is loosened. The slow component of water molecules takes a single hydrogen-bonded state with the cellulose molecules. More detailed interpretations are expected from further investigations using the same BDS analytical techniques and other complementary observation techniques on the water structures of cotton materials modified by treatments, for example, with additives. The fractal analysis shown in the present work indicates the dynamic behavior and locations of water molecules in the materials. In various areas where the water structures of aqueous systems, such as moist materials, biomaterials, and cement materials, are treated, the present analytical technique of BDS using fractal analysis with the τ-β diagram is expected to be an effective and universal tool to clarify details of hydration mechanisms.

Conclusions

In the present work, the water structure of naturally dried cotton cloth was investigated by BDS, and fractal analysis of the spatial distribution of HBNs with various fragment sizes was performed with a τ-β diagram. The analysis of the GHz process revealed that the mobility of water molecules fluctuates significantly, reflecting the variety of agglutination/dispersion states of water molecules in the cotton cloth.
Comparisons of the β value (0.55) and the smaller relaxation time with the database obtained from our previous studies suggest that water molecules involved in HBN fragments of various sizes tend to disperse widely in the sample while aggregating more strongly than in the other substances or conditions reported so far. It is expected that the present characterization technique, which provides more detailed insights into the widely distributed water structures not covered in previous studies, can be applied to the evaluation of various properties of cotton cloth samples in future developments of material processing and fabric softeners. It was also shown that our analytical technique is effective for the universal analysis of water structures of moist materials, even those for which there exist serious difficulties in measuring the absolute value of the permittivity.

Declarations

Conflict of interest The authors declare no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Anomalous dimensions at finite conformal spin from OPE inversion

We compute anomalous dimensions of higher spin operators in Conformal Field Theory at arbitrary space-time dimension by using the OPE inversion formula of [1], both from the position space representation and from the integral (viz. Mellin) representation of the conformal blocks. The Mellin space is advantageous over position space not only in allowing us to write expressions agnostic to the space-time dimension, but also in that it replaces tedious recursion relations with simple sums which are easy to perform. We evaluate the contributions of scalar and spin exchanges in the t-channel exactly, in terms of higher-order hypergeometric functions. These relate to a particular exchange of conformal spin β = Δ + J in the s-channel through the inversion formula. Our exact results reproduce the special cases for large spin anomalous dimensions and OPE coefficients obtained previously in the literature.

Introduction and results

Recent years have seen a resurgence of the bootstrap program, boosted by the developments of [2] on bounding operator dimensions by imposing crossing symmetry on correlation functions. Subsequent applications of these techniques [3,4,5,6] led to tremendous progress that can be followed in the recent updated reviews on the topic [7,8,9]. Although the crossing equation is the suitable tool to analyze conformal observables numerically, there are some regions in parameter space that can still be explored analytically. In particular, a great deal of progress has been made by looking at the spectrum of large spin operators [10,11]. Based on this analysis, a successful perturbation theory in spin has been developed [12,13,14,15,16], which allows not only the computation of anomalous dimensions of large spin operators but also an understanding of universal properties of those operators in generic conformal field theories.
One of the striking achievements of this approach is that, even though the perturbative expansion is made in inverse powers of the large spin, the expansion can be resummed to obtain results at finite, smaller values of the spin. The reason why this happens is the analyticity in spin of the conformal partial wave expansion, recently proved in [1] (see also [17]), where a powerful inversion formula was also derived which expresses the OPE coefficient of a given operator exchange in terms of a convolution of the double discontinuities of the four-point correlation function across the lightcone branch cuts. This inversion formula is our main tool in this paper to compute the anomalous dimension of the large spin double-twist operators at large but still finite values of the spin; in other words, we show that the inversion formula indeed resums the large spin expansion of the anomalous dimension. We do this by writing the four-point function in a conformal partial wave expansion in both position space and Mellin space. A first consideration of the inversion formula in Mellin space was made recently in [18], which we develop and improve further here. A bootstrap approach in Mellin space has been developed and applied in the works [19,20,21,22,23], where, unlike here, crossing symmetry is guaranteed by construction and the bootstrap equations correspond to conditions that eliminate spurious exchanged operators. Even though there are closed forms for the conformal blocks in two and four dimensions [24,25,26], that is not the case in general dimension; in particular, there is no known closed form in any odd dimension. One of the advantages of working in Mellin space is that it is possible to write the conformal partial wave expansion in arbitrary dimension [27], and we exploit this fact here. The most important result of the paper is demonstrated in section 3 and again in section 4.
We consider a correlator of four scalar operators in the z → 0 limit, so that in the s-channel we consider the product of OPEs O1 × O2. In the t-channel, the decomposition is between the OPEs O1 × O1 and O2 × O2 (resembling the decomposition for identical scalars). Specifically, in the s-channel we can write, in the z → 0 limit, an expansion with τ = Δ1 + Δ2 + γ12(β), where β = Δ + J is the conformal spin and γ12(β) is the anomalous dimension. This decomposition is related to the contribution in the t-channel through the inversion formula of [1], which we review in the next section. Expanding both sides in the z → 0 limit, we obtain two sets of relations, for the anomalous dimensions and for the corrections to the OPE coefficients, corresponding to the coefficient of the log z term and to the regular terms, respectively. The contribution of the scalar exchange in the t-channel related to a particular operator of conformal spin β = Δ + J in the s-channel is given by (1.4). In the limit z → 0, these are exact expressions in β as long as the anomalous dimension is kept small (see below). The definition of A_m(J, Δ) is given in (A.21). Notice that (1.4) reduces to the known large spin results in the appropriate limits. The rest of the paper is organized as follows. In section 2, we provide a brief review of the inversion formula of [1]. In section 3, we compute the large spin anomalous dimension from position space. In section 4, the inversion formula is analyzed from the Mellin (integral) representation point of view, and the contributions to the large spin anomalous dimension from the scalar and spin exchanges are computed. In section 4.3, agreement between the two approaches is shown. Section 5 discusses some special cases and recovers previous results in the literature. We also consider a perturbative expansion in d dimensions for identical scalars. Section 6 briefly discusses how the regular terms can be obtained from both position space and Mellin space. We end with some discussion in section 7.
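The statement that the anomalous dimension is read off from the coefficient of log z can be made concrete: writing the s-channel power as z^(τ/2) with τ = Δ1 + Δ2 + γ12(β) and expanding to first order in the small anomalous dimension produces exactly a log z term with coefficient γ12/2. A minimal symbolic check:

```python
import sympy as sp

z, gamma = sp.symbols('z gamma', positive=True)

# z**((tau0 + gamma)/2) = z**(tau0/2) * z**(gamma/2); expand the second
# factor to first order in the small anomalous dimension gamma:
expanded = sp.series(z ** (gamma / 2), gamma, 0, 2).removeO()
# expanded == 1 + (gamma/2)*log(z): the log z coefficient carries gamma/2.
coeff_log = sp.expand(expanded).coeff(sp.log(z))
```

This is why matching the log z terms on both sides of the inversion formula isolates γ12(β), while the regular terms fix the corrections to the OPE coefficients.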
The relevant details of the computations are provided in the appendices. Appendix A discusses the general integral representation for the conformal block, and appendix A.1 discusses the relevant simplifications of the Mack polynomials in the limit z → 0. Appendix B discusses the recursion relations for the general spin-J conformal blocks in position space.

Inversion formula

We would like to consider the correlator of four conformal primary scalar operators, which by conformal invariance is only a function of the conformal cross-ratios z and z̄. The correlator can be expanded in an operator product expansion when two operators get close to each other. Expanding in terms of the small distance between, say, operators 1 and 2, we have the s-channel expansion (2.3), where the sum runs over the exchanged primary operators with spin J and dimension Δ. The functions G_{J,Δ} are termed conformal blocks and are eigenfunctions of the quadratic and quartic Casimir invariants of the conformal group. In even spacetime dimensions, the conformal blocks can be expressed in closed form in terms of products of hypergeometric functions; they are very well known in two and four dimensions, as given in (2.4). Our main tool in this work is the Lorentzian OPE inversion formula recently derived by Simon Caron-Huot [1] (see also [17]), which we review briefly in this section. The starting point is the spectral representation of the OPE expansion (2.3), given by [28]; the contour integral picks up the physical poles associated with the exchange of operators in the OPE expansion, which are contained in the function c(J, Δ).
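As a numerical illustration of the closed even-dimensional blocks mentioned above: in the common Dolan-Osborn-type normalization for equal external dimensions (a = b = 0), the blocks are built from the collinear function k_β(x) = x^(β/2) 2F1(β/2, β/2; β; x). The normalization conventions below are an assumption for illustration, not necessarily those of this paper.

```python
import mpmath as mp

def k(beta, x):
    """Collinear block k_beta(x) = x^(beta/2) * 2F1(beta/2, beta/2; beta; x)."""
    h = beta / 2
    return x ** h * mp.hyp2f1(h, h, beta, x)

def block_4d(delta, spin, z, zb):
    """4d conformal block (Dolan-Osborn normalization, a = b = 0)."""
    t1 = k(delta + spin, z) * k(delta - spin - 2, zb)
    t2 = k(delta + spin, zb) * k(delta - spin - 2, z)
    return (z * zb / (z - zb)) * (t1 - t2)

def block_2d(delta, spin, z, zb):
    """2d conformal block: symmetrized collinear product, halved for spin 0."""
    sym = (k(delta + spin, z) * k(delta - spin, zb)
           + k(delta + spin, zb) * k(delta - spin, z))
    return sym / 2 if spin == 0 else sym
```

A quick sanity check on the normalization: for a scalar exchange, both blocks reduce to (z z̄)^(Δ/2) in the small-z, z̄ limit, matching the leading OPE power.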
The function F_{J,Δ} is given in terms of a linear combination of a conformal block and its shadow, with the corresponding coefficients, and these functions form a set of orthogonal functions, so that the relation (2.7) can be inverted in order to solve for the partial wave coefficients (2.10), with the normalization factor (2.11) and the conformal invariant measure. When going from the Euclidean to the Lorentzian region, the four-point function G(z, z̄) develops branch-cut singularities along the lightcone distances between the scalars in the correlator. The idea is then to explore the analytic structure of the partial wave coefficients (2.10) by deforming the contour of integration in such a way that trapping the branch cuts with the deformed contour extracts the associated discontinuities. In order to do that, it is necessary to write the spectral function F_{J,Δ}(z, z̄) in terms of solutions of the conformal Casimir equations such that the function can be split into two parts: a part that vanishes with the proper power law at infinity and another that vanishes in the same way around the origin. Remarkably, it turns out that the particular combination with this property is actually a conformal block with the quantum numbers Δ and J swapped (and shifted by d − 1), namely G_{Δ+1−d, J+d−1}. Once the proper spectral representation has been found, one can freely deform the integration contour, trapping the branch cuts and hence extracting the discontinuities of the four-point function across them. Notice that to a given cross-ratio branch cut there are associated two lightcone distances; therefore, by crossing a given cross-ratio branch cut, we are actually crossing two lightcone branch cuts, and hence taking a double discontinuity.
Denoting by dDisc the operation of taking this double discontinuity, and defining the s-channel OPE coefficients accordingly, the final result of Caron-Huot is given in (2.14). The u-channel contribution C_u is the same but with operators 1 and 2 interchanged. In practice, the OPE coefficients can be extracted from the z̄ integration as a power expansion in small z, since in this limit the effect of the z-integration is only to produce the poles associated with the coefficient under consideration: at leading order in small z, (2.14) is approximated in terms of a "generating function", defined in (2.16), which at small z is given by a power expansion. We have defined the usual conformal twist and conformal spin, respectively, as τ = Δ − J and β = Δ + J. In the main body of the paper we will be interested in studying the contributions to (2.16) coming from a single exchange, so by using the t-channel block decomposition of the four-point function G(z, z̄) we can compute that contribution from (2.18), where f_{ij(J,Δ)} corresponds to the three-point function between the external scalars i and j and the exchanged operator. In the remainder of this paper we are mainly interested in an equal-dimension scalar four-point function. In such a case, several comments are in order: the operator exchanges are limited to even spins J; the C_u and C_t coefficients are the same, and therefore it is enough to consider only C_t. Additionally, we would like to consider the z → 0 limit, in which the conformal blocks' dependence on z splits into a singular contribution containing a log(z) factor and a regular power contribution.

Spinning anomalous dimension at finite β from cross-ratio space

In this section we would like to use formula (2.18) to compute the contribution to the anomalous dimension of large spin operators from a scalar exchange. We do this in coordinate space here, and in later sections also in Mellin space.
In both cases, we are able to give exact expressions at finite β.

Scalar exchange

The scalar conformal block can be written as a double power expansion [25], as in (3.1), where h = d/2; it should be noticed that we are expanding in the t-channel. From this representation we can take the z → 0 limit to obtain (3.2). In this section we will focus only on the terms accompanying log(z), which we refer to as "the log term", and we will refer to the remaining terms as "the regular terms", to be considered later. As mentioned in the section above, at small z the generating function (2.18) is given by a power expansion in z, whose leading term can be written as in (3.4). If the anomalous dimension γ12(β) is small, which is the case we consider in this work, we can approximate it accordingly, where C0(β) corresponds to the tree-level squared OPE coefficient of the double-twist operator with twist τ = Δ1 + Δ2. By comparing the log(z) term in (3.4) with (3.2) and using (2.18), the correction to the anomalous dimension γ12(β) from a scalar exchange is given by (3.5). Here we have taken the 2F1 function outside the dDisc because it is analytic in the argument 1 − z̄. Following [1], in order to perform this integral it is useful to define the object in (3.6), where the sin(πx) factors come from taking the double discontinuity of the term in brackets. The squared OPE coefficient C0(β) corresponds to taking the tree-level double twist τ = −τ0, with τ0 the tree-level twist of the double-twist operators. It is also convenient to use a standard transformation of the 2F1, with y = (1 − z̄)/z̄. By using the power series expansion of the Gauss hypergeometric function in (3.5) and using (3.6) to perform the integral term by term, we arrive at an expression in which, as before, we use the definition τ = −(Δ1 + Δ2).
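The exact finite-β result involves a 4F3 hypergeometric function evaluated at unit argument, cf. (3.9). Such functions are straightforward to evaluate numerically; the parameters below are placeholders chosen so that the series at x = 1 converges (parametric excess Σb − Σa > 0), not the actual parameters of (3.9).

```python
import mpmath as mp

def hyp4f3(a, b, x):
    """4F3 generalized hypergeometric via mpmath's generic hyper()."""
    return mp.hyper(a, b, x)

# Placeholder parameters; at x = 1 the series converges because
# sum(b) - sum(a) = 9 - 6 = 3 > 0.
a_params = [1, 1, 2, 2]
b_params = [3, 3, 3]
val = hyp4f3(a_params, b_params, 1)

# Cross-check against the defining series sum_n [(a)_n.../(b)_n...] / n!,
# written with rising factorials (Pochhammer symbols):
series = mp.nsum(
    lambda n: (mp.rf(1, n) ** 2 * mp.rf(2, n) ** 2)
    / (mp.rf(3, n) ** 3 * mp.factorial(n)),
    [0, mp.inf],
)
```

The same term-by-term summation is what replaces the position-space recursion relations once the blocks are written in Mellin form, as discussed in section 4.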
Hence we can write the contribution to the anomalous dimension coming from the scalar block as in (3.9). Notice that this is an exact result in β, meaning we have not yet taken large values of β. In other words, the inversion formula resums the power expansion in β, as a reflection of the analyticity in spin. Of course, we still need β to be large enough that the anomalous dimension is small. For the operators we are considering, the anomalous dimension at large enough β scales as in (3.11) [10].

Spin exchange

Let us now consider the contributions to the anomalous dimension γ12(β) coming from a spin exchange. Here we again restrict to the terms accompanying the singular log(z) in the leading z → 0 region. In terms of g_{J,Δ}, the generating function (2.18) for a particular spin exchange can be written with a generating coefficient ĉ_{Δ,J}(z, β) defined accordingly. In order to perform the integral, we can expand g_{Δ,J} as a power series in (1 − z̄)/z̄ ≡ y, as in (3.15). Higher powers y^k in this expansion correspond to contributions from the descendant family of the given primary exchange. The generating coefficient ĉ_{Δ,J}(z, β) can be rewritten by using (3.6); dividing by the identity contribution, we obtain (3.17). It is worth noticing here that the contribution to the discontinuity from (3.15) comes only from the primary at k = 0, since k is an integer and therefore y^k is a single-valued function. The same applies to the remaining cases considered later in this work. On the other hand, g_{J,Δ}(y) satisfies recurrence relations of the type considered in [25,5]; however, in the small-z limit those recursions are subtle, because the quadratic and quartic Casimirs mix the leading and the first subleading terms in the Taylor expansion in z, while we would like to keep the leading log(z) term only.
The adequate recursion at small z can be obtained from the Casimir equation (3.19), built from the quadratic and quartic Casimirs (3.20) and their eigenvalues. By plugging the power series expansion (3.15) into (3.19), we get a recursion relation for the coefficients g_{k−1}(J, Δ), where τ = Δ − J is the usual conformal twist of the exchanged operators. From this recurrence relation we can compute all the coefficients of the expansion of the conformal block (3.15); however, they become unmanageably large very quickly. We display the first few coefficients for arbitrary Δ, J, and d. By using g_k(Δ, J) we can then compute all the coefficients in the expansion (3.17). For example, at leading order in y we obtain an expression which should be a good approximation as long as the ratio (3.11) is small. In the large-β limit it simplifies further, and this expression matches previous results in the literature [14,13,15].² Notice that at leading order in large β, each coefficient (3.18) (divided by the leading c0) starts at β^(−k); therefore, at a given order in a β^(−1) expansion, we only need a finite number of coefficients.

² Our g_k coefficients are slightly different from the ones in [14], because we expand the blocks in (3.15) in y, whereas [14] expands them in z̄.

The conformal blocks satisfy a recursion relation in spin at fixed Δ [25,26]; hence we can solve for a block of spin J from the conformal blocks of spins J − 1 and J − 2, or equivalently, we can write a spin-J conformal block in terms of linear combinations of scalar blocks (3.1), and subsequently the contribution to the anomalous dimension can similarly be written in terms of linear combinations of the 4F3 in (3.9). Even though this approach gives us closed expressions at finite β, they become very large and tedious even for the lowest values of J. We show the simplest J = 1 block obtained from this procedure in the appendix.
We can still, however, write a closed expression for any spin in four dimensions (as well as in two), which we consider next. Four dimensions In four dimensions the recursion above can be resummed into a hypergeometric function, as we already pointed out in (2.4), By comparing the block above with (3.2) we notice that in four dimensions the contribution to the anomalous dimension is essentially the same as in the scalar case (in general dimension): by exchanging ∆ → τ − 2 and h → 1 in (3.28), we obtain the contribution to the anomalous dimension from a spin exchange in the following closed form, In particular, at minimal twist, namely τ = 2, γ_12(β) simplifies considerably to as expected, since this corresponds to the energy-momentum tensor, which is conserved and hence cannot develop an anomalous dimension. Spinning anomalous dimension at finite β from Mellin space In the previous sections we have discussed the inversion formula of [1] using position-space conformal blocks. From this section onwards, we discuss it alternatively using the integral representation of the conformal blocks, i.e., Mellin space. As we will see, working in the Mellin space representation has some nice advantages. On one hand, it allows us to write expressions that are democratic with respect to the spacetime dimension. Even more appealing is that, unlike the cross-ratio conformal blocks in general dimension, we can write a compact representation of the blocks in terms of a contour integral, which later allows us to write them as a power series expansion without the need to solve the cumbersome recursion relations discussed in the previous sections. Scalar exchange Consider for simplicity the exchange of scalars in the t-channel. For this, the Mack polynomial in (A.7) is P_{0,∆} = 1 and the above expression undergoes considerable simplification.
Furthermore, we will come back to the discussion of the regular terms later; for now, we focus on the coefficient of the log term, which contributes to the anomalous dimension of double-field operators in the s-channel. The summation over k gives, Provided we choose to close the contour on the rhs, ∆ ≥ 2h − 2 − 2s is always satisfied due to the unitarity bound. Thus, (4.5) The coefficient of the log term becomes, It is straightforward to see that, choosing the poles s = n, we can recover the usual log-term scalar block in cross-ratio space, In this section, however, we will use a Mellin space representation of the conformal blocks, which, as we will see, allows us to write them in closed form, unlike the cross-ratio space analysis of the sections above, which requires solving a complicated recursion relation. The idea is to first perform the z̄ integral and leave the s-integral as the final step towards the anomalous dimensions. The resulting coefficient of the log term, following (2.18) and (3.6), involves I^{(a,a)}_{∆+τ+2s}(β). (4.8) Note that I^{(a,a)}_{∆+τ+2s}(β) has factors sin π((∆+τ)/2 + s + a) sin π((∆+τ)/2 + s − a) coming from the double discontinuity. Since we are choosing the poles s = n from Γ(−s), these factors can be pulled out of the integral in the form sin π((∆+τ)/2 + a) sin π((∆+τ)/2 − a). To obtain the anomalous dimensions, one divides the above expression by the tree-level contribution, i.e., I^{(a,a)}_τ(β), and we obtain, Computing the poles of Γ(−s) at s = n we can see that, which matches the result obtained in (3.9) for a = b. For the sake of completeness, we write down the final expression, Spin exchange A generalization of the scalar exchange is to extend the above formulation to the exchange of spin-J operators in the t-channel. We start with the coefficient of the log term in (A.19), As explained in appendix A.1, we can use (A.20) to obtain, with A_m(J, ∆) given in (A.21). The coefficient of the log term then becomes,
(4.14) The third line follows from the second provided we close the contour on the rhs, so that the only pole contributions come from Γ(−s), satisfying 1 − h + m + s + (∆ − J)/2 > 0 due to the unitarity bound. Following the discussion in section 4.1, this is the generalization of (4.9) to the case of a spin-J operator exchange in the t-channel. We choose to close the contour on the rhs of the complex s-plane, so that it suffices to consider the poles coming from Γ(−s). The poles are at integers s = n ∈ Z_{≥0}. Thus the sin factors associated with the dDisc can be pulled out of the integral. After some simplifications (and dividing by the tree-level contribution), the above integral can be put in the more convenient form, where the s-integral evaluates to, where a = b = (∆_2 − ∆_1)/2. Just for the sake of completeness, we write down the final expression resulting from the above simplification, For J = 0 (and consequently m = 0), the above formula reduces to (4.11). The entire contribution from the t-channel can be summarized as, Matching cross-ratio conformal blocks We want to show here that from the previous expressions computed in Mellin space we can recover the coefficients obtained from the conformal blocks in position space. The coefficients of the log terms are, (4.20) Using (A.20), we can write, (4.21) Closing the contour on the rhs, one can see that (∆ + 2(1 + s + m) − d − J)/2 > 0 for all s poles and hence, The coefficient of the log term becomes, (4.23) Notice that for a particular residue n, the sum over m effectively extends from J down to J − n, as the terms with m < J − n vanish. To fix the normalization, it suffices to evaluate the n = 0 residue, which gives, (4.24) The coefficients computed previously in position space (3.23) are then given by; with this normalization, g_0 = 1 and, (4.25) and so on, which of course match the results obtained from the recursion relations, but this time they come from contour integrals in Mellin space.
This is a very non-trivial cross-check of our formulas. In particular, even though the compact form of the anomalous dimension (4.18) still looks complicated, it would be even harder to arrive at such a formula from the recursion relations in position space, while in Mellin space it boils down simply to the computation of a sum over residues, which presents a clear advantage compared with solving algebraic equations. Special cases As special cases of (3.28), or equivalently (4.11), we consider identical scalars in the context of the perturbative ε-expansion in four dimensions 3 . Furthermore, we also reproduce previous results obtained in [14,13,15]. ε-expansion for identical scalars A special case of (3.28) is obtained for identical scalars, where τ = −2∆_φ. In that case, we are looking at the anomalous dimensions of operators φ∂ . . . ∂φ with ∆ = 2∆_φ + J + γ_J(β). For the exchange of the scalar φ^2, ∆ = 2∆_φ + g γ_{φ^2} and ∆_φ = h − 1 + g^2 γ_φ in a perturbative expansion in g, We can then write the large-spin expansion in the s-channel in terms of the low-twist scalar exchange in the t-channel, given by, Notice that the expansion begins at O(g^2) because of the sin factors, and the leading-order result is, Let us go to the next order. The overall factors have the expansion, and the hypergeometric function can be expanded as, Combining these two, we can write, up to O(g^3, ε^3), Particular dimensions In some specific cases the scalar contribution to the anomalous dimension simplifies considerably. Let us consider some of the cases computed previously in the literature [14,13,15]. In order to make the comparison more transparent we set τ = −2∆, ∆_φ = ∆ and f_{11O} f_{22O} = f_0^2 in (3.28); we can then write, d = 3, ∆ = 1 The simplest case corresponds to taking d = 3, ∆ = 1.
Plugging it back into (5.7), the expression simplifies to By further setting ∆ = 1 and replacing β → 1 + √(4j^2 + 1), we get, Taylor expanding around large j, we can write the above function as, where the coefficients of the expansion are given by, which is exactly the result quoted in eq. (35) of [14]. Regular terms The computations considered in the sections above only determine the anomalous dimension, from the coefficient of the log terms. As one can see from (3.4), for the OPE coefficients one needs to analyse the regular (non-log) terms as well. For the scalar block (3.2) the leading regular non-log term is given by, After plugging it into (2.18) and expanding the 2F1 function in a power series, we need to consider the following complicated integral, By further expanding the log we can perform the integral, Dividing by the identity, we can write the regular-part contribution of the scalar to the coefficient C_0(β) as, We could not find a more compact way to write this expression. In the next section we will consider this contribution from Mellin space. From Mellin space We will again start with (A.19) of appendix A, but this time focussing on the non-log terms. To keep this simple, we will consider the regular terms in the case of scalar exchange. The spin counterpart follows identical logic, but with additional complications due to the non-trivial Mack polynomials. The regular terms of (A.19) for scalar exchange in the t-channel are, The first step is to perform the k sum. This can be done by exploiting an identity, Closing the contour on the rhs, we can see that the last condition is satisfied for b = 1 + s + ∆ − h and c = s + ∆/2, due to the unitarity bound and provided that the exchanged scalar is not a fundamental scalar.
Thus, Next, the derivative of (6.6) with respect to the parameter c gives, For the specified values of b and c, we can write, (6.10) Plugging this back into (6.5), we find, Finally, performing the z̄ integral using (3.6) and dividing by the tree-level contribution, we find, with a = (∆_2 − ∆_1)/2. We can now consider the s-poles from Γ(−s) and close the contour on the rhs, to obtain, (6.13) Although an exact expression is difficult to obtain, one can see that in the large-β limit the correction can be expanded in the form, (6.14) The first few coefficients are of the form, (6.15) and so on, where, with a = (∆_2 − ∆_1)/2 and τ = −∆_1 − ∆_2. Special case: Identical scalars We now consider the above non-log term in the special case of identical scalars, using the expression of the last subsection. For identical scalars in four dimensions, a = 0 and τ = −2∆_φ. We consider an ε-expansion around the free point, so that ∆ = 2∆_φ + g and h = 2 − ε/2. From (6.13) we then obtain, for identical scalars, (6.17) The overall factor can be written as a series expansion in g as follows, (6.18) Notice that the n = 0 term of the sum starts contributing at O(g, ε). The n = 0 term is simple and, while the n > 0 terms start contributing at O(1), and we obtain, The leading correction to C_{0,∆}(β) for the φφ double twists is then, (6.21) Conclusions and discussion In this paper we have computed the anomalous dimension of higher-spin operators in conformal field theory by means of the inversion formula, expanding the conformal blocks in powers g_k(J, ∆) y^k, with y = (1 − z̄)/z̄, (7.1) where the coefficients g_k(J, ∆) can be obtained through the recursion relations (3.22). In the case of the (integral) Mellin representation, the recursion relation is replaced by a simple sum over residues. Practically speaking, the sum over residues is much easier to handle than the recursion relation itself.
Secondly, the contributions of the scalar/spin exchanges in the t-channel can be resummed for any operator in the s-channel with finite conformal spin β = ∆ + J in terms of general pFq functions. Thirdly, we have also demonstrated that the formula we obtained in (4.18) reduces to (4.11) for J = 0, and further that (4.11) reproduces the special cases obtained in [14]. Another advantage of the integral representation concerns taking the z → 0 limit. In the position-space representation, taking the z → 0 limit becomes a little cumbersome, especially when spin exchanges are involved. However, starting from the (integral) Mellin representation, both the log z and the regular term can be obtained from the lowest pole in the integration variable. For example, we have the following form, which is obtained from just the t = 0 pole of Γ(−t)^2. By taking the t = 0 pole, we recover both the log and the regular term at the same time. The higher orders (away from the z → 0 limit) can be obtained from the t = n poles of Γ(−t)^2. As future perspectives, it would be interesting to see how these results relate to previous studies in Mellin space, such as the Mellin bootstrap program [19,20,22]. A Integral representation We start with the integral representation of the conformal blocks, following [25,26], where we have stripped off the overall kinematical factors. In general, this is a linear combination of the physical block and its shadow, coming respectively from the s = λ_2 + n and s = λ̄_2 + n poles. To explain the symbols: d is the spacetime dimension, and P_{J,∆}(s, t, a, b) is the Mack polynomial, given by P_{J,∆}(s, t, a, b) = (1/(d − 2)_J) Σ_{m+n+p+q=J} [J!/(m! n! p! q!)] (−1)^{p+n} (2λ_2 + J − 1)_{J−q} (2λ_2 + J − 1)_n (λ_1 + a − q)_q (A.7) In order to eliminate the shadow contributions in (A.1) from the start, we will consider a different definition of (A.1) that produces just the physical blocks.
We write, × Γ(s + t + a)Γ(s + t + b)P_{J,∆}(s, t, a, b) u^s v^t, where F(s) may be thought of as a projection operator 4 onto the physical poles. It is not very difficult to see that F(s) = sin π(λ_2 − s) sin π(h − ∆) e^{πi(λ_2 − s)}. (A.9) Combined with this, we can write, × Γ(s + t + b)P_{J,∆}(s, t, a, b)(zz̄)^s ((1 − z)(1 − z̄))^t (A.10) in terms of the complex z, z̄ coordinates. In order to simplify matters from the start, we will deal with correlators of the form ⟨O_1 O_2 O_2 O_1⟩ and investigate the contributions of the t-channel exchanges through the inversion formula of [1]. For these kinds of correlation functions, the t-channel contribution essentially reduces to the representation for identical scalars. Going to the cross-ratios of the t-channel is merely the transformation (z, z̄) → (1 − z, 1 − z̄), and with a = b = 0 we can write, × P_{J,∆}(s, t, 0, 0)(zz̄)^t ((1 − z)(1 − z̄))^s. (A.11) The above formula will be the starting point of our calculations. We are furthermore interested in the z → 0 limit, where there are simplifications. Before proceeding to the core of the calculations, notice that (A.11) is still not in the form most useful for the inversion formula, since there are additional factors that we should take into account. The correct quantity in the t-channel, after taking into account the additional factors, is, Since we are interested in the z → 0 limit, it suffices to close the contour on the right and consider the t = 0 pole. Explicitly, Res_{t=0} Γ(−t)^2 Γ(s − k + λ_2 + t)Γ(s + λ_2 + t)P_{J,∆}(s − k + λ_2, t, 0, 0) z^t = Γ(s + λ_2)Γ(s − k + λ_2)[(log z + H(λ_2 + s − 1) + H(λ_2 + s − k − 1))P_{J,∆}(s − k + λ_2, 0, 0, 0) + P′_{J,∆}(s − k + λ_2, 0, 0, 0)], (A.17) where, The entire contribution from (A.16) can be decomposed into, lim
A Metric for Secrecy-Energy Efficiency Tradeoff Evaluation in 3GPP Cellular Networks : Physical-layer security is now being considered for information protection in future wireless communications. However, a better understanding of the inherent secrecy of wireless systems under more realistic conditions, with specific attention to the relative energy consumption costs, has to be pursued. This paper aims at proposing new analysis tools and investigating the relation between secrecy capacity and energy consumption in a 3rd Generation Partnership Project (3GPP) cellular network, by focusing on secure and energy efficient communications. New metrics that bind together the secure area in the Base Station (BS) sectors, the afforded data-rate and the power spent by the BS to obtain it are proposed that permit evaluation of the tradeoff between these aspects. The results show that these metrics are useful in identifying the optimum transmit power level for the BS, so that the maximum secure area can be obtained while minimizing the energy consumption.
Introduction Wireless media are inherently prone to security threats due to their open access nature. Traditional security mechanisms are based on the use of cryptographic techniques. The secrecy strength of cryptography depends on the computational complexity required to solve complex numerical problems. In order not to rely only on the trivial assumption that the attacker has limited computational power, physical-layer information-theoretic security can be used instead. This approach was pioneered by Shannon and then Wyner, who introduced the concept of wire-tap channels and analyzed their inherent achievable secrecy rate [1]. Generalization to additive white Gaussian noise (AWGN) channels was then made in [2]. The concept underlying these works is that any wireless channel has an intrinsic secrecy capacity, i.e., potentially there exists a specific rate such that the information is reliable for the legitimate receiver but not for the eavesdropper. The secrecy capacity is tied to the signal-to-interference-and-noise ratio (SINR) at the legitimate destination compared to the eavesdropper's one. Recently, this concept of physical-layer security has also been investigated in fading channels [3], and implementations of physical-layer security have been proposed in [4][5][6]. However, a better understanding of the inherent secrecy of wireless systems under more realistic conditions turns out to be fundamental: in particular, a clear focus on the relative energy consumption and its related costs has to be considered.
As a matter of fact, the global information and communications technology (ICT) industry is an important and quickly growing contributor to CO2 emissions and energy consumption. According to the SMART 2020 study [7], it accounted for 830 Megatons per year, approximately equal to 2% of global human carbon dioxide emissions and almost equivalent to the emissions of the global aviation industry [8]. Hence, in the last few years, growing attention has been paid by both regulatory entities and telcos to the impact of energy saving strategies on the economy [9] and the environment [10]; in the framework of ICT systems, mobile communications networks are the main contributors in terms of energy consumption: their contribution is expected to grow up to 178 Megatons of CO2 in 2020, while in 2002 it was 64 Megatons. Therefore, in order to reduce the power consumption of cellular networks, several energy efficiency strategies have been proposed that are based on power control and power amplifier sleep modes [11][12][13][14].
Moreover, in order to quantify and compare the energy consumption performance of different components and systems, several energy efficiency metrics have been defined at component, equipment and system levels. Two different BS types, as described in [15], have been taken into account for the energy consumption model: the Remote Radio Head (RRH) and the Macro BS. Since telecommunication equipment normally operates at different loads and energy consumption levels, the introduction of a suitable metric becomes a crucial aspect of network optimization. In the literature, there are papers evaluating the energy costs of cryptographic algorithms [16], as well as the joint optimization of secrecy rate and energy consumption in cooperative ad hoc networks [17]. To the best of our knowledge, no paper currently published in international journals deals with the evaluation of the energy costs of physical-layer security when it is applied to 3rd Generation Partnership Project (3GPP) cellular networks. To this end, this paper aims at investigating the tradeoff between secrecy capacity and energy consumption in a 3GPP cellular network. We focus on secure and energy efficient communications for cellular systems, motivated by the fact that most confidential transactions are expected to be conducted over such networks in the very near future. Specifically, we first derive the secrecy capacity of a BS surrounded by six other BSs. Then, we propose two new metrics which bind together the secure area in the BS's sector, the afforded data-rate and the power spent by the BS to obtain it, and which allow evaluation of the tradeoff between them. The secure area defines the set of locations in the cell where the eavesdropper cannot compromise the secrecy of the legitimate user's link. The results show that these metrics are useful in identifying the optimum transmit power level for the BS, so that the maximum secure area can be obtained with the minimum energy consumption: in particular, the metrics are
useful during the network planning phase, since they permit the cell planner to define the power that allows the user to receive data with a required quality of service (QoS). Given the distance of the legitimate receiver and the secrecy rate to be served to the user, the planner can define the minimum transmit power that maximizes the secure area. Cellular Network Model The cellular network model considered in this paper is compliant with the Evolved UMTS Terrestrial Radio Access (E-UTRA) Radio Frequency System Scenarios defined in [18]; it therefore resorts to the same frequency bands specified for UTRA: in particular, the simulation frequency is assumed to be 2000 MHz. Moreover, the macro-cell propagation model for urban areas is taken into account, i.e., the BS antenna gain (including feeder loss) and the BS antenna height are assumed to be equal to 15 dBi and 30 m, respectively, whereas the propagation loss L is equal to L = 128 + 37.6 log(R), where R is the distance between the BS and the User Equipment (UE). A single-operator layout is assumed. Base stations with three sectors per site are placed on a hexagonal grid with an inter-site distance of 3·R, where R is the cell radius. The sector antennas and the transmit powers are assumed to be equal. The number of sites is equal to seven. The BS antenna radiation pattern used for each sector in three-sector cell sites is also identical to that defined in [18]: where −180 ≤ θ ≤ 180, θ_3dB is the 3 dB beamwidth, which corresponds to 65 degrees, and A_m = 20 dB is the maximum attenuation.
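As an illustration of the propagation and antenna model just described, the following sketch (not from the paper) evaluates the path loss and a sector antenna pattern. Two assumptions are made: the pattern formula A(θ) = −min[12(θ/θ_3dB)², A_m] is taken to be the (elided) standard 3GPP pattern of [18], and the distance R in the path-loss formula is taken to be in kilometres, following the usual 3GPP convention.

```python
import math

def path_loss_db(r_km):
    """Macro-cell urban propagation loss from the text: L = 128 + 37.6*log10(R).
    R is assumed to be the BS-UE distance in kilometres (3GPP convention)."""
    return 128.0 + 37.6 * math.log10(r_km)

def antenna_gain_db(theta_deg, theta_3db=65.0, a_m=20.0):
    """Sector pattern A(theta) = -min(12*(theta/theta_3dB)^2, A_m), assumed to be
    the (elided) three-sector pattern of [18]; theta in degrees from boresight."""
    return -min(12.0 * (theta_deg / theta_3db) ** 2, a_m)

# Received power (dBm) for a 20 W sector, 15 dBi BS gain, boresight user at 100 m
p_tx_dbm = 10 * math.log10(20e3)  # 20 W expressed in dBm
p_rx_dbm = p_tx_dbm + 15.0 + antenna_gain_db(0.0) - path_loss_db(0.1)
```

At the 3 dB beamwidth the pattern attenuation is 12 dB by construction, and it saturates at A_m = 20 dB towards the back of the sector.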
The BS power consumption model is the one proposed in [15]: in particular, the power consumption at maximum load has been defined as where N_TRX indicates the number of transceiver chains per site, P is the power level radiated by the antenna of each sector, P_RF and P_BB are the power consumption of the RF transceiver and of the baseband block, η_PA is the power amplifier efficiency, and the terms σ_feed, σ_DC, σ_MS and σ_cool account for the losses of the feeder, the DC-DC converter, the main supply and the cooling, respectively. SINR and Capacity Determination The cellular system described in the previous paragraph has been implemented in the MATLAB (version R2016a, academic use, The MathWorks, Inc., Natick, MA, USA) simulation environment, providing the SINR values over the square playground depicted in Figure 1. The SINR values permit computing the capacity in all areas of the playground. Physical-Layer Secrecy Adopting the common security terminology, the source of information, identified as Alice, is the serving sector of the cell under examination. The legitimate receiver, Bob, is the mobile user within the cell range. An undesired eavesdropper, Eve, can move around within the cell boundaries, trying to capture the information sent from Alice to Bob. To restrict the analysis, Bob's location is chosen on the direction of maximum radiation of his serving antenna, so that Bob's position can be described simply by reporting his absolute distance in meters from Alice. The results presented in the following sections are derived for Eve in the same cell as Bob, tuned to the Alice-to-Bob wireless frequency. All other surrounding sectors are considered as interferers for both Bob and Eve, with signals as described in the previous section.
In order to evaluate the achievable level of secrecy that the system can grant to a mobile user in the depicted cellular system, we adopt the concept of secrecy capacity derived from Shannon's notion of perfect secrecy [19], Wyner's wiretap channel [1] and the work of Barros [20]. As in [20], Bob's theoretical capacity per unit bandwidth is expressed as where h_B is a coefficient inclusive of both the transmit and receive antenna gains and of the path loss of the Alice-to-Bob channel, Bob being served by Alice; P is the transmit power level of the serving sector (Alice); N_B is the power of the equivalent Gaussian noise component perceived by Bob, which includes both thermal noise and interference from the surrounding sectors. A similar expression describes Eve's capacity, while the secrecy capacity of Bob can be expressed as where γ_B and γ_E denote the SINR experienced by Bob and Eve, respectively (i.e., γ_B = h_B P/N_B). Metrics for the Evaluation of the Effective Secrecy-Energy Efficiency Tradeoff In this section, we propose two new metrics for the evaluation of the tradeoff between the width of the surface of the cell where a target secrecy rate can be delivered and the power spent to obtain it.
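The capacity and secrecy-capacity definitions above can be sketched as follows (a minimal illustration; the clipping of the difference to zero follows the wiretap formulation of [1,20], and SINRs are taken in linear scale):

```python
import math

def capacity(sinr):
    """Shannon capacity per unit bandwidth, C = log2(1 + gamma)."""
    return math.log2(1.0 + sinr)

def secrecy_capacity(gamma_b, gamma_e):
    """C_s = [C_B - C_E]^+ : positive only when Bob's SINR exceeds Eve's."""
    return max(0.0, capacity(gamma_b) - capacity(gamma_e))

# Example: Bob at 10 dB SINR (linear 10.0), Eve at 0 dB (linear 1.0)
cs = secrecy_capacity(10.0, 1.0)
```

When Eve's SINR is at least as large as Bob's, the secrecy capacity is zero: no rate can be delivered confidentially.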
Effective Secret Area Suppose that the BS (Alice) has to serve a user (Bob) in the cell and that the requested service has to be provided by means of a secure connection (QoSS, Quality of Service with Security). The BS sets a target secrecy rate R_s depending on the QoSS of the user. Given the position of the user (Bob), a specific metric is required that can help Alice determine the minimum transmit power that maximizes the secure area of the cell, i.e., the region where Eve can stay without driving the secrecy capacity C_s of the legitimate link below the target secrecy rate, i.e., C_s ≥ R_s. In our analysis, we set a dynamic target secrecy rate equal to 10% of the capacity of the legitimate link (Alice-Bob), i.e., we initially set R_s = 0.1 C_B. Nonetheless, in the following, results with different target secrecy rates are also shown. Before introducing the metrics, let us show the distribution of the secrecy capacity C_s in the cell covered by the central BS (Alice), assuming a variable distance of the user (Bob) and an increasing transmit power. Figure 2 shows the geographical distribution of C_s over the playground of the cellular network. In particular, we focus on the sector of the cell managed by the central BS (Alice) where the user (Bob) is present. The map shows the secrecy capacity at each point of the cell, calculated as if Eve were at that specific point. While the transmit power of Alice is fixed (20 W), the distance of Bob in the direction of maximum radiation varies from 10 to 100 m. The area where the secrecy capacity is less than the target secrecy rate R_s (unsecure area) is represented in darker blue and increases as the distance of Bob increases. These results are summarized in Table 1.
Figure 3 shows the geographical distribution of C_s over the playground of the cellular network when the transmit power of Alice varies from 5 to 40 W while the distance of Bob is fixed (50 m): the area where the secrecy capacity is less than the target secrecy rate R_s remains the same, but the overall area of the cell increases. It is important to stress that in this graph the transmit powers of all the sectors of the other BSs and of the other two sectors co-sited with Alice are kept equal to 20 W (in this case, the possibility to change the transmit power of the antenna sector is exploited to emulate the behavior of a basic Transmit Power Control (TPC) scheme). These results are summarized in Table 2. New Metrics Let us introduce an auxiliary parameter, defined as the effective secrecy area ratio and equal to: where A is the area of the cell sector managed by the BS (Alice) and A_s is the secure area, i.e., the set of points of the total cell sector surface where the attacker (Eve) can stay without driving the secrecy rate of the legitimate link below the target rate; therefore, the parameter A_s^[eff] defines the percentage of area (relative to the overall area of the cell) where the attacker (Eve) can stay without driving the secrecy rate of the legitimate link below the target rate.
The effective secrecy area ratio is computed by supposing that the eavesdropper could be located at any point (x, y) of the area managed by the base station. The position of the legitimate receiver (Bob) in the cell is supposed to be fixed, as is the transmit power. Results are shown for different transmit powers and Alice-Bob distances, while Eve can be located at any point of the cell of the BS. The extension of the cell depends on the transmit power. All of this information gives us a new parameter for evaluating how extended the area is where the legitimate link enjoys a minimum target secrecy rate. The algorithm for the BS could be the following:

1. Decide the target secrecy rate that Alice wants to keep with Bob;
2. Given the position of Bob and the transmit power of Alice, the extension of the cell is known;
3. Compute the secrecy capacity (C_B − C_E) at each point (x, y) of the cell area, as if Eve were located at that point; in other words, the surface managed by the BS is divided into infinitesimal squares of surface dx·dy and the eavesdropper is supposed to be there for the computation of the secrecy capacity;
4. Count each point of the cell area that gives a secrecy capacity equal to or greater than the target rate;
5. Compute the effective secrecy area ratio as the ratio between the set of points that give a secrecy capacity equal to or greater than the target rate and the total cell area.

We propose a first metric, called effective secrecy area per power unit [W^−1] and defined as: where P is the power transmitted by the BS (Alice). Given the distance of the user (Bob), this metric can identify the transmit power to be used by the BS (Alice) in order to maximize the secure area in the sector. Thus, the metric allows the BS to maximize the area of security while minimizing the transmit power, i.e., saving energy at the same time.
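The steps above can be sketched as a grid computation. Here eve_sinr_map is a hypothetical list of Eve's SINR samples, one per grid point of the sector (in the paper these would come from the MATLAB playground simulation); Bob's SINR is fixed, as is the target rate:

```python
import math

def effective_secrecy_area_ratio(gamma_bob, eve_sinr_map, r_s):
    """Steps 3-5 of the algorithm: fraction of cell grid points at which the
    secrecy capacity C_B - C_E is at least the target secrecy rate r_s.
    gamma_bob: Bob's SINR (linear); eve_sinr_map: Eve's SINR per grid point."""
    c_b = math.log2(1.0 + gamma_bob)
    secure = sum(1 for g_e in eve_sinr_map
                 if c_b - math.log2(1.0 + g_e) >= r_s)
    return secure / len(eve_sinr_map)

# Toy example: four candidate positions for Eve, target rate = 10% of C_B
gamma_b = 10.0
r_s = 0.1 * math.log2(1.0 + gamma_b)
ratio = effective_secrecy_area_ratio(gamma_b, [0.5, 1.0, 5.0, 20.0], r_s)
```

Only the point where Eve's SINR (20.0) exceeds Bob's falls in the unsecure region, so three of the four sampled positions are secure.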
Since the main goal of this paper is the maximization of the effective secrecy area for the affordable target data rate with the minimum BS power consumption, we propose another metric, called Effective Secrecy-Energy Efficiency [Bit/Joule] and defined as: where P_in is the BS power consumption as defined in Equation (2) and R_s is the target secrecy rate. Given the distance of the user (Bob), this metric helps to identify the power to be used by the BS (Alice) to maximize the secure area, as well as the cost in terms of energy required to send a secret bit stream to the legitimate receiver. Hence, this metric is a tool that helps to maximize the area of security and, at the same time, minimize the BS power consumption. It is important to note that the secrecy area is intended as the area where the eavesdropper can stay without driving the secrecy capacity of the legitimate link below the target secrecy rate. Note that the power consumption of the BS (Alice) has been calculated by using a complete model (2), which takes into account every source of energy consumption in the BS equipment, even though the width of the coverage area is determined only by the transmit power P. The results are shown in the following section.
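The two metrics and the power model of Equation (2) can be sketched as follows. Two caveats: the exact placement of the loss factors in the EARTH-style model and the numerical parameter defaults are assumptions (macro-BS-like placeholder values), and the product form A_s^[eff]·R_s/P_in for the second metric is inferred from its stated units [Bit/Joule], not taken verbatim from the paper:

```python
def bs_power_consumption(p_tx, n_trx=3, eta_pa=0.31, p_rf=12.9, p_bb=29.6,
                         s_feed=0.5, s_dc=0.075, s_ms=0.09, s_cool=0.10):
    """EARTH-style model, Eq. (2): PA chain plus RF and baseband blocks, scaled
    by the supply/cooling loss factors. Defaults are placeholder assumptions."""
    pa = p_tx / (eta_pa * (1.0 - s_feed))  # PA consumption incl. feeder loss
    return n_trx * (pa + p_rf + p_bb) / ((1.0 - s_dc) * (1.0 - s_ms) * (1.0 - s_cool))

def rho1(a_eff, p_tx):
    """Effective secrecy area per radiated power unit, Eq. (6)."""
    return a_eff / p_tx

def rho2(a_eff, r_s, p_in):
    """Effective Secrecy-Energy Efficiency, Eq. (7); product form assumed
    from the stated units [Bit/Joule]."""
    return a_eff * r_s / p_in
```

Scanning rho1 or rho2 over a grid of candidate transmit powers and taking the argmax reproduces, in spirit, the search for the optimum power discussed in the Results section.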
Results In this section, the results obtained by numerically computing the values of Equations (5)-(7) are discussed. Figure 4 shows the percentage of the secure area A_s^[eff]. Some conclusions about secure and unsecure areas can be drawn:

• If Bob is close to Alice (10 m), a power increase does not imply a proportional enlargement of the secure area;
• When Bob is in the middle of the cell (50 m), a power increase is beneficial from the security point of view: the unsecure area becomes smaller; however, continuing to increase the power does not imply a remarkably larger secure area; a sort of saturation in the extension of the secure area can be observed when the power increases beyond 10 W;
• When Bob is at the boundary of the cell (100 m), a higher power is needed to obtain a secure area of about 50% of the cell sector extension; moreover, transmit powers higher than 20 W give only tiny enlargements of the secure area.

Figure 5 shows the values of the metric ρ_1 (6) as a function of Bob's distance and Alice's transmit power. As can be seen, there is always an optimum transmit power for Alice for every distance of Bob, i.e., the minimum power maximizing the effective secrecy area. Increasing the power beyond the optimum does not give benefits, i.e., the secure area does not increase considerably, while the power consumption gets higher. Figures 6 and 7 show the values of the metric ρ_2 (7) as a function of Bob's distance and of the power consumption P_in for the two different BS types described in [15], the Remote Radio Head (RRH) (according to the EARTH Deliverable D2.3, the maximum RRH transmit power is equal to 20 W; nonetheless, in the computation of the Effective Secrecy-Energy Efficiency metric, we have considered higher power values in order to allow a more complete system evaluation) and the Macro BS, respectively. The target secrecy rate is fixed and set to R_s = 0.1 C_B. As can be seen, for every distance of Bob, there is always an
optimum power consumption, i.e., the minimum power that maximizes the effective secrecy area. As in the previous case, if the power is increased beyond this value, negligible additional benefits are achieved.

The Effective Secrecy-Energy Efficiency vs. BS power consumption curves are not monotone: in particular, the maximum of the proposed metric is achieved at different power consumption values that depend on Bob's position. This result confirms that optimizing the tradeoff between the secure area, the afforded data rate and the power spent by the BS is not a trivial task: in particular, simple control of the radiated power is not efficient when a target secrecy rate has to be guaranteed to an end user.

This conclusion is reinforced by the results shown in Figure 8, which presents the percentage of unsecure area (over the total coverage) as a function of the distance of Bob and the target secrecy rate R_s to be supported. The target secrecy rate is calculated as a percentage of the capacity of the legitimate link, i.e., R_s = {1%, 5%, 10%, 20%, 50%} of C_B. The BS power consumption P_in is fixed and, for the Macro BS, assuming P = 20 W and N_TRX = 3, i.e., one carrier per sector, is set to 291.22 W (this value is obtained by following the recommendations of the EARTH (Energy Aware Radio neTwork tecHnologies) project). As can be seen, the higher the target secrecy rate, the wider the unsecure area. In particular, with Bob at a distance of 50 m and a target secrecy rate of 20% of the capacity of the legitimate link, 1/3 of the BS area is unsecure.
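The secure-area percentages discussed above can be estimated by sampling candidate eavesdropper positions over the cell. The sketch below is a hypothetical reconstruction: the channel model (power-law path loss, fixed noise floor) and the circular cell shape are assumptions for illustration, not the paper's exact setup.

```python
import math

def secure_area_fraction(p_tx_w, d_bob_m, r_s, cell_radius_m=100.0,
                         alpha=3.5, noise_w=1e-13, n=200):
    """Fraction of a circular cell in which an eavesdropper still leaves the
    legitimate link with secrecy capacity >= r_s. Grid-sampling estimate."""
    def cap(d):
        # max(d, 1.0) avoids the path-loss singularity right at the BS
        return math.log2(1.0 + p_tx_w * max(d, 1.0) ** (-alpha) / noise_w)
    c_bob = cap(d_bob_m)
    secure = total = 0
    step = 2.0 * cell_radius_m / n
    for i in range(n):
        for j in range(n):
            x = -cell_radius_m + (i + 0.5) * step
            y = -cell_radius_m + (j + 0.5) * step
            d_eve = math.hypot(x, y)
            if d_eve > cell_radius_m:
                continue  # outside the circular cell
            total += 1
            if c_bob - cap(d_eve) >= r_s:
                secure += 1
    return secure / total
```

Counting the grid cells where the secrecy capacity stays above R_s reproduces the qualitative trend reported above: the farther Bob is from the BS, the smaller the secure fraction.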
Conclusions

In this paper, two original metrics are proposed to evaluate the optimum tradeoff between the secure area, the transmitted data and the BS power consumption. The context is the 3GPP cellular network environment. The base station has to guarantee the minimum secrecy rate to the end user over the largest possible area, while also optimizing its power consumption. The metrics proposed here can be optimized to obtain the minimum power for which the secure area in the cell is maximized. Numerical results show the behaviour of the secrecy capacity in the cell as a function of the transmit power and the distance of the end user. We believe that this preliminary study can be useful in the very near future of cellular networks to implement the mandatory energy-saving strategies while providing secure services to the end users.

Figure 3. Secrecy capacity geographical distribution when Bob's distance from Alice is 50 m.

Figure 4. Percentage of the secure area, referred to the overall area of the cell sector. The transmit power of Alice ranges from 5 to 40 W, while the distance of Bob varies from 10 to 100 m. (a) Bob's distance from Alice is equal to 10 m; (b) 50 m; (c) 100 m.

Figure 5. Effective secrecy area as a function of Bob's distance and transmit power. (a) Bob's distance from Alice is equal to 10 m; (b) 50 m; (c) 100 m.

Figure 6. Effective Secrecy-Energy Efficiency as a function of the power consumption P_in for different distances of the legitimate receiver {10, 50, 100} m. The corresponding radiated power P is reported on the x-axis under the P_in value. The power consumption refers to a Remote Radio Head (RRH).
Figure 7. Effective Secrecy-Energy Efficiency as a function of the power consumption P_in for different distances of the legitimate receiver {10, 50, 100} m. The corresponding radiated power P is reported on the x-axis under the P_in value. The power consumption refers to a Macro BS.

Figure 8. Percentage of unsecure area (over the total coverage) as a function of the distance of Bob and the target secrecy rate to be supported. The transmit power is P = 20 W, which corresponds to a consumed power P_in = 291.22 W.

Table 1. Secrecy capacity values when Bob's distance from Alice is equal to {10, 50, 100} m and P = 20 W. The total area covered by the BS is 9336 m^2.
2016-10-27T00:00:00.000
[ "Computer Science" ]
MHD natural convection in an inclined cavity filled with a fluid saturated porous medium with heat source in the solid phase

Abstract. A numerical investigation of unsteady magnetohydrodynamic free convection in an inclined square cavity filled with a fluid-saturated porous medium and with internal heat generation has been performed. A uniform magnetic field inclined at the same angle as the inclination of the cavity is applied. The governing equations are formulated and solved by a direct explicit finite-difference method subject to appropriate initial and boundary conditions. Two cases were considered: the first case, in which all the cavity walls are cooled, and the second case, in which the cavity vertical walls are kept adiabatic. A parametric study illustrating the influence of the Hartmann number, Rayleigh number, the inclination angle of the cavity and the dimensionless time parameter on the flow and heat transfer characteristics, such as the streamlines, isotherms and the average Nusselt number, is performed. The velocity components at the mid-section of the cavity as well as the temperature profiles are reported graphically. The values of the average Nusselt number for various parametric conditions are presented in tabular form.
Introduction

The study of the flow of an electrically conducting fluid has many applications in engineering problems such as magnetohydrodynamic (MHD) generators, plasma studies, nuclear reactors, geothermal energy extraction and boundary layer control in the field of aerodynamics [1,2]. Specifically, Bejan and Khair [1] reported on natural convection boundary layer flow in a saturated porous medium with combined heat and mass transfer. Lai and Kulacki [2] extended the problem of Bejan and Khair [1] to include wall fluid injection effects. Chamkha and Khaled [3] considered magnetic field and wall mass transfer effects on coupled heat and mass transfer by natural convection from a vertical semi-infinite plate maintained at a constant heat flux.

Heat and fluid flows in a cavity that experiences convective heating or cooling at the surface are found in a wide variety of applications, including lakes and geothermal reservoirs, underground water flow, solar collectors, etc. [4]. Associated industrial applications include secondary and tertiary oil recovery, growth of crystals [5], heating and drying processes [6-8], solidification of castings, sterilization, etc. Natural or free convection in a porous medium has been studied extensively. Cheng [9] provides a comprehensive review of the literature on free convection in fluid-saturated porous media with a focus on geothermal systems. Oosthuizen and Patrick [10] performed numerical studies of natural convection in an inclined square enclosure with part of one wall heated to a uniform temperature and the opposite wall uniformly cooled to a lower temperature, with the remaining wall portions adiabatic. The enclosure is partially filled with a fluid and partly filled with a porous medium that is saturated with the same fluid. The main result considered was the mean heat transfer rate across the enclosure. Nithiarasu et al.
[11] examined the effects of variable porosity on convective flow patterns inside a porous cavity. The flow is triggered by sustaining a temperature gradient between isothermal lateral walls. It was found that the variation in porosity significantly affects the natural convective flow pattern. Khanafer and Chamkha [12] performed a numerical study of mixed convection flow in a lid-driven cavity filled with a fluid-saturated porous medium. In this study, the Richardson number, Darcy number and Rayleigh number played an important role in the mixed convection flow inside a square cavity filled with a fluid-saturated porous medium. Nithiarasu et al. [13] examined the effects of the heat transfer coefficient applied to the cold wall of the cavity upon the flow and heat transfer inside a porous medium. The differences between the Darcy and non-Darcy flow regimes were clearly investigated for different Darcy, Rayleigh and Biot numbers and aspect ratios. Grosan et al. [14] discussed the effects of a magnetic field and internal heat generation on free convection in a rectangular cavity filled with a porous medium. The problem of the effects of non-uniform porosity on double-diffusive natural convection in a porous cavity with a partially permeable wall was analyzed by Akbala and Bayta [15].
The main objective of this paper is to study the effects of an inclined magnetic field on unsteady natural convection in an inclined cavity filled with a fluid-saturated porous medium with a heat source in the solid phase. The magnetic field is inclined on the cavity bottom at the same angle as the inclination of the cavity on the horizontal plane. The finite-difference method is employed to solve this problem. The present results are validated by favorable comparisons with previously published results. The streamline and isotherm shapes in the cavity for different values of the problem parameters are plotted and discussed. In addition, the velocity components in the X and Y directions as well as the temperature profiles are illustrated and discussed.

Mathematical formulation

Consider unsteady laminar natural convection flow in an inclined cavity with an electrically conducting fluid-saturated porous medium with internal heat generation. In this problem, the following assumptions have been made:
1. The cavity walls are kept at a constant temperature T_0, or the cavity vertical walls are adiabatic.
2. Properties of the fluid and the porous medium are isotropic and homogeneous everywhere.
3. The enclosure is permeated by a uniform inclined magnetic field.
4. The angle of inclination of the magnetic field B on the cavity bottom is the same as the angle of inclination of the cavity on the horizontal plane.
5. A uniform source of heat generation in the flow region with a constant volumetric rate is considered.
6. The viscous dissipation, radiation and Joule heating effects are neglected.
7. The density is assumed to be a linear function of temperature.

The geometry and the Cartesian coordinate system are schematically shown in Fig. 1. Under the above assumptions, the governing equations are (see [15]): The continuity equation is satisfied by defining a stream function ψ(x, y), so that equation (2) can be written as where u = ∂ψ/∂y and v = −∂ψ/∂x, and β_0 is the magnitude of B.
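The stream-function definition u = ∂ψ/∂y, v = −∂ψ/∂x can be evaluated numerically on the computational grid with central differences. A minimal NumPy sketch (not the authors' code):

```python
import numpy as np

def velocities_from_streamfunction(psi, dx, dy):
    """Recover u = dpsi/dy and v = -dpsi/dx from a stream function sampled on
    a uniform grid, using second-order central differences in the interior
    (np.gradient applies one-sided stencils at the boundaries)."""
    dpsi_dx = np.gradient(psi, dx, axis=0)  # axis 0 indexes x
    dpsi_dy = np.gradient(psi, dy, axis=1)  # axis 1 indexes y
    return dpsi_dy, -dpsi_dx
```

For the linear test function ψ = xy this recovers u = x and v = −y exactly; any velocity field obtained this way satisfies the continuity equation automatically, which is precisely why the stream function is introduced.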
In this study, the following dimensionless variables are used for equations (3) and (4): Using these variables, the stream function and energy equations in non-dimensional form can be written as: where Ra = (kgβl∆T)/(να) is the Rayleigh number, Ha = σ_0 kβ_0^2/µ is the Hartmann number for the porous medium and σ = (ε(ρc_p)_f + (1 − ε)(ρc_p)_s)/(ρc_p)_f is the heat capacity ratio.

The initial and boundary conditions for equations (6) and (7) are as follows: It should be noted that the second case with a non-inclined cavity corresponds to the case of Grosan et al. [14].

Once the temperature is known, the rate of heat transfer at the right wall can be obtained in terms of the average Nusselt number Nu, which is defined as:

Solution technique

The numerical algorithm used to solve the dimensionless governing equations (6) and (7) with the boundary conditions (8) is based on the finite-difference methodology. Central difference quotients were used to approximate the second derivatives in both the X- and Y-directions. The discretized equations obtained are then solved using a suitable algorithm. The numerical computations were carried out on a (61 × 61) grid of nodal points with a time step of 10^−5. The iteration process was terminated under the following condition: where λ is the general dependent variable, which can stand for U, V, Ψ and θ. This method was found to be suitable and gave results that are very close to the numerical results obtained by Grosan et al. [14]. From Fig. 2, we can observe an excellent agreement between our results and the results obtained by Grosan et al. [14]. This favorable comparison lends confidence in the numerical results to be reported subsequently.
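As a concrete illustration of the explicit marching scheme and the termination test on successive iterates, the sketch below time-steps a single uncoupled model equation, a diffusion equation with a uniform unit heat source and cooled (θ = 0) walls, on a uniform grid. It is a simplified stand-in, not the paper's coupled stream-function/energy system; grid size, time step and tolerance are illustrative.

```python
import numpy as np

def solve_temperature(n=61, dt=1e-5, tol=1e-8, max_steps=500000):
    """Explicit FTCS march of theta_t = theta_xx + theta_yy + 1 on the unit
    square with cooled walls (theta = 0), stopping when the maximum change
    between successive iterates falls below tol. Stable for dt <= h**2 / 4."""
    h = 1.0 / (n - 1)
    theta = np.zeros((n, n))
    for _ in range(max_steps):
        lap = (theta[2:, 1:-1] + theta[:-2, 1:-1] +
               theta[1:-1, 2:] + theta[1:-1, :-2]
               - 4.0 * theta[1:-1, 1:-1]) / h**2
        new = theta.copy()
        new[1:-1, 1:-1] += dt * (lap + 1.0)  # unit volumetric heat source
        if np.max(np.abs(new - theta)) < tol:
            return new
        theta = new
    return theta
```

The loop exits once successive iterates differ by less than the tolerance, mirroring the convergence condition quoted above; the steady field it converges to is symmetric and peaks at the cavity centre.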
Results and discussion

In this section, numerical results for the contours of the streamlines and isotherms as well as selected velocity and temperature profiles at the mid-section of the cavity for various values of the cavity and magnetic field inclination angle α are presented. In addition, representative results for the average Nusselt number Nu under various conditions are presented in tabulated form and discussed. In all of these results, ε was fixed at a value of 0.6.

Case 1: The cavity walls are kept at a constant temperature T_0

Fig. 3 presents steady-state contours of the streamlines and isotherms for various values of the cavity and magnetic field inclination angle α (0.0, π/4, π/3, π/2) for a Rayleigh number Ra = 500 and a Hartmann number Ha = 0.5 when all the cavity walls are cooled. In general, for α = 0 (non-inclined cavity), two vertically stretched separated recirculating cells or vortices exist in the whole enclosure. As the cavity inclination angle increases, these two cells tend to stretch along the inclination line for α = π/4 and α = π/3, until they become stretched horizontally when α = π/2. In addition, the streamlines become crowded not only at the left wall but also at the right wall of the cavity, which means that the velocity of the fluid increases in the immediate vicinity of these walls. It is observed that tilting the cavity by π/4 increases the flow movement, and the maximum value of the stream function increases to Ψ_max = 0.75. However, further tilting of the cavity yields a reduction in the fluid velocity. It is also observed from Fig. 3 that the isotherms form a single anti-clockwise rotating cell through the whole cavity. This is an interesting behavior, because it means that the walls of the cavity are hotter than any other region in the cavity. In addition, as the inclination angle α increases, the temperature of the fluid decreases. Fig.
4 displays steady-state contours of the streamlines and isotherms for various values of α (0.0, π/4, π/2) with Ra = 1000 and Ha = 0.5. By comparing Fig. 4 with Fig. 3, one can understand the effect of increasing the Rayleigh number on the streamline and isotherm contours. This comparison shows that as the Rayleigh number increases, stronger convective clockwise and anti-clockwise motion takes place in the cavity, and the temperature gradient becomes crowded at the walls of the cavity more than in the previous case (Ra = 500). This, in general, causes a reduction in the fluid temperature profiles.

Figs. 5 and 6 show a comparison between the steady-state contours of the streamlines and isotherms for α = π/4 with Ra = 5×10^3 in the presence and absence of the magnetic field force. From this comparison, we can conclude that the intensity of the convection in the core of the cavity is considerably affected by the magnetic field. A weak convective motion is observed in the presence of the magnetic field, so we can say that the absence of the magnetic force tends to accelerate the fluid motion inside the cavity. However, the absence of the magnetic field leads to a decrease in the temperature of the fluid. Fig. 7 illustrates the effects of the Hartmann number Ha and the Rayleigh number Ra on the profiles of the X-component of velocity at the mid-section of the cavity for different values of the inclination angle α. The results show that increasing the Rayleigh number Ra results in an increase in the fluid X-component of velocity, whereas the X-component of velocity can be reduced by increasing the Hartmann number Ha. The same behavior is observed for the profiles of the fluid Y-component of velocity; see Fig. 8. The temperature profiles at the mid-section of the cavity for different values of the angle α are depicted in Fig. 9. In this figure, the temperature of the fluid decreases with increasing Rayleigh number, and the same behavior is observed when the magnetic field increases.
The effects of the inclination of the cavity, the presence of the magnetic field force and the Rayleigh number on the average Nusselt number for unsteady and steady states are displayed in Tables 1-3. It is clear from these tables that when the inclination angle of the cavity and the Hartmann number increase, the values of the average Nusselt number increase. The same behavior is observed when the dimensionless time parameter increases. However, the opposite behavior is predicted for the Rayleigh number: the average Nusselt number decreases when the Rayleigh number increases. These behaviors are clearly depicted in Tables 1-3.

Case 2: The cavity vertical walls are adiabatic

To complete our discussion, we investigated the case in which the enclosure vertical walls are adiabatic. From Fig. 10, we can observe the shapes of the streamline and isotherm contours for different values of the dimensionless time parameter τ when Ra = 1000, α = π/4 and Ha = 0.5. It is clear that the fluid moves from the core of the enclosure to the vertical walls, forming two symmetrical clockwise and anti-clockwise circular cells with maximum value Ψ_max = 0.65 at τ = 0.05. As the dimensionless time parameter increases, the clockwise contours grow and the maximum value of the stream function increases until it reaches the fixed value Ψ_max = 2.0 at the steady-state condition. Also, the constant-temperature lines or isotherms turn from parallel lines with maximum value θ_max = 0.017 at τ = 0.05 into curves with maximum value θ_max = 0.042 at steady state. Figs. 11-13 display the effect of the dimensionless time parameter τ on the velocity components in the X- and Y-directions and the temperature profiles at the enclosure mid-section when Ra = 5×10^3, α = π/4 and Ha = 0.5. It is found that, as the dimensionless time parameter τ increases, the velocity components U and V increase until they reach fixed values at the steady-state condition. The same behavior is observed for the temperature distributions. In addition, the
values of the average Nusselt number for the unsteady and steady-state conditions are shown in Table 4. From this table, we can observe that as the dimensionless time parameter increases, the average Nusselt number decreases until it reaches its minimum value at steady state.

Conclusions

In the present paper, we have studied the transient MHD natural convection in an inclined cavity filled with a fluid-saturated porous medium, including the effects of the presence of both an inclined magnetic field and a heat source in the solid phase. We have examined the effects of the Rayleigh number, the Hartmann number, various values of the inclination angle of the cavity, various values of the dimensionless time parameter and the magnetic field on the flow and heat transfer characteristics for the case of a cavity with cooled walls and the case of a cavity with adiabatic vertical walls. From this investigation, we can draw the following conclusions:
1. In general, the temperature of the fluid can be increased by increasing both the magnetic field force and the inclination angle α.
2. The inclination angle α affects the streamline contours. These contours rotate and become crowded at different walls as α increases.
3. A faster motion is observed when the Rayleigh number increases, whereas this causes a decrease in the temperature.
4. The average Nusselt number is affected by the presence of the magnetic field: it takes a larger value when the magnetic field is present.
5. When the vertical walls are considered adiabatic, the activity of the fluid and the heat transfer characteristics increase with increasing dimensionless time parameter.
Fig. 7. Effects of the Hartmann number Ha and the Rayleigh number Ra on the X-component of velocity at the cavity mid-section for different values of the angle α.

Fig. 8. Effects of the Hartmann number Ha and the Rayleigh number Ra on the Y-component of velocity at the cavity mid-section for different values of the angle α.

Fig. 9. Effects of the Hartmann number Ha and the Rayleigh number Ra on the temperature profiles at the cavity mid-section for different values of the angle α.

Table 1. Values of the average Nusselt number at the right wall for different values of α and τ when Ra = 5 × 10^3, Ha = 0.5.

Table 2. Values of the average Nusselt number at the right wall for different values of Ha and τ when Ra = 5 × 10^3, α = π/4.

Table 3. Values of the average Nusselt number at the right wall for different values of Ra and τ when Ha = 0.5, α = π/4.

Table 4. Values of the average Nusselt number at the right wall for different values of α and τ when Ra = 5 × 10^3, Ha = 0.5.
2010-01-25T00:00:00.000
[ "Physics" ]
Cultural Barriers in Equivalence-The English Localization of the Video Game Wiedźmin 3: Dziki Gon

With every passing day, video games are becoming increasingly popular, not only in Poland but also worldwide. As a consequence, a tendency has emerged among the biggest international companies to localize their digital products in an attempt to appeal to their target audience, and thus increase income. The following paper addresses the issue of equivalence in the English localization of the Polish video game Wiedźmin 3: Dziki Gon. More specifically, the authors conduct a comparative analysis of the Polish jokes, puns, songs, customs and other cultural references identified in the corpus, and their target language localizations. Finally, the paper discusses to what extent, if at all, the source and target language versions are equivalent in terms of linguistic, humorous and cultural implications.

INTRODUCTION

Although localization as a term functioning within Translation Studies was discussed and analysed by researchers at the very beginning of the 21st century, only latterly has it gained increasing interest as a translation phenomenon (Maumevičienė 2012, 109). The said interest most probably eventuates from the rapid development of technology and computer software, and the fact that the video gaming industry constitutes a bigger market, and therefore generates more income, than the movie and music industries combined (Vierra and Vierra 2011, 100). Wiedźmin 3: Dziki Gon (The Witcher 3: Wild Hunt) substantially contributed to the success of the video gaming industry by receiving 79 various awards, earning over 15 times more income than Wiedźmin 2: Zabójcy Królów (The Witcher 2: Assassins of Kings) and 60 times more than the first game of the trilogy within three years of the game's release.
The authors of the present paper contend that Wiedźmin 3: Dziki Gon constitutes a suitable example for analysis not only because it is replete with humour, folklore and cultural references, but also because the game in question has been localized into fifteen different languages, and therefore proves to be an excellent case for studying the equivalence of culturally bound elements and references.

GLOBALIZATION, INTERNATIONALIZATION, LOCALIZATION AND TRANSLATION

Before discussing the notion of equivalence in localization, however, it is instructive to provide an approximation of the aforementioned concept, so as to facilitate reception of the empirical part of the paper. The term localization, the core notion of the discussion presented herein, accounts for one of the integral elements of a particularly complex multi-stage phenomenon, namely the GILT industry (the appellation itself is an acronym for, as hinted by the subheading, Globalization, Internationalization, Localization and Translation). Each of the said phenomenon's constituents plays a vital role in achieving something that is the aim of every modern company: a considerable increase in so-called return on investment, oftentimes referred to in its abbreviated form as ROI (DePalma 2006, 15). Generally speaking, globalization constitutes an umbrella term encompassing both internationalization and localization, as it is a "business strategy (not so much as an activity) addressing the issues associated with taking a product to the global market which also includes world-wide marketing, sales and support" (Schäler 2008, 197). As may be inferred from the definition provided, globalization revolves around the process of adapting a specific local or regional product, whether it be a tangible good such as a mobile phone or an intangible one such as a mobile application, to the standards and, most importantly, the needs of the global market.
Before a given product can be released onto the global market, it needs to be localized. [2] However, there is one more step that needs to be undertaken prior to the commencement of the latter, namely internationalization. The notion in question may be defined as the "process of designing software so that it can be adapted to different languages or regions" (Laxström et al. 2017, 209). On the whole, internationalization involves the implementation of specific technical modifications to the internal structure of a given product, so as to separate culturally marked elements, and consequently facilitate the localization process thereof into various languages. The pivotal role of internationalization in the GILT industry is particularly stressed by The Localization Industry Standards Association (hereinafter referred to as LISA), which claims that, while generalizing somewhat, localization of a product that is not internationalized may demand twice as many resources from a company (Anastasiou and Schäler 2010, 13).

[1] Interestingly enough, Microsoft Corporation's definitions of globalization and internationalization differ slightly from the ones quoted in the present paper. By implication, what is described herein as globalization, the said company defines as internationalization, and vice versa (Anastasiou and Schäler 2010, 14). This phenomenon is particularly intriguing given the fact that Microsoft Corporation localizes its products into a plethora of diversified languages, with Windows 8 being localized into over 200 languages and Windows 10 having 111 language packs. https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/available-language-packs-for-windows [Last accessed: 20.10.2019]

[2] Or, at least, ought to be in order to maximise the chances of increasing a company's profits by gaining the so-called "soft benefits" (Lynch 2006, 45).
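The separation of culturally marked elements that internationalization calls for can be illustrated with a minimal message-catalogue sketch. The catalogue keys and strings below are invented for illustration; real projects typically use frameworks such as gettext.

```python
# User-visible strings live in per-locale catalogues, separated from program
# logic, so localizers never need to touch the code itself.
CATALOGUES = {
    "en": {"greeting": "Good morning, {name}!", "farewell": "Goodbye."},
    "pl": {"greeting": "Dzień dobry, {name}!", "farewell": "Do widzenia."},
}

def t(locale, key, **params):
    """Look up a message for the given locale, falling back to English."""
    catalogue = CATALOGUES.get(locale, CATALOGUES["en"])
    template = catalogue.get(key, CATALOGUES["en"][key])
    return template.format(**params)
```

Because the code refers to strings only through keys, adding a new locale touches the catalogue alone, which is exactly the cost saving that LISA's estimate points to.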
As regards the last two constituents of the GILT acronym, translation and localization are so similar in terms of their scopes of reference that sometimes they are used as synonyms (Gambier 2010, 412). This is why it is necessary to point out that certain quite important, albeit very subtle differences exist between the two terms. That being said, translation revolves around "the replacement of textual material in one language (SL) by equivalent textual material in another language (TL)" (Catford 1965, 20). As may be inferred from the quotation provided, it is the textual information encapsulated in a given document that is of main concern in the process of translation. Localization, however, constitutes a slightly more complex phenomenon, as it focuses on the "linguistic and cultural adaptation of digital content to the requirements and locale of a foreign market, and the provision of services and technologies for the management of multilingualism across the digital global information flow" (Schäler 2007, 157). What links these two definitions is the fact that both translators and localizers deal with adaptation of textual content to a given locale -"the language and culture variety natural of a particular geographic region" (Bernal-Merino 2014, 35). What accounts for the afore-mentioned difference, on the other hand, is the fact that translation "does not necessarily deal with digital material whereas localization is always happening in the digital world" (Schäler 2008, 196). 3 That being said, software localizers work throughout the majority of their time, if not always, with multimodal files which comprise not only text and graphics but also, as in the case of videogames and the examples analysed herein, audio and video. LAYERS OF LOCALIZATION Given the fact that the examples provided in the following paper are to a great extent marked culturally, it is necessary to approximate here the notion of layers of localization. 
Chroust (2007, 860) believes that localization constitutes a multi-layered phenomenon which has to be performed on "different levels of increased comprehensiveness and cultural dependence", with higher levels being highly contingent on all of the lower levels of localization subordinate thereto:

Fig. 1. Layers of localization (Chroust 2007, 860)

The first of the seven layers, which accounts for the foundation of the entire process, is called the Technological Infrastructure (Barbour and Yeo 1996). This layer is particularly vital in the early process of internationalization, since all preparations of a technical and organizational nature, whether they be, to name just a few, separation of textual material from the code of a given digital product, reservation of adequate storage space for localized texts or appropriate coding of non-standard characters inherent to certain more exotic languages, have to be meticulously planned prior to the commencement, or at the very least at the beginning, of the entire project. The Grammatical and Semantic Layers are connected to each other. The Grammatical Layer focuses on the translation, or, to be precise, modification, of highly standardized computer-supported textual content. Chroust (2007, 860) also postulates that "typically in literary texts variations of expressions are good style, in system oriented domains uniform, standardized texts are to be preferred in order to avoid ambiguity and confusion". As for the Semantic Layer, it pertains to differences in usage between technical and everyday language, the expressiveness thereof, and the various abbreviations that are employed therein. Given that the said layer is, as the appellation itself seems to indicate, concerned primarily with semantics, it is considered to be the domain of linguistic issues that are mostly tackled by human, rather than computer, language translators.
The Graphic and Iconic Representation Layer refers to the alteration or removal of graphic content that potentially may be culturally problematic or even controversial in certain locales. Such content includes, but is not limited to, gestures, symbols, pictures, animations, voice-acting, subjects of taboo, or even, in certain very conservative countries, colors. Of course, the said layer also deals with the addition of the aforementioned elements to the final version of a product in order to appeal to the target audience, and hence increase the profits. The Business Conventions and Practices Layer, being the least important layer as far as the present paper is concerned, pertains to the organizational aspects of a localizer's job, a topic that is, regrettably, not broached whatsoever by the research in question. Notwithstanding, these facets refer to business practices that may differ drastically in various locales, and hence may eventuate in miscommunications, or even conflicts and alienation (Krishna et al. 2004). The Social and Communication Layer revolves around a very peculiar kind of communication, namely a unidirectional communication from a medium (digital software) to a human (user). This layer capitalises on the implementation of culturally appropriate expressions that, inter alia, address, greet and answer the user in such a way as to create software, and the interface thereof, that is characterised by "good behaviour, observation of etiquette and politeness, subservience, helpfulness, and the sensitivity of an intuitive, courteous butler" (Miller 2004). The Cultural Layer, located at the very top of the model proposed by Chroust (2007, 860), constitutes the most complex notion as far as the linguistic implications thereof are at issue.
This particular state of affairs eventuates from the fact that the layer in question deals with the translation and localization of humour, metaphors, jargon, and other culturally bound references that rarely, if at all, have direct one-to-one equivalents in target cultures, and therefore also in target languages (Bourges-Waldegg and Scrivener 1998). What makes this issue even more problematic is the fact that, according to Chroust (2007, 867), "context-information will often be lost during localization" of such expressions, thereby resulting in rather unamusing target language renditions that oftentimes are, regrettably, devoid of the meaning and connotations encapsulated by their source language equivalents. Lastly, the importance of the said layer is emphasized by the fact that it draws on the peculiarities of the remaining six layers; hence it requires from a translator or localizer a tremendous amount of not only purely linguistic skills but also practical and cultural knowledge of at least two, not infrequently entirely different, locales.

CORPUS AND METHODOLOGY

As regards the corpus subjected to analysis in the present research, one may deem it somewhat unusual in terms of content. This eventuates from the fact that the said corpus does not only comprise written texts per se (like, for instance, the bestiary, or advertisements and witcher contracts posted on in-game notice boards) but also takes into consideration various dialogues uttered by game characters, or rather by the actors who provide voices for them (in, for example, conversations, songs, riddles, etc.). Therefore, the game itself, that is the base game Wiedźmin 3: Dziki Gon and its two expansion packs, Serca z Kamienia (Hearts of Stone) and Krew i Wino (Blood and Wine), and everything that is either written or said therein, accounts for the corpus of the following research.
Nevertheless, given the enormity of the game and the space limitations of this paper, it is necessary to stress the fact that only a few examples of potentially problematic localizations are scrutinized herein. To better visualize the size of the corpus analysed, however, it ought to be emphasized that the script of the campaign of the base game alone includes over 450,000 words, which is four times more than the average number of words in a typical novel. As regards the said campaign, it ought to take approximately 50 hours to finish it without performing any additional activities. If one wants to complete the entire game (each campaign, secondary quest, contract, etc.), together with the abovementioned expansion packs, one ought to be able to do so in more or less 200 hours. Therefore, an assumption may be made that the corpus in question consists of at least, in great approximation, 1,000,000 words. As regards the methodology employed in the research, it revolves around the notion of comparative analysis, which focuses on "the explanation of differences, and the explanation of similarities" between at least two divergent yet somewhat interconnected phenomena (Adiyia and Ashton 2017, 1). In the present paper, the phenomena in question refer to the selected examples of the Polish versions of in-game songs, jokes, plays on words and Slavic customs, as well as the translations, or rather localizations, thereof into the English language and, most importantly, the English culture. As far as the empirical part is concerned, the discussion pertaining to the data extracted from the corpus in question is divided into two stages. Firstly, the authors present both the Polish and English language versions of the afore-mentioned examples selected from the corpus and, subsequently, provide meticulous descriptions of the contexts in which the said corpus data occurred in the game under scrutiny.
The second stage involves the performance of in-depth analyses of the examples in question with the purpose of identifying whether or not they may be classified as equivalents. Effectively, the authors investigate and compare potential similarities and differences between the linguistic, humorous, cultural and other various kinds of implications evoked by the Polish examples and their English localizations. The results yielded by the said analyses constitute a point of reference for the formulation of conclusions regarding the relation between the notions of equivalence and localization in the last section of the paper.

BRIEF DESCRIPTION OF THE GAME

Wiedźmin 3: Dziki Gon constitutes a direct continuation of the Wiedźmin and Wiedźmin 2: Zabójcy Królów video games developed and published by the CD Projekt company. The trilogy also resumes the events of Wiedźmin: Pani Jeziora, the last book in the six-volume-long Wiedźmin fantasy novel saga written by Andrzej Sapkowski. The game itself belongs to the action role-playing game (hence the abbreviated form RPG) genre which, contrary to other genres like first-person shooters, strategies or simulators of various types, focuses predominantly on immersive storytelling. In such games, players can assume the role of a given protagonist and, by exercising not infrequently morally doubtful decisions that may have an enormous impact on the game's world, and therefore also on the game's ending, actively participate in the creation of the character's story. In Wiedźmin 3: Dziki Gon, players once again are given the possibility to control the titular witcher, Geralt of Rivia, who is a monster hunter by profession. (The 200-hour figure quoted earlier constitutes a median of the results of the polls conducted by HowLongToBeat amongst 258 people who completed each and every achievement, and thus tackled every activity available, in the game.) The plot of the game revolves around the protagonist's desperate attempts to find and save his adopted daughter, Cirilla.
Despite the fact that the action of the game takes place in an entirely fictional world, players not infrequently may encounter a plethora of so-called easter eggs: events, items or jokes that constitute humorous references to pop-culture, show business, literature, historical figures and events, or even cultural rituals and long-forgotten customs. Such references oftentimes may be, and in fact indeed are, particularly problematic for translators, as is visualized by the examples provided in the present paper.

PRACTICAL ANALYSIS OF THE SELECTED EXAMPLES

The analysis conducted on the corpus yielded a few particularly problematic cases insofar as localization and equivalence are concerned. As regards Chroust's model, the most significant differences between the selected target language localizations and their respective source language equivalents are visible in the Semantic, Graphic and Iconic Representation, and Cultural Layers. The first example to be analysed is a ritual prayer. It is said by the Pellar, a folk healer, who performs the ritual, and a mob that has gathered around him in the hope of meeting the souls of their dead relatives. Table 1 contains the source text and its English localization, together with the number of syllables in each line and the rhyme schemes. The English localization reads as follows:

Ye who wander on the gale,
Ever caught in this world's thrall,
See this sign, gentle, pale,
Ye we summon! Ye we call!
Hark! A sound I hear! 'Tis right?
A spirit breaks the still of night!
Burn the incense ever higher!
Spirit, join us 'round the fire!
Spirit, speak! This time is yours!
Tell us of your ghostly woes!

Although the authors of the present paper are of the contention that the above example, and, frankly speaking, the entire game itself, is localized most accurately, there are a few discrepancies between the source and target texts that need to be addressed. First of all, the said localization proves to be problematic on three of the above-mentioned layers.
Regarding the Cultural Layer, the source text constitutes a reference to Adam Mickiewicz's Dziady (the complete version of the book was translated into English as Forefathers' Eve by Charles Kraszewski in 2017), a famous four-part poetic drama that constitutes a significant work of Polish classical literature. Due to the fact that no one-to-one, or even relatively close, cultural target language equivalent of the said custom exists in the target culture, and the fact that archaic Slavic traditions are not that common in the modern world, the localizers decided to translate the source text and the message encapsulated therein into English, rather than to tailor it to suit a typical American or British custom. As a consequence, the source and target language texts are equivalent in terms of their meaning, but the latter is, sadly, not localized into the target culture at all. In order to reverse the situation, namely to localize the scene at the expense of literal equivalence, the Pellar could be juxtaposed with the three witches from Macbeth, who performed a similar ritual (as they too summoned apparitions, however, for different purposes) over a bubbling cauldron. By implication, a reference to Dziady by Adam Mickiewicz would be replaced with a reference to William Shakespeare's Macbeth, and thus both versions of the video game would contain a reference to great poetic works which are particularly significant in their respective cultures. There is one problem, though. Such a substitution would be possible had it not been for the Graphic and Iconic Representation Layer, namely the fact that a player has to actively participate in the said quest by protecting the Pellar and the mob from various monsters that are accidentally awakened by the ritual performed in the middle of the marshes.
As a result, players see how the folk healer looks (his clothes), how he is animated (gestures, body movements, facial expressions), what happens around him (the huge bonfire in the middle of the gathering, reappearing ghosts), and how the mob reacts throughout the entire ritual. Therefore, the substitution of the described custom with any other, even if similar, ritual from a different culture would require a dramatic interference in the characters' animations, thereby not only generating huge costs, both in terms of money and time, but also resulting in a lack of equivalence between the source and target texts. As regards the purely linguistic aspect of the analysed localization, that is the Semantic Layer, the localization slightly diverges from the source text in the seventh and eighth lines of the first part (see Table 1). According to the source text, the Pellar asks the people if they are ready for the meeting and subsequent conversation with their dead relatives, and the mob replies that indeed they are waiting for them. In the English translation, however, the Pellar asks the mob if they want to relieve the dead from any evil, and the people reply that they are ready to put an end to their eternal torment. The eighth line of the English localization is particularly interesting, as it accounts for a good example of an over-interpretation: it implies that the mob has the power to end the suffering of the souls of their loved ones, who somehow are stuck in-between the worlds of the living and the dead. Consequently, the respective source and target lines are not equivalent, as no mention of such power can be found in the source text. Lastly, it is worth drawing attention to the form of the prayer and various peculiarities thereof, namely the number of syllables in each line and the rhyme scheme. As regards the former, in the source text nearly 90% of all lines consist of 8 syllables, whereas this proportion is reduced to about one-fourth in the target language version.
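Syllable tallies of the kind cited above can be approximated mechanically. The sketch below is a naive vowel-group counter for English (an illustration of how such a tally could be automated; the authors do not describe their counting method), applied to lines of the English localization quoted earlier:

```python
import re

def naive_syllables(word):
    """Rough English syllable count: number of vowel groups, with a crude
    correction for a silent final 'e' (a heuristic, not real phonology)."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if count > 1 and word.endswith("e") and not word.endswith(("le", "ee")):
        count -= 1  # silent final 'e', as in "fence" -> 1
    elif count > 1 and word.endswith("le") and word[-3] in "aeiouy":
        count -= 1  # vowel + "le" also hides a silent 'e', as in "gale" -> 1
    return max(count, 1)

def line_syllables(line):
    """Sum the per-word estimates over one verse line."""
    return sum(naive_syllables(w) for w in re.findall(r"[a-zA-Z']+", line))

for verse_line in ["Ye who wander on the gale,",
                   "Ever caught in this world's thrall,",
                   "Burn the incense ever higher!"]:
    print(verse_line, line_syllables(verse_line))
```

Such a heuristic is only indicative; a careful prosodic comparison of the kind performed in the paper still requires manual counting.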
The rhyme schemes are also dissonant, particularly at the very end and in the middle of the first and second parts respectively. Nevertheless, such cosmetic changes may be viewed as an unavoidable "necessary evil", since sometimes one has to partially sacrifice the aesthetic form in order to save meaning and maintain equivalence. The second example to be analysed is the description of a work of art depicting a Beauclair centipede, recited by an in-game merchant. The source text was rendered into English in the following manner: "Both male and female Beauclair centipedes are formidable predators. The beasts prefer to hunt in packs, yet the author of this extraordinary work was able to capture a lone specimen, preparing to attack. Cast your eye on the hunter, who, though fully aware of the danger, wears a look of cold determination on his face as he awaits the optimal moment to strike." It is imperative to draw attention here to two characteristic features of the said texts. Firstly, both of them address a specific scene from an animal's life, in this case the hunt of a huge carnivorous centipede. Secondly, the texts are stylised in such a way as to mimic a text that one might hear in an educational or documentary programme. The differences between the source and target language texts are so minor that one could accept the English version as a proper localization. The problems, however, arise when a player approaches the merchant in-game and hears her voice. In the Polish version of the game, the said merchant is dubbed by Krystyna Czubówna, one of Poland's most famous television narrators, who has voiced-over a vast number of educational and documentary programmes. If one combines the afore-mentioned features of the analysed texts with the fact that the text is being read by Krystyna Czubówna, one finds a humorous reference to Polish culture which puts a smile on the faces of many Polish players. Regrettably, the English version of the game lacks such an effect, as the merchant in question was dubbed by a regular voice-over actress.
Inasmuch as localization is at issue, the said humorous reference could be achieved if the merchant were voiced-over by a person who narrates educational or documentary programmes, like, for instance, Sir David Attenborough. Of course, this is only a suggestion of the authors of the present paper, as no dictionary exists which contains a list of people (whether they be voice-actors, historical figures, or others) and their intercultural one-to-one equivalents. The focus shall now be shifted to a folklore song which constitutes the third example of cultural differences between the original game and its localization into English. While two characters are on a hunt in the mountains after a forktail (a type of in-game dragon-like creature with a spike-covered tail, hence the name), Eskel, a friend of the main protagonist, starts singing: "Idom se łowiecki, idom dołu pyrciom; Juhasa nie widno, ino dzwonki zbyrcom" [lit.: There go the sheep, there go the sheep along a narrow mountain path, the shepherd is nowhere to be seen, the cowbells are ringing]. When Geralt asks about the song, Eskel replies that his mother used to sing it to him when he was little. As regards the song itself, it is a song from Polish folklore that tells about sheep that are roaming around unattended by a drunk highlander. In the English version of the game, however, Eskel sings something entirely different: "The old hen she cackled, and she cackled on the fence; The old hen she cackled, and she ain't cackled since". This song, on the other hand, was initially sung most frequently by African American slaves but evolved (given the fact that it describes an old hen that cackles a lot and, as she lays no eggs, ends up in a pot) into an African American children's song (Dabczynski and Phillips 2007). As may be inferred from the quotations, the localizers decided to juxtapose an old Polish folk song with something that they believed to be a cultural equivalent thereof.
Such an operation requires an extreme amount of caution, as it greatly alters the past and history of a given character, and therefore may not fit well in the narrative of the game: in this case Eskel's mother, who was in the source language version a Caucasian highland woman, becomes an African American slave from the regions of Texas. The last example to be scrutinized pertains to the Semantic Layer and the localization of jokes and puns. Although the entire game is full of them, the usage of both the former and the latter is particularly apparent in the quest in which Geralt, in order to banish a phantom, eats magic mushrooms and starts talking to his horse Płotka (Roach). During one of their conversations, Płotka tells him a joke: "Co powiedział ślepy koń przed wkroczeniem na tor wyścigowy?" [lit.: What did the blind horse say when he entered the race track?] and then she immediately answers her own question with: "Nie widzę przeszkód. He he...! Się uśmiałam..." [lit.: I see no obstacles. Ha ha! I had a good laugh...]. A word-for-word translation, however, would not work here, since the phrase "Nie widzę przeszkód" [lit.: I see no obstacles] constitutes an idiom which means something along the lines of "It is possible, I can do it". This joke would have been banal and unamusing, had it not been for the expression "Się uśmiałam" [lit.: I had a good laugh...], which is the real joke here, as it accounts for a part of the Polish idiom "koń by się uśmiał" [lit.: a horse would laugh]. Such an idiomatic expression perfectly fits the situation and works well as a commentary on the first joke, as it is generally used when one wants to say that something is not funny at all. Regarding the English localization of the joke, it was translated as follows: "Horse walks into a tavern, and the innkeep says: Hey, pal, why the long face? Hah!".
Here too, the joke is rather banal, as the phrase "long face" can be interpreted literally as a reference to horses' physiognomy, and idiomatically as "why are you so sad?". That being said, the source and target language idioms have different linguistic implications, and therefore cannot be treated as idiomatic equivalents. What is more, the English version is slightly problematic due to the fact that, unfortunately, it lacks the pun that is present at the very end of the joke in the source text.

CONCLUSIONS

The empirical considerations provided above show that the Semantic, Graphic and Iconic Representation, and Cultural Layers account for the most problematic and challenging stages in the localization process of cultural references, folklore and humour in Wiedźmin 3: Dziki Gon. As regards the Semantic Layer, sometimes a faithful localization of idioms is, similarly to translation, not possible due to the fact that a particular source language idiomatic expression may not always have a direct one-to-one equivalent in a given target language. The joke analysed herein shows that a creative substitution of an idiomatic pun with a situational joke may still result, regrettably, in a loss of the comic effect. Additionally, the Graphic and Iconic Representation Layer may also, not infrequently, leave little room for linguistic manoeuvres because, as in the case of the ritual described, the target text needs to harmonize with the in-game characters' animations, behaviour, and the general setting of a given situation. Lastly, the Cultural Layer may be viewed as the most complex phenomenon, as it had to be taken into account in the localization process of each and every example scrutinised herein. That being said, the localization of cultural references requires from localizers not only vast amounts of cultural knowledge but also a great deal of caution, so as to avoid drastic alteration of certain elements of the game, like, for instance, a character's background and past.
Summing up, localization, especially in the case of video games, may be viewed as an extension of Nida's (1964) considerations on the notion of equivalence. As emerges from the analyses of the examples, the localizers opted for dynamic rather than formal equivalence; namely, they prioritized an accurate rendition of the meaning of the source text at the expense of strict adherence to the lexical and grammatical peculiarities thereof. Consequently, certain linguistic, humorous and cultural implications of the source texts are, unfortunately, not present in the target texts, but this seems to be a natural element, or rather a necessary evil, of the complex, multi-stage localization process of a video game. Of course, the above conclusions ought to be considered illustrative and treated as such, as they apply only to the material analysed herein. Therefore, the said conclusions ought to be verified against a more extensive corpus.
Electronic Bottleneck Suppression in Next-generation Networks with Integrated Photonic Digital-to-analog Converters

Digital-to-analog converters (DACs) are indispensable functional units in signal processing instrumentation and wide-band telecommunication links for both civil and military applications. In photonic systems capable of high data throughput and short delay, a commonly found system limitation stems from the electronic DAC, due to the delay in sub-micron CMOS architectures and E-O conversions. A photonic DAC, in contrast, directly converts an electrical digital signal into an optical analog one with high speed and low energy consumption. Here, we introduce a novel parallel photonic DAC along with an experimental demonstration of a passive 4-bit iteration of the DAC. The design guarantees a linear intensity-weighting functionality at 50 GS/s and a much smaller footprint compared to other proposed photonic DACs. This design could potentially be implemented in novel photonic integrated neuromorphic computing engines for next-generation label processing and edge computing platforms.

Introduction

The total annual global IP traffic is estimated to reach 4.8 ZB per year by 2022 1 . The continuous increase of both low-delay access and efficient processing of data demands novel platforms that can perform computational tasks closer to the edge of the network, thus making it possible to analyze important data efficiently and in near real-time. In this context, most of the data travel in optical fibers, which support both high channel rates and throughput. Yet, the network's bottlenecks in terms of power consumption and throughput are found in limitations arising from connections and interfaces at its edge; that is, peripheral input/output (I/O) devices such as digital systems or sensors require a digital-to-analog conversion (DAC), and vice versa (ADC) 2,3 .
Therefore, it becomes a pressing challenge, especially for large-scale data centers, to optimize or even re-invent their networks to meet the needs of large data processing and low delay without trading power consumption. Photonic integrated circuits (PICs) have shown the potential to satisfy the demand for high data-processing capabilities by a) acting on optical data while b) interfacing with digital systems, and doing so while featuring compact size, short delay, and low power consumption 2 . However, the performance gains of photonic platforms when interfacing with digital architectures are reduced by their interfaces to/from electronics, often due to the achievable bandwidth and resolution of the DACs and ADCs, in addition to cumbersome domain-crossings between electronics (E) and optics (O). The ultimate performance and power consumption of this DSP-based technological pathway will be constrained by CMOS technology, which is approaching its fundamental physical limit 5 . A DAC, for software-defined transmitters or as an interface to computing systems, should be able to operate conversions at high speed, over a broadband spectrum, and in an accurate manner, without being affected by jitter and noise. Due to the maturity of electronic components, electronic digital-to-analog conversion devices can provide high accuracy and a highly linear conversion characteristic accompanied by remarkable stability; nevertheless, they are intrinsically limited by their bandwidth and high timing jitter, which precludes the further development of purely electronics-based DACs for next-generation information systems. High-speed electronic D/A conversion can be based on multi-ladder voltage/current weighting circuits, switch-and-latch architectures, or a combination of segmented architectures in addition to binary weighting 6 .
Currently, to achieve greater sampling rates and improve high-speed performance, converter designs comprise multiple data converters with analog circuitry, such as time- or analog-bandwidth interleaving and multiplexed DACs 7 . However, the delay in sub-micron CMOS architectures, due to the high resistance of the interconnection wires and the increased parasitic capacitance, can seriously compromise very-high-frequency performance. In the past decade, at the cost of a larger footprint, several high-speed optical DACs (ODACs) compatible with fiber communication have been proposed, based on different schemes, such as optical intensity weighting of multiwavelength signals modulated using micro-ring resonators 8 , nonlinear optical loop mirrors 9 , interferometry and polarization multiplexing 10 , or phase shifters 11 . Therefore, to overcome the bandwidth limitations and timing jitter of current electronic-only DACs, which require cumbersome opto-electro-optical (OEO) conversions 12 , i.e., to improve delay and energy consumption while not trading off footprint, developing a photonic-based binary-weighted DAC (BW-DAC) becomes a main priority. Considering that the bandwidth of state-of-the-art DACs is lower than that of state-of-the-art electro-optical modulators, and being inherently immune to electromagnetic interference, this DAC can simultaneously enable high sampling rates and high conversion efficiency, while not being affected by jitter and electromagnetic noise, and, most importantly, allows bypassing the O-E-O conversion, thus facilitating network simplicity and possibly cascadability to other photonic networks. In addition, the BW-DAC is intrinsically compatible with optical fiber communication systems; therefore, it could be used for low-latency label processing, routing data in miniaturized switching networks, or as an interface to data processors and classifiers at the edge of the network.
Specifically, in our vision, BW-DACs will be an essential device towards the realization of next-generation networks comprising photonic circuits, replacing inefficient D/A conversions and bulky devices. High-speed photonic DACs can be used at the interface of network-edge photonic dedicated devices, in the fog, such as digitally controlled transmitters and receivers or photonic computing architectures, which can significantly lower the cost of running a network by providing edge-cloud capabilities. At a higher level in the network, the DAC can be used in digitally controlled photonic micro-datacenters and routers for intelligent re-direction of the traffic and label processing (Fig. 1). More recently, a few photonic DAC implementations have been proposed, which can be categorized as serial or parallel according to their operating scheme. The serial type (Fig. 2b) is usually based on the summation of weighted multiwavelength pulses opportunely spaced in time by properly setting the wavelength spacing and the length of a dispersive medium, which recreates an analog waveform after being detected by a photoreceiver, enabling fast digital-to-analog conversion. Serial DACs can be straightforwardly cascaded for long-distance optical communication systems, which primarily operate in serial mode. The operating speed is hindered by multiple factors, such as the pulse source and its stability, the dispersion component, the optical modulator, and the dispersion compensator. In this scheme, bit resolution trades off with sampling frequency. Experimental studies show a 4-bit serial DAC with an operating speed of 12.5 GS/s 13 . Parallel types (Fig. 2a), on the other hand, are generally characterized by a simpler architecture and usually employ electro-optic modulators (EOMs) to weight the intensities of multiple optical carriers according to an electrical digital signal input, with subsequent summation at the end of the optical link.
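The resolution versus sampling-rate trade-off of the serial scheme can be illustrated with a toy model. All numbers below are illustrative assumptions (in particular the 50 GHz pulse repetition rate, chosen so that the 4-bit case lands on the 12.5 GS/s scale cited above); this is not a model of any specific reported device:

```python
def serial_dac_level(bits_msb_first):
    """Detected level after an ideal, noiseless photoreceiver integrates
    N binary-weighted pulses arriving within one conversion window."""
    level = 0.0
    weight = float(2 ** (len(bits_msb_first) - 1))
    for b in bits_msb_first:
        level += b * weight  # each time slot carries a pulse with half the
        weight /= 2          # energy of the previous one (binary weighting)
    return level

print(serial_dac_level([1, 0, 1, 1]))  # 11.0 (code 1011)

# Bit resolution trades off with sampling frequency: the N pulses of one
# sample occupy N consecutive time slots, so conversion rate = pulse_rate / N.
pulse_rate_hz = 50e9  # assumed pulse repetition rate, for illustration only
n_bits = 4
print(pulse_rate_hz / n_bits)  # 12500000000.0 (= 12.5 GS/s)
```

Doubling the resolution to 8 bits under the same assumed pulse rate would halve the conversion rate, which is the trade-off noted in the text.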
14,15 Parallel schemes can potentially leverage lower power consumption, while taking full advantage of the combined fast sampling rate and rather high bit resolution provided by the multiple parallel channels. One main limitation comes from the summation of the modulated optical carriers, since for achieving a full dynamic range and linear operation the optical signals need to be added coherently (in phase). The technical issue related to the incoherent summation is primarily addressed by photodetectors, which integrate the optical power, and by additional electronics, which, however, limit the operating bandwidth and hence the conversion latency of typical parallel photonic DACs. However, conversion to the electrical domain by means of a photodetector is not necessarily desirable, especially for those applications which would still benefit from keeping the analog signal in the optical domain, such as optical machine learning [16][17][18][19][20][21] or optical telecommunication 12,23 (Fig. 2).

Figure 2. Schematic representation of three different implementations of photonic DACs (parallel, serial and coherent-parallel). a) The parallel implementation is based on weighted integration of multiple wavelengths which encode a bit sequence. b) The serial scheme is based on the summation of weighted multiwavelength pulses opportunely spaced in time by properly setting the wavelength spacing and the length of the dispersive medium. c) The coherent-parallel photonic DAC [this work] uses pre-set unbalanced directional couplers that split light unevenly into different channels, which are then individually modulated at high speed; the pre-determined phase shift (in the case of a 0 or 1) is actively compensated with phase shifters towards a coherent summation.
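The dynamic-range argument for coherent summation can be illustrated numerically. The toy model below (an illustration, not the authors' device model) combines four binary field-amplitude weights in an ideal lossless combiner: with aligned phases the combined amplitude is linear in the digital code, while uncompensated phase offsets shrink and distort the output levels.

```python
import cmath
import random

def detected_power(amplitudes, bits, phases):
    """Power |E|^2 of the coherently combined field at the DAC output."""
    field = sum(b * a * cmath.exp(1j * p)
                for a, b, p in zip(amplitudes, bits, phases))
    return abs(field) ** 2

# Binary field-amplitude weights for a 4-bit code (MSB first) -- an
# illustrative choice, not the weights of the device reported here.
amps = [8.0, 4.0, 2.0, 1.0]
bits = [1, 0, 1, 1]  # digital code 1011 -> amplitude level 8 + 2 + 1 = 11

aligned = detected_power(amps, bits, [0.0] * 4)  # (8 + 2 + 1)**2 = 121.0
random.seed(0)
skewed = detected_power(amps, bits,
                        [random.uniform(0.0, 3.14) for _ in range(4)])

print(aligned)           # 121.0: combined amplitude is linear in the code
print(skewed < aligned)  # True: phase errors reduce and distort the level
```

This is why the phase shifts introduced by the modulators have to be systematically compensated before the Y-combiner, as described for scheme c) above.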
In this work, we propose an original and simple-to-implement parallel-type photonic BW-DAC scheme which exploits a combination of unbalanced couplers 24 and electro-absorption modulators (EAMs), or alternatively electro-optic 2x2 switches, to overcome the issues related to the incoherent summation without requiring any conversion to the electrical domain (Fig. 2c). In this configuration, a series of unbalanced couplers divides the optical power into multiple branches in an exponential manner; EAMs are then employed to absorb the optical power in each branch according to a digital electric signal. In this way, the EAMs modulate the intensity of the optical signal travelling in each branch only if triggered by a digital input '0'; this also leads to an alteration of the optical path length in a systematic, hence controllable, manner. The systematic phase variations, in fact, can easily be compensated through PIC-integrated heaters or high-speed phase modulators added at each branch, enabling a coherent summation. As such, the novel approach introduced here entirely avoids additional electronics or large-area photodetectors. In this work, we demonstrate a passive iteration of the 4-bit parallel BW-DAC, and show the potential for conversion speeds of 50 GS/s 25 along with energy consumptions as low as a few pJ per sample. We also analyze the performance degradation due to the limited extinction ratios of the applied electro-optic modulators, as well as the integral and differential nonlinearities, highlighting a seamlessly linear digital-to-analog conversion. This simple and relatively compact scheme can be employed in network-edge processors enabling low-latency computing or high-speed routing, such as for miniaturized data centers. We demonstrate an N-bit electro-optical D/A conversion utilizing asymmetrical directional couplers for binary weighting according to the DAC resolution, and a Y-combiner (Fig. 3).
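The branch powers produced by such a coupler cascade, and the 16 analog levels of a 4-bit device, can be sketched numerically. This is an illustrative model, assuming each 3:1 coupling stage passes a fraction r = 0.75 onward and taps the remainder into the bit-waveguide, with EAM extinction and phase trimming idealized; it is not the authors' measured transfer function.

```python
def branch_powers(n_bits, r=0.75, p0=1.0):
    """Power tapped into each bit-waveguide by a cascade of unbalanced
    (3:1) couplers: stage n passes a fraction r on to the next stage and
    taps (1 - r) into its branch, so P_n = p0 * (1 - r) * r**(n - 1)."""
    return [p0 * (1 - r) * r ** (n - 1) for n in range(1, n_bits + 1)]

def dac_output(bits):
    """Idealized analog output power: sum over the branches whose EAM
    passes light (digital '1'), with phases assumed perfectly trimmed."""
    return sum(b * p for b, p in zip(bits, branch_powers(len(bits))))

print([round(p, 4) for p in branch_powers(4)])  # [0.25, 0.1875, 0.1406, 0.1055]

# All 16 input codes of the 4-bit device map to 16 distinct output levels.
levels = sorted(dac_output([(code >> i) & 1 for i in range(4)])
                for code in range(16))
print(len(set(round(lv, 9) for lv in levels)))  # 16
```

Under this assumed weighting the 16 levels are distinct but not equally spaced; how close the measured transfer curve comes to a straight line is exactly what the INL/DNL analysis mentioned above quantifies.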
Following this concept, the BW-DAC converts the parallel digital signals comprising N bits into an analog output signal in the optical domain utilizing a silicon PIC. In brief, for the operation, a single continuous-wave laser is coupled into the PIC and passed through a sequence of asymmetrical directional couplers with a splitting ratio of 3:1 24,26 , which yields the best linearity of output optical power versus digital bit inputs. By design, each consecutive channel, here named bit-waveguide, receives a fraction (1 − r), with r = 0.75, of the optical power incoming from the previous unbalanced coupling stage, and the analog signal power can be written as in Eq. (1). Figure 3 Schematic diagram of a 4-bit photonic DAC based on unbalanced directional couplers and electro-absorption modulators. a) Schematic of the working principle. b) Sketch of a photonic DAC in parallel configuration. A carrier (CW laser) is split into multiple branches thanks to unbalanced directional couplers (i. SEM image of the directional couplers and their operation) according to the formula (1 − r)^n, where r is the splitting ratio (r = 0.75) and n is the number of bits. The intensity of the signal is modulated in each branch by an electro-absorption modulator (extinction ratio 4.6 dB). In this way, we obtain N separate continuous and weighted waves travelling in N channels (i.e., each waveguide representing a bit), whose intensities are the weighting factors corresponding to each bit of the digital input signals. Thus, the resolution of the BW-DAC is given by the number of waveguides N. Thanks to the pre-determined successive splitting obtained by the series of unbalanced couplers and the systematic correction of the phase alteration, the signal, modulated according to the N-bit sequence, is combined in phase by a sequence of Y-junctions.
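The weighting and coherent summation described above can be sketched numerically. The model below assumes the (1 − r)^n power weighting from the figure caption (with r = 0.75, so the per-bit field amplitudes halve from bit to bit) and an ideal in-phase summation; all function and variable names are ours, not from the paper:

```python
import math

def channel_powers(n_bits, r=0.75, p_in=1.0):
    # Power fraction reaching bit-waveguide n after the n-th unbalanced
    # 3:1 coupling stage: P_n = P_in * (1 - r)^n  (MSB is n = 0)
    return [p_in * (1 - r) ** n for n in range(n_bits)]

def output_intensity(bits, r=0.75):
    # Coherent (in-phase) summation of the per-bit fields E_n = sqrt(P_n);
    # with r = 0.75 the amplitudes are binary weighted (1, 1/2, 1/4, ...),
    # so the output field is linear in the binary input code and the
    # intensity follows the quadratic trend noted later in the text.
    amps = [math.sqrt(p) for p in channel_powers(len(bits), r)]
    field = sum(b * a for b, a in zip(bits, amps))
    return field ** 2
```

In this toy model, the 4-bit code '1111' versus '0001' gives an intensity ratio of 15^2 = 225, i.e., the summed field scales with the binary code value.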
I = \left|\sum_{n=1}^{N} b_n \sqrt{P_0 (1-r)^{\,n-1}}\right|^2 \qquad (1) where b_n ∈ {0, 1} is the n-th input bit and P_0 is the input optical power. To assess the correct working principle of the PDAC scheme and understand its performance, several passive implementations of a 4-bit BW-DAC (N = 4) representing the different bit combinations (2^N = 16) are prototyped. For example, when the i-th bit represents a '1', the i-th bit-waveguide is used as is, thus preserving the optical power passing through the combiners. Conversely, to emulate in a passive fashion the i-th bit being equal to a '0', we simply disconnect the i-th bit-waveguide from the PDAC output port, thus emulating zero optical power. (Notice that in this way there is a vanishing amount of optical leakage power contributing to the optical analog output signal, thus slightly overestimating the DAC's performance.) To assess the correct functionality of the passive implementation of the PDAC, we perform full-wave 3-dimensional FDTD simulations for a 2-bit DAC, which indeed show near-perfect agreement with the experimental results (supplementary online material, Fig. S1). In our experimental demonstration of a 4-bit PDAC, a CW laser source is coupled to the circuit by means of grating couplers and successively split to each arm with sequential weights (Fig. 3a, SOM). The 16 states of the 4-bit BW-DAC prototype are experimentally validated as individual circuits, where each circuit represents one bit combination (SOM). Note that, for the operation, a predetermined bias voltage was applied to metal heaters to tune the phase of each waveguide (Fig. 3d). To demonstrate the correct functioning of the coherent photonic DAC, a total of 16 passive (with systematic active tuning of the phase) 4-bit PDAC circuits were measured using an optical probe station (the infrared image of the measured '1111' PDAC circuit is shown in Fig. 3c).
We perform numerical simulations (see Methods section) to verify the design functionality and gain insights into the experimentally tested PDAC performance, including an analysis of i) delay (throughput), ii) frequency and phase stability, and iii) DAC benchmarks such as differential nonlinearity. In brief, we use foundry-approved device models for improved yield and repeatability and a short time-to-market; the EAMs have an extinction ratio of 4.6 dB and are driven by an NRZ pulse generator to inject the binary digital bit sequence at 50 Gbit/s (supplementary online information). 25 Figure 4 Digital-to-analog conversion. (a) Generated optical power of the analog signal for different 4-bit combinations. Comparison between the experimental results obtained for all the possible (2^N = 16) passive versions (blue solid line), the simulated photonic circuit (gold solid line) and the theoretical prediction according to the formula (red solid line). (b) Eye diagram of a 4-bit DAC assuming thermal noise of the PD, and static (1 ps) and random (1 ps) jitter of the pseudo-random code used as digital input of the electro-absorption modulators. (c) Integral and (d) differential nonlinearity of the measured analog outputs for the given electrical digital signal inputs, in units of 1 LSB, with respect to the best-fit regression. The DNL is between -0.94 and 0.71 LSB while the INL is smaller than 1.99 LSB. The measured and simulated output optical powers of the PDAC are in good agreement with each other, as highlighted by the (expected) quadratic trend, which reflects the relationship between the linear superposition of the electric field waves at the Y-junctions and the resulting intensity (Eq. 1, Fig. 4b). The discrepancies between the model and the experimental case are associated with reflections at the waveguide disconnections representing the different bit combinations and with scattering at the Y-junctions.
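The DNL/INL figures quoted in the caption can be reproduced from a set of measured output levels with a small amount of bookkeeping. The stdlib-only sketch below (our naming, not the paper's) expresses both quantities in LSB units relative to the best-fit straight line, as in the caption:

```python
def dnl_inl(levels):
    # Least-squares straight-line fit: levels[k] ~ slope * k + offset;
    # the fitted slope serves as the ideal 1-LSB step.
    n = len(levels)
    mx = (n - 1) / 2.0
    my = sum(levels) / n
    sxx = sum((k - mx) ** 2 for k in range(n))
    sxy = sum((k - mx) * (y - my) for k, y in enumerate(levels))
    slope = sxy / sxx
    offset = my - slope * mx
    # DNL: deviation of each actual step from 1 LSB.
    dnl = [(levels[k + 1] - levels[k]) / slope - 1.0 for k in range(n - 1)]
    # INL: deviation of each level from the fitted line, in LSB.
    inl = [(levels[k] - (slope * k + offset)) / slope for k in range(n)]
    return dnl, inl
```

For a perfectly linear transfer curve both lists are identically zero; an oversized code step shows up directly as positive DNL at that transition.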
In the passive implementation, the circuit that represents the digital input combination '0000' obviously provides zero optical power output due to the pruned bit-waveguides. In the numerical simulation, instead, which accounts for the presence of modulators in each arm, a small output power of 2.6 µW (10% of the pass-through optical power) was recorded, due to the limited extinction ratio of the modulators, which leads to an offset error between numerical and experimental outputs. Furthermore, the optoelectronic components, i.e. the modulator and detector (if used at the back end), are subject to physical noise; for instance, the electrical signal driving the modulators is affected by phase jitter and intensity noise, whilst detecting the output signal adds photodetector noise. The SINAD and ENOB corresponding to the measurement results are 65 dB and 10 bits, respectively. The two values corresponding to the dynamic simulation are 67 dB and 11 bits, which are significantly higher than for other PDAC structures. This indicates the high accuracy of this PDAC. Finally, we are interested in comparing the performance of this PIC-based PDAC with electronic approaches, selecting a medium-high 8-bit resolution (Table 1). We use our experimental-numerical cross-validation approach from the 4-bit prototype for the 8-bit resolution performance analysis. Focusing on the core functionality of DACs, namely the ratio of the conversion speed to the dynamic power, we find that the PDAC has a 10x higher sampling-speed efficiency than speed-comparable electronic on-chip implementations and about two orders of magnitude higher than non-chip-integrated systems. Additionally, the PIC offers a competitive footprint, being 5 times smaller than off-chip commercially available DACs but double the size of the electronic on-chip counterpart. This may come as a surprise to electronic circuit designers, since the critical dimension in PICs is often regarded as being 10-100x larger compared to electronics.
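If the two pairs of figures quoted above are SINAD (in dB) and ENOB (in bits), they are consistent with the standard conversion ENOB = (SINAD − 1.76)/6.02. A one-liner to check (our naming, assuming exactly this standard relation):

```python
def enob_from_sinad(sinad_db):
    # Standard relation between SINAD (dB) and effective number of bits
    return (sinad_db - 1.76) / 6.02
```

Here enob_from_sinad(65) is about 10.5 and enob_from_sinad(67) about 10.8, matching the quoted 10 and 11 bits after rounding.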
The reason for only a 2x areal increase lies in the small footprint of the GeSi EAMs and the elegant weighting mechanism based on a sequence of asymmetrical directional couplers 25 , which does not require large on-chip space. Moreover, the operating speed and power consumption of the BW-DAC are mainly limited by the performance of the modulators. Thus, the sampling speed will increase linearly with component (modulator) speed improvements in future generations, unlike in electronic circuits, where circuit delay is dominated by interconnect delay, which does not improve with device-performance improvements such as scaling. 33 Regarding the maximum resolution, the PDAC is limited by the signal discrimination (i.e., extinction ratio) of the modulators; here the optical power of the LSB needs to exceed the leakage power of the MSB when the input digital signal for the MSB is '0'. Numerical results predict a possible resolution of up to 14 bits considering an absorption modulator with a 4.6-dB extinction ratio (see Methods and SOM). 31 Meanwhile, we also compare this work with other off-chip parallel and serial photonic approaches. In conclusion, we designed and engineered a novel, easy-to-fabricate electro-optical DAC (EO-DAC) scheme, in which the intensity of optical carriers is split by unbalanced directional couplers, weighted according to multiple input digital signals which drive foundry-ready electro-absorption modulators, and ultimately summed using combiners at the end of the photonic link. The design guarantees a super-linear intensity-weighting functionality with a prospective operating bandwidth of 50 GS/s, consuming as little as 3 pJ/sample. We experimentally demonstrated a 4-bit passive implementation of the proposed EO-DAC which is in excellent agreement with both full-wave and integrated-circuit simulations.
Additionally, the proposed scheme does not require the signal to be converted into the electrical domain, unlike other parallel photonic DACs, and therefore could support the I/O interface to novel photonic integrated neuromorphic computing engines and edge-computing platforms for routing. Mathematical description of the PDAC operations: The optical carrier of each bit-waveguide is fed into an EAM, which tunes the intensity of the electric field travelling in the waveguide, where r is the splitting ratio of the directional coupler. The most (least) significant bit, MSB (LSB), digital signal controls the modulator with the highest (smallest) amount of optical power. Depending on the presence of a '1' or a '0' in the bit sequence, the optical signal in each branch is left unaltered (up to the insertion losses of the modulator) or attenuated by the EAM. According to the presence of a '1' or a '0', a systematic phase variation is introduced in each arm, which was designed to be compensated using thermal phase tuners. The electric field from two consecutive bit-waveguides is linearly additive, E = E_1 + E_2. For a monomodal waveguide, the local intensity is related to the amplitude of the electric field by I ∝ |E|^2.
Iranian and Non-Iranian Social Networks' Structures: A Comparative Study. This research attempts to present a picture of the structure and design of virtual social networking sites such as Facenama, Cloob, Facebook, and Google+. The aim of this research is to study the differences between the structures of Iranian and non-Iranian social networks mainly used by Iranian users. The content analysis method has been used in the present research. Elements such as real images, personal photos, high attractiveness, image plus text, warm and cool colors, many comments (more than ten comments), and minimal use of symbolic signs have been used more than other types. Moreover, it was identified that there is no significant difference between the types of structure selected by users on Iranian and non-Iranian social network websites. The results of the research could be used to reach a pattern of use of these websites for creating and developing personal social networking. 1. The society moves from the hierarchical structure towards the network and multi-group structure. 3. Comprehending the patterns is important in developing analytical methods and recognizing the network. 4. Structured relations help to explain macro social systems. 5. Social structures can affect interpersonal relationships. 6. The world is composed of networks, not groups (Babaei, 2011). In 2009, Steve Jones and Sarah Millermaier published a study entitled "Whose space is MySpace? A content analysis of MySpace profiles" in the Journal of Computer-Mediated Communication. In this study, they identified the type of personal information that users shared in their profiles. The researchers used the content analysis method in conducting the research in order to find the personal characteristics of the users and the type of content shared in users' profiles. The results showed that such virtual social networks are used for creating and developing individual identity and establishing online relationships.
The findings also revealed a high level of privacy control by the users. The results indicated that the MySpace virtual network is used not only as a communication tool but also for self-disclosure and identity construction. In another study, on Facebook, the number of uploaded photos on the page and the content of the photos were analyzed, and the researchers found that the number of uploaded photos differed significantly by gender. The participants in the research were male and female students aged 18-23. The content and amount of Facebook profile photographs, however, did not significantly vary by gender. Objectives and Questions. The main objective of this research is to identify the structure and design of Iranian and non-Iranian social networks in order to compare, to some extent, the structure of Iranian and foreign social networks and to determine their structural similarities and differences. The results of the research could be used to reach a pattern of use of these websites for creating and developing personal social networking. The main question of the research is: What are the differences between the structure and design of Iranian and non-Iranian social networks? To answer this question, variables such as the kind of uploaded photos, profile picture, design template attractiveness, colors, number of comments, number of visitors, and type of shared content are analyzed and compared across Iranian and non-Iranian social networks. Research Method. In this research, the content analysis method is used. The analysis unit is the profile page. The research population is made up of two Iranian (Facenama and Cloob) and two foreign (Facebook and Google+) social networking websites. The sample size has been estimated according to Cochran's sample size formula; 384 pages of these four networks have been investigated. The samples for the research were selected by simple random sampling among the four websites. The collected data were analyzed with SPSS software.
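The 384-page figure is what Cochran's formula gives for a large population under the conventional defaults of maximum variability (p = 0.5), 95% confidence (z = 1.96) and a 5% margin of error; those parameter values are our assumption, as the text does not state them:

```python
def cochran_n0(z=1.96, p=0.5, e=0.05):
    # Cochran's sample-size formula for a large population:
    # n0 = z^2 * p * (1 - p) / e^2
    return z * z * p * (1 - p) / (e * e)
```

With the defaults, cochran_n0() evaluates to about 384.16, conventionally reported as 384.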
Table (2) shows that the highest rate of real photos belonged to personal photos and the lowest rate belonged to photos of friends in both Iranian and non-Iranian social networks. Table (3) indicates that the most frequent shared-content format for both Iranian and foreign social networks was (text + image) and the least frequent was (text + audio), with the lowest rate overall found in the Iranian social networks. It seems that the space provided to users by social networks is an open and free one. This space is displayed in the symbolic signs which somewhat refer to the group or organization an individual belongs to. According to the results, most users have not used symbolic signs. In other words, they did not belong to a special group or organization, and few users have used such signs in their profiles. As mentioned earlier, and according to the theory of the network society and network theory, structure is an important feature of social networks. This feature refers to what is noticed by the user at first glance, which is seen in photos, formats, colors, content production form, and comments. Social networking structures differ from each other, but they all provide a series of facilities for their users, so that the users can present themselves according to their own interests. In general, structure is the framework of a network, and each user can make it more attractive and attract more users to his/her own profile. Users in Iranian social networks have, to some extent, observed Iranian culture in their structure and design and have tried to use structures that match local and national culture. Moreover, users in non-Iranian social networks have been relatively more active than users in Iranian social networks and have tried to make the structure and design of their profiles more attractive.
This broad, yet deep, study also shows the place of these networks among Iranians, particularly young ones, in Iranian society. In other words, these networks, by structuring relations, are not only confined to more palpable realms, but also include the intersection of the practice of everyday life, collective action, and politics (Reisinezhad, 2014). They are embedded in quotidian relationships and thus more impervious to state control (Scott 1990, Zuo 1995, Zhao 1998, Loveman 1998). It is life within groups that transforms the culture of the young. In short, informal networks construct "free spaces" in which ordinary people build and expand their mutual ties. The integration of domestic virtual spaces in Iran and their similarity with famous virtual spaces stresses the significance of the informal aspect of social networks and their effectiveness, particularly in less open polities where visibility is dangerous and in high-risk milieus where informal social ties provide bonds of trust and solidarity beyond regime surveillance (Reisinezhad, 2014). Tightly knit networks nurture collective identities and solidarity, provide informal organization and contacts, and supply information otherwise unavailable to individuals (Pfaff, 1996). For future studies, the cultural role of these networks should be underscored. Social networks are spaces for meaning production. The ultimate outcome of such a covertly cultural process is the quotidian production of alternative meaning which nourishes the Iranian young.
Signatures, sums of hermitian squares and positive cones on algebras with involution. We provide a coherent picture of our efforts thus far in extending real algebra and its links to the theory of quadratic forms over ordered fields in the noncommutative direction, using hermitian forms and "ordered" algebras with involution. In previous work (and also [23]) we extended these results in the noncommutative direction, more precisely to central simple F-algebras with involution and hermitian forms over such algebras. The study of central simple algebras with involution was initiated by Albert in the 1930s [1] and is still a topic of current research, as testified by The Book of Involutions [19]; see also [10] and the copious references therein for a list of open problems in this area. A large part of present-day research in algebras with involution is driven by the deep connections with linear algebraic groups, first observed by Weil [35]; see also Tignol's 2ECM exposition [34]. Some work has been done on algebras with involution over formally real fields, for example [22], [30], but this part of the theory is relatively underdeveloped. This observation, together with the fact that algebras with involution are a natural generalization of quadratic forms, are motivating factors for our research. This article is an expanded version of the prepublication [8], from the Séminaire de Structures Algébriques Ordonnées, Universities Paris 6 and 7. Signatures. Let (A, σ) be an F-algebra with involution, by which we mean that A is a finite-dimensional simple F-algebra with centre a field K ⊇ F and σ is an F-linear anti-automorphism of A of order 2 (which implies that [K : F] ≤ 2). Let W(A, σ) denote the Witt group of (A, σ), i.e. the W(F)-module of Witt equivalence classes of nondegenerate hermitian forms h : M × M → A, where M is a finitely generated right A-module (cf. [18, Chap. I] or [32, Chap. 7]). We identify hermitian forms with their Witt class in W(A, σ), unless indicated otherwise.
Given an ordering P ∈ X_F we wish to define a signature at P, i.e. a morphism of groups W(A, σ) → Z. Following the approach of [11] we do this by extending scalars to a real closure F_P of F at P and realizing that, by Morita equivalence, the Witt group of any F_P-algebra with involution is isomorphic to either Z, 0 or Z/2Z. In the last two cases, the only sensible definition is to take the signature at P to be identically zero. In this case we call P a nil-ordering and we write Nil[A, σ] for the set of all nil-orderings, noting that it only depends on the Brauer class of A and the type of σ. Furthermore, Nil[A, σ] is clopen in X_F, cf. [4, Corollary 6.5]. In the first case, the Witt group after scalar extension is isomorphic to the Witt group of forms over F_P, over F_P(\sqrt{-1}) or over the quaternions (-1,-1)_{F_P}, where − denotes (quaternion) conjugation, each one in turn being isomorphic to Z via the usual Sylvester signature of quadratic or hermitian forms. The composite map s_P thus sends a hermitian form over (A, σ) to an integer, but only up to a choice of sign. At first sight, one way to fix the sign would be to demand that s_P(⟨1⟩_σ) is positive, as is the case for quadratic forms. This is the approach taken in [11], but it may not always work, since it may happen that s_P(⟨1⟩_σ) is in fact 0, as illustrated in [4, Rem. 3.11 and Ex. 3.12]. Our solution to this dilemma is to show that there exists a hermitian form η over (A, σ), called a reference form, such that s_P(η) is always nonzero whenever P ∈ X̃_F := X_F \ Nil[A, σ], cf. [5, Prop. 3.2]. Using this, given P ∈ X̃_F, we define the signature at P with respect to the reference form η, sign^η_P : W(A, σ) → Z, to be the map s_P, multiplied by −1 in case s_P(η) < 0, so that sign^η_P(η) > 0. The map sign^η_P does not depend on the Morita equivalence used in its computation and so we may use the explicit Morita equivalence presented in [24] in all practical situations. Remark 2.1. In case (A, σ) = (F, id_F), we may take η = ⟨1⟩ and sign^η_P is then the usual Sylvester signature sign_P of quadratic forms. Remark 2.2.
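For orientation, in the split case of Remark 2.1 the Sylvester signature of a diagonal quadratic form is the familiar count (a standard fact, recalled here for the reader's convenience):

```latex
\operatorname{sign}_P \langle a_1, \dots, a_n \rangle
  \;=\; \#\{\, i : a_i \in P \,\} \;-\; \#\{\, i : -a_i \in P \,\},
\qquad a_1, \dots, a_n \in F^{\times}.
```

The constructions above extend exactly this invariant to hermitian forms over (A, σ), once the sign ambiguity is resolved by a reference form η.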
The signature map is defined for all hermitian forms over (A, σ), not just the nondegenerate ones as the notation above (which makes use of W(A, σ)) might suggest. It suffices to replace a form by its nondegenerate part (cf. [5, §3]). In fact, this is the approach used in [4]. We collect some immediate properties of the signature map: (1) Let h be a hyperbolic form over (A, σ); then sign^η_P h = 0. (2) Let h_1, h_2 ∈ W(A, σ); then sign^η_P(h_1 ⊥ h_2) = sign^η_P h_1 + sign^η_P h_2. (3) Let h ∈ W(A, σ) and q ∈ W(F); then sign^η_P(q · h) = sign_P q · sign^η_P h. (4) (Going-up) Let h ∈ W(A, σ) and let L/F be an algebraic extension of ordered fields; then the signature is compatible with scalar extension to L. Property (4) is complemented by the following going-down result ([4, Thm 8.1]): Let L/F be a finite extension of ordered fields and assume P ∈ X_F extends to L. Let h ∈ W(A ⊗_F L, σ ⊗ id). Then the signature at P of the Scharlau transfer Tr*_{A⊗_F L} h, induced by an A-linear homomorphism, is given by a sum over the orderings of L extending P, for all h ∈ W(A, σ) and all P ∈ X_F. For every f ∈ C(X_F, Z)^{[A,σ]} there exists n ∈ N such that 2^n f ∈ Im sign^η. In other words, the cokernel of sign^η is a 2-primary torsion group. The stability index of (A, σ) is the smallest k ∈ N such that 2^k C(X_F, Z)^{[A,σ]} ⊆ Im sign^η if such a k exists, and ∞ otherwise. It is independent of the choice of η. The group coker sign^η is, up to isomorphism, independent of the choice of η. We denote it by S^η(A, σ) and call it the stability group of (A, σ). Ideals and morphisms. Let R be a commutative ring and let M be an R-module. We introduce ideals of R-modules as follows: An ideal of M is a pair (I, N) where I is an ideal of R and N is a submodule of M such that I · M ⊆ N. An ideal (I, N) of M is prime if I is a prime ideal of R (we assume that all prime ideals are proper), N is a proper submodule of M, and for every r ∈ R and m ∈ M, r · m ∈ N implies that r ∈ I or m ∈ N.
These definitions are in part motivated by the following natural example: the pair (ker sign_P, ker sign^η_P) is a prime ideal of the W(F)-module W(A, σ) whenever P ∈ X̃_F. We obtain a classification à la Harrison and Lorenz-Leicht [25]: (1) If 2 ∉ I, then one of the following holds: (i) there exists P ∈ X̃_F such that (I, N) = (ker sign_P, ker sign^η_P); (ii) there exist P ∈ X̃_F and a prime p > 2 such that (I, N) = (ker(π_p ∘ sign_P), ker(π ∘ sign^η_P)), where π_p : Z → Z/pZ and π : Im sign^η_P → Im sign^η_P/(p · Im sign^η_P) are the canonical projections. Remark 3.2. When 2 ∉ I, N is completely determined by I. This is however not the case when 2 ∈ I, cf. [5, Ex. 6.8]. The pair (sign_P, sign^η_P) is again a natural example of a (W(F), Z)-morphism from W(A, σ) to Z and is trivial if and only if P ∈ Nil[A, σ]. The classification of prime ideals of W(A, σ) yields a corresponding description of signatures as morphisms. Sums of hermitian squares. In the field case, Pfister's local-global principle can be used to give a short proof of the fact that sums of squares are exactly the elements that are nonnegative at every ordering. In [7] we showed that the same approach directly yields a similar result for F-division algebras with involution and, with some extra effort, for all F-algebras with involution. Let A^× denote the set of invertible elements of A, Sym(A, σ) the set of σ-symmetric elements of A and Sym(A, σ)^× := Sym(A, σ) ∩ A^×. We say that an element a ∈ Sym(A, σ) is η-maximal at an ordering P ∈ X_F if sign^η_P ⟨a⟩_σ is maximal among all sign^η_P ⟨b⟩_σ for b ∈ Sym(A, σ). In the field case, this means sign_P a = 1, in other words a ∈ P \ {0}. For elements b_1, . . . , b_t ∈ F^× we denote the Harrison set {P ∈ X_F | b_1, . . . , b_t ∈ P} by H(b_1, . . . , b_t). Let Y ⊆ H(b_1, . . . , b_t) and assume that a ∈ Sym(A, σ)^× is η-maximal at all P ∈ Y. Let u ∈ Sym(A, σ).
The following statements are equivalent: The presence of the element a as well as the hypothesis on η-maximality correspond, in the field case, to the fact that 1 belongs to every ordering. Here 1 does not play a particular role, since it may not have maximal signature at some orderings. We replace it by the element a and only consider a set of orderings Y on which a has maximal signature. The general answer to this question is negative, as shown in [17], but we can now describe cases where the answer is positive, and also propose a natural reformulation (inspired by signatures of hermitian forms) of the question that has a positive answer. Positive cones. The results presented thus far suggest that there could be a notion of "ordering" on central simple algebras with involution, whose behaviour would be similar to that of orderings on fields. The purpose of this final section is to present such a notion. (P2) P + P ⊆ P; (P3) σ(a) · P · a ⊆ P for every a ∈ A; (P4) P_F := {u ∈ F | uP ⊆ P} is an ordering on F; (P5) P ∩ −P = {0} (we say that P is proper). We say that a prepositive cone P is over P ∈ X_F if P_F = P. Remark 5.2. Axiom (P4) is necessary if we want our prepositive cones to consist of either positive semidefinite (PSD) matrices with respect to P, or of negative semidefinite (NSD) matrices with respect to P, in the case of (M_n(F), t), see [6, Rem. 3.13]. If P is a prepositive cone, then −P is also a prepositive cone. This is due to the fact that prepositive cones are meant to contain elements of maximal signature, and the sign of the signature can vary with a change of the reference form. It can be shown that there is a prepositive cone over P ∈ X_F on (A, σ) if and only if P ∈ X̃_F, cf. [6, Prop. 6.6]. (2) The set of PSD matrices and the set of NSD matrices with respect to some P ∈ X_F are both prepositive cones over P on (M_n(F), t). Remark 5.4.
Other notions of orderings have been introduced for division rings with involution, most notably Baer orderings, *-orderings and their variants, and an extensive theory has been developed around them. Craven's surveys [13] and [14] provide more information on these topics. Without going into the details, the main difference in the definitions is that positive cones were developed to correspond to a pre-existing algebraic notion, namely signatures of hermitian forms (e.g. axiom (P4) reflects the fact that the signature is a morphism of modules, cf. Proposition 2.4(3); see also the sentence after Theorem 5.8), and as a consequence are not required to induce total orderings on the set of symmetric elements. We obtain the desired results linking prepositive cones and W(A, σ), where P is a prepositive cone on (M_n(D), ϑ^t). We use prepositive cones to consider the question of the existence of positive involutions: Theorem 5.7 ([6, Thm. 6.8]). Let P ∈ X_F. The following statements are equivalent: (i) there is an involution τ on A which is positive at P and of the same type as σ; The notion of prepositive cone can be seen as somewhat equivalent to that of preordering, or Prestel's pre-semiordering [27], [28], so it is natural to consider in more detail the maximal prepositive cones, which we simply call positive cones. They can be completely described and match the examples provided above. To see this we define, for P ∈ X_F and S ⊆ Sym(A, σ), the prepositive cone C_P(S) (the smallest, possibly nonproper, prepositive cone over P containing S), and we denote by X_{(A,σ)} the set of all positive cones on (A, σ). In particular, the only positive cones over P on (D, ϑ) are M^η_P(D, ϑ) and −M^η_P(D, ϑ), and therefore the examples above are essentially the only positive cones on (A, σ), cf. [6, Props. 4.3 and 4.9]. It follows that the PSD matrices over P and the NSD matrices over P are the only positive cones over P on (M_n(F), t). (See also Proposition 5.6.)
Using this description, it is possible to make the link with the results presented in Section 4, and to obtain results similar to the Artin-Schreier and Artin theorems. The second statement is a trivial consequence of the first one, but it is still included here to point out that while the element a in it obviously belongs to a prepositive cone (namely C_P(a)), the element b in the third statement may not belong to any prepositive cone on (A, σ), contrary to what could be expected from the field case (see [6, Rem. 7.10]). Let a ∈ Sym(A, σ)^× be such that, for every P ∈ X_{(A,σ)} with P_F ∈ Y, a ∈ P ∪ −P. As a consequence of our study of positive involutions, given Q ∈ X_F, there always exist a and Y that satisfy the hypothesis of Theorem 5.10 with Q ∈ Y, cf. Remark 4.2. The element a in this theorem plays the same role as the element a in Theorem 4.1, and chooses a prepositive cone from {P, −P} in a uniform way. This is not necessary in the field case, because 1 belongs to every ordering. In the special case where a = 1 can be used for this purpose, we obtain a result more similar to the usual one: Corollary 5.11 ([6, Cor. 7.15]). Assume that for every P ∈ X_{(A,σ)}, 1 ∈ P ∪ −P. The hypothesis of Corollary 5.11 is exactly X_σ = X_F in the terminology of Section 4. More precisely, as seen therein, this property characterizes the algebras with involution for which there is a positive answer to (PS'), cf. [7, Section 4.2]. A natural question is to ask whether signatures of hermitian forms over (A, σ) can now also be defined with respect to positive cones on (A, σ). As shown in [6, §8.2], this can indeed be done using decompositions of hermitian forms, reminiscent of Sylvester's decomposition for quadratic forms: Theorem 5.12 ([6, Cor. 8.14, Lemma 8.15]). There exists an integer t, depending on (A, σ), such that for every P ∈ X_{(A,σ)} and for every h ∈ W(A, σ) there exist u_1, . . . , u_t ∈ P := P_F, a_1, . . . , a_r ∈ P ∩ A^× and b_1, . . .
, b_s ∈ −P ∩ A^× such that n_P^2 × ⟨u_1, . . . , u_t⟩ ⊗ h ≃ ⟨a_1, . . . , a_r⟩_σ ⊥ ⟨b_1, . . . , b_s⟩_σ, where n_P is the matrix size of A ⊗_F F_P, and r and s are positive integers, uniquely determined by P and the rank of h. Proposition 5.13 ([6, Prop. 9.2]). The topologies T_σ and T^×_σ are equal. Recall that spectral topologies, defined in [16], are precisely the topologies of the spectra of commutative rings, and that a map between spectral spaces (i.e. spaces equipped with spectral topologies) is called spectral if it is continuous and the preimage of a quasicompact open set is quasicompact.
Determining the Opinions and Expectations of the Local Farmers about the Potential Impacts of the Kocaeli Kandira Food Specialized Organized Industrial Zone, Turkey

This research analyzed the potential impacts of the establishment of Food Specialized Organized Industrial Zones on the region and its neighborhood. The villages of Kocakaymaz and Goncaaydin in the Kandira district of Kocaeli province were included in the research, and the data obtained from 131 farmers through a survey were used. In line with the data, the farmers were grouped as those whose land in the Kandira Food Specialized Organized Industrial Zone (FSOIZ) was expropriated (directly affected) and those whose land was not expropriated (indirectly affected), and then they were evaluated. For this purpose, both the socio-economic characteristics of the farmers included in the research and their opinions and expectations about the probable impacts of the establishment of the Kandira FSOIZ on farmers' income, product pattern, raw material procurement, product marketing, employment and neighborhood were presented. The obtained results showed that 71.76% of the farmers were planning to engage in livestock production to meet the raw material needs of the FSOIZ, and 90.07% desired their family members to work in the Kandira FSOIZ (desirable and very desirable), which may prevent the young population in the establishments from leaving their village and have a positive impact on agricultural employment.

Introduction

Agricultural production makes a great contribution to the independent existence of countries. Likewise, in our age, industrialization has great importance in reaching a higher level of welfare. The efforts of the world's most developed countries to obtain not only millions of tons of agricultural products but also more industrial products using more advanced technology emphasize the growing share of industry in the economies of our century.
Today, one of the major problems of developing countries is to succeed in the race for development. To this end, developing countries follow the example of developed countries and strive to become industrialized. The establishment of such zones in certain localities of the country, and the resulting development of both the economy and the industry of those localities, allows for interregional balance and organized industrial settlement (Berberoglu, 1984). Established for creating more organized industrial areas, ensuring the equal distribution of economic development across regions, contributing to the development of Small and Medium-Sized Enterprises (SMEs) and minimizing the environmental problems caused by industrialization, Organized Industrial Zones (OIZs) are a kind of industrialization method (Alacadagli, 2004; Cevik & Alperen, 2009). OIZs are spatial incentive instruments used for directing private-sector industrial investments to certain localities, supporting and promoting existing investments, and meeting the land requirements of developing industries. In general terms, OIZs are production and settlement units where technical and general services are provided in a suitable area equipped with transportation, water, electricity, sewerage, banks, canteens, telephone, internet, natural gas and first aid (Cam & Esengun, 2011). Organized Industrial Zones are among the best examples of specific development in our country. While many development models of foreign origin fail, OIZs, which are in harmony with our country's internal characteristics, structure and social characteristics, succeed in financial terms by creating economic clustering.
If the benefits of specialization and the condition of the food industry in our country are considered together, it is seen that the most important area to contribute to today's development thrust is the establishment of Food Specialized Organized Industrial Zones (FSOIZ) (Eroglu, 2011), which suggests how important it is to examine the probable impacts of establishing such zones. The Kandira Food Specialized Organized Industrial Zone is the first and leading food specialized OIZ of our country. Although the food industry is a traditional branch of industry in our country, no specialized OIZ had previously been established in this sector. Besides, the Kandira region is an agricultural zone of Kocaeli province. The region meets agricultural needs as much as possible with its arable land of approximately 55,000 hectares, grain production of 57,750 tons, livestock potential of 20,000 cattle, and 261 poultry houses. The Kandira district of Kocaeli province, i.e. the research area, has an important place in the agricultural production of the Marmara Region. The Kandira Food Specialized Organized Industrial Zone is expected to contribute greatly to the development of the agricultural industry, from the logistics of marketing agricultural products onward. Thus, revealing the opinions and expectations of the local agricultural producers about its impacts on the region may set an example for other food specialized organized industrial zones planned to be established. For this reason, this research is of great importance in terms of assessing the probable impacts of the establishment of the Kocaeli-Kandira Food Specialized Organized Industrial Zone for the producers, identifying the problems, and determining the solutions. The purpose of this research is to present the farmers' opinions and expectations about the probable impacts of the establishment of the Kandira FSOIZ on product pattern, raw material procurement, product marketing, employment and neighborhood.
In the research, first the farmers' socio-economic status was examined; then the producer data were analyzed, the local farmers' opinions and expectations about the probable impacts of the Food Specialized Organized Industrial Zone were determined, and some suggestions were made.

Materials and methods

The main material of the research was the primary data obtained through a survey of the farmers in the villages of Kocakaymaz and Goncaaydin in the Kandira district of Kocaeli province. The research was based on data related to the production period of 2013; the surveys, however, were conducted in 2014. In the examination of the farmers in Kocakaymaz and Goncaaydin, where the Kandira FSOIZ is located, they were grouped as farmers affected directly (whose land was expropriated) and farmers affected indirectly (whose land was not expropriated). According to the Kandira FSOIZ records, 127 farmers were affected by the FSOIZ (120 farmers from Kocakaymaz and 7 farmers from Goncaaydin) and 81 farmers were not affected by the FSOIZ (22 farmers from Kocakaymaz and 59 farmers from Goncaaydin) (Kandira FSOIZ Directorate, 2011). In the scope of the research, face-to-face interviews were conducted and a survey was carried out, using the complete count method, with a total of 131 farmers who lived in the villages and agreed to be interviewed (47 farmers from Goncaaydin and 84 farmers from Kocakaymaz). In this way, 81 farmers affected by the Kandira FSOIZ as a result of expropriation and 50 farmers not affected by it were included in the research (Table 1). In the analysis of the data obtained, first the socio-economic characteristics of the farmers were identified. At that stage, the age and educational level, number of family members, availability and use of labor force, availability and use of land, and capital of the farmers were analysed.
In addition, survey data were evaluated in terms of the possible impacts of the establishment of the Kandira FSOIZ on local farmers' production, branch selection, employment, land market and neighborhood, as well as in terms of its support to the local farmers. The then-current number of animals owned by the establishments subject to examination was calculated in Cattle Units (CU). The coefficient was taken as 1.00 for cows and heifers and 0.10 for sheep and goats (Acil, 1980). In the research, among the attitude and behavior scales, the 5-point Likert Scale was used to find out the farmers' opinions about the goals of the FSOIZ and to determine their viewpoints and attitudes about the socio-economic effects that the FSOIZ would have. In the Likert Scale, the intensity of attitude was scaled in such a way as to increase positively from 1 to 5.

For the purpose of determining which factors (and to what extent) affected the farmers' choice of growing/not growing crops for the Kandira FSOIZ, one of the logistic regression analysis methods, binary logistic regression analysis, was applied. In logistic regression, the dependent variable is discrete and the predicted probability values range from 0 to 1. The logistic regression model based on the cumulative logistic probability function is denoted as follows (Gujarati, 1998; Equation 1):

P_i = F(z_i) = 1 / (1 + e^(−z_i)), z_i = α + βX_i (1)

where P_i = the probability of selecting a certain option by the ith individual, F = the cumulative (logistic) probability function, z = α + βX_i, α = constant, β = the parameter to be predicted for each explanatory (independent) variable, and X_i = the ith independent variable.

It was statistically tested whether or not there were between-groups differences in the research. For continuous variables, first a normal distribution test was made using the Kolmogorov-Smirnov test, and the variables with and without normal distribution were identified. A variance analysis (one-way ANOVA) was performed for the variables with normal distribution (Ozdamar, 2004).

Farmers' socio-economic characteristics

Information about the socio-economic characteristics of the farmers is given in Table 2. According to the Table, the farmers whose land was affected were 58.89 years old, had studied for 4.81 years, and had 40.63 years of experience in agriculture, a household size of 3.86, a family labor force potential utilization rate of 26.27%, 6.37 hectares of land, and an equity rate of 88.89%, on average. On the other hand, the farmers whose land was not affected were 59.90 years old, had studied for 4.36 years, and had 44.44 years of experience in agriculture, a household size of 3.17, 5.50 hectares of land, and an equity rate of 91.54%, on average. In addition, 62.5% of the farmers were aged 55 and over. In terms of land ownership, 48.66% had their own land, 17.14% were tenants, and 34.20% had joint ownership over their land.

Probable impacts on the selection of branch of production

The examination of the relationship between whether or not the farmers' land was affected and the crops they produced showed that the farmers whose land was affected mostly produced wheat and barley, respectively, and the farmers whose land was not affected mostly produced oat and wheat, respectively. In general, wheat was the most grown crop with 1.54 hectares on average. Other field crops grown by the farmers included hazelnut, walnut, eggplant, tomato, pepper, and lettuce. In the establishments subject to examination, the average number of animals per farmer was 5.41 for cattle and 0.96 for sheep and goats. The highest number of cattle per farmer (6.48) was encountered among the farmers whose land was not affected. Upon the examination of the impact of the establishment of the Kandira FSOIZ on the selection of production branch, it was found that, in general, the farmers planned to grow the products needed by the FSOIZ because the FSOIZ would allow for the free circulation of agricultural products, develop packaging, storage and marketing methods, and boost exports. As a result of the variance analysis conducted to examine the relationship between the number of animals owned by the farmers and planning to engage in livestock production for the Kandira FSOIZ, the relationship was found to be statistically significant (p = 0.000) (Table 3). The farmers with a higher number of animals planned to engage in livestock production to meet the needs of the Kandira FSOIZ. They stated that they planned to do so in the belief that the Kandira FSOIZ would enable them to sell their products more easily and regularly.

For the purpose of determining which factors (and to what extent) affected the farmers' choice of growing/not growing crops for the Kandira FSOIZ, the Binary Logistic Regression Model was applied as the logistic regression method. The variables that can affect the choice of growing/not growing crops for the Kandira FSOIZ and their categories and descriptions, as well as descriptive statistics, are given in Table 4. In the logistic regression analysis, an answer was sought to the questions "Is there any difference in terms of the impact on growing crops to meet the regional need when Kandira FSOIZ is established?" and "If yes, what is the rate?". To this end, a dummy variable representing growing/not growing crops for the Kandira FSOIZ was used as the dependent variable. The variables indicating land size, age, educational level, number of animals owned, non-agricultural income and number of family members were included in the analysis as the factors affecting the dependent variable. The prediction model developed in relation to these variables is given in Table 5. In the predicted model, the probability of growing crops for the Kandira FSOIZ for the farmers with a land size between 2.5 and 5 hectares is 1.32 times higher than for the farmers with a land size of less than 2.5 hectares. In terms of the number of animals owned by the farmers, the probability of growing crops for the Kandira FSOIZ for the farmers having less than 5 CUs is 5.58 times higher than for those having 5-15 CUs. It is seen that the farmers in need of agricultural income are more willing to grow crops. In terms of the number of family members, the probability of growing crops for the Kandira FSOIZ for the families with 5-7 members is 8.05 times higher than for those with 7 or more members. Since the farmers that engage in plant production form the majority in growing crops to meet the needs of the Kandira FSOIZ, it is possible to say that land size has a significant impact. It is also possible to think that, urged by necessity, the farmers with a lower income level believe that, by growing products for the Kandira FSOIZ, they will not have difficulty in marketing their products.

Probable impacts on employment

The establishment of the Kandira FSOIZ is expected to result in the creation of considerable employment opportunities in the locality. It was found that the companies in the region would need employees to work especially in production and marketing. As a result of the variance analysis conducted to examine the relationship between the number of family members of the farmers and their desire for the employment of family members in the Kandira FSOIZ, the relationship was found to be statistically significant (p = 0.014) (Table 6). The majority of the farmers with a higher number of family members leaned towards the employment of their family members in the Kandira FSOIZ. The farmers stated that the establishment of the Kandira FSOIZ would offer a job opportunity to many people and contribute to the regional economy.

Probable impacts on marketing

According to the findings, the farmers in the research region do not perform planned production and the regional capacity is not efficiently used in terms of agricultural area. 60.97% of the companies to be located in the FSOIZ plan to procure most of the raw materials they need from the producers in the region. The companies that will carry out activities based on animal production in general plan to perform mostly contract production. The establishment of an organized industrial zone that is specialized in food and processes agricultural products is expected to provide an advantage in the stabilization of agricultural product prices, which change year by year and thus sometimes become disadvantageous to the farmer. As is known, commodity exchanges allowing for transparent price formation for agricultural products have not become widespread in our country. The farmer does not know the sales price of his/her product at harvest time and cannot plan his/her future. If he/she does not have a place to store his/her product and has a perishable product, he/she sells it at the price set by the broker following the harvest and incurs a considerable revenue loss. The Kandira FSOIZ is expected to provide a great opportunity to the farmer in avoiding such revenue losses. The companies to be located in the FSOIZ will execute a contract with the farmers. Thus, the companies will guarantee their supply and the farmers will foresee the amount of revenue they will earn. That the companies to be located in the FSOIZ plan to perform contract production will make a significant contribution to the increase of agricultural production in the region. It is known that, in the neighborhood of Kandira, there are people who have completed their education but are unemployed. The Kandira FSOIZ will create job opportunities for those unemployed people, support employment and prevent young people from emigrating from the region. In the region, some of the farmers conduct production activities without adequate technical information. The companies to be located in the FSOIZ will provide technical support to the farmers, leading to more conscious agriculture in the region.

General opinions and expectations

The answers given to the questions intended to find out the farmers' opinions, attitudes and expectations about the Kandira FSOIZ were assessed. The farmers' agreement with the goals of the FSOIZ was examined, and their opinions about the subject matter were presented. The frequencies, percentages and average points of their agreement with the statements are given in Table 7. Accordingly, of the farmers who "strongly agreed" with the goals of the Kandira FSOIZ, 22.90% agreed to "Develop packaging, storage and marketing methods", 19.85% agreed to "Ensure the free circulation of agricultural products", and 13.74% agreed to "Boost the export through synergy". The average point of their agreement with the goals of the FSOIZ was 3.77 for "Develop packaging, storage and marketing methods" and 3.66 for "Ensure that the industry has common infrastructure and social facilities". That most of the average points are above 3 indicates that the farmers agreed with the founding purpose of the Kandira FSOIZ. Most of the producers think positively especially about the marketing of their products, because the establishment of an organized industrial zone will enable them to sell their products without need for a broker.

Discussion and conclusion

Agricultural lands are the primary natural economic assets in the Kandira district of Kocaeli province. Most of them are suitable for marginal agriculture. However, given the fact that the majority of them are not used in agricultural production, this OIZ project, which the investor companies try to realize in the Kandira district using their own financial resources, will create an opportunity for the district. Being a food industry-based initiative in the Eastern Marmara Region, the Kandira FSOIZ will serve as a model for our country. As the food industry develops, the agricultural sector will revive and the farmers' revenues will experience a considerable increase. As is known, young people are unwilling to engage in farming. The main reason is that they think agricultural production has a low profit. However, with the development of the food industry and the increase in the need for agricultural raw materials, the scale of agricultural production will get larger, resulting in an increase in agricultural revenues. In return, increasing agricultural revenues will make young people more willing to engage in agricultural production. The establishment of an organized industrial zone that is specialized in food and processes agricultural products is expected to provide an advantage in the stabilization of agricultural product prices, which change year by year and thus sometimes become disadvantageous to the farmer. The companies to be located in the FSOIZ will execute a contract with the farmers. Thus, the companies will guarantee their supply and the farmers will foresee the amount of revenue they will earn. According to the findings, the farmers in the research region do not perform planned production and the regional capacity is not efficiently used in terms of agricultural area. The companies to be located in the FSOIZ plan to procure most of the raw materials they need from the producers in the region, leading to an increase in contract production. It is known that there are people in the neighborhood of Kandira who have completed their education but are unemployed. The Kandira FSOIZ will create job opportunities for those unemployed people and prevent them from emigrating from the region. In the region, some of the farmers conduct production activities without adequate technical information. The companies to be located in the FSOIZ will provide technical support to the producers, leading to more conscious agriculture in the region. In conclusion, today, the world's population grows rapidly. The use of existing resources in the most efficient way before offering them to consumers will allow our country to reach a position with a high competitive potential in world trade. In consideration of the fact that the organized industrial zones to be established will enable the products of the farmers living in the rural areas of the Marmara Region, especially in Kocaeli and nearby cities, and engaging in agriculture and stockbreeding, to be sold at fair prices under today's conditions and to attain competitive power in world trade, the realization of the Kandira FSOIZ project as soon as possible will have a significant impact in terms of the agriculture and stockbreeding sector.
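As an illustration of the binary logistic model described in the methods above, the cumulative logistic probability and the odds-ratio reading of its coefficients can be sketched as follows. This is a minimal sketch with made-up coefficient values, not the parameters actually estimated in the study's tables:

```python
import math

def logistic_probability(alpha, betas, xs):
    """Cumulative logistic probability: P_i = 1 / (1 + e^(-z)), z = alpha + sum(beta * x)."""
    z = alpha + sum(b * x for b, x in zip(betas, xs))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for illustration only (not the study's estimates).
alpha = -0.5
betas = [0.28]  # coefficient of a single dummy variable, e.g. membership in a land-size group

p_with = logistic_probability(alpha, betas, [1])     # farmer in the group
p_without = logistic_probability(alpha, betas, [0])  # reference group

# In binary logistic regression, exp(beta) is the odds ratio between the two groups,
# the "X times higher" figure reported from such models.
odds_ratio = math.exp(betas[0])
odds_with = p_with / (1 - p_with)
odds_without = p_without / (1 - p_without)
assert abs(odds_with / odds_without - odds_ratio) < 1e-9

print(round(odds_ratio, 2))  # 1.32 for beta = 0.28
```

Reported figures such as "1.32 times higher" are odds ratios of this kind, obtained by exponentiating the fitted coefficient of the corresponding dummy variable.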
Cholesterol Sensitivity of Endogenous and Myristoylated Akt

The serine-threonine kinase Akt has been linked to cholesterol-sensitive signaling mechanisms, suggesting a possible means whereby cholesterol might affect tumor cell growth and survival. However, it has not been shown whether Akt itself, as distinct from upstream components of the pathway (e.g., membrane phosphoinositides), can be directly responsible for cholesterol-mediated effects. Consistent with this possibility, we identified an Akt1 subpopulation in cholesterol-rich lipid raft fractions prepared from LNCaP human prostate cancer cells. Phosphorylation of this Akt subspecies was ablated with methyl-β-cyclodextrin, a cholesterol-binding compound, under conditions where non-lipid-raft-resident Akt was unaffected. A myristoylated Akt1 (MyrAkt1) fusion protein expressed in LNCaP cells was found to be highly enriched in lipid rafts, indicating that oncogenic Akt is over-represented in cholesterol-rich membranes compared with wild-type Akt. Notably, lipid raft-resident MyrAkt1 exhibited a markedly distinct substrate preference compared with MyrAkt1 immunoprecipitated from cytosol and nonraft membrane fractions, suggesting a redirection of signal transduction when the protein is present in cholesterol-rich membranes. Expression of MyrAkt1 in LNCaP cells overcame their characteristic dependence on constitutive signaling through the phosphoinositide 3′-kinase pathway. This protective effect was substantially diminished with cyclodextrin treatment. Phosphorylation of Akt substrates in lipid raft fractions, but not in cytosol/nonraft membrane fractions, was ablated with cyclodextrin. In addition, in control (LacZ-transfected) cells, lipid raft fractions were relatively enriched in phosphorylated Akt substrates.
Collectively, these data show that a subpopulation of Akt is cholesterol sensitive and that the oncogenic effects conferred by myristoylation arise, in part, from the tendency of the membrane-targeted form of the protein to reside in cholesterol-rich membrane microdomains.

Introduction

Cholesterol is a critical component of biological membranes. In addition to regulating membrane fluidity, cholesterol is an important constituent of a class of detergent-resistant microdomains, generally referred to as "lipid rafts" (1). The invaginated, vesicular structures known as "caveolae" are a specialized form of lipid raft that contain caveolin proteins. Noncaveolar, "flat" lipid rafts are also believed to exist, based on experimental and theoretical evidence (2). The structural and biophysical properties of lipid rafts result in the retention and exclusion of certain classes of proteins, such that these microdomains can be viewed as "privileged" sites that promote interaction between discrete subsets of signaling intermediates, thereby serving as platforms for signal transduction (reviewed in ref. 3). In cancer cells, lipid rafts/detergent-resistant microdomains may provide an important subcellular microenvironment in which signals are processed that are central to tumor cell growth, resistance to apoptotic signals, and other aggressive characteristics.

Although elevation in circulating cholesterol levels has long been associated with cardiovascular disease, there is now increasing evidence to suggest a link between cholesterol accumulation and the risk of certain malignancies. Several recent epidemiologic studies have described a reduction in incidence of certain cancers in patients taking 3-hydroxy-3-methyl-glutaryl CoA (HMG-CoA) reductase inhibitors ("statins") for cardiovascular indications (reviewed in ref. 4). Statins inhibit the rate-limiting step in cholesterol biosynthesis (conversion of HMG-CoA to mevalonate) and thereby reduce synthesis of cholesterol and its isoprenoid precursors, geranylgeranyl pyrophosphate and farnesyl pyrophosphate. The effect of statin therapy on incidence of solid tumors may vary with organ site. Current evidence supports the hypothesis that prostate cancer may be particularly sensitive to this intervention (5, 6). This may be a reflection of aspects of cholesterol metabolism characteristic of prostate cells and tissues, including high endogenous levels of cholesterol seen in the normal prostate, the abnormal accumulation of cholesterol in prostate tumors, and the sensitivity of prostate cancer cells to cholesterol depletion (7-9).

The Akt/protein kinase B family (Akt1, Akt2, and Akt3) of serine-threonine kinases processes signals in tumor cells that mediate tumor cell proliferation, survival, and migratory behavior (10-12). Akt has also been linked to pathways sensitive to changes in membrane cholesterol. Data from our group and others have shown that constitutive and epidermal growth factor-stimulated Akt activation and cell survival are regulated by cholesterol-sensitive signaling mechanisms in prostate cancer cells (9, 13). Elevation of circulating cholesterol levels in mice promoted growth, kinase activation, and survival signaling in human prostate tumor xenografts (14). In cell culture studies, simvastatin preferentially inhibited phosphorylation at a key regulatory site, Ser473 on Akt1 present in lipid rafts, whereas Akt1 at other locations in the cell was relatively resistant to the effects of the drug (14). These findings suggest that prostate cancer and possibly other tumor cells contain discrete Akt populations that process distinct signals depending on subcellular location. These results also implicate lipid raft microdomains as sites where the signaling effects of cholesterol may influence the regulatory dynamics of Akt, a
critical node in cancer cell signaling. Although evidence has been obtained implicating membrane cholesterol as a direct regulator of signal transduction of relevance to cancer, it has not been shown that Akt itself, as opposed to upstream effectors, such as growth factor receptors or membrane phospholipids, is a direct target for such cholesterol-mediated effects. For example, because statin drugs, in addition to their cholesterol-lowering ability, affect post-translational isoprenylation and activation of proteins such as Rho, Ras, and Rac (4), it is possible that the inhibitory effects of simvastatin on raft-resident Akt, observed previously (14), are not primarily the result of cholesterol synthesis inhibition. In this study, we provide evidence that Akt itself is cholesterol sensitive as a result of the localization of an Akt subpopulation within lipid raft microdomains. Our results also indicate that the raft microenvironment processes distinct Akt-dependent signals.

Materials and Methods

Cell culture and transfections. LNCaP human prostate cancer cells were cultured in RPMI 1640/10% fetal bovine serum (FBS). Human embryonic kidney (HEK) 293 cells were cultured in DMEM/10% FBS. All media were supplemented with penicillin/streptomycin and L-glutamine, and cells were maintained in a humidified atmosphere of 5% CO2 at 37 °C. Cells in 150-mm dishes at ~80% confluence were transfected using Fugene 6 according to the manufacturer's instructions. In selected experiments, LNCaP cells were transduced with viral supernatants of 293FT cells transfected with pLenti6-MyrAkt1 or pLenti6-LacZ, and stable populations were isolated following selection with 2 μg/mL blasticidin.

Preparation of membrane fractions. Lipid raft membrane fractions were isolated using two methods. In the first method, lipid rafts were isolated from LNCaP cells using sucrose gradient ultracentrifugation as described (15). In the second method, a procedure involving successive detergent extraction of cell membranes was used essentially as described (13, 14, 16, 17). In some experiments, the cytosolic fraction was isolated before membrane fractionation. Briefly, cell pellets were resuspended in 50 mmol/L HEPES (pH 7.4), 10 mmol/L NaCl, 1 mmol/L MgCl2, 1 mmol/L EDTA, 1 mmol/L phenylmethylsulfonyl fluoride (PMSF), and 1 mmol/L Na3VO4 and subjected to mechanical disruption with 12 strokes of a Dounce homogenizer (1,800 rpm). Homogenized samples were centrifuged at 14,000 × g for 20 min at 4 °C and the supernatant was removed as the cytosolic fraction. Membrane pellets were washed with buffer A and lysed as described above to extract Triton-soluble and raft membrane fractions. The protein content of fractions was determined using the micro-bicinchoninic acid (BCA) assay (Pierce Chemical Co.).

Cholesterol assay. Cholesterol determinations were done on 300 μL fractions from either membrane preparations or sucrose gradients prepared as described (15). Lipids were solubilized in chloroform, extracted twice through H2O, dried, and analyzed with the Infinity cholesterol determination assay kit (Sigma Chemical).
Immunoprecipitations and Akt kinase assay. Equal amounts of protein from cytosol and nonraft membrane (C+M fraction) or lipid raft fractions were precleared with protein A-Sepharose or protein G-Sepharose beads for 1 h at 4 °C. In selected experiments, membrane fractions were subjected to buffer exchange using BioSpin6 gel filtration columns (Bio-Rad). Antibodies were added to precleared lysates and incubated overnight at 4 °C, before addition of 40 μL protein A or G beads (50% v/v slurry) for a further 2 h at 4 °C. Immunoprecipitates were washed four times with lysis buffer [20 mmol/L Tris-Cl (pH 7.5), 150 mmol/L NaCl, 1 mmol/L EDTA, 1 mmol/L EGTA, 1% Triton X-100, 2.5 mmol/L NaPPi, 1 mmol/L β-glycerophosphate, 1 mmol/L Na3VO4, 1 μg/mL leupeptin, 1 mmol/L PMSF] and resuspended in 2× SDS loading buffer. To assay Akt kinase activity, a nonradioactive assay kit was used (Cell Signaling Technology). Briefly, immunoprecipitates were washed twice with lysis buffer and twice with kinase assay buffer [25 mmol/L Tris-Cl (pH 7.5), 5 mmol/L β-glycerophosphate, 2 mmol/L DTT, 0.1 mmol/L Na3VO4, 10 mmol/L MgCl2] to equilibrate the beads before assay. Beads were resuspended in kinase reaction mix, comprising 40 μL kinase assay buffer, 1 μg GSK3 fusion protein substrate, and 200 μmol/L ATP, and incubated for 30 min at 30 °C. Reactions were terminated by the addition of 20 μL 3× SDS loading buffer supplemented with 150 mmol/L DTT.

For determination of Akt kinase activity against myelin basic protein (MBP) or the histone subunit H2B, Akt immune complexes were incubated with 1 μg substrate and 5 μCi [γ-32P]dATP (3,000 Ci/mmol). Samples were resolved by gel electrophoresis, and gels were fixed with acetic acid and destained before exposure of dried gels to X-ray film to visualize signal. To quantitate incorporation of 32P, gel slices were excised and radioactivity was determined by scintillation counting. Kinase activity was also assessed using Crosstide as substrate. Briefly, immune complexes were resuspended in kinase buffer [25 mmol/L HEPES (pH 7.4), 10 mmol/L MgCl2, 1 mmol/L DTT] containing 5 μCi [γ-32P]dATP and 1 μg Crosstide (GRPRTSSFAEG) in a volume of 30 μL and incubated at 30 °C for 20 min. To terminate the reactions, 25 μL aliquots were spotted onto p81 phosphocellulose paper and the incorporated radioactivity was determined essentially as described (18).

Preparation of whole-cell lysates and immunoblot analysis. Cells were washed twice in ice-cold PBS and lysed in a minimum volume of 1× cell lysis buffer (Cell Signaling Technology) supplemented with 60 mmol/L octylglucoside and 1 mmol/L PMSF. Protein content was determined using the Micro-BCA protein assay reagent. Cell extracts (10 μg/lane) and immunoprecipitates were resolved by 12% SDS-PAGE and electrotransferred to nitrocellulose membranes. Following transfer, membranes were stained with Ponceau S to confirm equal protein loading, where appropriate. Membranes were blocked with PBS/0.1% Tween 20/5% IgG-free bovine serum albumin (BSA) and incubated with antibodies overnight at 4 °C. Following incubation with species-specific horseradish peroxidase-conjugated secondary antibodies, signals were detected using SuperSignal chemiluminescent reagent (Pierce Chemical) and exposure of blots to X-ray film. In selected experiments, densitometric analysis of bands was done using the public domain NIH Image (version 1.63) program. Relative Akt kinase
activity was determined by dividing the signal for p-GSK3α/β by the signal for total Akt within the same fraction. A value of 100% was assigned to the normalized kinase activity (p-GSK3 signal divided by Akt signal) in the cytosolic/Triton-soluble membrane (C+M) fraction following treatment (e.g., pervanadate) and all other values were expressed relative to this.

Results

Targeting membrane cholesterol with cholesterol-binding compounds, or by inhibiting cholesterol synthesis endogenously, was shown to attenuate phosphorylation of Akt1 at Ser473 in androgen-sensitive LNCaP prostate cancer cells (13, 14). The following experiments were conducted to determine whether cholesterol regulates Akt1 directly. We initially assessed the extent to which Akt1 is present in cholesterol-rich (lipid raft) membrane fractions compared with nonraft subcellular compartments. We used two complementary methods to fractionate cells into lipid raft-enriched and nonraft components. First, LNCaP cells homogenized by mechanical disruption in Triton X-100-containing buffer were subjected to sucrose density gradient ultracentrifugation. In the second method, cells were fractionated by successive detergent extraction as described previously (13, 14, 16, 17) to isolate Triton-soluble cytosolic and nonraft membrane components (C+M) and Triton-insoluble, octylglucoside-soluble raft fractions. Equal amounts of gradient, C+M, or raft fractions were resolved by SDS-PAGE and blotted with antibodies to Akt. The fidelity of fractionation was confirmed by blotting of fractions with antibodies to β-tubulin or Giα2 as markers of nonraft membranes/cytosol and lipid rafts, respectively.

As shown in Fig. 1, the patterns of Akt distribution in cells analyzed by density gradient (Fig. 1A) or by differential solubility in nonionic detergent (Fig. 1B) were similar. The majority of Akt was present in higher density fractions in LNCaP cells (Fig. 1A) that correspond to the cytosol + nonraft membrane (C+M) fraction (Fig.
1B), with only a small proportion of total Akt present in rafts (Fig. 1A, fraction 4; Fig. 1B). Consistent with the PTEN-null status of LNCaP cells, Akt in both C+M and raft fractions was phosphorylated on Thr308 and Ser473, with the extent of phosphorylation commensurate with the level of Akt. To confirm the cholesterol sensitivity of lipid raft-resident Akt, LNCaP cells were treated with the cholesterol-binding agent, methyl-β-cyclodextrin, and fractionated into C+M and raft components. Akt was immunoprecipitated from each fraction and blotted with antibodies to total and S473-P Akt. Cyclodextrin treatment did not appreciably alter the amount or extent of phosphorylation of Akt isolated from the C+M fraction. In contrast, cyclodextrin ablated phosphorylation of raft-resident Akt (Fig. 1C). We also assessed the subcellular localization of Akt by immunofluorescence imaging of LNCaP cells treated with pervanadate (0.5 mmol/L), the most potent known Akt activator (19, 20). Under basal conditions, S473-P Akt was located predominantly in the cytoplasm (Fig. 1D, i-iv). However, following pervanadate treatment of LNCaP cells, there was a marked translocation of P-Akt to the plasma membrane (Fig. 1D, v-viii). Notably, a small amount of this Akt cohort was found to colocalize with the lipid raft marker ganglioside GM1, as visualized by CTxB staining, following pervanadate treatment (Fig. 1D, ix), consistent with the existence of a subpopulation of raft-resident Akt.
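The relative kinase activity normalization described in Materials and Methods (p-GSK3 signal divided by total Akt signal, expressed as a percentage of the C+M reference ratio) is simple arithmetic; a minimal sketch, using hypothetical densitometry values rather than measured data:

```python
def relative_akt_activity(p_gsk3_signal, total_akt_signal, reference_ratio):
    """Normalized kinase activity: the (p-GSK3 / total Akt) ratio of a
    fraction, expressed as a percentage of the C+M reference ratio."""
    return 100.0 * (p_gsk3_signal / total_akt_signal) / reference_ratio

# Hypothetical densitometry values (arbitrary units), for illustration only:
cm_ratio = 1250.0 / 500.0  # C+M fraction: p-GSK3 signal / total Akt signal
raft_activity = relative_akt_activity(30.0, 150.0, cm_ratio)  # raft fraction
print(raft_activity)  # 8.0 -- i.e., raft-resident Akt at ~8% of C+M activity
```

Because the ratio is normalized within each fraction, differences in the amount of Akt protein loaded do not by themselves change the reported activity.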
To determine whether raft-resident Akt is active as a kinase, we compared Akt kinase activity between C+M and raft fractions of LNCaP cells. Cells were serum depleted for 48 h before harvesting because this has been reported previously to increase the activity of both phosphoinositide 3′-kinase (PI3K) and Akt in this cell line (21). Equivalent amounts of protein from C+M or raft fractions were subjected to immunoprecipitation with an anti-Akt antibody that enriches for Akt phosphorylated at Ser473. The kinase activity of Akt immune complexes was measured by determining the extent of phosphorylation of a GSK3 fusion protein substrate. Consistent with the PTEN-null status of LNCaP cells, robust Akt kinase activity was detected in the C+M fraction following serum depletion (Fig. 2A). In contrast, the kinase activity of Akt isolated from rafts was attenuated. When normalized to total Akt levels, kinase activity of raft-resident Akt was ~8% of that detected in the C+M fraction. To circumvent the concern that low levels of Akt kinase activity simply resulted from low levels of Akt protein in rafts, we used two approaches to increase the amount of raft-resident Akt. First, we treated LNCaP cells with pervanadate to promote movement of endogenous Akt into rafts (Fig. 2B, left). Second, we transiently overexpressed WT Akt1 in LNCaP cells before pervanadate treatment (Fig. 2B, right). In either case, Akt was immunoprecipitated from C+M or raft fractions and assayed for kinase activity toward GSK3 (Fig.
2B). Pervanadate treatment induced a marked increase in kinase activity of Akt immune complexes isolated from the C+M fraction of cells expressing endogenous or overexpressed Akt. Despite eliciting a substantial enrichment of Akt in the raft fraction, however, the pervanadate-stimulated activity of raft-resident Akt complexes toward GSK3 was markedly attenuated, displaying only up to ~20% of the activity present in the C+M fraction. The relatively low level of kinase activity in Akt immune complexes isolated from rafts was not due to enzyme inactivation during fractionation because we were able to recover active endogenous Akt from the rafts of PC3 prostate cancer and MC3T3 osteoblast-like cells treated with pervanadate and isolated under identical conditions (Supplementary Fig. S1). These findings suggest that the low Akt activity toward GSK3 observed in LNCaP cells is a reflection of the raft environment specifically in this cell type.

To further investigate the apparent attenuation in activity of Akt immune complexes isolated from lipid rafts, we measured the activity of both WT Akt (Akt1-WT) and constitutively active MyrAkt1 expressed transiently in HEK293 and LNCaP cells. Unlike endogenous or overexpressed WT Akt, MyrAkt1 partitioned almost equally between the nonraft and raft membrane compartments. MyrAkt1 immune complexes isolated from the C+M fraction of HEK293 cells elicited robust phosphorylation of the GSK3 substrate (Fig. 3A and B), consistent with the reported activity of this Akt fusion protein (22). Mutation of Thr308 in the catalytic loop to alanine ablated GSK3 phosphorylation, in agreement with the absolute requirement for phosphorylation at T308 for Akt kinase activity (Fig. 3A; ref. 23). In contrast, no GSK3 phosphorylation was observed following incubation of the substrate with Akt1-WT or MyrAkt1 immune complexes isolated from rafts (Fig.
3A and B), despite the high level of Akt present in these fractions. A similar result was obtained with LNCaP cells (Fig. 3C). Despite strong Akt kinase activity in the C+M fraction of LNCaP cells, no GSK3 phosphorylation was observed in MyrAkt1 complexes precipitated from rafts and subjected to kinase assay (Fig. 3C, left). The lack of kinase activity of raft-localized MyrAkt1 was not due to lack of phosphorylation at Thr308 or Ser473 because these sites were heavily phosphorylated in Akt immune complexes isolated from rafts (Fig. 3C, right). We confirmed our observations in LNCaP cells in an independent assay using Crosstide as substrate. As shown in Fig. 3D, the activity of MyrAkt1 immune complexes isolated from rafts toward Crosstide was dramatically reduced compared with that precipitated from the C+M fraction.

To determine whether the activity of Akt isolated from rafts was also attenuated when assayed against other substrates, we measured the activity of MyrAkt1 immune complexes toward histone H2B or MBP, both of which have been used previously as Akt substrates (24-27). In contrast to the activity observed against GSK3, MyrAkt1 immune complexes isolated from rafts elicited robust incorporation of 32P into H2B and displayed more than 10 times the activity observed with MyrAkt1 precipitated from the C+M fraction (Fig. 4A). Similar results were obtained with MyrAkt1 transiently expressed in HEK293 cells (Fig. 4B), with raft-resident Akt complexes approximately four times more active against H2B than was Akt isolated from the C+M fraction. The enhanced activity of raft-resident MyrAkt1 relative to MyrAkt1 in the C+M fraction was also evident when MBP was used as the substrate (Supplementary Fig. S2). Collectively, these findings suggest that raft-resident Akt is functionally distinct from Akt present at other subcellular locations.
To further understand how signals transmitted from raft-resident Akt differed from signaling downstream of Akt in other locations within the cell, we generated populations of LNCaP cells stably expressing MyrAkt1 (Supplementary Fig. S3). Localization of MyrAkt1 within rafts was confirmed by sucrose density gradient analysis (Fig. 5A), which showed enrichment of the ectopically expressed protein in the light buoyant density fractions. In addition, immunofluorescence imaging showed membrane localization of MyrAkt1 as well as colocalization with the raft-restricted ganglioside GM1 (Fig. 5B). To assess the cholesterol sensitivity of raft-resident MyrAkt1, membrane cholesterol levels in LNCaP transfectants were manipulated by treatment with either cyclodextrin alone, water-soluble cholesterol alone, or cyclodextrin followed by cholesterol as described in Materials and Methods. As shown in Fig. 5C (top), 5 mmol/L cyclodextrin treatment led to a decrease in membrane cholesterol of ~38% compared with untreated cells and essentially a complete loss of MyrAkt1 from rafts (Fig. 5C, bottom). Cholesterol repletion restored membrane cholesterol levels to ~80% of the basal level and reestablished the basal distribution of MyrAkt1. Cholesterol treatment in the absence of depletion increased membrane cholesterol by ~12% and led to a modest but detectable increase in MyrAkt1 in rafts. Previous findings from our group showed that treatment of LNCaP cells with PI3K inhibitors induces apoptosis in this PTEN-null cell type (26). To determine whether raft-resident Akt could function to promote cell survival, we exposed LNCaP/MyrAkt1 and control LNCaP/LacZ transfectants to the PI3K inhibitor, LY294002 (10 µmol/L), for 24 h and assessed apoptotic effects by flow cytometry. As shown in Fig. 5D, LNCaP/MyrAkt1 cells were almost completely insensitive to PI3K inhibition, in contrast to LNCaP/LacZ cells that displayed significant induction of apoptosis (Fig.
5D, inset). However, the cytoprotective effect of MyrAkt1 was diminished by depletion of membrane cholesterol before treatment with LY294002, suggesting that antiapoptotic signals are transmitted, at least in part, by the raft-resident cohort of Akt.

To determine the effect of raft-resident Akt on downstream substrates, we used an antibody raised against a phospho-peptide corresponding to the Akt recognition motif RXRXXS/T and that specifically recognizes substrates phosphorylated by Akt (28). In LNCaP/LacZ and LNCaP/MyrAkt1 cells cultured in serum, the number of phosphorylated Akt substrate species was increased in C+M fractions of LNCaP/MyrAkt1 cells compared with LNCaP/LacZ transfectants (Supplementary Fig. S4). Surprisingly, the profiles of phosphorylated Akt substrates were remarkably similar in raft fractions of LacZ-expressing versus MyrAkt1-expressing LNCaP cells under these conditions (Supplementary Fig. S4), despite the enrichment for activated Akt in this compartment in LNCaP/MyrAkt1 cells.

Cancer Research. Cancer Res 2007; 67 (13). July 1, 2007

To determine the effect of cholesterol manipulation on signaling from Akt to its substrates, whole-cell lysates, C+M, or raft fractions were prepared from LNCaP/LacZ and LNCaP/MyrAkt1 cells subjected to cholesterol depletion and/or repletion and blotted with the Akt substrate antibody. In control LNCaP/LacZ cells, cyclodextrin treatment elicited a modest reduction in phosphorylation of certain Akt substrates in rafts (Fig. 6A, compare lanes 2 and 4). In marked contrast, cyclodextrin treatment led to ablation of Akt substrate phosphorylation in rafts of LNCaP/MyrAkt1 transfectants (Fig. 6A, compare lanes 6 and 8). Interestingly, signals in the C+M fraction of LNCaP/MyrAkt1 transfectants, which were markedly enhanced relative to LNCaP/LacZ cells (Fig. 6A, compare lanes 1 and 5), were also diminished with cyclodextrin treatment in LNCaP/MyrAkt1 (Fig. 6A, compare lanes 5 and 7), suggesting cross-talk between the raft and nonraft cohorts of Akt. Repletion of membrane cholesterol led to the reappearance of Akt substrate phosphorylation predominantly in the raft fraction of LNCaP/MyrAkt1 transfectants (data not shown), confirming the cholesterol sensitivity of Akt-dependent signaling in these cells.

We also assessed the effect of cyclodextrin treatment on known Akt effectors, including p70S6K, FKHR, and GSK3. As shown in Fig. 6B, MyrAkt1-stimulated phosphorylation of p70S6K, FKHR, and GSK3 was diminished by cyclodextrin treatment, consistent with cholesterol-sensitive signaling to these targets. Interestingly, the raft fraction was highly enriched for phosphorylated Akt substrates in the control cells compared with the C+M fraction (Fig. 6A), suggesting that in the PTEN-null background, Akt signaling within cholesterol-rich membranes is extensive.
Discussion

In this study, we show that Akt1 is regulated directly by a cholesterol-sensitive mechanism. The evidence for this conclusion is the following: (a) a population of endogenous Akt resides in a cholesterol-rich membrane fraction; (b) phosphorylation of lipid raft-resident Akt1 can be attenuated by membrane cholesterol depletion; (c) MyrAkt1, which is an oncogene, is overrepresented in lipid raft fractions compared with WT Akt1; (d) localization of MyrAkt1 within rafts is modulated reversibly by manipulation of membrane cholesterol; (e) cholesterol depletion attenuates the cytoprotective effect of MyrAkt1 in LNCaP cells exposed to PI3K inhibition; (f) signaling from raft-resident Akt to its effectors is inhibited by cholesterol depletion and restored by cholesterol repletion; and (g) multiple targets of Akt phosphorylation reside in cholesterol-rich membranes. Together, these findings indicate that a subpopulation of Akt1 is regulated in a cholesterol-sensitive manner. They further imply that the alterations in cholesterol synthesis and homeostasis observed in malignancy play a role in the regulation of signal transduction through the Akt pathway.
The initial characterization of Akt revealed that, unlike the proto-oncogene c-akt, the oncogenic form (v-akt) possessed a myristoylation sequence arising from its fusion to the viral protein Gag (29, 30). Myristoylation of v-Akt was found to promote its membrane association, which in turn enhanced the tumorigenicity of v-Akt-expressing cells in xenograft experiments (31). More recently, a synthetic MyrAkt1 fusion protein was shown to cause prostatic intraepithelial neoplasia when expressed in the mouse prostate (32). The transforming properties of MyrAkt are believed to result from constitutive membrane targeting and constitutive phosphorylation at Thr308 and Ser473 (25). However, in view of our current findings showing that myristoylation results in overrepresentation of the kinase in cholesterol-rich membranes, localization of Akt within lipid raft microdomains, as opposed to simply plasma membrane localization, is likely to be a key determinant of oncogenicity.

Consistent with this idea, we found that MyrAkt1 isolated from rafts in several cell types displayed attenuated kinase activity toward GSK3 and Crosstide despite (a) robust phosphorylation of the regulatory sites Thr308 and Ser473 and (b) measurable activity toward these substrates in MyrAkt1 immune complexes isolated from nonraft fractions. Interestingly, this conclusion is consistent with an unexplained finding in a previous report that a myristoylated phospho-mimetic Akt1 mutant (Akt1-T308D/S473D), which is functionally equivalent to the MyrAkt1 construct used in our analyses, was kinase inactive toward Crosstide, despite robust activation of WT Akt1 under identical conditions (33). Based on our own data, we interpret this result to mean that myristoylation targets proteins to lipid rafts (34-37), where the kinase complex is functionally redirected toward alternative substrate targets. In contrast to our observations with GSK3/Crosstide, the activity of raft-resident MyrAkt1 immune complexes
toward H2B was 5 to 15 times greater compared with immune complexes precipitated from C+M fractions. These findings suggest for the first time that the raft environment can modulate the activity of raft-resident kinases toward target proteins. The difference in substrate preference cannot be explained by differential phosphorylation of the Akt regulatory sites, Thr308 and Ser473, but might result from a change in conformation of the enzyme in rafts. Alternatively, this difference may arise from differential association of Akt with interacting proteins enriched in distinct subcellular compartments. In support of this second possibility, we recently identified a novel Akt binding partner that associates preferentially with Akt in cholesterol-rich membranes. These findings emphasize the concept that kinase activity per se is not the sole determinant of Akt oncogenicity. For example, the constitutively active Akt mutant, Akt1-T308D/S473D, displays minimal transforming activity in chick embryo fibroblasts, whereas the kinase-inactive mutant MyrAkt1-T308A forms tumors in chickens, albeit with an extended latency period (25). In contrast, Akt3 possesses high kinase activity but poor transforming ability (38), consistent with the possibility that Akt exerts signaling activities independently of its kinase function. In support of this hypothesis, two recent studies show that Akt modulates the activity of the transforming growth factor-β effector Smad3 in a manner that is independent of Akt kinase activity (39, 40). In those studies, kinase-active and kinase-inactive forms of Akt were found to interact directly with Smad3 and thereby inhibit its activity.
Another implication of our findings is that signals processed by raft-resident Akt are distinct from those transduced by Akt located elsewhere in the cell. Consistent with this possibility, localization of MyrAkt1 to rafts seemed to sensitize Akt signaling to cholesterol depletion. Whereas signaling to Akt targets in rafts of LNCaP/LacZ cells did not change appreciably in response to cyclodextrin treatment, phosphorylation of Akt substrates in rafts of MyrAkt1-expressing cells was essentially ablated. Under these conditions, the cytoprotective effect of MyrAkt1 in LNCaP cells exposed to PI3K inhibition was also diminished, suggesting that raft localization alters the nature of Akt signaling to its effectors. Consistent with these effects, changes in membrane lipid composition, either through the generation of sphingosine-1-phosphate or by exogenous application of omega-3 fatty acids, have been shown to redirect signaling in other cell types (41, 42). These observations may be consistent with recent findings showing increased sensitivity to apoptosis induced by cholesterol depletion in cells harboring higher amounts of cholesterol-rich membranes (9). Our findings may also partially explain the reported reduction in incidence of tumors/aggressive tumors that accumulate cholesterol, such as prostate cancer, in patients on long-term statin therapy (5, 6).

Several recent studies have reported a link between elevated cholesterol levels and cancer. Li et al.
(9) showed that breast and prostate tumor cells contained higher levels of membrane cholesterol and rafts/caveolae compared with their normal counterparts. In addition, elevated synthesis of the cholesterol precursor, mevalonate, resulting from higher expression and/or activity of HMG-CoA reductase, has been reported in several tumor types. Direct administration of mevalonate was found to promote the growth of orthotopic mammary tumor xenografts in nude mice (43). Interestingly, Akt was shown recently to induce sterol-regulatory element binding protein-dependent transcription of multiple genes involved in fatty acid and cholesterol metabolism, including HMG-CoA reductase, HMG-CoA synthase, and fatty acid synthase (44). In that study, a variant of MyrAkt1 fused to the hormone-binding domain of the estrogen receptor (MyrAkt-ER) was used to induce gene expression changes in retinal pigment epithelial cells. Together with our current findings showing enrichment of MyrAkt1 in rafts, the existence of a functional link between raft-resident Akt, regulation of cholesterol metabolism, and membrane lipid composition is strongly suggested. Our results suggest the existence of a potential feedback mechanism, whereby signals transmitted by raft-resident Akt can lead to enhanced cholesterol synthesis, expansion of the cholesterol-rich compartment, and increased localization of Akt to rafts.

In summary, we have obtained evidence that cholesterol is a direct regulator of Akt-dependent signaling in caveolin-negative cells and that localization of Akt to lipid raft microdomains alters the substrate preference of the Akt kinase complex. These findings suggest a direct mechanistic link between cholesterol and cell survival signaling in tumor cells and may be functionally relevant to the reported chemopreventive benefit of long-term use of cholesterol-lowering drugs in certain cancers.

Figure 1. A population of endogenous Akt1 resides in a cholesterol-rich membrane fraction. A, LNCaP cells were subjected to sucrose density centrifugation as described. One milliliter fractions were analyzed for cholesterol content or blotted with antibodies to total Akt, Giα2, c-Crk, and β-tubulin. The protein content of individual fractions was visualized by Ponceau S staining of membranes after transfer (bottom). Star, the cholesterol-enriched lipid raft microdomain (fraction 4). AU, absorbance units. B, LNCaP cells growing in serum were fractionated into cytosolic/Triton-soluble membrane (C+M) and lipid raft (Raft) fractions as described. Ten micrograms C+M and raft fractions were blotted with antibodies to total Akt, Thr308-phosphorylated Akt (T308-P), S473-P Akt (S473-P), β-tubulin, and Giα2. C, Akt was immunoprecipitated from LNCaP cells exposed to cyclodextrin (CD) and fractionated into C+M and raft fractions using IG1 anti-Akt mAb. Immunoprecipitated (IP) eluates were blotted with antibodies to total or S473-P Akt. D, serum-starved LNCaP cells treated without (left,
i-iv) or with (right, v-viii) 0.5 mmol/L pervanadate for 15 min were incubated with 10 µg/mL tetramethylrhodamine-conjugated CTxB subunit on ice for 30 min, before incubation with anti-S473-P Akt antibody (1:100) and FITC-conjugated secondary antibody (1:200). Arrowhead, a region of colocalization of S473-P Akt with CTxB as shown in (ix). The image in (ix) has been enlarged to show red, green, and yellow pixels.

Figure 2. Akt kinase activity toward GSK3 is attenuated in lipid rafts isolated from LNCaP cells. A, C+M and raft fractions from LNCaP cells serum depleted for 48 h were immunoprecipitated with IG1 anti-Akt mAb and Akt kinase activity against GSK3 determined. Kinase assay eluates were blotted with antibodies to total Akt or p-GSK3α/β. Relative activity was determined as described in Materials and Methods. Data are representative of duplicate determinations. B, nontransfected LNCaP cells or cells transiently expressing T7-Akt1 were serum depleted for 24 h and treated without (−) or with (+) 0.5 mmol/L pervanadate. Immunoprecipitation kinase assay and immunoblotting were done as in (A). Data are representative of triplicate determinations. Ctrl, fraction incubated with isotype control IgG and subjected to immunoprecipitation kinase assay.
Figure 3. The activity of constitutively active Akt toward GSK3 is attenuated in lipid rafts. A, HEK293 cells transiently expressing MyrAkt1, MyrAkt1-T308A, or vector alone (pcDNA3) were fractionated into Triton-soluble membrane (TS) or raft membrane (R) fractions and immunoprecipitated with anti-HA or isotype control antibodies. Kinase assays were done as described in Fig. 2A. B, HEK293 cells transiently expressing WT T7-Akt1, MyrAkt1, or vector alone were fractionated into cytosolic (Cyto), Triton-soluble membrane, or raft membrane (Raft) fractions. Akt was immunoprecipitated with anti-Akt1 pAb and analyzed by kinase assay as in (A). Arrow, the p-GSK3 substrate. C, LNCaP cells transiently expressing MyrAkt1 or vector alone were subjected to immunoprecipitation kinase assay and blotted as in (A). Immunoprecipitated eluates were also blotted with antibodies to T308-P Akt or S473-P Akt. D, C+M or raft fractions from LNCaP cells transfected as in (C) were subjected to immunoprecipitation kinase assay using Crosstide as substrate, in the presence of [32P]ATP.

Figure 4. Constitutively active Akt1 isolated from lipid rafts displays altered substrate specificity. LNCaP (A) or HEK293 (B) cells transiently expressing Myr-HA-Akt1 were fractionated into C+M or raft fractions. Akt was immunoprecipitated from each fraction with HA antibody and immune complexes were subjected to kinase assay using histone H2B as substrate. Bottom, inputs and immunoprecipitated eluates were blotted (WB) with antibodies to total Akt; top, incorporation of 32P into histone H2B under each condition. Data are representative of two independent trials. Columns, mean of duplicate determinations; bars, SD.

Figure 5. Oncogenic Akt1 is enriched in lipid raft fractions and confers resistance to apoptosis induced by PI3K inhibition. A, LNCaP cells stably expressing MyrAkt1 (LNCaP/MyrAkt1) were subjected to sucrose density centrifugation. One milliliter fractions were blotted with antibodies to total Akt and HA. Fractions 5 to 8, raft fraction. B, lipid rafts in LNCaP/MyrAkt1 cells were stained with 0.5 µg/mL Alexa 594-CTxB for 10 min before staining with anti-S473-P Akt (1:100) and FITC-conjugated secondary antibody (1:100). Nuclei were counterstained with DAPI before imaging. Original magnification, ×63. C, bottom, LNCaP transfectants were treated with either 5 mmol/L cyclodextrin for 1 h, 45 µg/mL water-soluble cholesterol (Chol) alone for 1 h, or cyclodextrin for 1 h followed by cholesterol for 1 h. Cells incubated in serum-free medium served as controls. Ten micrograms of C+M or raft fractions were blotted with antibodies to total HA, Akt1, β-tubulin, or Giα2. Top, membrane cholesterol levels following the indicated treatments. Columns, mean of duplicate determinations; bars, SD. D, inset, LNCaP/MyrAkt1 or control cells expressing LacZ (LNCaP/LacZ) were treated without or with 10 µmol/L LY294002 (LY) for 24 h and the extent of apoptosis was determined by flow cytometry. LNCaP/MyrAkt1 cells were treated without (Ctrl) or with 5 mmol/L cyclodextrin (1 h), 10 µmol/L LY294002 (24 h), or both agents (cyclodextrin for 1 h followed by LY294002 for 24 h) and harvested for flow cytometry. Data are presented as apoptotic cells (sub-G1 peak) expressed as a percentage of the total cell population and are representative of two independent trials.
Figure 6. Phosphorylation of Akt substrates is sensitive to cholesterol depletion. A, 10 µg C+M and raft fractions from LNCaP/LacZ and LNCaP/MyrAkt1 transfectants treated without (−) or with (+) 5 mmol/L cyclodextrin were blotted with antibodies to HA, Akt1, β-tubulin, Giα2, and phosphorylated Akt substrates. B, 10 µg whole-cell lysates prepared from LNCaP/MyrAkt1 transfectants treated as in (A) were blotted with antibodies to phosphorylated forms of p70S6K, FKHR, and GSK3. The S6K-P antibody is known to cross-react with phosphorylated p85S6K; the FKHR-P antibody cross-reacts with phosphorylated FoxO4; and the GSK3-P antibody recognizes both GSK3α and GSK3β phosphorylated at Ser21 and Ser9, respectively. C, the model summarizes the findings of the study. Akt present in lipid rafts displays a distinct substrate preference to Akt in the nonraft compartment, consistent with a redirection of Akt-dependent signaling when the protein resides in cholesterol-rich membranes. Following cholesterol depletion, signaling from raft-resident Akt is essentially ablated, whereas signaling from Akt in the nonraft compartment is largely unaffected.
Lipidomic profiling of exosomes from colorectal cancer cells and patients reveals potential biomarkers

Strong evidence suggests that differences in the molecular composition of lipids in exosomes depend on the cell type and have an influence on cancer initiation and progression. Here, we analyzed by liquid chromatography–mass spectrometry (LC-MS) the lipidomic signature of exosomes derived from the human cell lines normal colon mucosa (NCM460D) and colorectal cancer (CRC) nonmetastatic (HCT116) and metastatic (SW620), and of exosomes isolated from the plasma of nonmetastatic and metastatic CRC patients and healthy donors. Analysis of this exhaustive lipid study highlighted changes in some molecular species that were found in the cell lines and confirmed in the patients. For example, exosomes from primary cancer patients and nonmetastatic cells compared with healthy donors and control cells displayed a common marked increase in phosphatidylcholine (PC) 34:1, phosphatidylethanolamine (PE) 36:2, sphingomyelin (SM) d18:1/16:0, hexosylceramide (HexCer) d18:1/24:0, and HexCer d18:1/24:1. Interestingly, these same lipid species were decreased in the metastatic cell line and patients. Further, levels of PE 34:2, PE 36:2, and phosphorylated PE p16:0/20:4 were also significantly decreased in metastatic conditions when compared to the nonmetastatic counterparts. The only molecular species found markedly increased in metastatic conditions (in both patients and cells) when compared to controls was ceramide (Cer) d18:1/24:1. These decreases in lipid species in the extracellular vesicles might reflect function-associated changes in the metastatic cell membrane. Although these potential biomarkers need to be validated in a larger cohort, they provide new insight toward the use of clusters of lipid biomarkers rather than a single molecule for the diagnosis of different stages of CRC.
Introduction Exosomes, extracellular nanovesicles (50-200 nm in diameter) of endosomal origin secreted by living cells into the extracellular environment [1], harbor a bioactive cargo of proteins, nucleic acids, and lipids [2]. These molecules can be transported by exosomes to different cell targets, influencing their phenotype and physiological behavior. Tumor-derived exosomes have been reported to play a major role in cancer initiation and progression, for instance in colorectal cancer (CRC) [3]. Dysregulation of lipid metabolism can affect cellular homeostasis and signaling pathways, which subsequently influences the process of cell proliferation and differentiation. Such changes in the dynamic structure of the plasma membrane lipid bilayer make a major contribution to the onset of various diseases including cancer [4]. The majority of exosomal lipids are localized in the membrane and have been reported to play a role in the biogenesis, secretion, fusion, and uptake of exosomes [5]. Although the molecular composition of lipids in exosomes depends on the cell type, it has been found that the membrane of exosomes, compared to that of the cell from which they originate, is enriched in cholesterol (C), sphingomyelin (SM), glycosphingolipids, and glycerophospholipids [6]. Exploration of exosomal lipids as noninvasive circulating cancer biomarkers has only recently started. So far, just a few studies have analyzed the lipidomic profile of exosomes derived from breast [3], ovarian [7], and prostate [1] cancer cell lines. For CRC, only the lipid composition analysis of exosomes derived from the colorectal cancer LIM1215 cell line by mass spectrometry has been reported [8]. Therefore, further lipidomic analysis in colorectal cancer cell-derived exosomes is needed to understand in depth the role of exosomes in cancer initiation and progression and to identify specific diagnostic/prognostic lipid biomarkers for different stages of CRC. 
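The shorthand used throughout this article (e.g., PC 34:1 for total acyl carbons:double bonds, or SM d18:1/16:0 for per-chain composition) can be decomposed mechanically. The following sketch is purely illustrative (the function name and handling of the d/p/t prefixes are our assumptions, not part of the study):

```python
import re

def parse_lipid(name):
    """Parse shorthand like 'PC 34:1' or 'SM d18:1/16:0' into
    (lipid_class, chains), where each chain is (carbons, double_bonds).
    An optional d/p/t prefix (sphingoid base / plasmalogen) is accepted."""
    lipid_class, rest = name.split(" ", 1)
    chains = []
    for part in rest.split("/"):
        m = re.fullmatch(r"[dpt]?(\d+):(\d+)", part.strip())
        if not m:
            raise ValueError(f"unrecognized chain descriptor: {part!r}")
        chains.append((int(m.group(1)), int(m.group(2))))
    return lipid_class, chains

print(parse_lipid("PC 34:1"))        # class with summed carbons:double bonds
print(parse_lipid("SM d18:1/16:0"))  # sphingolipid with per-chain composition
```

The same parser handles the plasmalogen species mentioned later (e.g., PE p16:0/20:4).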
In this pilot study, we analyzed the lipidomic signature of exosomes derived from CRC cell lines and patients by LC-MS. The results revealed that exosomes from both nonmetastatic and metastatic cell lines and those from the plasma of patients displayed similar significant variations in the lipidomic signature of certain lipid molecular species, particularly in glycerophospholipids and sphingolipids, compared with their corresponding controls. Cell lines and patients The normal colonic epithelial cell line NCM460D (RRID:CVCL_IS47) was purchased from In Cell (San Antonio, TX, USA). HCT116 (RRID:CVCL_0291) and SW620 (RRID:CVCL_0547) cell lines were purchased from American Type Culture Collection (Manassas, VA, USA). All experiments were performed with mycoplasma-free cells, verified using a mycoplasma detection kit (MycoAlert, Lonza Pharma&Biotech, Basel, Switzerland). Cell lines were grown in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 10% FBS. The patients' blood samples were obtained from the University Hospital of Dijon (France). The study was conducted in accordance with the Declaration of Helsinki with an approved written consent form for each patient (CPP ESTI: 2014/39; N°ID: 2014-A00968-39). This study was approved by the local ethics committee (IRB 00010311). Isolation of exosomes Cells were cultured in DMEM supplemented with 10% FBS (exosome depleted) until they reached 80% confluence. Exosomes were isolated from this conditioned medium and from the plasma of patients by differential ultracentrifugation and filtration as previously described [9]. The concentration and size distribution of exosomes were measured by nanoparticle tracking analysis (NanoSight NS300, Malvern, UK), and exosomes were stored at −80 °C until use. Lipid extraction LC-MS/MS quality grade chemicals were from Sigma Aldrich (Saint-Quentin Fallavier, France), and solvents were purchased from Fisher Scientific (Illkirch, France). 
Lipids were extracted according to the method of Bligh and Dyer as previously described [10]. Targeted lipidomics Phospholipids and ceramides were analyzed on a 1200 6460-QqQ LC-MS/MS system equipped with an electrospray ionization (ESI) source (Agilent Technologies) as previously described [10]. Cholesterol was measured by gas chromatography–mass spectrometry (GC-MS) using 10 or 15 µL of the Bligh and Dyer extracts obtained from plasma or cellular exosomes, respectively [11]. Statistics Lipid species were normalized to total cholesterol and analyzed by two-way ANOVA followed by Tukey's multiple comparisons test. Data were considered statistically significant when P values ≤ 0.05. The statistical analysis was performed using GRAPHPAD PRISM version 8.0.0 for Windows (GraphPad Software, San Diego, CA, USA). Results and Discussion Analysis of exosomes derived from normal colon mucosa NCM460, nonmetastatic HCT116, and metastatic SW620 CRC cells by nanoparticle tracking analysis (NTA) did not reveal any significant differences in their average size. However, NCM460 and HCT116 showed a higher average concentration of exosomes compared with SW620 (Fig. S1A). Concerning NTA analysis of plasma-derived exosomes (n = 12) of nonmetastatic and metastatic CRC patients and healthy donors, no significant differences were found among the three groups in either size or concentration (Fig. S2A). Western blot analysis showed that the isolated nanovesicles from all cells (Fig. S1B) and patients (Fig. S2B) were positive for the exosomal marker proteins Tsg101, Alix, syntenin-1, CD9, and CD63 while negative for the endoplasmic reticulum (ER) marker calnexin. The lipidomic profile of exosomes, analyzed by LC-MS and normalized to total cholesterol, led to the quantification of 175 lipid species in exosomes from both NCM460 and HCT116, 132 lipid species in SW620, and 178 lipid species in the three groups of plasma-derived exosomes (healthy donors, nonmetastatic, and metastatic; Table S1). 
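The normalization and significance testing described above can be sketched in code. This is an illustrative simplification with invented numbers: the study used a two-way ANOVA in GraphPad Prism, while here a single hypothetical species is compared across three groups with a one-way ANOVA plus Tukey's test (SciPy ≥ 1.8 for `tukey_hsd`):

```python
# Sketch of the statistical pipeline: lipid measurements are normalized
# to total cholesterol, then group differences are tested (ANOVA) and
# pairs are compared with Tukey's test at alpha = 0.05, as in the paper.
# All values are synthetic, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def normalized_levels(mean, n=4):
    """Simulate n replicate measurements normalized to total cholesterol."""
    raw = rng.normal(mean, 0.05 * mean, n)
    cholesterol = rng.normal(100.0, 5.0, n)
    return raw / cholesterol

# Hypothetical PC 34:1 levels mirroring the reported trend: elevated in
# the nonmetastatic group, reduced again in the metastatic group.
control = normalized_levels(10.0)
nonmetastatic = normalized_levels(14.0)
metastatic = normalized_levels(8.0)

f_stat, p_value = stats.f_oneway(control, nonmetastatic, metastatic)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Tukey's HSD identifies which pairs of groups differ (P <= 0.05).
tukey = stats.tukey_hsd(control, nonmetastatic, metastatic)
for i, j in [(0, 1), (0, 2), (1, 2)]:
    print(f"group {i} vs {j}: P = {tukey.pvalue[i, j]:.4f}")
```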
The relative distribution of lipid compositions was considerably different among the exosomes. However, all exosomes were relatively abundant in sphingolipids (Figs S1C and S2C) and PC (Figs S1D and S2D), which is in agreement with the hypothesis that exosomal membranes harbor lipid raft-like domains [12] and are enriched in PC subclasses [8,13]. In metastatic patients, as in metastatic SW620 cells, exosomes possessed a smaller mole ratio of PS compared with nonmetastatic patients (Fig. S2D). Cholesterol was chosen to normalize the lipidomic data, as it was an abundant lipid in all samples and no significant differences were detected among the different exosomes in the mole ratio of cholesterol (Fig. S3). The lipidomic analysis was next extended to the individual molecular species of the identified lipid subclasses. Figures 1-3 show the subclasses for which differences were obtained when comparing controls with cancer and/or nonmetastatic with metastatic conditions (raw data obtained for all subclasses analyzed are shown in Figs S4-S8). Considering the PC subclass, the molecular species PC 30:0, 32:1, 34:2, 34:1, and 36:2 were significantly increased in HCT116 compared with control NCM460 (Fig. 1A). Interestingly, all these PC species were decreased in SW620, along with PC 32:0, 36:1, and 38:2, when compared both with the control and the nonmetastatic HCT116 (Fig. 1C,E). Only the molecular species phosphorylated PC 34:0 (pPC 34:0) was markedly increased in SW620 (Fig. 1C,E). In accord with this result, an increased level of the PC molecular species 32:1 was reported in CRC tissues [14]. For CRC plasma-derived exosomes, nonmetastatic patients revealed a significant enrichment in the PC 34:1 and 36:5 molecular species compared with the healthy controls and metastatic patients (Fig. 1B,F). Moreover, exosomes derived from cancer patients, compared with the healthy donors, showed a decrease in the level of the PC 34:2 and 36:4 individual species (Fig. 1B,D). 
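The "relative distribution" comparisons above amount to aggregating species amounts by lipid class and expressing each class as a mole percent of the total. A minimal sketch with invented quantities (the species list and values are hypothetical):

```python
# Sketch of the mole-percent computation behind the class-distribution
# comparisons: sum species amounts per lipid class, then express each
# class as a percentage of the grand total. Values are illustrative.
species_amounts = {  # hypothetical cholesterol-normalized quantities
    "PC 34:1": 12.0, "PC 34:2": 8.0,
    "PE 36:2": 5.0,
    "SM d18:1/16:0": 9.0,
    "PS 36:1": 2.0,
}

class_totals = {}
for name, amount in species_amounts.items():
    lipid_class = name.split(" ", 1)[0]  # class prefix, e.g. "PC"
    class_totals[lipid_class] = class_totals.get(lipid_class, 0.0) + amount

grand_total = sum(class_totals.values())
mole_percent = {c: 100.0 * t / grand_total for c, t in class_totals.items()}
for lipid_class, pct in sorted(mole_percent.items()):
    print(f"{lipid_class}: {pct:.1f} mol%")
```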
Interestingly, the significant increase in the PC molecular species 34:1 in nonmetastatic HCT116-exosomes was also observed in exosomes derived from plasma of nonmetastatic CRC patients when compared with their corresponding normal counterparts (Fig. 1A,B). It should be noted that the level of PC 34:1 was also found to be increased in the exosomes derived from NB26 and PC-3 prostate cancer cell lines [1]. For the PE subclass, the molecular species PE 32:1, 34:2, 36:2, and 36:1 were significantly decreased in SW620 compared with NCM460 and HCT116 (Fig. 1C,E). Like the nonmetastatic HCT116-exosomes, plasma exosomes derived from nonmetastatic patients revealed a significant increase in the PE individual species 36:2, compared with the control and metastatic patients (Fig. 1A,B). Similarly, exosomes derived from both metastatic SW620 cells and patients displayed a significant decrease in the level of the PE 34:2 and 36:2 molecular species compared with their nonmetastatic counterparts (Fig. 1E,F). In addition, PE 38:5 and 38:4 were also found decreased in exosomes from metastatic patients (compared with healthy donors and nonmetastatic patients; Fig. 1D,F). Conclusion In summary, targeted lipidomic analysis can enable the description of potential diagnostic/prognostic cancer biomarkers. Some signature profiling can already be proposed. For instance, markers when comparing controls and primary cancers might be the common increases in PC 34:1, PE 36:2, SM d18:1/16:0, HexCer d18:1/24:0, and HexCer d18:1/24:1 described above. Supporting information Additional supporting information may be found online in the Supporting Information section at the end of the article. Error bars represent the standard error of the mean (±SEM) values of four independent replicates (n = 4). *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001. Fig. S3. Determination of cholesterol by gas chromatography–mass spectrometry (GC-MS) in exosomes derived from both colorectal cancer (CRC) cell lines and patients compared with their corresponding controls (exosomes from NCM460 cells and healthy controls, respectively). 
As depicted in the figure, no significant change in cholesterol was observed in the exosomes derived from both cell lines and patients. Data were analyzed by two-way ANOVA followed by Tukey's multiple comparison test. Error bars represent the standard error of the mean (±SEM) values of four independent replicates (n = 4). *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001. Fig. S4. Phosphatidylcholine (PC) species analysis normalized to total cholesterol of exosomes derived from (A) normal colon mucosa NCM460D, nonmetastatic HCT116, and metastatic SW620 colorectal cancer (CRC) cell lines and from (B) plasma-derived exosomes of healthy donors and CRC patients (nonmetastatic and metastatic), illustrating an overall enrichment of the 34:1 PC molecular species in both nonmetastatic cells and patients compared with their corresponding controls and metastatic counterparts. Data were analyzed by two-way ANOVA followed by Tukey's multiple comparison test. Error bars represent standard error of the mean values (±SEM, n = 4). *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001. Fig. S5. Phosphatidylethanolamine (PE) and plasmalogen (pPE) molecular species analysis normalized to total cholesterol in exosomes derived from (A) normal colon mucosa NCM460D, nonmetastatic HCT116, and metastatic SW620 colorectal cancer (CRC) cell lines and from (B) plasma-derived exosomes of healthy donors and CRC patients, nonmetastatic and metastatic (n = 4 for each group, pooled). Exosomes from both nonmetastatic HCT116 cells and patients showed a significant increase of the PE species 36:2 compared with their corresponding controls and metastatic counterparts. Metastatic SW620 cells and patients revealed a significant decrease in the p16:0/20:4 pPE level compared with their nonmetastatic counterparts. Data were analyzed by two-way ANOVA followed by Tukey's multiple comparison test. Error bars represent standard deviation (±SD, n = 4) values. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001. Fig. S6. 
Phosphatidylinositol (PI) and phosphatidylserine (PS) molecular species analysis of exosomes derived from (A) normal colon mucosa NCM460D, nonmetastatic HCT116, and metastatic SW620 colorectal cancer (CRC) cell lines and from (B) plasma-derived exosomes of healthy donors and CRC patients, nonmetastatic and metastatic (n = 4 for each group, pooled), normalized to total cholesterol. HCT116-derived exosomes are enriched in the PI molecular species PI 34:1, 36:2, and 36:1 compared with NCM460D and SW620. No significant change in the level of PI species was detected in all exosomes derived from the plasma of healthy donors and patients. Data were analyzed by two-way ANOVA followed by Tukey's multiple comparison test. Error bars represent standard deviation (±SD, n = 4) values. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001. Fig. S7. Analysis of sphingomyelin (SM) molecular species in exosomes derived from (A) normal colon mucosa NCM460D, nonmetastatic HCT116, and metastatic SW620 colorectal cancer (CRC) cell lines and from (B) plasma-derived exosomes of healthy donors and CRC patients, nonmetastatic and metastatic, normalized to total cholesterol (n = 4 for each group, pooled). Both nonmetastatic HCT116- and patient-derived exosomes revealed a marked increase in the level of the d18:1/16:0 SM molecular species compared with their corresponding controls and metastatic counterparts. Data were analyzed by two-way ANOVA followed by Tukey's multiple comparison test. Error bars represent standard deviation (±SD, n = 4) values. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001. Fig. S8. Ceramide (Cer) molecular species analysis of exosomes derived from (A) normal colon mucosa NCM460D, nonmetastatic HCT116, and metastatic SW620 colorectal cancer (CRC) cell lines and from (B) plasma-derived exosomes of healthy donors and CRC patients, nonmetastatic and metastatic, normalized to total cholesterol (n = 4 for each group, pooled). 
Nonmetastatic HCT116- and patient-derived exosomes had an increase in the level of the hexosylceramide d18:1/24:1 HexCer and d18:1/24:0 HexCer molecular species compared with their controls and metastatic counterparts. Both metastatic SW620- and patient-derived exosomes displayed a significant increase in the ceramide molecular species d18:1/24:1 compared with their controls. Data were analyzed by two-way ANOVA followed by Tukey's multiple comparison test. Error bars ±SD, n = 4. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001. Table S1. Total lipid ions quantified by liquid chromatography–mass spectrometry (LC-MS) in exosomes derived from normal colon mucosa NCM460D, nonmetastatic HCT116, and metastatic SW620 colorectal cancer (CRC) cell lines and from plasma-derived exosomes of healthy controls (HC), and CRC patients nonmetastatic (NM) and metastatic (M).
3,799.6
2022-05-06T00:00:00.000
[ "Biology", "Medicine", "Chemistry" ]
MOMSense: Metal-Oxide-Metal Elementary Glucose Sensor In this paper, we present a novel Pt/CuO/Pt metal-oxide-metal (MOM) glucose sensor. The devices are fabricated using a simple, low-cost standard photolithography process. The unique planar structure of the device provides a large electrochemically active surface area, which acts as a nonenzymatic reservoir for glucose oxidation. The sensor has a linear sensing range between 2.2 mM and 10 mM of glucose concentration, which covers the blood glucose levels for an adult human. The distinguishing property of this sensor is its ability to measure glucose at neutral pH conditions (i.e., pH = 7). Furthermore, the dilution step commonly needed for CuO-based nonenzymatic electrochemical sensors to achieve an alkaline medium, which is essential to perform redox reactions in the absence of glucose oxidase, is eliminated, resulting in a lower-cost and more compact device. NiO has been commonly utilized in NEG sensors because of its catalytic properties, for which Ni(II) and Ni(III) are responsible for the required redox reaction 3 . To improve the stability and sensing performance reported by the available Ni-based sensors 34,40-43 , NiO-based hybrids have been investigated. Nanoparticle-assembled NiO nanosheets prepared using graphene oxide film, which is used as a template, have been recently explored for glucose sensing 44 . Although this system shows enhanced stability and selectivity over the available NiO-based sensors, a smaller linear detection range has been reported (0.001-0.4 mM). It is noteworthy that an alkaline medium (pH > 7) is needed for NiO/NiO hybrid-based sensors to accomplish the redox reaction 3,45 . As a cost-effective material with negligible toxicity, ZnO has been widely used for fabricating enzymatic glucose sensors 13,45 . Dar et al. 46 were the first to report ZnO nanorods working as NEG sensors. The fabricated device was able to detect glucose at a neutral pH. 
However, the obtained linear range was very small (0.001-0.01 mM). To enhance the sensing performance, a combination of ZnO with NiO or CuO has been shown to be an effective approach to improve the overall catalytic performance of the fabricated sensor 45 . Nevertheless, the sensing medium must be diluted to achieve alkaline conditions and consequently attain the synergistic effects of the combined materials. Among the metal oxide materials used, CuO is considered one of the best materials to be used in NEG sensing. This is due to its natural abundance, low production cost, high stability and appropriate redox potential. Equations (1) and (2) describe the dominant reactions taking place in CuO-based NEG sensors to allow electro-oxidation of glucose 9 : CuO + OH⁻ → CuOOH + e⁻ (1); CuOOH + e⁻ + glucose → CuO + OH⁻ + gluconic acid (2). Furthermore, a substantial number of nonenzymatic CuO-based glucose sensors 47-57 require a high pH (≥13) medium to perform glucose sensing. In this paper, we present a CuO-based glucose sensor structure, named MOMSense. The structure is capable of differentiating dissolved glucose levels in a liquid sample from as low as 2.2 mM to at least 10 mM when the liquid sample is at neutral pH. Achieving glucose sensing at a neutral pH is essential to improve the sensitivity of the detection unit. Tang et al. 58 showed that performing sensing at a pH outside the neutral level affects the accuracy of the results, especially at diabetic glucose levels. Moreover, eliminating the dilution step needed for sensing devices that work in an acidic or alkaline medium results in a cost-effective and compact device. The ability of the sensor to operate at a neutral pH facilitates its integration with other blood substance sensors and is advantageous for the development of future lab-on-chip structures for real-time health monitoring. As shown in Fig. 
2, MOMSense can be integrated into a microfluidic platform that serves as a miniature lab-on-chip 59-62 . The selective sample preparation and preconcentration steps enhance the sensitivity of the detection method. The improved selectivity starts by using a human fluid that is fed to the sensor through a microfluidic channel, where glucose molecules are extracted using a suitable separation technique. After this, separated fluid samples with glucose molecules are processed by the MOMSense device. The electrical response is measured and analyzed by the measurement and processing units to calculate the corresponding glucose level. Electrochemical detection integrated with a microfluidic paper-based analytical device (µPAD) is well studied in the literature and has been shown to play a significant role in glucose sensing due to its low cost, high sensitivity and selectivity, minimal sample preparation and short response time 63 . The microfluidic separation suggested in Fig. 2 is in line with the glucose sensing device proposed in 64 . In contrast, in this framework, the µPAD allows detection of low glucose molecule levels by pushing these molecules to the surface of the MOMSense device through the capillary action of the µPAD structure. As a result, the current passing through the device changes as a function of the glucose concentration in the sample. The MOMSense device presented in this work is fabricated in a planar structure and can be mass produced using a wafer-style fabrication process, as shown in Fig. 3(a). Each device consists of a CuO layer and one pair of first and second Pt electrodes arranged on the oxide and separated by a gap containing the CuO layer, as shown in Fig. 3(b). The CuO surface extends around and below the metal electrodes and rests on a substrate layer, which can be any suitable inert structural layer, such as, but not limited to, glass. 
Figure 3(c) presents a scanning electron microphotograph of the device cross-sectional view, which shows a CuO thickness of 26.7 nm with another 20.8 nm Pt layer on a glass substrate. Results MOMSense glucose test. For each measurement, an unused device is selected randomly from the same wafer to investigate the sensing ability of MOMSense devices for the following glucose concentrations: 3.9 mM, 5.6 mM and 7.8 mM. The two measurement steps used in performing these tests are illustrated in Fig. 4. Step 1: A dc voltage of 1 V is applied across the MOMSense sensors; this voltage is the minimum working voltage for the sensor. (i) The resulting current level passing through the device is recorded. (ii) Next, the electrical stability of the device is checked in the absence of glucose. Step 2: Under the same dc value, a 2 µl drop of glucose solution is added on top of the sensor. This solution covers the oxide area and is simultaneously allowed to touch both electrodes. This testing mechanism follows the well-reported amperometric glucose sensing approach detailed in 15 , which mainly involves the application of a constant bias potential, followed by an electric current measurement. This current is linearly related to the glucose concentration. As presented in Fig. 5, MOMSense devices show an instantaneous response at t = 10 s, which is the time when the glucose solution is applied to the device surface. It is clear from these plots that the measured current level after addition of the solution depends on the glucose concentration. Although each measurement is conducted across seven separate devices for each concentration, the error bars for the variation in the measured average currents are statistically significant. Such variation in responses is expected due to the variation associated with the patch device fabrication. The error bars can be significantly reduced by careful optimisation of the patch fabrication process. 
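The amperometric readout just described reduces to a linear calibration: within the sensor's linear range, the current read at a fixed time is fit against known glucose concentrations, and the fit is inverted for unknown samples. A minimal sketch, assuming invented calibration currents (the slope and intercept below are not the paper's fit):

```python
# Sketch of amperometric calibration: fit current vs. concentration
# inside the reported linear range (2.2-10 mM), then invert the fit to
# recover an unknown concentration. All current values are illustrative.
import numpy as np

# Hypothetical calibration points: (glucose in mM, current in microamps).
conc_mM = np.array([2.2, 3.9, 5.6, 7.8, 10.0])
current_uA = np.array([1.10, 1.45, 1.80, 2.25, 2.70])

slope, intercept = np.polyfit(conc_mM, current_uA, 1)

def glucose_from_current(i_uA):
    """Invert the linear fit; only valid inside the linear range."""
    c = (i_uA - intercept) / slope
    if not (2.2 <= c <= 10.0):
        raise ValueError("reading outside the sensor's linear range")
    return c

# A measured current maps back to mM, which can also be reported in
# mg/dL via the molar mass of glucose (1 mM = 18.02 mg/dL, matching
# the paper's 10 mM = 180 mg/dl).
c = glucose_from_current(2.0)
print(f"{c:.1f} mM = {c * 18.02:.0f} mg/dL")
```

Saturation above 10 mM is handled here simply by rejecting out-of-range readings, echoing the surface-saturation limit discussed in the text.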
After confirming the repeatability, reproducibility and stability of MOMSense devices, the study is expanded to determine their linear range. This is achieved by testing a set of fresh devices using the following glucose concentrations: 2.2 mM, 3.9 mM, 5.6 mM, 7.8 mM, 10.0 mM and 12.2 mM. As presented in Fig. 6(a), MOMSense devices show instantaneous responses at t = 10 s, which is the time when the glucose solution is applied to the device. The current level for each concentration is relatively stable after one second of glucose application. The current value is read at t = 18 s and plotted versus the corresponding glucose concentration, as presented in Fig. 6(b). This time point is selected because it provides the best linear fit at the shortest time. The sensor has a linear characteristic between 2.2 mM and 10.0 mM, where the measured current consistently increases with the glucose concentration. Moreover, it can be observed that the device sensitivity saturates at glucose concentrations above 10 mM (180 mg/dl). This is due to the high dependency of glucose adsorption on the available sensor surface area. Anion competition limits the extent of glucose oxidation, and therefore the linearity of the oxidation current with the glucose concentration substantially degrades when the sensor surface is saturated 9,15,65,66 . The empirical equation provided in Fig. 6(b) shows a nonzero intercept, which means that MOMSense devices have a different regime at lower concentrations. Table 1 summarizes the CuO-based NEG sensors available in the literature. It is clear that MOMSense devices exhibit a wide linear range and high sensitivity at a neutral pH. The concept of an integrated lab-on-chip separation and detection platform presented in Fig. 
2 would facilitate employing excessively corrosive environments to increase the sensitivity and maintain the chemical stability of the device. Figure 7 shows the equivalent circuit diagram of a MOMSense device, where: • resistor R P represents the resistance of the Pt electrodes; this resistance is not affected by the added glucose, as the electrical current always passes through the conducting metal; • resistor R S is the CuO interface resistance, which is affected by the added glucose solution, and its value depends on the following electrochemical reactions; • glucose and Pt are in volumes Vol 1 and Vol 3 ; and • glucose, Pt and CuO are in volume Vol 2 . Cu/CuO/Cu devices. In this structure, the Pt electrodes in the MOMSense device are replaced by Cu electrodes to investigate the sensitivity of the device in the absence of platinum. This is realized by depositing 3 mm × 3 mm Cu electrodes on the CuO layer synthesized using the same process described for MOMSense devices and detailed in the Methods section. As presented in Fig. 8(b), there is no trend relating the increase in the current passing through the device to the glucose concentration in the added drop. This confirms the role of the Pt electrodes, which act as catalytic electrodes that easily distinguish the number of electron transfers and consequently result in an electron flow that is proportional to the number of existing glucose molecules 67 . Glucose oxidation in the CuO system. Copper oxide is well documented as a multiplex electrochemical catalyst in an aqueous medium due to the various oxidized/hydroxylated species that can be present within the neutral to alkaline pH range, depending on the applied potential 68 . A widely accepted nonenzymatic mechanism associates the electro-oxidation of glucose with the presence of the redox active couple Cu 2+ /Cu 3+ in alkaline conditions (e.g., pH 11-13) in the form of CuO/CuO(OH) species 3 . 
Accordingly, the oxidation of glucose has been widely explained as per the following two-step process. First, a half-oxidation reaction of Cu 2+ to Cu 3+ occurs under a sufficient voltage supply: CuO + OH⁻ → CuOOH + e⁻. Second, a nonenzymatic oxidation-reduction reaction between the formed Cu(III) oxyhydroxide species and the adsorbed glucose takes place, allowing for further regeneration of CuO species: CuOOH + e⁻ + glucose → CuO + OH⁻ + gluconic acid. In addition to being widely accepted for alkaline conditions, a recent work 57 also claimed this mechanism for establishing glucose oxidation on graphene-modified CuO particles at neutral pH. On the other hand, a thorough analytical study of the electrochemical CuO system by Barragan et al. 69 pinpointed several controversies in the widely accepted mechanism above to justify a new hypothesis for the electrocatalytic behavior of CuO that claims little to no role of Cu 3+ species in the electro-oxidation process of glucose. Barragan et al. attributed the electron transfer process to the synergistic role between the adsorbed hydroxide ions and the semiconductive behavior of the CuO system, which involves ion-pairing and partial charge transfer models rather than direct involvement of Cu 3+ ions. As for MOMSense devices, some initial experiments (see Fig. 9) are carried out with our devices under alkaline conditions (pH = 13). These results show that a glucose sensing signature with enhanced sensitivity can be established at an increased pH level, where the ratio between the responses of the blank and the glucose sample is enhanced from 1.1 to 1.9 for pH = 7 and pH = 13, respectively. This indicates that some of the hypotheses reported in the literature can still be applicable, and it also corroborates the electrocatalytic behavior of CuO. We believe that other redox active couples, such as Cu + /Cu 2+ , could be highly involved under neutral conditions. 
In fact, the involvement of the cupric ions Cu 2+ (i.e., Cu(OH) 2 and CuO species) in the electrochemical oxidation of carbohydrates is a well-known metabolic pathway, which is also the basis of several biochemical tests for glucose sensing, including Fehling's test and Benedict's test 70,71 . However, explaining the mechanism at pH = 7 with the novel MOM structure reported in this work requires further study of the fabricated CuO layer to identify the exact nature of the electrochemical reactions taking place. Discussion We successfully presented the design, fabrication and testing of an efficient nonenzymatic biomedical sensor that is capable of detecting different glucose concentrations ranging from 2.2 mM to 10.0 mM. It was demonstrated that the novel planar Pt/CuO/Pt structure enables the nonenzymatic sensing mechanism. The MOMSense device exhibits a synergistic role for the interfaces between the Pt electrodes and the CuO surface to act as electrocatalysts and consequently facilitate the glucose oxidation needed for glucose detection in the absence of GDH or GOx. The role of the CuO layer and Pt electrodes in the sensing process was demonstrated through fabricating and testing Pt/Glass/Pt and Cu/CuO/Cu structures. These results confirm the synergistic contribution of the Pt electrodes attached to CuO in MOMSense devices. CuO is reported as a promising material to be deployed in NEG sensors. It can perform glucose oxidation on modified CuO-based electrodes in an alkaline solution 34,72-74 . As our goal in this work is to perform glucose testing at a neutral pH, the fabricated Cu/CuO/Cu devices presented in the preceding section are incapable of differentiating glucose concentrations. On the other hand, the Pt electrodes used in MOMSense devices enable the glucose oxidation to take place in a neutral solution. 
The cyclic voltammetry reported in 62 for Pt electrodes in the presence of glucose at a pH of 7 showed three different oxidation peaks that reflect the electrochemically oxidized glucose at a platinum electrode. However, using Pt electrodes solely for glucose detection has been limited due to the many drawbacks of the material 15,62,65 . The sensing mechanism associated with the MOMSense devices fabricated and presented in this paper generally provides new perspectives on the design and testing approaches for biomedical sensors and for glucose sensing specifically. Furthermore, the presented properties of MOMSense devices are in line with the requirements for a viable nonenzymatic glucose sensor 65 in terms of sensitivity, stability, accuracy, ability to meet the ISO standard (International Organization for Standardization), no oxygen dependency, low cost and ease of fabrication. Evaluating the combined detection of MOMSense devices with µPAD using actual blood samples is beyond the scope of the current work and is considered future work. Methods Device fabrication. A low-cost standard photolithography fabrication process is used to fabricate the MOMSense devices. As illustrated in Fig. 10, 99.9% pure Cu is sputtered on a 4″ Borofloat glass wafer using a Q300T T coating tool by Quorum Technologies. To form the CuO layer, the wafer is heated at 500 °C on a hot plate for three hours. After cooling to room temperature, the lithography step is performed by spin coating 1.4 µm thick MICROPOSIT ™ S1813 ™ positive photoresist. Prior to photoresist deposition, an HMDS primer is used to improve adhesion. A UV exposure system (KLOE 650) is used to pattern the photoresist layer on the wafer, followed by a one-minute development step using an appropriate developer. Next, 99.99% pure Pt is sputtered onto the wafer. Finally, the photoresist layer is lifted off using acetone to produce the final wafer presented in Fig. 3(a). Device Characterization. 
The cross section of a sample MOMSense device is inspected using high-resolution scanning electron microscopy (FEI Nova NanoSEM 650). A Keithley 4200-SCS Parametric
Recent Advances on Metal-Based Near-Infrared and Infrared Emitting OLEDs During the past decades, the development of emissive materials for organic light-emitting diodes (OLEDs) in the infrared region has attracted the interest of numerous research groups, as these devices are of interest for applications ranging from optical communication to defense. To date, metal complexes have been the most widely studied route to near-infrared (NIR) emitters due to their low-energy emissive triplet states and their facile access. In this review, an overview of the different metal complexes used in OLEDs to obtain infrared emission is provided. Introduction During the past decades, a great deal of effort has been devoted to improving the device stacking as well as the materials used to fabricate organic light-emitting diodes (OLEDs). This extensive work is notably supported by the promising prospects and the wide range of applications in which OLEDs are involved, ranging from lighting to flat panel displays and signage technology. These intense research efforts are also supported by the fact that OLEDs have been identified as the next generation of devices that could replace the present inorganic technology developed on glass substrates, which are heavier than plastic substrates and shatter-prone [1,2]. Among their main advantages, OLEDs can be lightweight, designed on flexible substrates, and extremely thin. Since the pioneering work of Tang and Van Slyke in 1987 [3], a clear evolution of the materials used to fabricate OLEDs has been observed, and the light-emitting materials have not been exempted. In this field, three main periods can be identified, corresponding to the development of the first generation of light-emitting materials (fluorescent materials), rapidly substituted by the triplet emitters (phosphorescent materials).
In 2012, a breakthrough was achieved by Chihaya Adachi, who evidenced the benefits of the third generation of emitters, that is, the Thermally Activated Delayed Fluorescence (TADF) emitters, which could easily compete with the metal-based phosphorescent light-emitting materials while being metal-free [4]. This evolution results from the observation that harvesting both singlet and triplet excitons, as phosphorescent and TADF materials do, can greatly improve the overall electroluminescence efficiency of devices, an internal quantum efficiency (IQE) close to unity being achievable. More precisely, for fluorescent materials, only the singlet excitons can contribute to light emission, limiting the IQE to 25% [5]. Considering that singlet and triplet excitons are produced in a 1:3 ratio according to spin statistics, so that 75% of the generated excitons are lost in non-radiative processes with fluorescent materials, transition metal complexes in which both singlet and triplet excitons can be utilized for light emission rapidly displaced the first generation of emitters [6]. Indeed, by efficient intersystem crossing, all singlet excitons can be converted to triplet excitons, raising the achievable IQE to 100%. Platinum Complexes Capitalizing on the remarkable performance of tris(8-hydroxyquinolinato)aluminum (Alq3) reported by Tang and VanSlyke in 1987 [3], a platinum complex Pt-1 comprising an 8-hydroxyquinoline ligand was proposed in 1995 as a NIR emitter (see Figure 1) [40]. However, the photoluminescence quantum yield (PLQY) in the solid state was low (0.3%, compared to 10% for Alq3). Nevertheless, OLEDs were fabricated with this material, and different device structures were examined.
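The spin-statistics bookkeeping discussed above lends itself to a short numerical sketch. The helper below is purely illustrative (the function name and interface are assumptions, not taken from the review); it simply multiplies the fraction of harvested excitons by the emitter's PLQY to estimate the internal quantum efficiency.

```python
# Illustrative sketch of the spin-statistics argument: excitons form as
# 25% singlets and 75% triplets; fluorescent emitters use singlets only,
# while phosphorescent/TADF emitters can harvest both populations.
def internal_quantum_efficiency(plqy, harvest_singlets=True, harvest_triplets=False):
    """Estimate IQE as (harvested exciton fraction) x PLQY."""
    singlet_fraction, triplet_fraction = 0.25, 0.75
    harvested = (singlet_fraction if harvest_singlets else 0.0)
    harvested += (triplet_fraction if harvest_triplets else 0.0)
    return harvested * plqy

# Fluorescent emitter with a perfect PLQY: IQE is capped at 25%
assert internal_quantum_efficiency(1.0) == 0.25
# Phosphorescent emitter (efficient intersystem crossing harvests all
# excitons): IQE can reach 100%
assert internal_quantum_efficiency(1.0, harvest_triplets=True) == 1.0
```

In practice the external quantum efficiency is further reduced by charge balance and light outcoupling, which is why the EQEs reported throughout this review sit well below these internal limits.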
Best performances were obtained by using the following device structure: indium tin oxide (ITO)/N,N′-diphenyl-N,N′-bis(1-naphthyl)-1,1′-biphenyl-4,4′-diamine (NPB) (40 nm)/4,4′-bis(carbazol-9-yl)biphenyl (CBP): Pt-1 (10 wt %, 40 nm)/bathocuproine (BCP) (40 nm)/Alq3 (40 nm)/Mg:Ag (1:10, 100 nm), and a peak power efficiency of 0.16 lm/W was determined. Higher external quantum efficiencies were obtained while using tridentate ligands for the synthesis of platinum complexes [41]. In contrast to Pt-1, Pt-2–Pt-4 are square planar complexes, and this specific geometry is favorable to the formation of triplet excimers [42]. This is notably demonstrated by comparing the photoluminescence (PL) spectra in solution and in thin films for the three complexes. A red-shift of about 40 nm was evidenced for all complexes, consistent with emission from the excimer. In a conventional device structure consisting of ITO/N,N′-bis(3-methylphenyl)-N,N′-diphenylbenzidine (TPD) (70 nm)/CBP (20 nm)/emissive layer (EML) (60 nm)/2,5-diphenyl-1,3,4-oxadiazole (OXA) (30 nm)/Ca, emission peaking at 720, 715, and 705 nm was determined for the Pt-2-, Pt-3-, and Pt-4-based devices, respectively. For Pt-2, the emission detected beyond 750 nm represented 40% of the total electroluminescence (EL) emission, and a tail extending until 900 nm could be detected. Following this initial work, the same authors revisited Pt-2 in a new device structure (ITO/75 wt % TPD: 25 wt % PC (60 nm)/CBP (10 nm)/Pt-2 (30 nm)/OXA (30 nm)/Ca/PbO2) and examined the influence of the cathode as well as the electron injection layer (EIL) on the EL performance [24]. Best OLEDs were fabricated while using Ca as the cathode and PbO2 as the electron injection layer, and an external quantum efficiency (EQE) of 14.5% was obtained.
In this early work, the authors demonstrated that the insertion of a thin buffer layer of PbO2 as the EIL could not only minimize the mismatch between the cathode and the electron-transport layer (ETL) but also reduce the width of the recombination zone by facilitating electron transport to the CBP/Pt-2 interface. Benefiting from the two effects, the driving voltage could be drastically reduced. A broad EL emission extending from 620 to 760 nm with a full width at half maximum of 140 nm was determined, and the NIR portion represented 50% of the total emission. From an emission viewpoint, a similar result could be obtained while designing bis(8-hydroxyquinolato)platinum(II) derivatives [43]. Upon increasing the dopant concentration from 1.5 wt % to 10 wt % for Pt-5, Pt-6, and Pt-7, a red-shift of the EL emission was found, resulting from the formation of excimers at high complex concentration. As observed with Pt-2, the three complexes produced a NIR EL emission with the main peak centered between 650 and 702 nm, together with a shoulder in the 720-755 nm region. One of the key elements in fabricating highly emissive OLEDs is the photoluminescence quantum yield (PLQY) of the light-emitting materials, and this characteristic is difficult to achieve for materials emitting in the NIR region. As mentioned in the introduction section, the vibrational overlap between the low-lying excited state and the ground state favors quenching processes for NIR materials [44]. This problem could be overcome with a series of square-planar 2-pyrazinyl pyrazolate Pt(II) complexes Pt-8–Pt-11, which could also furnish highly emissive thin films owing to the specific horizontal orientation of the molecules [21]. By controlling the π-π stacking interaction between complexes in the solid state, an emission at long wavelength could be achieved, and a peak EQE of 24% could be realized with Pt-8.
Notably, the four complexes investigated in this study were not emissive in aerated or deaerated solutions at room temperature but were highly emissive in thin films. The PLQYs determined for Pt-8, Pt-9, and Pt-10 were high: 81, 55, and 82%, respectively. Examination of the solid-state packings of complexes Pt-8–Pt-10 and Pt-11 revealed an ordered arrangement of the molecules in thin films, resulting in an organized transition dipole distribution. By performing angle-dependent luminescence measurements, a preferred horizontal orientation of the transition dipole could be evidenced for all materials. Due to aggregation in the solid state, the close packing of complexes gives rise to the formation of dimers, trimers, etc. In these aggregated structures, theoretical calculations determined that the HOMO level was dominated by a dz² contribution whereas the LUMO level was mainly centered on the ligand π* orbitals. By the specific orientation of the complexes in thin films and the formation of infinite aggregated structures, the limitation imposed by the energy gap law could be overcome. In this series, the most representative example is the complex Pt-8, which could produce an EL emission at 740 nm with an EQE of 24%. Seventy-eight percent of the EL emission was located beyond 700 nm. This percentage decreased to 42 and 33% for complexes Pt-9 and Pt-10, respectively (see Table 1). The last class of platinum complexes examined to produce a NIR emission is the porphyrins, and more precisely the benzoporphyrins. Numerous Pt-based porphyrins exhibiting an emission located in the 630-650 nm region have been reported in the literature [6,45-53]. To drastically red-shift the emission of porphyrins, the introduction of benzopyrrole moieties into the porphyrin scaffold is required, providing tetrabenzoporphyrins.
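Percentages such as the 78% of EL emission located beyond 700 nm quoted for Pt-8 are obtained by integrating the emission spectrum on either side of the cutoff wavelength. The sketch below is purely illustrative: the function name is an assumption, and the spectrum is a synthetic Gaussian band rather than measured data.

```python
import math

def nir_fraction(wavelengths_nm, intensities, cutoff_nm=700.0):
    """Trapezoidal estimate of the spectral fraction emitted beyond cutoff_nm."""
    total, nir = 0.0, 0.0
    for i in range(1, len(wavelengths_nm)):
        dw = wavelengths_nm[i] - wavelengths_nm[i - 1]
        segment = 0.5 * (intensities[i] + intensities[i - 1]) * dw
        total += segment
        if wavelengths_nm[i - 1] >= cutoff_nm:  # segment lies past the cutoff
            nir += segment
    return nir / total

# Synthetic EL band peaking at 740 nm (assumed shape and width)
wl = [600 + i for i in range(301)]  # 600-900 nm, 1 nm grid
spec = [math.exp(-((w - 740) / 40.0) ** 2) for w in wl]
frac = nir_fraction(wl, spec)  # most of this model band lies beyond 700 nm
```

The same ratio computed on an actual EL spectrum would give the NIR percentages tabulated for Pt-8 through Pt-10.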
Owing to a more extended π-conjugation of the porphyrin core and the introduction of bulky groups at the meso-positions of the porphyrin core [54], a significant red-shift of both the absorption and the emission spectra could be obtained. Compared to the previous Pt(II) complexes reported in this review, the rate of intersystem crossing between the singlet and triplet states, as well as the rate of radiative decay from the T1 state, were increased in metalloporphyrins, limiting the adverse excited-state quenching processes. The first example of a Pt-tetrabenzoporphyrin used as a dopant for OLEDs and producing an EL emission in the NIR region was reported in 2007 by Thompson and coworkers [55]. Examination of the photophysical properties of Pt-12 revealed an emission centered at 765 nm, with a radiative decay rate of 1.3 × 10⁴ s⁻¹ and a PLQY of 0.8. An excited state lifetime of 53 µs was also determined. While using Pt-12 as a dopant in Alq3, OLEDs exhibiting a maximum EQE of 3% were obtained, with an EL emission close to the PL emission (769 nm vs. 765 nm, respectively). Device stability was also examined: after 1000 h, the OLEDs retained 90% of the initial luminance while being driven at 40 mA/cm². These results are consistent with the device lifetime determined for another platinum complex, Pt-13, for which a device lifetime of 100,000 h could be obtained while driving the OLEDs at low luminance [56]. In a subsequent study, the same authors optimized the EL performance by introducing a hole-blocking layer of bathocuproine (BCP) and by reducing the dopant concentration [57]. Precisely, the dopant concentration could be decreased from 6 to 4 wt %, reducing the concentration quenching and the triplet-triplet (T-T) annihilation, which is the dominant non-radiative deexcitation channel [58]. Benefiting from these two improvements, a maximum EQE of 8.5% was obtained owing to better confinement of excitons within the emissive layer.
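The photophysical figures quoted for Pt-12 can be cross-checked with the standard relations k_r = Φ_PL/τ and k_nr = (1 − Φ_PL)/τ. The snippet below is a minimal sketch (the function name is an assumption); with Φ_PL = 0.8 and τ = 53 µs it returns a radiative rate of roughly 1.5 × 10⁴ s⁻¹, the same order as the reported 1.3 × 10⁴ s⁻¹.

```python
def decay_rates(plqy, tau_s):
    """Radiative and non-radiative decay rates (s^-1) from PLQY and lifetime.

    Uses k_r = PLQY / tau and k_nr = (1 - PLQY) / tau, the standard
    partitioning of the total decay rate 1/tau.
    """
    k_r = plqy / tau_s
    k_nr = (1.0 - plqy) / tau_s
    return k_r, k_nr

# Pt-12: PLQY = 0.8, excited-state lifetime = 53 us
k_r, k_nr = decay_rates(0.8, 53e-6)  # k_r ~ 1.5e4 s^-1, k_nr ~ 3.8e3 s^-1
```

The same relations underlie the non-radiative rate comparison made later for the iridium complexes Ir-1 through Ir-3.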
However, a marked efficiency roll-off, that is, a decrease of the EQE with increasing current density, was evidenced. By co-doping the emissive layer with an iridium complex, the authors could elucidate the mechanism of the efficiency roll-off, which is dominated by T-T annihilation. This demonstration was carried out by co-doping the EML with the triplet emitter tris(2-phenylpyridine)iridium, Ir(ppy)3, whose excited state lifetime is shorter than that of the Pt-complex. Indeed, by introducing an efficient cascade energy transfer from the host matrix to the Ir dopant and subsequently to the Pt-complex, the concentration of triplets on the Pt-complex could be significantly reduced; a comparison of the EL characteristics established with and without the Ir dopant evidenced a severe reduction of the maximum EQE in the absence of the Ir dopant, thus demonstrating the reduction of the self-quenching effects. Pt-12 was also examined in the context of solution-processed OLEDs, and devices were fabricated by using poly(N-vinylcarbazole) (PVK) as the host polymer [59]. Carbazole-based polymers are extensively used for the design of solution-processed OLEDs due to their exceptional film-forming and charge-transport abilities [60-62]. The minimum dopant concentration to obtain a NIR emission was 1 wt %, and a maximum optical output of 0.2 mW/cm² was obtained. Performance remained limited due to the simplicity of the OLED architecture, which is a single-layered polymer LED (PLED): ITO/PEDOT:PSS/PVK:OXD-7: Pt-12/CsF/Al/Ag. A few years later, Pt-12 was revisited in the context of a series of nine metalloporphyrins in an effort to understand the effects of both the substituents and the π-extended conjugation [63].
It has to be noticed that the pioneering work of Thompson and co-workers [55] on Pt-tetrabenzoporphyrin demonstrated the feasibility of elaborating highly emissive complexes with this ligand while obtaining an emission centered around 770 nm, and this initial work paved the way for additional studies devoted to extending the π-conjugation of the porphyrin core and pushing the emission further into the NIR region. In this study, all the emitters were used for the design of PLEDs and vacuum-processed OLEDs. Several trends could be deduced. First, and as predicted by the energy gap law, the red-shift of the emission of metalloporphyrins was accompanied by a reduction of the PLQYs as well as of the excited state lifetimes [64-67]. To illustrate this, the emission maximum of Pt-12, Pt-14, and Pt-15 shifted from 773 to 891 and 1022 nm, with triplet lifetimes decreasing from 29.9 to 12.7 and 3.2 µs, respectively [67]. Fabrication of PLEDs with these three emitters furnished devices with EL emissions that coincide with their PL emissions, except for Pt-15, which was determined to be prone to degradation and for which a contribution in the visible range was detected. As anticipated, the EQEs decreased from 2.07 to 0.75 and 0.12% for Pt-12, Pt-14, and Pt-15, respectively, consistent with the red-shift of their PL/EL emissions. It has to be mentioned that a low dopant concentration was used, minimizing the aggregation and reducing the concentration quenching. When tested in vacuum-processed OLEDs (ITO/NPB (40 nm)/emissive layer/BPhen (80 nm)/LiF (1 nm)/Al), a significant enhancement of the EL characteristics was obtained for Pt-12 and Pt-13, with EQEs peaking at 8.0 and 3.8%, respectively. These results are consistent with previous results reported in the literature [68]. It has to be noticed that no evaporated OLEDs were fabricated with Pt-15, as this material was not stable enough.
The impact of the extension of the π-conjugation on the photophysical properties of porphyrins was also examined, and a series of six porphyrins was designed for this purpose. First, the comparison between di- and tetra-substituted porphyrins revealed both the PLQYs and the excited state lifetimes of tetra-substituted porphyrins to be lower than those of di-substituted porphyrins in solution, resulting from larger degrees of out-of-plane distortion for the tetra-substituted porphyrins. Notably, the PLQY and the excited state lifetime of Pt-16 (0.33, 32 µs) were lower than those of its di-substituted counterpart Pt-17 (0.59, 53 µs) or the analogs Pt-18 (0.45, 52 µs), Pt-19 (0.44, 52 µs), and Pt-20 (0.3, 28 µs) (see Figure 2). Similarly, a low PLQY and a short excited-state lifetime were determined for Pt-21 (0.26, 20 µs) despite the presence of four fluorene units, which are well known to be highly emissive groups.
While examining the same properties in thin films, a significant elongation of the excited state lifetime, by 50 to 140%, was determined for the tetra-substituted porphyrins. This modification of the excited state lifetimes was assigned to the suppression of non-radiative decay channels in the solid state. By contrast, only minor variations of the excited state lifetime were determined for the di-substituted porphyrins. Therefore, it can be concluded that the photophysical properties determined in solution do not follow the trend observed in thin films and that the examination of these properties in thin films is compulsory. Three notable trends could be determined from the fabrication of PLEDs: (1) the introduction of bulky substituents could increase the EQEs by decreasing the aggregation in the solid state; an optimum was found for the substitution of the porphyrin core with tert-butyl groups, and an attempt to further increase the size of the peripheral groups did not significantly impact the EL performance. (2) Di-substituted porphyrins could furnish higher EL characteristics than the tetra-substituted ones. (3) Lifetimes determined in thin films show a good correlation with the PLED efficiencies.
In contrast, other trends could be determined for vacuum-processed OLEDs: (1) di-substituted porphyrins could furnish lower EL characteristics than the tetra-substituted ones in OLEDs; this opposite trend was assigned to interactions that differ from those observed in PLEDs, especially with the host matrix (Alq3 vs. the PVK:PBD blend). (2) Porphyrins substituted with bulky groups gave lower EL characteristics than the non-substituted ones, and this counter-performance was once again assigned to unexpected interactions with the host matrix. From these results, it was concluded that the non-substituted porphyrins are sufficiently dispersed within the emissive layer (EML) to avoid concentration quenching and T-T annihilation. Among all the emitters tested in PLEDs and OLEDs, the most red-shifted EL emission was evidenced for PLEDs fabricated with Pt-15, with an emission peaking at 1005 nm and a maximum EQE of 0.12%. In a parallel study, the same authors developed a comprehensive study concerning the influence of the π-conjugation on the position of the EL emission with another set of Pt-porphyrins varying in the number of aromatic rings fused to the pyrrole unit [67]. The conclusions were the same as the previous ones. By replacing the porphyrin core by a tetraarylbenzoporphyrin (Pt-12), a tetraarylnaphthoporphyrin (Pt-14), and then a tetraarylanthroporphyrin (Pt-15), a red-shift of the PL emission accompanied by a decrease of the PLQYs and the phosphorescence lifetimes was demonstrated, consistent with the energy gap law. Thus, if a PL emission at 773 nm was determined for the parent Pt-12 benzoporphyrin, a phosphorescence emission clearly in the NIR was detected for the three others, shifting from 891 to 883 and 1022 nm for Pt-14, Pt-22, and Pt-15, respectively. Examination of the EL performance of Pt-12 in solution-processed OLEDs evidenced devices exhibiting an interesting emission at 896 nm but combined with an extremely low EQE, peaking at 0.4% [68].
Further, the fabrication of multilayered OLEDs with this material showed a maximum EQE of 3.8% at 0.1 mA/cm² and a maximum optical output of 1.8 mW/cm². As a drawback, the vacuum-deposited OLEDs showed a severe efficiency roll-off, still resulting from T-T annihilation at high current density. The last examples of Pt-porphyrins used as emitters for OLEDs are the azatetrabenzoporphyrins [69]. Only one article has reported the use of such emitters in the literature, which is explained by the difficulty of synthesizing such porphyrin derivatives. As a starting point of this study, the authors observed that the previous strategy, that is, the introduction of fused aromatic rings onto the pyrrole unit, was efficient for red-shifting the emission, except that the molecular weight of the final compounds was too high and their thermal stability too low for sublimation. This strategy was also ineffective in shifting the emission of tetrabenzoporphyrins, which remained centered between 770 nm and 1000 nm. Another possible route to tune the color emission was thus envisioned by Li and co-workers, consisting of the replacement of the meso carbon atoms of tetrabenzoporphyrins by nitrogen atoms. Using this strategy, a red-shift of the PL emission of approximately 72 nm for Pt-23 (λem = 842 nm) compared to the parent tetrabenzoporphyrin Pt-12 (λem = 770 nm) could be obtained, resulting from a stabilization of the LUMO energy level of the porphyrin ring. Conversely, a bathochromic shift of only 60 nm was observed for Pt-24 (λem = 830 nm), which comprises two nitrogen atoms; this was assigned to a localization of the triplet state on only the half-moiety of the porphyrin cycle comprising one nitrogen atom and one meso-carbon atom. Therefore, it can be concluded that the introduction of a second nitrogen atom has a detrimental effect on the emission wavelength.
When tested in a standard device structure (ITO/PEDOT:PSS/NPD (30 nm)/TAPC (10 nm)/Alq3:4% dopant (25 nm)/BCP (40 nm)/LiF/Al), EQEs of 2.8 and 1.5% were obtained for Pt-23 and Pt-24, respectively. As an interesting feature, the full width at half maximum (FWHM) was narrow (27 nm for Pt-23, in contrast to 40 nm for the reference Pt-12), ensuring that the emission occurs only in the NIR. Iridium Complexes Iridium complexes have long been studied for the design of visible-light electroluminescent devices, and cationic, anionic, or neutral complexes have been examined for this purpose [70-72]. Only recently have iridium complexes been explored to elaborate NIR OLEDs.
In contrast to platinum complexes, which possess a square planar structure and long-lived excited states favorable to T-T annihilation and thus to efficiency roll-off at high current density, iridium complexes differ by their octahedral geometries and their shorter excited state lifetimes. Iridium is also a cheaper metal than platinum, so d6 iridium complexes have been identified as a viable alternative to platinum complexes. Here again, capitalizing on the strategies developed for platinum complexes, the efficient method to induce a significant bathochromic shift of the emission, namely elongating the π-conjugation of the cyclometalated ligands and introducing electron-rich heteroaromatic rings, was applied [73]. To illustrate this, the replacement of a 2-phenylpyridine by a 2-naphthylisoquinoline ligand could shift the emission spectrum of a tris(cyclometalated)iridium complex by more than 100 nm [74,75]. Alternatively, a destabilization of the energy levels can be achieved by use of an ancillary ligand, but only a slight shift of the emission (10-15 nm) can be obtained with this strategy [76-78]. The combination of the two approaches, however, proved effective for developing NIR emitters based on iridium. This strategy was notably applied for the design of a family of [Ir(iqbt)2L] complexes, where an electron-rich cyclometalated ligand, iqbt (1-(benzo[b]thiophen-2-yl)isoquinoline), was combined with three different ancillary ligands, namely 2,2,6,6-tetramethyl-3,5-heptanedione (Hdpm) (furnishing Ir-1), 2-thienoyltrifluoroacetone (Htta) (furnishing Ir-2), and 1,3-di(thiophen-2-yl)propane-1,3-dione (Hdtdk) (furnishing Ir-3) (see Figure 3) [79]. Precisely, the last two ancillary ligands were selected for the presence of electron-rich thiophene units.
In solution, Ir-1–Ir-3 displayed an emission at 710, 704, and 707 nm, respectively, consistent with the electronic enrichment of the ancillary ligand. From these results, the weak influence of the chemical modification of the ancillary ligand and of the introduction of thiophene units can also be concluded, the bathochromic shift of the emission being only 3 nm between complexes Ir-2 and Ir-3. Conversely, if the photophysical properties of Ir-1 and Ir-3 were almost identical (PLQY = 0.16 and 0.14; excited state lifetimes = 1.40 µs and 1.44 µs for complexes Ir-1 and Ir-3, respectively), a significant decrease was observed for complex Ir-2 (0.07 and 0.72 µs). Examination of the non-radiative decay rate showed this constant to be two times higher than that determined for complexes Ir-1 and Ir-3, whereas similar excited state lifetimes could be measured for all complexes at 77 K. Therefore, it was concluded that, specifically for complex Ir-2, a non-radiative deexcitation pathway was thermally favored at room temperature. The EL performances of complexes Ir-1–Ir-3 in solution-processed devices followed the trend observed for the photophysical properties, complexes Ir-1 and Ir-3 furnishing the highest EQEs (3.07 and 2.44%, respectively) whereas the performance of complex Ir-2 was clearly behind (1.28%). A NIR emission was detected for all complexes, the emission wavelength ranging from 714 nm for complexes Ir-1 and Ir-3 to 709 nm for complex Ir-2 (see Table 2). As a positive point, all devices showed a negligible efficiency roll-off, lower than 10% between 0 and 1 W·sr⁻¹·m⁻². There are numerous examples in the literature of heteroleptic iridium complexes with cyclometalated ligands of extended poly-aromaticity used to produce NIR-emitting materials. For instance, the introduction of pyrene units [31] or anthracene units [80] into a cyclometalated ligand can be cited as examples.
However, not all of these NIR emitters have been designed for OLED applications, and some of these structures were prepared for biological applications [80]. Table 1. Summary of electroluminescent properties of organic light-emitting diodes (OLEDs) fabricated with Pt-complexes. Returning to Ir-4, a NIR EL emission at 720 nm and an EQE of 0.27% could be obtained with this complex when tested as a triplet emitter for solution-processed OLEDs (see Table 2). A two-fold enhancement of the EQE could even be obtained by introducing a hole-transporting triphenylamine (Ir-5) at the peripheral side of the pyrene-based cyclometalated ligand [81]. The EQE could be improved to 0.56% while doping the emissive layer at 4 wt % with Ir-5. A NIR emission extending from 697 nm (main peak) to 764 nm (shoulder) could also be determined, mirroring the PL spectrum. The enhancement of the EL performance can be assigned not only to the presence of the hole-transport unit on the complex, facilitating charge transport, but also to the introduction of bulky substituents intended to drastically reduce aggregation in the solid state. Finally, the replacement of the acac ligand of Ir-5 by a picolinate ligand (pic) in Ir-6 did not significantly alter the EL spectrum (main peak at 698 nm with a shoulder at 762 nm), and a higher EQE could be obtained, peaking at 1.29% for vacuum-processed OLEDs [82]. Returning to complexes comprising an acac ligand, the heteroleptic complex Ir-7, bearing the cyclometalated ligand 2-methyl-3-phenylbenzo[g]quinoxaline (mpbqx-g), could emit at 777 nm with a shoulder at 850 nm [83]. An EQE of 2.2% was obtained while doping the emissive layer at 20 wt %. A low efficiency roll-off was also evidenced, resulting from a relatively short phosphorescence lifetime (0.28 µs).
Based on the extended π-conjugated benzo[g]phthalazine ligand, which is similar in structure to mpbqx-g, the homoleptic complex fabricated with this ligand, Ir-8, could exhibit a peak emission at 760 nm with an EQE of 4.5% for evaporated OLEDs at a dopant concentration of 12 wt % [84]. By developing more sophisticated cyclometalated ligands, the EQE of Ir-9 could be increased up to 3.4% for an EL emission at 702 nm and devices prepared by solution processing. Notably, this complex was designed with bulky peripheral substituents so that the complex is itself "encapsulated" by its own substituents, reducing possible intermolecular interactions and T-T annihilation and addressing the efficiency roll-off issue. To overcome the problem inherent to polyaromatic structures, that is, their low solubility, alkyl chains were introduced onto the fluorene units. The authors also evidenced that light emission originates from charge trapping by the complex, resulting in a significant increase of the driving voltage upon increasing the dopant concentration. Concerning the low efficiency roll-off, the authors attributed this feature to the short excited state lifetime of the complex and the bulkiness of the peripheral groups. Finally, iridium complexes can also be synthesized in cationic form, and a few examples of NIR cationic complexes have been reported in the literature. As a drawback, cationic iridium complexes cannot be sublimed, so devices based on these emitters must be elaborated by solution processing. As the first examples of cationic complexes, Ir-10 and Ir-11 could produce a true NIR emission at 715/788 and 791 nm with EQEs of 0.50 and 0.34%, respectively [85]. In these structures, the benzo[g]phthalazine ligand could induce a much stronger Ir-N bond than the benzo[g]quinoline ligands, providing emitters with higher thermal stability.
The insensitivity of the OLEDs to current density was also demonstrated, addressing the efficiency roll-off issue. Finally, Ir-12 is another cationic complex of interest [86]. Here again, the use of 2-methyl-3-phenylbenzo[g]quinoxaline (mpbqx) as the ancillary ligand enabled a true NIR emission (753 nm) together with an acceptable EQE (0.30%). Concerning cationic iridium complexes, several strategies have been developed over the years to red-shift their emissions and investigate their incorporation into light-emitting electrochemical cells (LECs). As a specificity, LECs differ from OLEDs by the presence of mobile ions within the emissive layer, so that a delay occurs between the application of a driving voltage and light emission [87]. Unlike OLEDs, which are characterized by sweeping the driving voltage between zero and a maximum value defined by the operator to determine their current-voltage-luminance (I-V-L) characteristics, LECs require, prior to light emission, a step consisting of doping both interfaces to facilitate charge injection. Doping of the interfaces can be obtained by applying a constant voltage, enabling ion-pair separation and the migration of ions to both interfaces, reducing the energy barriers to inject electrons and holes. Consequently, a delay occurs between turn-on and light emission due to the time required to form the p-n junction. While coming to the light-emitting materials, and considering that for iridium complexes the HOMO energy level is centered on the cyclometalated ligands and the metal center, several studies were devoted to destabilizing the HOMO energy level by means of electron-releasing groups, such as methoxy groups (Ir-13) [88], electron-rich groups, such as thiophene (Ir-14-Ir-17) [89], or extended polyaromatic groups, such as benzo[g]quinoline (Ir-12) (see Figures 3 and 4).
As a specificity, by applying a driving voltage of 4 V to LECs containing Ir-13, the maximum luminance was achieved after operating the LECs for one hour (18 cd/m²), and a half-life of two hours was also determined for these devices. An extremely low EQE of 0.05% was obtained. Interestingly, the LECs emit at 650 nm, with a broad emission band extending from 550 to 850 nm. Similar behavior was observed with Ir-14-Ir-17, for which a maximum emission was detected at ca. 600 nm for all complexes. However, the emission was also broad, the electroluminescence (EL) peaks extending from 550 to 800 nm. Contrarily to Ir-13, for which a short device lifetime was determined, half-lives of 101 and 9.7 h were obtained with Ir-14 and Ir-16, respectively, both possessing a 6-phenyl-2,2'-bipyridine ligand. This ligand is notably extensively used to improve the chemical stability of iridium complexes by generating π-π interactions between the cyclometalated ligands and the ancillary ligand [90]. Another strategy commonly used to decrease the HOMO-LUMO gap consists of stabilizing the LUMO energy level, which is achievable upon extending the π-conjugation of the ancillary ligand. In this context, OLEDs could even be prepared with Ir-18 and Ir-19, which proved to be sublimable cationic complexes [91]. However, the limitation of this second strategy is obvious, since an emission at 608 nm was found for the two complexes, the emission peak extending from 500 to 800 nm. 2,2'-Bithiazoles and 2,2'-bibenzo[d]thiazoles, which belong to a new family of ancillary ligands, proved to be a more efficient lever to tune the LUMO energy level of iridium complexes [92]. By extending the aromaticity of the ancillary ligand in Ir-21 relative to that of Ir-20, the EL peak could be shifted from 661 nm for Ir-20 to 705 nm for Ir-21. However, for the two complexes, the EQEs obtained for OLEDs remained low, peaking at 0.13 and 0.33% for Ir-20 and Ir-21, respectively (see Table 3).
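Since the strategies above all amount to narrowing the emissive gap, it can be useful to translate the reported EL peak shifts into energies via E = hc/λ. A minimal sketch, where the constant and helper names are ours and the wavelengths are those quoted for Ir-20 and Ir-21:

```python
# Back-of-the-envelope conversion between emission wavelength and photon
# energy, E = hc/lambda, with hc ≈ 1239.84 eV·nm.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy (eV) of an emission peak at the given wavelength (nm)."""
    return HC_EV_NM / wavelength_nm

# The Ir-20 -> Ir-21 red-shift (661 nm -> 705 nm) corresponds to a
# narrowing of the emissive gap by roughly 0.12 eV.
delta_ev = photon_energy_ev(661) - photon_energy_ev(705)
print(round(delta_ev, 2))  # 0.12
```

The same conversion shows why pushing emission beyond 800 nm requires gaps below about 1.55 eV, where non-radiative decay increasingly competes with emission.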
Recently, a breakthrough has been achieved by combining the extension of aromaticity of both the ancillary and the cyclometalated ligands [93]. To evidence the benefits of this strategy, six complexes, Ir-22-Ir-27, were synthesized. Almost similar photoluminescence properties were found for the six complexes, varying between 827 nm for Ir-26 and 852 nm for Ir-22. A near-infrared emission detected beyond 800 nm could be determined for all complexes, irrespective of the substitution pattern or the ancillary ligands. However, the most red-shifted emissions were found for complexes comprising 2-(quinolin-2-yl)quinazoline as the ancillary ligand (849 and 846 nm for Ir-24 and Ir-25, respectively) or 2,2'-biquinoline (852 and 840 nm for Ir-22 and Ir-23, respectively). Among all synthesized complexes, only Ir-24 and Ir-27 were tested as solid-state emitters for LECs. In a conventional device stacking, LECs fabricated with Ir-24 could emit at 882 nm, whereas the emission of Ir-27-based LECs was blue-shifted compared to that of Ir-24-based devices, peaking at 790 nm. Not only did the electron-to-photon conversion remain low with these complexes, but the device lifetime was also extremely short, the overall lifetime of the LECs being approximately 2 min before a complete and irreversible degradation of the emitters was evidenced. Ruthenium Complexes Ruthenium complexes have also been extensively studied for the design of LECs, as the high molecular weight of these complexes is a major impediment for the design of OLEDs by vacuum processes.
Historically, ruthenium complexes have been the first family of triplet emitters to be tested as light-emitting materials for solid-state devices, but their relatively long excited-state lifetimes, prone to numerous quenching processes (triplet-triplet (T-T) annihilation, triplet-polaron annihilation), and their weak color tunability have rapidly discarded these complexes in favor of iridium complexes [94]. The first examples of Ru complexes exhibiting a near-infrared emission were reported in 2008, and both mononuclear and dinuclear complexes were investigated in this study [95]. Seven complexes, Ru-1-Ru-7, were designed, varying by the nature of the ligands (see Figure 5). While examining their photoluminescence properties, a red-shift of the PL emission from 650 nm for Ru-1 to 1040 nm for Ru-7 could be obtained. When tested in LECs with a standard device configuration of ITO/Ru-1-Ru-7 (100 nm)/Au, a contribution in the near-infrared region could be found for all emitters, the EL emission peaking from 630 nm for Ru-1 to 1040 nm for Ru-7. Good accordance between the EL and PL spectra could be found for all complexes.
Although Ru-7 gave the LECs with the most red-shifted emission, neither the maximum luminance nor the external quantum efficiency could be determined for this complex due to the low light intensity. Interestingly, all complexes comprising 2-(2-pyridyl)-benzimidazole as the ligand (i.e., Ru-4, Ru-6) furnished devices that could only be driven at higher voltage compared to that measured with the other complexes. A turn-on time varying from a few seconds to a hundred seconds could be determined for all complexes, depending on the applied voltage. The fastest response time was obtained for Ru-3, which exists in a mixed-valence state (Ru 2+ /Ru 3+ ) during the doping step, facilitating charge transport. As previously mentioned, the device stacking is an important parameter influencing the overall performance. A striking demonstration was done with Ru-1, revisited in the context of a polymer-based LEC [96]. In this work, poly(vinyl) alcohol was used as the host material, and a maximum luminance of 6.89 cd/m² could be obtained while maintaining the EL emission at 620 nm and introducing a reduced graphene oxide layer between the anode and the emissive layer. Parallel to the improvement of the electron-to-photon conversion, a substantial improvement of the device stability was obtained, enhanced from a few minutes in the former study to 37 min in this work by using the following device structure: ITO/reduced graphene oxide (rGO)/Ru-1/Ag. While the rGO layer was beneficial for device stability, the performance could be even further improved by removing this layer, enabling the LECs to reach a peak luminance of 14.42 cd/m². In 2016, an unusual ligand, namely 2-(5-(pyridin-2-yl)-2H-tetrazol-2-yl) acetic acid, was used as the key ligand for the design of a series of six complexes, Ru-8-Ru-13 (see Figure 6) [97].
Compared to the former series Ru-1-Ru-7 comprising 2-(2-pyridyl)benzimidazole or 2,3-bis(2-pyridyl)-benzoquinoxaline, the decrease of the HOMO-LUMO gap was less efficient, since EL emissions ranging from 568 nm for Ru-13 to 612 nm for Ru-8 were determined for LECs comprising these emitters. Noticeably, the EL emission was broad, so that a contribution in the NIR region could be found for all complexes. Considering that numerous combinations of ligands were used in this study, several conclusions could be established.
Thus, the 2-pyridine (1H-tetrazol-5-yl) ligand in Ru-13 greatly contributed to blue-shifting the EL emission (568 nm) compared to that of Ru-12 comprising 2-(5-(pyridin-2-yl)-2H-tetrazol-2-yl)acetic acid (600 nm). Similarly, the choice of the ancillary ligand also proved to be crucial. A comparison between Ru-11 and Ru-12, differing by a phenanthroline or a bipyridine ancillary ligand, evidenced a shift of the maximum EL emission of 25 nm. To improve the device stability, a four-layer LEC structure was used, with the following device stacking: ITO/PEDOT-PSS/PVK/Ru complex/PBD/Al. Notably, the emissive layer was separated from the electrodes by the introduction of a hole-injection layer (PEDOT:PSS) and a hole-transport layer (poly(N-vinyl)carbazole (PVK)) at the anode side and by an electron-transport layer (2-(4-tert-butylphenyl)-5-(4-biphenylyl)-1,3,4-oxadiazole, PBD) at the cathode interface, to prevent electrons and holes from drifting to the interfaces and initiating quenching processes. PVK is notably extensively used for the design of solution-processed devices due to its ability to drastically reduce the surface roughness of the indium-tin-oxide (ITO) anode thanks to its remarkable film-forming ability [98][99][100]. The influence of the counter-anion on the device stability was also examined. Concerning this point, the best stability was found with all emitters containing the tetrafluoroborate anion. Conversely, the least stable devices were fabricated with emitters comprising thiocyanate as the anion (Ru-8, Ru-10), the latter being converted to cyanide anion by sulfur elimination during device operation [101,102]. For Ru-9 and Ru-11-Ru-13, device stabilities higher than 20 h could be found, demonstrating that 2-(5-(pyridin-2-yl)-2H-tetrazol-2-yl) acetic acid enables the elaboration of remarkably stable complexes, despite the presence of the acetic acid group.
Concerning the device stability, remarkable results were obtained with two heteroleptic ruthenium bis-chelate complexes comprising substituted tridentate 2-phenyl-4,6-dipyridin-2-yl-1,3,5-triazine ligands [103]. The choice of this ligand was dictated by a comparison established with the well-known terpyridine ligand extensively used for the design of ruthenium complexes. Notably, numerous works on 2-phenyl-4,6-dipyridin-2-yl-1,3,5-triazine ligands revealed that the ruthenium complexes fabricated with this ligand exhibit higher photoluminescence quantum yields and elongated excited-state lifetimes compared to their terpyridine-based analogs [104][105][106][107]. To obtain luminescence at room temperature, two complexes were designed, that is, Ru-14 and Ru-15, varying by the presence of an electron-withdrawing ester group. The photoluminescence of the two complexes varies only slightly, originating from the 3 MLCT state and peaking at 723 and 717 nm for Ru-14 and Ru-15, respectively. To obtain emissive layers with sufficiently smooth properties, the two complexes were mixed with 20% poly(methyl methacrylate) (PMMA). When tested in a conventional device structure, ITO/PEDOT:PSS/Ru-14 or Ru-15:PMMA/Al, the presence of the saturated polymer within the emissive layer resulted in devices with low light output, around 0.6 µW, requiring several hours to reach the maximum luminance (9 and 37 h for Ru-14 and Ru-15, respectively), indicative of a reduced ion mobility in the PMMA layer in both cases. The most stable devices were obtained with Ru-15, the time to reach half of the initial luminance being 360 h, compared to 120 h for Ru-14-based LECs. For the two complexes, the maximum EQEs remained low, peaking at 0.005%. The performance of LECs can also be improved by providing more balanced charge transport within the emissive layer.
This parameter was examined with a series of three complexes, Ru-16-Ru-18, in which an ambipolar charge-transport ability was provided by attaching a phenanthroimidazole ligand [108]. As an interesting point, and in addition to improving the charge transport, the use of an ancillary ligand with extended aromaticity both contributes to reducing the HOMO-LUMO gap and red-shifts the PL emission. While examining the PL emissions in solution and in thin films, a major red-shift of the maximum emission was observed for Ru-17, shifting from 630 nm in solution to 700 nm in thin films, indicating a severe aggregation in the solid state. Conversely, a more moderate shift was observed for Ru-16 and Ru-18, shifting from 609 and 594 nm in solution to 628 and 631 nm in the solid state, respectively. LECs fabricated with Ru-16-Ru-18 were prepared with an unusual cathode, namely a Ga:In alloy, which avoids depositing this electrode at high temperature. Examination of the device lifetimes revealed the three complexes to give LECs of comparable stability, on the order of 1000 min, corresponding to the time required to reach half of the initial luminance. The last strategy developed to induce a NIR emission is the use of polynuclear complexes. This strategy is quite unusual considering the difficulties of synthesizing such complexes and the solubility problems encountered with these polymetallic structures.
This work is notably justified by the fact that LECs based on complexes comprising phenanthroimidazole ligands often lack the device stability acceptable for future applications, which could be improved by using dinuclear complexes [109,110]. However, examination of the LEC characteristics revealed that this challenge could not be overcome with Ru-19 and Ru-20, the times for the LECs to reach half of the initial luminance being only 539 and 1104 s for Ru-19- and Ru-20-based devices, respectively (see Figure 7 and Table 4) [111]. From this work, it can therefore be concluded that the design of polynuclear Ru complexes, demanding from the synthetic point of view, is not adapted to the design of long-living LECs. Lanthanide Complexes Rapidly after the discovery of the electroluminescence process with Alq3, numerous works have been devoted to examining the EL properties of complexes comprising rare-earth metals. Due to the presence of numerous 4f electrons and energetically close levels, these complexes were immediately identified as appealing candidates for optical transitions in the near-infrared region [112,113]. Indeed, lanthanide complexes are characterized by sharp EL emission bands due to the 4f electrons of the cationic center. Resulting from the important size of the metal center, complexes of rare-earth metals also exhibit relatively flexible coordination geometries, enabling their optoelectronic properties to be largely tuned.
By combining various β-diketonates and ancillary ligands, the geometries of these flexible complexes could be optimized so that the PLQYs of lanthanide complexes are greatly improved [114][115][116][117]. Among ligands, β-diketones are the most versatile ones, owing to their facile substitution, their strong coordination ability, and their π-π* transitions located in the UV region.
When combined with O^N ancillary ligands, the coordination sphere around the lanthanide center is complete, so that there is no space for high-energy O-H or C-H oscillations of solvent molecules [114][115][116][117][118][119][120][121][122][123]. Indeed, lanthanide complexes are highly sensitive to their environment. Parallel to this, asymmetric coordination geometries around the metal center are known to give strong emission efficiencies. Among all possible rare-earth metal centers, the optical transition of the trivalent erbium ion Er 3+ , that is, 4 I 13/2 → 4 I 15/2 , occurs at 1.5 µm, which corresponds to the standard telecommunications window, rendering this metal of crucial interest for both civil and military applications. For instance, erbium tris(8-hydroxyquinolate) Er-1 was used for the design of the early OLEDs emitting at 1.54 µm [124,125]. No quantification of the light-emission properties was provided, and simple device architectures were used, as exemplified by the following structure, among others: ITO/TPD (50 nm)/Er-1 (60 nm)/Al [126]. In 2000, Sun and coworkers mixed another trivalent Er complex, Er-2, in PVK as the host polymer, and OLEDs emitting at 1.54 µm were also obtained [127]. However, the OLEDs remained single-layered devices, limiting the EL efficiencies. Rapidly, the device architecture was improved, and the first attempt to optimize the device stacking was carried out in 2010 by Wei et al. with Er-3 [128]. Top-emitting devices were fabricated, since the OLEDs were elaborated on Si wafers. The geometry of Er complexes can greatly affect the emissive properties, and over the years, a great deal of effort has been devoted to achieving the most favorable geometry. This is notably the case for erbium (III) β-diketonate complexes [113][114][115][116][117][118][119][120][121][122][123][124].
Fluorinated β-diketonate complexes with N^N-donor ancillary ligands have notably been developed for their remarkable solubility, allowing the design of solution-processed OLEDs with Er-4 [129] or Er-5 (see Figure 8) [130]. Fluorination of β-diketonate ligands is also an effective way to improve the solubility of complexes without significantly affecting the triplet energy level of the β-diketonates used as sensitizing ligands [131]. Solution-processed OLEDs fabricated with Er-5 showed the typical energy transfer from the organic ligand to the central Er(III) ion, with an emission detected at 1535 nm corresponding to a 4 I 13/2 → 4 I 15/2 transition. Interestingly, the devices exhibited a low turn-on voltage of 7 V [129] or 8 V [132], depending on the study.
These values are comparable to those reported for other octacoordinated Er complexes, such as Er-6 [133] or Er-4 [130]. However, in the case of Er-7, the turn-on voltage could be lowered to 4 V, but a dramatic decrease of the maximum brightness was also demonstrated, the latter being three times lower than that of Er-6 (see Table 5). Finally, by using a neutral triphenylphosphine oxide as the ancillary ligand, a dramatic impact on both the turn-on voltage (14.0 V) and the maximum irradiance (0.069 mW/cm²) could be evidenced with Er-8 as the emitter [134]. The choice of the metal center introduced in lanthanide complexes is of crucial importance, as it governs the emission wavelength of the OLEDs. The second most widely studied metal for the design of NIR emitters is neodymium.
In this case, the emission of the OLEDs is centered at 1065 nm. The first observation of electroluminescence from a neodymium complex was reported in 1999 by Kawamura et al. [135]. Table 5. Summary of electroluminescent properties of organic light-emitting diodes (OLEDs) fabricated with Er-complexes. A triple-layered device was then used, comprising a hole- and an electron-transport layer, thus favoring the charge recombination within the emissive layer. Capitalizing on the results obtained by Tang and VanSlyke, Alq3 was used as the electron-transport layer. The ancillary ligand of Nd-1 was 4,7-diphenyl-1,10-phenanthroline (bath), selected for its excellent charge-transport ability, whereas the sensitization of the neodymium cation was ensured by dibenzoylmethane ligands (see Figure 9). Upon application of a driving voltage of 19 V, a clear electroluminescence of the complex in the NIR region was detected, producing three sharp emission bands at 890, 1070, and 1350 nm corresponding to the 4 F 3/2 → 4 I 9/2 , 4 F 3/2 → 4 I 11/2 , and 4 F 3/2 → 4 I 13/2 transitions, respectively. As a drawback, a significant peak corresponding to the green EL of Alq3 could be detected at high driving voltage, so that the intensity of the visible emission could become comparable to that detected in the NIR region. By replacing Alq3 with a hole-blocking layer (BCP), a pure emission of Nd-1 could be obtained by confining holes within the emissive layer [136]. The chirality of complexes can alter the emission wavelength of OLEDs, and the influence of the isomers of a same complex on the EL characteristics was demonstrated in an early work [137]. In this work, the authors could demonstrate the thermal isomerization of one isomer to another, resulting in the enrichment of the emissive layer with one isomer.
However, if the comparison of the PL spectra of the powder and thin films could evidence the phenomenon, the authors could not determine which isomer was thermally rearranged. To produce a NIR emission, the sensitization of the Nd 3+ cation is crucial, and some authors selected 1H-phenalen-1-one as the ligand due to its common use in biology as a singlet-oxygen sensitizer. The possibility to sensitize the Nd 3+ cation was demonstrated, and the main peak at 1065 nm could be detected for OLEDs fabricated with Nd-3 [138]. An EQE of 0.007% could be determined for these polymer LEDs. These performances could be improved by sensitizing the cation with a tridentate ligand, that is, 6-(pyridin-2-yl)-1,5-naphthyridin-4-ol, and an EQE of 0.019% could be reached (Nd-4) [139]. Finally, the best EQE was obtained in 2010, by co-depositing an iridium complex with the Nd 3+ complex Nd-5 (see Table 6) [140]. The benefits of a triplet sensitizer were demonstrated, since the maximum EQE could reach 0.3%. However, the pertinence of the strategy can still be discussed, with regard to the high cost of the iridium complexes used as the sensitizer. The asymmetry of the structures of lanthanide complexes is well-reported to favor radiative deexcitation pathways more than symmetric complexes do [114,119].
While the former lanthanide complexes (erbium, neodymium) were developed for telecommunication and laser applications, the emission of ytterbium complexes is centered around 1000 nm, and these complexes thus found applications in photodynamic therapy and/or the detection of tumors [141,142]. As a specificity, ytterbium complexes exhibit slightly higher PLQYs than the other lanthanides and longer-living excited-state lifetimes, in the microsecond range. Yb 3+ also possesses 13 electrons in its 4f orbitals, and a pure emission around 980 nm can easily be obtained, resulting from the transition from the excited state 2 F 5/2 to the ground state 2 F 7/2 . Based on the observation that asymmetric complexes are more emissive than symmetric ones, ytterbium complexes displaying an asymmetric structure were designed as emitters for OLEDs. Proof of concept that NIR OLEDs could be fabricated with a ytterbium complex was given in 2000 by Kawamura et al. [143]. In a basic device structure (hole-transport layer/emissive layer/electron-transport layer), a pure emission of Yb 3+ could be electrogenerated. Thus, Yb-1, which comprises a triphenylphosphine oxide as the ancillary ligand and thenoyltrifluoroacetylacetone as the monoanionic ligand, could furnish a maximum irradiance of 19.29 µW/cm² at 15 V [144]. This value is significantly higher than those reported for Yb-2 (1.47 µW/cm² at 17.8 V) [145] or Yb-3 (0.6 µW/cm² at 15.7 V) [146] (see Figure 10). It has to be noticed that for the last complex, that is, Yb-3, the emissive layer was made of the metal complex blended with the insulating polystyrene polymer, which was not favorable for charge transport. Later, the same author blended Yb-3 with a poly(paraphenylene) polymer, improving the charge transport and reaching a maximum irradiance of 10 µW/cm² at 9 V [30,147,148]. While coming back to Yb-1, the EL emission detected at 980 nm corresponds to the 2 F 5/2 → 2 F 7/2 transition.
However, two other broad emissions could be detected at 410 and 600 nm, assigned to electroplex formation at the interface between organic layers. Thenoyltrifluoroacetylacetone is a promising ligand for the design of highly emissive Yb 3+ complexes, and another asymmetric seven-coordinate complex with a square-antiprism (C 4v ) geometry, that is, Yb-4, can be cited as an efficient NIR complex [149]. In this work, a series of three ancillary ligands were examined, namely diphenyl sulphoxide, dibenzoyl sulphoxide, and benzoguanamine. Diphenyl sulphoxide was found to complete the coordination sphere around the Yb 3+ cation the most efficiently. An irradiance of 48 µW/cm² could be obtained with Yb-4, whereas this value was reduced to 12.13 µW/cm² and 9.60 µW/cm² for Yb-5 and Yb-6, respectively (see Figure 10 and Table 7). The order of the maximum irradiances follows that of the PLQYs, the EL efficiency being proportional to the PLQYs. Here again, two emissions at 410 and 600 nm could be detected, once again assigned to the formation of an electroplex at the organic interface. Charge recombination and energy transfer on the organic ligand are well-known and were notably observed for a complex such as Yb-7 [150].
To favor charge recombination within the emissive layer, much effort has been devoted to the fabrication of OLEDs comprising double emissive layers. This is the case with Yb-2 and Yb-8, which were both introduced within the emissive layer [145]. Recombination of holes and electrons within the emissive layer was facilitated by the hole-transport ability of Yb-8 and the electron-transport ability of Yb-2. Consequently, electrons and holes could recombine at the Yb-8/Yb-2 interface, and a pure emission of Yb³⁺ could be obtained. A similar strategy was also developed for Yb-9 alone, with a Yb-9:TPD/Yb-9 bilayer [151]. In this early work published in 2001, no quantification of the maximum irradiance was done. However, the comparison carried out with devices comprising a single emissive layer evidenced a lower NIR emission intensity at comparable driving voltage. The simplification of device fabrication constitutes a great challenge for future applications, and recently a group examined the possibility of designing host-free NIR OLEDs [152]. Considering that in this specific configuration the hole transportation is not ensured anymore by the host but by the light-emitting material, the emitter should exhibit good charge-carrier ability. That was notably the case with Yb-10. When tested as an emitter in the following device stacking: ITO/PEDOT:PSS/Yb-10 (30 nm)/TPBi (10 nm)/Al, a low turn-on voltage of 4.0 V was determined, and an EL emission at 978 nm with a low band at 530 nm was found. By replacing 2,2′,2″-(1,3,5-benzinetriyl)-tris(1-phenyl-1-H-benzimidazole) (TPBi), acting as an electron-transport and hole-blocking material, by 3-(biphenyl-4-yl)-5-(4-tert-butylphenyl)-4-phenyl-4H-1,2,4-triazole (TAZ), the visible EL emission could be suppressed, and an EQE of 0.14% at 14 V was determined. A further improvement was obtained by replacing Al by Ca/Al, which exhibits a lower work function, facilitating electron injection.
A maximum EQE of 0.21% at 12 V was thus obtained. The thermal stability of emitters during vacuum deposition is another major concern and, in this field, a group examined the possibility of directly generating the metal complex by co-depositing the metal precursor and the ligand [153]. Using this strategy, complexes with high molecular weight can still be used for the fabrication of OLEDs. More precisely, the complex Yb-11 was synthesized while using the ligand bis[2-(diphenylphosphino)phenyl]ether oxide (DPEPO) also as the host for the thermally generated complex, the latter being frequently used as a host material [154,155]. A maximum EQE of 0.15% could be realized at 1.0 mA/cm². To end this part devoted to lanthanide complexes, other metals were only rarely investigated for the design of NIR OLEDs. In this field, only a few holmium complexes were tested, even though these complexes exhibit three main peaks at 980, 1200, and 1500 nm, the last peak corresponding to the ⁵F₅ → ⁵I₆ transition of the Ho³⁺ ion, which is favorable to potential applications in optical telecommunications. The EL emission of OLEDs fabricated with Ho-1, designed with standard ligands used for other lanthanide complexes, with the device structure ITO/TPD (50 nm)/Ho-1 (50 nm)/Mg:Ag (10:1), has been proved to be adversely affected by the emission of an exciplex at 660 nm, resulting from charge recombination at the TPD/EML interface (see Figure 11) [156]. As a result, a strong emission in the visible region competing with the NIR EL emission was evidenced. Finally, thulium complexes were only scarcely tested in NIR OLEDs, and these complexes also proved to be poor candidates for NIR emission.
Indeed, a comparison of the EL characteristics of Tm-1- and Er-9-based OLEDs evidenced the erbium complex to exhibit a much stronger emission, irrespective of the device configuration (see Figure 11) [157].

Osmium Complexes

Over the years, several strategies have been developed to induce an emission centered in the NIR region. Depending on the geometry of the complex, heteroleptic complexes with cyclometalated ligands of extended conjugation were developed with iridium complexes. Conversely, the planar geometry of platinum complexes is favorable to intermolecular π-π stacking interactions in the solid state, red-shifting the emission. Concerning osmium complexes, it is the first strategy that was applied, the octahedral geometry of osmium complexes impeding π-π stacking interactions in the solid state. Therefore, the development of highly conjugated isoquinolinyl triazolate chelates was studied as a tool to red-shift the emission of osmium complexes, and EL emissions ranging from 718 to 814 nm could be obtained with Os-1 and Os-2 (see Figure 12) [158].
Among the two complexes Os-1 and Os-2 tested in devices, the OLEDs exhibiting the most red-shifted emission were obtained with complex Os-1 (814 nm), whereas an emission at 718 nm was detected for complex Os-2 (see Figure 12). In fact, due to the steric hindrance generated by the chelating ligands, a perpendicular arrangement of the ligand occurs, destabilizing the LUMO energy level of the isoquinolyl ligand and blue-shifting the emission.
Upon optimization of the structure of the devices and the replacement of the TPBi layer by a TAZ layer, maximum EQEs of 1.5% and 2.7% were obtained for Os-1 and Os-2, respectively. It has to be noticed that the significant enhancement of the EQE for Os-2 results from its drastically blue-shifted emission relative to that of Os-1. In 2005, an unexpected strategy was developed to prepare LECs, consisting of a triplet emitter (Os-3) dispersed in an ionic ruthenium complex [159]. It has to be noticed that this approach has also been later applied to the design of OLEDs, as exemplified with the well-known Ir(ppy)₃ hosted by various iridium complexes of wider bandgaps [160]. In the present case, Ru(bpy)₃²⁺ was selected for its emission in the orange region, and therefore its energy levels were adapted to efficiently host Os-3. While examining the PL emission of the Os-3/Ru(bpy)₃²⁺ thin films, a variation of the emission maximum with the dopant concentration was determined: emission at 675 nm was found at 1% concentration and at 695 nm at 5% concentration.
The emission of the doped films was determined to be different from that of a pristine film of Os-3 (710 nm), indicating solvation effects already reported in the literature for doped films [161,162]. When tested in devices, a clear shift of the EL emission with the driving voltage was evidenced. Thus, while the LECs emit at ca. 710 nm when driven at 2.5 V, an emission blue-shifted to 610 nm was obtained upon operating the LECs at 7 V, demonstrating a saturation effect resulting in the emission of the host material [163]. When driven at 3 V, a maximum EQE of 0.75% and a maximum luminance of 220 cd/m² were obtained. Examination of the device stability over time revealed the LECs to retain 90% of the maximum EQE after four hours of operation (see Table 8).

Phthalocyanines

Phthalocyanines are an important class of metal complexes characterized by a strong insolubility in most common organic solvents. Phthalocyanine is a fully planar macrocycle comprising 18 π-electrons. Due to the planarity of its structure, strong π-π stacking occurs in the solid state; this intermolecular interaction is difficult to disrupt and impedes the dissolution of the complex. In light of these considerations, the only way to fabricate OLEDs with phthalocyanines is thermal evaporation. As a main advantage, phthalocyanines are extremely stable, even at high temperature, so that thermal deposition was envisioned to construct OLEDs. From the photophysical point of view, a strong absorption band named the Q-band is observed around 700 nm [164]. Phthalocyanines also possess good hole-transport ability, and thus several phthalocyanines have been used as hole-transport materials for OLEDs [165]. The first report mentioning the use of a phthalocyanine as a NIR emitter was published in 2006 [166]. In this pioneering work, a copper phthalocyanine (Pc-1) doped at 12 wt % into CBP was used, and an EL emission centered at 1100 nm was observed (see Figure 13).
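For orientation, the emission wavelengths quoted throughout this review can be converted to photon energies with the standard relation E = hc/λ ≈ 1239.84 eV·nm / λ (a generic conversion, not specific to any device discussed here):

```python
# Convert a vacuum wavelength in nm to photon energy in eV, using hc ≈ 1239.84 eV·nm.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy (eV) of light with the given vacuum wavelength (nm)."""
    return HC_EV_NM / wavelength_nm

# Photon energies of the ~700 nm Q-band, the Yb 980 nm line, and the 1100 nm EL band of Pc-1:
for wl in (700, 980, 1100):
    print(f"{wl} nm -> {photon_energy_ev(wl):.3f} eV")
```

The 1100 nm NIR emission of Pc-1 thus carries roughly 1.13 eV per photon, well below the ~1.8 eV of the 700 nm Q-band.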
Examination of the EL process revealed the excitation of the phthalocyanine by direct trapping of electrons and holes. Direct charge trapping by the phthalocyanine was also demonstrated with Pc-2 [167].
To enhance the EL efficiency, the sensitization of Pc-1 by an iridium complex, Ir(piq)₂(acac), enabled a 15-fold enhancement of the EL intensity to be reached (see Figure 13) [168]. To obtain this result, Ir(piq)₂(acac) was selected as the sensitizer due to the overlap of its emission spectrum with the absorption spectrum of Pc-1, its high PLQY, and its long-lived excited state, with a lifetime of 1.29 µs. By getting a deeper insight into the EL mechanism, it was found that the excited-state lifetime of the Ir complex was shortened in the CBP:Ir(piq)₂(acac):Pc-1 blended film compared to that of the CBP:Ir(piq)₂(acac) film. It could be concluded that the primary mechanism involved in the EL process was an energy transfer from Ir(piq)₂(acac) to Pc-1. When devices were elaborated with a double emissive layer, with Ir(piq)₂(acac) and Pc-1 in two different layers, almost no improvement of the EL efficiency was detected. Therefore, it was concluded that the proximity of the sensitizer and the emitter favored an energy transfer by the Dexter mechanism. Recently, the strategy of sensitization of Pc-1 by triplet harvesting was extended to a purely organic molecule, PXZ-TRZ, exhibiting the specific property of thermally activated delayed fluorescence (TADF) [169]. Here again, an energy transfer from the triplet state of this molecule was clearly evidenced. Concerning phthalocyanines, the metal cation introduced in the macrocycle can drastically impact the emission wavelength.
Thus, chloroindium phthalocyanine Pc-3 was found to emit at 880 nm [170], palladium (Pc-4) and platinum (Pc-5) phthalocyanines at 1025 and 966 nm, respectively [171], whereas an emission around 700 nm was determined for the silicon phthalocyanines (Pc-6 and Pc-7) [172].

Conclusions-Outlook

Since the first reports in the 1990s examining the infrared emission of OLEDs, six main families of metal complexes have been reported in the literature. At present, the performances of these devices remain limited, attributable to the use of non-adapted device structures and charge-transport materials. The dramatic difference in performances found for the same emitter upon modifying the device structure reflects the difficulty of finding the right device architecture and adapted materials. Over the years, a great deal of effort has been devoted to developing solution-processed OLEDs due to the high molecular weight of these complexes, which is not adapted to the design of vacuum-processed OLEDs. Nevertheless, at present, the most performant devices are still vacuum-processed OLEDs, but the insufficient thermal stability of most of the complexes reported in this review constitutes a major impediment to elaborating highly emissive infrared devices by this process. The preparation of neutral complexes is not always possible, as exemplified with ruthenium complexes, and LECs have thus been designed with these non-sublimable complexes. A rapid survey of the results reported in this review reveals platinum complexes to have been abandoned for the design of NIR OLEDs. This is certainly attributable to the high cost and the rarity of this transition metal. Most of the references mentioned in this review are relatively old (>10 years). Conversely, the references concerning iridium complexes are more recent, and the number of NIR iridium complexes (27 mentioned in this review) attests to the interest of the community in this metal.
The interest in iridium complexes is notably sustained by the remarkable performances obtained with visible LEDs. The easier color tunability is another parameter to consider. Ruthenium has also seen a revival of interest, as numerous works on NIR devices have been reported in 2019 with this transition metal. The number of NIR ruthenium complexes reported in the literature (20 complexes) is comparable to that of iridium complexes, attesting to the interest in this metal. Clearly, lanthanides (Er, Nd, Yb, Tm) are not examined anymore for the design of NIR emitters, for obvious cost and toxicity issues. Faced with the insufficient thermal stability of complexes such as ruthenium complexes, which prevents them from being vacuum-processed, and the low EQEs obtained with iridium complexes, which seem to be more and more popular for the design of NIR emitters, the search for new structures with more adapted energy levels, higher thermal stability, and higher photoluminescence quantum yields is still actively pursued. There is still room for improvement.
Multi-state Dirac stars

In this paper, we construct multi-state Dirac stars (MSDSs) consisting of two pairs of Dirac fields. The two pairs of Dirac fields are in the ground state and the first excited state, respectively. Each pair consists of two fields with opposite spins, ensuring the spherical symmetry of the system. We discuss the solutions of the MSDSs under synchronized and nonsynchronized frequencies. By varying the mass $\tilde{\mu}_1$ of the excited-state Dirac field and the frequency $\tilde{\omega}_0$ of the ground-state Dirac field, we obtain different types of solutions, including single-branch and double-branch solutions. These two types of solutions do not smoothly transition into each other as the parameters $\tilde{\mu}_1$ and $\tilde{\omega}_0$ change continuously, but undergo a sudden transition when $\tilde{\mu}_1$ ($\tilde{\omega}_0$) is greater or less than the threshold value of $0.7694$ ($0.733$). Furthermore, we analyze the characteristics of the various MSDS solutions and the relationship between the ADM mass $M$ of the MSDSs and the synchronized and nonsynchronized frequencies. Subsequently, we calculate the binding energy $E_B$ of the MSDSs and discuss the stability of the solutions. Finally, we discuss the feasibility of simulating dark matter halos using MSDSs.

I.
INTRODUCTION

Recently, there has been rapid development in the field of gravitational wave astronomy, which has provided us with new insights into compact objects such as black holes (BHs) and neutron stars (NSs) [1][2][3]. The advancements in gravitational wave detection technology have also made it possible to search for exotic compact objects (ECOs) similar to BHs. One prominent class of ECOs is the bosonic stars, which are particle-like configurations of massive scalar fields [4][5][6][7][8][9] or vector fields [10][11][12][13][14][15] that form under their own gravitational attraction. The repulsive force balancing gravity is provided by the Heisenberg uncertainty principle. Bosonic stars provide a promising framework for studying compact objects, and certain models of bosonic stars can mimic BHs [16][17][18][19][20]. Additionally, bosonic stars are also considered candidates for dark matter [21][22][23][24][25][26]. However, particle-like configurations can also be formed by spin-1/2 fermion fields. For non-gravitational cases, attempts to construct particle-like solutions of the Dirac equation were made as early as the 1930s by Ivanenko [27]. Subsequent studies have also been conducted in this regard [28][29][30][31]. However, it was not until 1970 that exact numerical solutions for such particle-like configurations were first studied by Soler [32]. When gravitational interactions are considered, numerical calculations become more challenging. In 1999, Finster et al.
constructed exact numerical solutions of the Einstein-Dirac system, which couples spinor fields to Einstein's gravity, for the first time [33]. These particle-like configurations, formed by spin-1/2 fermions under their own gravitational attraction, are known as Dirac stars. Subsequently, research on Dirac stars has been extended to include charged [34] and gauge-field [35] additions, and the existence of Dirac star solutions has been proven [36,37]. Recently, rotating Dirac star solutions [38] and their charged counterparts [39] have been provided for the first time by Herdeiro et al. Some comparative studies between bosonic stars and Dirac stars have been conducted in [40,41]. Additionally, various interesting studies on the Einstein-Dirac system have been carried out [42][43][44][45][46][47][48][49][50][51]. In 2010, Bernal et al. constructed multi-state boson stars (MSBSs), composed of two complex scalar fields in the ground state and the first excited state, and analyzed the stability of the solutions [52]. Subsequently, the MSBSs were extended to include rotation [53] and self-interactions [54]. It is possible that the Dirac field, under its own gravitational attraction, can also form multi-state configurations. In this work, we numerically solve the Einstein-Dirac system and construct spherically symmetric multi-state Dirac stars (MSDSs), in which two coexisting states of the Dirac field are present.

The paper is organized as follows. In Sec. II, we introduce the Einstein-Dirac system, which couples four-dimensional Einstein gravity with two sets of Dirac fields. In Sec. III, we investigate the boundary conditions of the MSDSs. In Sec. IV, we present the numerical results and analyze the solutions of the MSDSs under synchronized and nonsynchronized frequencies. We also discuss the binding energy of the solutions and the problem of galactic halos. We conclude in Sec. V.

II.
THE MODEL SETUP

We consider a system composed of multiple matter fields minimally coupled to Einstein's gravity. The matter fields consist of two pairs of Dirac spinor fields, with each pair containing two spinors of opposite spin. This arrangement ensures that the system exhibits spherical symmetry. One pair is in the ground state, while the other pair is in the first excited state. For such a system, the action is given by: where R is the Ricci scalar, G is the gravitational constant, and L_0 and L_1 are the Lagrangians of the spinor fields in the ground state and first excited state, respectively, where Ψ^(k)_n are spinors with mass µ_n and n radial nodes, and the index k = 1, 2 corresponds to spinors with opposite spin. The variations of the action (1) with respect to the metric and the field functions yield the Einstein equations and the Dirac equation: where T^0_αβ and T^1_αβ are the energy-momentum tensors of the two sets of spinor fields. Furthermore, it can be seen from Eqs. (2) and (3) that the Lagrangians of the spinor fields are invariant under the global U(1) transformation Ψ_n → e^{iα} Ψ_n, where α is an arbitrary constant. As a result, the system possesses a conserved current: Integrating the timelike component of the conserved current over a spacelike hypersurface S yields the Noether charge: To construct spherically symmetric solutions, we choose the metric to be of the following form: where N(r) = 1 − 2m(r)/r. The two pairs of Dirac fields are given by [40]: where the index n also represents the number of radial nodes, and ω_n is the frequency of the Dirac field with n radial nodes. We only consider the cases n = 0, 1 in this paper. Substituting the above ansatz into the field equations (4)-(5) yields the following system of ordinary differential equations: And the Noether charges of the system are:

III.
BOUNDARY CONDITIONS

To solve the system of ordinary differential equations obtained in the previous section, it is necessary to impose appropriate boundary conditions. First, for a regular, asymptotically flat spacetime, the metric functions should satisfy the following boundary conditions: where the ADM mass M and σ_0 are unknown constants. In addition, the matter fields vanish at infinity: Expanding equations (13)-(14) near the origin, we obtain the conditions that the field functions satisfy at the origin:

IV. NUMERICAL RESULTS

In order to facilitate numerical calculations, we employ the following dimensionless quantities: where M_Pl = 1/√G is the Planck mass. For any physical quantity A, we denote the dimensionless quantity under the conditions ρ = 1/µ_0 and ρ = 1/µ_1 as à and A, respectively. To facilitate computation, we define the radial coordinate x as follows: where the radial coordinate r ∈ [0, ∞), so x ∈ [0, 1]. We utilize the finite element method to numerically solve the system of differential equations. The integration region 0 ≤ x ≤ 1 is discretized into 1000 grid points. The Newton-Raphson method is employed as our iterative approach. To ensure the accuracy of the computational results, we enforce a relative error criterion of less than 10⁻⁵.

To ensure the accuracy of our numerical calculations, it is crucial to verify the numerical precision by validating physical constraints [57,58], in addition to employing the aforementioned numerical analysis methods. In this study, we examined the equivalence between the asymptotic mass and the Komar mass of the numerical solutions, and the results consistently showed a discrepancy of less than 10⁻⁵ between these two quantities.

We denote the Dirac stars in the ground state and the first excited state as D_0 and D_1, respectively, and the multi-state Dirac stars as D_0 D_1 (or MSDSs). The representation of the gamma matrices and the choice of the tetrad in the Dirac equation are the same as in [42].

A.
Synchronized frequency

Through the analysis of the numerical calculations, we observed that the solutions of the MSDSs under a synchronized frequency depend on the ratio of the masses of the excited-state and ground-state Dirac fields, µ_1/µ_0, which is the dimensionless mass μ1 of the first excited-state Dirac field. By varying the value of the mass μ1, various MSDS solutions can be obtained. Based on the number of branches in the obtained MSDS solutions, we categorize them into single-branch and double-branch solutions. For 0.7694 ≤ μ1 < 1, the MSDS solution corresponds to a single-branch solution, whereas for 0.7573 ≤ μ1 < 0.7694, the MSDS solution corresponds to a double-branch solution. In the following discussion, we will delve into the characteristics of these two types of solutions.

Single-branch

We first discuss the more general single-branch solution in the multi-field system [42,43,[53][54][55]. The characteristic change in the radial profile of the matter fields forming the MSDSs as the synchronized frequency ω continuously varies is shown in Fig. 1. The field functions depicted in the figure were obtained under the condition of a fixed mass μ1 = 0.898. Furthermore, when the mass μ1 of the excited-state Dirac field is small, both endpoints of the orange line are located on the first branch of the black and blue dashed lines. As the mass μ1 decreases to 0.801, the intersection point of the orange line and the blue dashed line is located at the inflection point between the first and second branches of the blue dashed line. As the mass μ1 further decreases to 0.771, the intersection point of the orange line and the black dashed line is located at the inflection point between the first and second branches of the black dashed line. When μ1 decreases to 0.7694, the minimum value at which the single-branch solution can exist, the orange line and the black dashed line exhibit a "tangent" form. The two plots at the bottom of Fig.
2 intuitively illustrate the variation of the orange line endpoints. It should be noted that the horizontal axis of the lower-right plot represents ω, not ω, in order to show the changing trend of the intersection points of the orange line and the blue dashed line.

Double-Branch

In addition to the single-branch solutions described in the preceding section, the solutions of the MSDSs exhibit two branches when the synchronized frequency ω is sufficiently low. In the following, we will first discuss the variation of the matter field functions with respect to the synchronized frequency ω for the double-branch solutions. As shown in Fig. 3, we demonstrate the relationship between the radial profile of the matter fields that constitute the MSDSs and the synchronized frequency ω under the condition of a fixed mass μ1 = 0.898. For the first-branch field functions of the MSDSs displayed in the left column of the figure, as the synchronized frequency increases, the peak values of the ground-state Dirac field functions f0 and g0 also increase, while the peak values of the excited-state Dirac field functions f1 and g1 gradually decrease. For the second-branch field functions displayed in the right column, as the synchronized frequency decreases, the peak values of the ground-state field functions f0 and g0 gradually decrease, while the peak values of the excited-state field functions f1 and g1 gradually increase. It is worth noting that the ground-state Dirac field disappears at the minimum synchronized frequency of the first and second branches, while the excited-state field always exists.

Next, we analyze the characteristics of the ADM mass M of the double-branch solutions. In Fig. 4, the MSDS solutions exhibit a double-branch structure when 1/µ_0 (μ1) is less than the threshold value of 0.7694. This change in the solutions occurs abruptly, and the single-branch solutions do not gradually extend into a second branch as 1/µ_0 decreases.
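The branch scans above rest on the boundary-value solver described in Sec. IV (a uniform grid on the compactified coordinate plus Newton-Raphson iteration). A minimal, self-contained sketch of that procedure is given below; the toy equation u'' = 6u² with u(0) = 1, u(1) = 1/4 (exact solution u = 1/(1+x)²) is our stand-in for the actual Einstein-Dirac ODE system, and all names are illustrative only.

```python
# Illustrative Newton-Raphson boundary-value solver on a uniform grid,
# mimicking the iteration of Sec. IV with a toy nonlinear ODE (our assumption,
# NOT the Einstein-Dirac equations): u'' = 6 u^2, u(0) = 1, u(1) = 1/4.
import numpy as np

def solve_bvp(n=101, tol=1e-10, max_iter=50):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.linspace(1.0, 0.25, n)  # initial guess interpolating the boundary values
    for _ in range(max_iter):
        # Residual of the second-order finite-difference equations at interior points
        F = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - 6.0 * u[1:-1]**2
        if np.max(np.abs(F)) < tol:
            break
        # Tridiagonal Jacobian dF_i/du_j with respect to the interior unknowns
        m = n - 2
        i = np.arange(m)
        J = np.zeros((m, m))
        J[i, i] = -2.0 / h**2 - 12.0 * u[1:-1]
        J[i[:-1], i[:-1] + 1] = 1.0 / h**2
        J[i[1:], i[1:] - 1] = 1.0 / h**2
        u[1:-1] -= np.linalg.solve(J, F)  # Newton update; boundary values stay fixed
    return x, u

x, u = solve_bvp()
err = np.max(np.abs(u - 1.0 / (1.0 + x) ** 2))
print(f"max deviation from exact solution: {err:.2e}")
```

In the paper's setup the analogous iteration runs on 1000 grid points of the compactified coordinate x ∈ [0, 1] with a relative-error criterion of 10⁻⁵; the dense Jacobian solve here stands in for whatever sparse finite-element machinery the authors actually use.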
Extensive numerical calculations were conducted, and even when 1/µ_0 is gradually reduced in steps on the order of 10⁻⁵, the single-branch solutions fail to transition continuously to the double-branch solutions. We refer to this phenomenon as bifurcation. It is worth noting that this phenomenon has not been observed in previous studies of multi-field soliton models [42,43].

The two branches of the double-branch solution are very close to each other, but their intersections with the blue dashed line are clearly different. As 1/µ_0 decreases, the synchronized frequency range of the two branches of the MSDSs gradually decreases. For all double-branch solutions, the intersections of the two branches with the blue dashed line always lie in the second branch of the blue dashed line. When 1/µ_0 decreases to 0.7573, the second-branch solutions are very close to some of the solutions in the first branch, to the extent that the two branches almost coincide. It is noteworthy that at this point, the first branch of the MSDSs appears to be "tangential" to the blue dashed line, a feature similar to the single-branch solution of the MSDSs mentioned earlier.

B. Nonsynchronized frequency

In this section, we discuss the nonsynchronized-frequency solutions of the MSDSs. To analyze the influence of the parameters on the numerical solutions, we set the masses of the When the frequency ω0 of the ground-state Dirac field is close to 1, the range of the frequency ω1 corresponding to the obtained single-branch solution is very narrow. As the frequency ω0 gradually decreases, the range of the frequency ω1 increases gradually.
Additionally, for a fixed frequency ω0, the ADM mass of the system decreases gradually as the frequency ω1 increases. When the ADM mass reaches its maximum value, the frequency ω1 reaches its minimum value and the ground state Dirac field disappears, causing the MSDSs to degenerate into D1. Conversely, when the ADM mass reaches its minimum value, this minimum coincides with the ADM mass of D0 for the value of ω0 indicated in each plot. This is because when the ADM mass of the MSDSs reaches its minimum value, the frequency ω1 reaches its maximum value and the excited state Dirac field disappears, so the system degenerates into D0. In other words, the minimum ADM mass of the MSDSs depends on the frequency ω0 of the ground state Dirac field.

Double-Branch

Next, we discuss the double-branch solutions of the MSDSs under nonsynchronized frequency. In the single-branch solutions discussed earlier, the minimum value of the frequency ω0 of the ground state Dirac field is 0.733. Since D0 has no solution with a frequency less than 0.733, the MSDSs cannot degenerate into D0 when the frequency ω0 is less than 0.733, and thus no single-branch solution can be obtained there. Through a series of numerical calculations, we found that when the frequency ω0 of the ground state field satisfies 0.6971 ≤ ω0 < 0.733, double-branch solutions of the MSDSs are obtained.

Fig. 7 shows the relationship between the radial profiles of the matter fields that constitute the MSDSs and the nonsynchronized frequency ω1. The left column shows the field functions on the first branch. As the frequency ω1 increases, the peak values of the ground state Dirac field functions f0 and g0 gradually increase, while the changes in the excited state field functions f1 and g1 are relatively small. The right column shows the field functions on the second branch. As the frequency ω1 decreases, the peak values of the ground state Dirac field functions f0 and g0 gradually decrease, while the excited state field functions f1 and g1 gradually increase. It can be observed that when the frequency ω1 of the first- and second-branch solutions reaches its minimum value, the ground state Dirac field disappears, while the excited state Dirac field does not disappear for any ω1.

The relationship between the ADM mass M of the nonsynchronized frequency double-branch solutions of the MSDSs and the frequency ω1 is shown in Fig. 8. The black and blue dashed lines represent the ground state and first excited state solutions of the Dirac star (D0 and D1), and the orange line represents the double-branch solution of the MSDSs (D0D1). Similar to the case of synchronized frequencies mentioned earlier, a bifurcation occurs when the frequency ω0 is below the threshold of 0.733, transforming the single-branch solution into the double-branch solution. As the frequency ω0 of the ground state Dirac field decreases, the range of existence of the two branches of the double-branch solution with respect to the frequency ω1 gradually decreases. For a fixed ground state field frequency ω0, when the frequency ω1 reaches its maximum value, the MSDSs do not degenerate into D0 but transition to another new branch. As the frequency ω1 decreases further, the ADM mass of the system gradually increases, and eventually the MSDSs transform into D1. This characteristic change in the system's mass is also reflected in the profiles of the field functions presented in Fig. 7, where the disappearance of the ground state Dirac field functions on the two branches when the frequency ω1 reaches its minimum indicates the transformation of the MSDSs into D1.

C. Binding energy

After obtaining these different solutions for the MSDSs, we analyze the stability of the system from the perspective of binding energy. Consider a MSDS with ADM mass M, where the Noether charge of the ground state Dirac field is denoted by Q0 and the Noether charge of the first excited state Dirac field by Q1. The binding energy E_B of the system can be expressed as

E_B = M − 2(µ0 Q0 + µ1 Q1),

where the coefficient 2 outside the parentheses on the right-hand side arises from the fact that in a spherically symmetric MSDS, both the ground state and the first excited state of the Dirac field have two components.

We first analyze the binding energy of the MSDSs under synchronized frequency. Fig. 9 shows the relationship between the binding energy E_B and the synchronized frequency ω of the MSDSs. The left plot represents the single-branch solutions of the MSDSs. When 1/µ0 > 0.863, for a fixed value of µ0, the binding energy E_B of the MSDSs monotonically increases with the synchronized frequency ω, and the binding energy is always less than zero. When 1/µ0 ≤ 0.863, for a given µ0, the binding energy initially decreases and then increases as the synchronized frequency increases. When 1/µ0 becomes sufficiently small, the solutions become unstable (e.g., the red curve). Therefore, when 1/µ0 is sufficiently large, i.e., when the masses µ0 and µ1 of the ground state and excited state Dirac fields are sufficiently close, the MSDSs are more stable. The right plot represents the double-branch solutions of the MSDSs. As 1/µ0 increases, the minimum value of the binding energy gradually decreases, but the binding energies of these double-branch solutions are all greater than zero, indicating that the solutions are unstable.

Next, we consider the binding energy of the MSDSs under nonsynchronized frequency. Fig. 10 shows the relationship between the binding energy E_B and the frequency ω1 for the nonsynchronized frequency solutions of the MSDSs. The left plot represents the single-branch solutions, where for a fixed ω0 the binding energy E_B monotonically increases with the frequency ω1 and remains negative when the frequency ω0 is sufficiently large. However, for small values of ω0 (e.g., ω0 = 0.733), the MSDSs can undergo a transition from a stable solution to an unstable one as the frequency ω1 increases. The right plot represents the double-branch solutions, where negative binding energy solutions appear when the frequency ω0 is sufficiently large (e.g., ω0 = 0.732). As the frequency ω0 decreases, the stable solutions in the double-branch solutions gradually disappear, and all solutions eventually become unstable. Therefore, in the case of nonsynchronized frequency, stable solutions exist within the double-branch solutions.

D. Galactic halos as MSDSs

The velocity of stars orbiting the central core of a galaxy remains nearly constant over a large range of distances from the galactic center. This phenomenon may be attributed to the presence of a dark matter halo in the outer regions of the galaxy. By using boson stars to model the dark matter halo, it is possible to obtain results that are consistent with real observational data [21,52,56]. In the following, we analyze the feasibility of modeling the dark matter halo with MSDSs by computing the velocities of test particles orbiting around them. Considering timelike circular geodesics on the equatorial plane, the rotational velocity of the test particles is given by the expression in [52,56]. Substituting Equ. (10) into this expression, we obtain the rotational velocity as a function of the radial coordinate. Next, we analyze the rotation curves of the MSDSs. As shown in Fig. 11, the left panel displays the rotation curves of the MSDSs for several values of μ1.

V. CONCLUSION

In this paper, we investigate the Einstein-Dirac system and construct spherically symmetric multi-state Dirac stars, wherein two coexisting states of the Dirac field are present.
We discuss the field functions, ADM mass, and binding energy of the solutions for the multi-state Dirac star under synchronized and nonsynchronized frequency conditions. Furthermore, we analyze the feasibility of considering the multi-state Dirac star as a candidate for dark matter halos.

In the case of synchronized frequency, we explore different solutions for the MSDSs by varying the ratio of the masses of the ground state and excited state Dirac fields (µ0/µ1). Based on the number of solution branches, we classify the obtained numerical results into single-branch solutions and double-branch solutions. For single-branch solutions, the peak values of the ground state and excited state Dirac field functions exhibit monotonic behavior as the synchronized frequency changes. The ADM mass monotonically decreases with increasing synchronized frequency, and at the minimum and maximum synchronized frequencies the MSDSs degenerate into the first excited state Dirac stars and the ground state Dirac stars, respectively. When μ1 (or 1/µ0) is below the threshold of 0.7694, the single-branch solution undergoes a transition into a double-branch solution. In the case of double-branch solutions, the excited state Dirac field functions persist while the ground state field functions disappear at the minimum synchronized frequency on both branches, leading to the degeneration of the MSDSs into the first excited state Dirac stars. Moreover, as the mass of the ground state field decreases, the range of synchronized frequency values on the two branches of the double-branch solution gradually diminishes.
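The threshold behavior described above was pinned down numerically by refining 1/µ0 in ever smaller steps (down to 10⁻⁵, as noted in Sec. IV A). A generic way to locate such a threshold is bisection on a predicate that reports whether a double-branch solution exists at a given parameter value. The sketch below is purely illustrative: `has_double_branch` and the hard-coded mock threshold stand in for runs of the actual MSDS solver, which is not reproduced here.

```python
def find_threshold(has_double_branch, lo, hi, tol=1e-5):
    """Bisect on [lo, hi] for the parameter value below which
    double-branch solutions appear (predicate stands in for the
    actual MSDS solver)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_double_branch(mid):
            lo = mid  # double branch still present: threshold lies above mid
        else:
            hi = mid  # single branch only: threshold lies below mid
    return 0.5 * (lo + hi)

# Mock predicate: pretend double-branch solutions exist for 1/mu0 < 0.7694.
mock = lambda inv_mu0: inv_mu0 < 0.7694
threshold = find_threshold(mock, 0.6, 0.9)
```

With a tolerance of 10⁻⁵ this takes about 15 solver evaluations, far fewer than stepping the parameter uniformly at that resolution.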
Next, for the case of nonsynchronized frequency, we set the ratio of the masses of the ground state and excited state Dirac fields to µ0/µ1 = 1 and obtain different solutions by varying the frequency of the ground state Dirac field. Similar to the synchronized frequency case, the nonsynchronized frequency solutions of the MSDSs can also be classified into single-branch solutions and double-branch solutions. In the case of single-branch solutions, the variations of the ground state and excited state field functions with respect to the frequency of the excited state Dirac field exhibit monotonic behavior. The ADM mass decreases as the frequency of the excited state field increases, and its minimum value depends on the frequency of the ground state Dirac field. When ω0 is below the threshold of 0.733, the single-branch solution undergoes a transition into a double-branch solution. For the double-branch solutions, as the frequency of the ground state field decreases, the minimum ADM mass of the MSDSs gradually increases, the range of nonsynchronized frequency values on the two branches diminishes, and the MSDSs degenerate into the first excited state Dirac stars when the nonsynchronized frequency becomes sufficiently small.
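The stability statements that follow rest on the sign of the binding energy defined in Sec. IV C. A minimal sketch, assuming the reconstruction E_B = M − 2(µ0Q0 + µ1Q1) implied by the verbal description there (the factor 2 from the two components of each Dirac field, with negative E_B indicating a stable configuration); the numbers used are made up for illustration, not actual MSDS data:

```python
def binding_energy(M, Q0, Q1, mu0=1.0, mu1=1.0):
    """E_B = M - 2*(mu0*Q0 + mu1*Q1).
    The factor 2 accounts for the two components of the ground state
    and first excited state Dirac fields in a spherically symmetric MSDS."""
    return M - 2.0 * (mu0 * Q0 + mu1 * Q1)

def is_stable(M, Q0, Q1, mu0=1.0, mu1=1.0):
    # Negative binding energy indicates a (potentially) stable configuration.
    return binding_energy(M, Q0, Q1, mu0, mu1) < 0.0

# Illustrative values only: M = 1.0, Q0 = Q1 = 0.3, mu0 = mu1 = 1.0
E = binding_energy(1.0, 0.3, 0.3)  # 1.0 - 2*(0.3 + 0.3) = -0.2, i.e. bound
```

Scanning `is_stable` along a solution branch reproduces the qualitative classification reported below: a branch is stable wherever its binding energy stays negative.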
It is worth noting that the MSDS solutions exhibit certain similarities between the synchronized and nonsynchronized frequency cases. In the synchronized (nonsynchronized) frequency scenario, when μ1 (ω0) falls below the threshold of 0.7694 (0.733), the single-branch solutions undergo an abrupt transition into double-branch solutions. Furthermore, regardless of whether the frequencies are synchronized or not, the single-branch solutions of the MSDSs can always degenerate to the ground state or the first excited state, while the double-branch solutions can only degenerate to the first excited state.

Subsequently, we computed the binding energy of the various solutions for the MSDSs. For synchronized frequency solutions, stable solutions exist only among the single-branch solutions, and when 1/µ0 is sufficiently small, all stable solutions within the single-branch solutions vanish. For nonsynchronized frequency solutions, stable solutions exist in both single-branch and double-branch solutions. However, the double-branch solutions become unstable when the frequency of the ground state field reaches a sufficiently low value, whereas the single-branch solutions maintain stable solutions for any frequency of the ground state field.

Finally, we computed the rotation curves of the MSDSs. We observed that MSDSs containing excited state matter fields exhibit a nearly flat region near the velocity peak in their rotation curves, similar to the rotation curves of the multi-state boson stars discussed in [52]. In our future work, we plan to construct MSDSs with a greater number of matter field nodes. It is possible that rotation curves of models with a higher number of nodes can provide a closer fit to the observed data [56].
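The rotation-curve calculation of Sec. IV D can be sketched in the Newtonian limit, where the circular velocity of a test particle obeys v(r) = sqrt(m(r)/r) in geometric units for a cumulative mass function m(r). This is an assumption for illustration only: the toy mass profile below is not an actual MSDS solution, and the paper's Equ. (10) substitution is not reproduced.

```python
import numpy as np

def rotation_velocity(r, mass_profile):
    """Newtonian-limit circular velocity v(r) = sqrt(m(r)/r), geometric units."""
    return np.sqrt(mass_profile(r) / r)

def toy_mass(r, M_total=1.0, a=2.0):
    """Toy cumulative mass function (illustrative only, not an MSDS profile)."""
    return M_total * r**3 / (r**3 + a**3)

r = np.linspace(0.1, 30.0, 300)
v = rotation_velocity(r, toy_mass)
r_peak = r[np.argmax(v)]  # the curve rises, peaks, then falls off ~ r**-0.5
```

For a compact profile like this the curve is Keplerian at large radii; a halo-like flat rotation curve instead requires m(r) to grow roughly linearly with r in the outer region, which is the behavior the MSDS configurations with excited state matter approximate near the velocity peak.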
excited state Dirac stars (D1), and the orange line represents the MSDSs (D0D1). The ADM mass of the system monotonically decreases as the synchronized frequency increases. It can be observed that the upper end of the orange line intersects the blue dashed line, where the MSDSs degenerate into D1; the lower end of the orange line intersects the black dashed line, where the MSDSs degenerate into D0. This degeneration of the MSDSs is manifested in Fig. 1 as the disappearance of the ground state or excited state field functions.

FIG. 3. The matter functions f0, g0, f1 and g1 on the first (left column) and second (right column) branches of the MSDS solutions as functions of x and ω for μ1 = 0.7645.

FIG. 4. The ADM mass M of the MSDSs as a function of the synchronized frequency ω for several values of 1/µ0.

FIG. 6. The ADM mass M of the MSDSs as a function of the frequency ω1 for several values of ω0.

FIG. 7. The matter functions f0, g0, f1 and g1 on the first (left column) and second (right column) branches of the MSDS solutions as functions of x and ω1 for ω0 = 0.721.

FIG. 8. The ADM mass M of the MSDSs as a function of the frequency ω1 for several values of ω0.

FIG. 9. The binding energy E_B of the MSDSs as a function of the synchronized frequency ω for several values of 1/µ0.

FIG. 11. Left panel: the rotation curves of the MSDSs for several values of μ1. Right panel: the rotation curves of the Dirac stars at different energy levels.
Dirac cone protected by non-symmorphic symmetry and three-dimensional Dirac line node in ZrSiS

Materials harbouring exotic quasiparticles, such as massless Dirac and Weyl fermions, have garnered much attention from the physics and materials science communities due to their exceptional physical properties, such as ultra-high mobility and extremely large magnetoresistance. Here, we show that the highly stable, non-toxic and earth-abundant material ZrSiS has an electronic band structure that hosts several Dirac cones which form a Fermi surface with a diamond-shaped line of Dirac nodes. We also show that the square Si lattice in ZrSiS is an excellent template for realizing new types of two-dimensional Dirac cones recently predicted by Young and Kane. Finally, we find that the energy range of the linearly dispersed bands is as high as 2 eV above and below the Fermi level, much larger than in other known Dirac materials. This makes ZrSiS a very promising candidate to study Dirac electrons, as well as the properties of lines of Dirac nodes.
The electronic structure of a three-dimensional (3D) Dirac semimetal (DSM) contains two sets of linear, doubly degenerate bands which cross at a four-fold degenerate crossing called a Dirac point, a sort of 3D analogue of graphene [14,15]. If inversion symmetry (IS) or time reversal symmetry (TRS) is broken, those doubly degenerate bands become spin split, resulting in singly degenerate band crossings called Weyl nodes [16,17]. Although many different materials have been predicted to host Dirac or Weyl fermions [3,15,18], only a few real materials have been experimentally verified. Both Cd3As2 and Na3Bi have symmetry protected 3D Dirac cones, which have been imaged with angle resolved photoelectron spectroscopy (ARPES) [4,19-21]. Both materials exhibit exotic transport properties such as ultrahigh mobility, large, linear magnetoresistance and negative magnetoresistance at low fields [5,22]. Signatures of a chiral anomaly in ZrTe5 have been seen in ARPES as well as transport experiments [23]. Weyl fermions have been shown to exist in the IS-breaking compounds TaAs [1,24], NbAs [25], and TaP [4] (and predicted in WTe2, MoTe2, and other Ta or Nb monopnictides [3,26,27]). Very recently, Weyl nodes were shown in the intrinsically TRS-breaking compound YbMnBi2 [2]. Young and Kane used the concept of non-symmorphic symmetry to predict that 2D square lattices can host new types of 2D DSMs that are distinct from both graphene and 3D DSMs [13]. In particular, these 2D Dirac cones may host 2D Dirac fermions with a behavior distinct from their 3D analogues; experimental verification is pending. Finally, materials with Dirac line nodes, where the Fermi surface forms a closed loop, have recently been predicted but have been experimentally verified in only one material, PbTaSe2, where other bands interfere at the Fermi level [9-12,28].
In all of the currently known DSMs, the energy range of the linear dispersion of the Dirac cone is very small. In Cd3As2, according to calculations, the Lifshitz transition appears only about 20 meV above the Fermi level; in the real material, however, the Fermi energy has been shown to lie 200 meV above the Dirac cone [20]. In Na3Bi, TaAs and the other monopnictides, the Lifshitz transition is only roughly 100 meV away from the crossings. A material with a larger linear dispersion would allow easy study of Dirac and Weyl physics despite changes in the Fermi level due to defects or impurities. The fabrication of thin films and devices from Dirac and Weyl materials would greatly benefit from more robust Dirac and Weyl states, given the difficulties in achieving thin-film quality approaching that of single crystals. Also, many of the known materials have further disadvantages, such as the toxicity of the arsenides as well as the extreme air sensitivity and chemical instability of Na3Bi, which also make studying their exotic physics difficult.
Here, we show by electronic structure calculations and ARPES that a so far unnoticed system, ZrSiS, exhibits several Dirac crossings within the Brillouin zone (BZ) which form a diamond-shaped Fermi surface with a line of Dirac nodes, without any interference from other bands. This compound is non-toxic and highly stable, with band dispersions of the linear region of those crossings larger than in any other known compound: up to 2 eV in some regions of the BZ. Spin orbit coupling (SOC) introduces a small gap to the Dirac cones near the Fermi surface, of the size of ≈ 20 meV (much less than in the related Bi-based compounds). We also show the presence of a Dirac feature below the Fermi level, which is generated by the square Si sublattice and is protected by the non-symmorphic symmetry through a glide plane (regardless of SOC strength), supporting the recent prediction by Young and Kane regarding 2D Dirac fermions. We also show that an unusual, previously unpredicted surface state arises around this Dirac feature. Thus, ZrSiS is a very promising candidate for investigating Dirac and Weyl physics, as well as the properties of lines of Dirac nodes.

ZrSiS crystallizes in the PbFCl structure type in the tetragonal space group P4/nmm (No. 129) [29]. It is related to the Weyl semimetal YbMnBi2, whose structure is a stuffed version of the PbFCl crystal structure, hence the different stoichiometry. Other Bi-based, stuffed PbFCl structures, such as EuMnBi2 and (Ca/Sr)MnBi2, have also been shown to host Dirac electrons and to exhibit exotic transport properties [30-32]. Both structures display square nets of Si and Bi atoms, respectively, that are located on a glide plane.
The crystal structure of ZrSiS is displayed in figure 1 (a). The Si square net lies in the ab plane, and layers of Zr and S are sandwiched between the Si square nets in such a way that there are neighboring S layers in between the Zr layers. We were able to image the square lattices with high resolution transmission electron microscopy (HRTEM), shown in figure 1 (d) and (e) (see SI for more information on precession electron diffraction (PED) and HRTEM). The HRTEM image of the (110) surface shows a gap between neighboring S atoms, which is where the crystals cleave. The LEED pattern shown in figure 1 (c) clearly indicates a square arrangement of Bragg reflections, showing that the crystals cleave perpendicular to the tetragonal c axis. There is no sign of a surface reconstruction in these crystals. An SEM image of a typical crystal is shown in figure 1 (b). The crystals are very stable in water and air and only dissolve in concentrated acids.

The calculated electronic structure of bulk ZrSiS is displayed in figure 2. Without SOC, several Dirac cones are visible which cross along ΓX and ΓM as well as along ZR and ZA, the respective symmetry lines above ΓX and ΓM. However, the crossing along ZR is higher in energy than the one along ΓX. The Dirac cones form an unusually shaped line node in the BZ, displayed in figure 2 (c) (this schematic assumes all Dirac points to be at the same energy to make the principle of the electronic structure easier to understand).
This gives rise to a diamond-shaped Fermi surface. Note that the range in which the bands are linearly dispersed is very large compared to other known Dirac materials. The electronic structure without SOC is very similar to that observed in YbMnBi2; however, since the symmetry along the lines with the Dirac crossings is C2v, SOC gaps the cones. In Bi-based compounds, this effect is very dramatic and also destroys the large dispersion of the linear bands. In ZrSiS, however, SOC is small and only produces very small gaps in the cones along the C2v symmetry lines, maintaining the large linear dispersion (see figure 2). Furthermore, in Bi-based compounds, more bands interfere with the cone structure around the Fermi energy.

There are other Dirac-like crossings at the X and R points, located at −0.7 eV and −0.5 eV, respectively. These crossings are protected by the non-symmorphic symmetry of the space group, very similar to the recently predicted Dirac cones in 2D square nets, and are not influenced by SOC [13]. In this template system, both bands fold onto the same energy along the XM direction. This degeneracy may subsequently be lifted to host 2D Dirac fermions. To the best of our knowledge, this is the first time such a feature in the electronic structure has been observed in a real material. This Dirac-like crossing lies significantly below the Fermi level with other bands also present; however, hole doping (on the Zr or S site) or the gating of thin films may allow for investigation of the physics of the 2D Dirac fermions. ARPES data are shown in figure 3.
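The band sticking enforced by non-symmorphic symmetry can be illustrated with a standard textbook toy model rather than the actual ZrSiS bands: a 1D chain with two sublattices related by a half-translation (the 1D analogue of a glide) has off-diagonal hopping f(k) = t(1 + e^(-ik)), giving bands E±(k) = ±2t|cos(k/2)| that are forced to be degenerate at the zone boundary for any t, mirroring how both bands fold onto the same energy along XM. This is a minimal sketch under that assumption, not the material's Hamiltonian:

```python
import numpy as np

# Toy two-sublattice chain with a half-translation symmetry:
# H(k) = [[0, f(k)], [conj(f(k)), 0]],  f(k) = t * (1 + exp(-1j*k)),
# so the eigenvalues are E(k) = +/-|f(k)| = +/-2t|cos(k/2)|.
t = 1.0
k = np.linspace(-np.pi, np.pi, 201)
f = t * (1.0 + np.exp(-1j * k))
E_plus, E_minus = np.abs(f), -np.abs(f)

gap_boundary = E_plus[0] - E_minus[0]      # k = -pi: bands stick (zero gap)
gap_center = E_plus[100] - E_minus[100]    # k = 0: maximal splitting 4t
```

The zero gap at the zone boundary survives any change of t because it is fixed by the half-translation, which is the sense in which the crossings at X in ZrSiS are symmetry protected rather than accidental.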
The cone protected by non-symmorphic symmetry at X is clearly visible at −0.5 eV (figure 3 (a)). Perpendicular to ΓX, both bands fold onto the same energy along the XM direction (left panel of figure 3 (b)), exactly matching the prediction of Young and Kane [13]. The Dirac points of the cones along the ΓX line in figure 3 (a) are not completely visible since the Fermi level is slightly below the Dirac points (as also predicted in the slab calculation shown below). Dashed purple lines indicate the predicted bulk bands. Note that we observe linearly dispersed bands over an energy range of more than 1 eV below the Fermi level, as predicted by the calculation. We observe additional states along ΓX (figure 3 (a)) not seen in the calculated bulk band structure, which we attribute to surface-derived states. When moving parallel to ΓX towards XM (figure 3 (c)), one can see how this surface state interacts with the bulk bands near X. This hybridization of the alleged surface state with the conical bulk state near the X point may be attributed to the inherent two-dimensional character of this bulk state. Normally, a surface state does not exist within the projected bulk band structure. However, as the bulk state is rather two-dimensional itself, we surmise that the bands hybridize in the vicinity of the surface. Since ARPES is an extremely surface sensitive technique and the unit cell along the c axis is rather long, the actual bulk band dispersion inside the crystal remains unobservable. The left panel of figure 3 (b) shows the measured band structure along MXM. Along the high symmetry line, a gap is observed which is much smaller than the gap in the bulk band structure, due to the presence of surface states along this direction as well. Another cone at −0.4 eV is visible in the measurement slightly parallel to the high symmetry line, towards the ΓX direction (right panel of figure 3 (b)). This cone is not seen in the bulk band structure calculation (see figure S5), which indicates that this Dirac cone is also surface derived. It is connected to the surface state along ΓX, as seen in figure 3 (a) as well as in the slab calculation (see below). In figure 3 (d), we show a constant energy plot at the Fermi level. From the bulk calculation, we expect a diamond-shaped Fermi surface, as sketched in figure 3 (e) (lower panel). In the experimental data, we not only observe this diamond-shaped Fermi surface but also find that the surface-derived state around X crosses the Fermi level as well.

The calculated slab Fermi surface in figure 3 (e) (upper panel) is in excellent agreement with the measured Fermi surface. If a constant energy plot is taken at lower energies (see figure S3 in the SI), the observed ARPES spectrum matches very well with the predicted bulk constant energy surface at −515 meV, due to the absence of surface states at this energy.

In order to ascertain the nature of the alleged surface states, we performed band structure calculations of a slab. The resulting band structures in comparison to ARPES data are shown in figure 4; SOC is included for these calculations. The creation of a surface causes several changes to the electronic structure: the cone along ΓM remains unchanged (figure 4 (c)), but the cone along ΓX moves up in energy compared to the bulk structure. The same is true for the cone at X (see figure 4 (a)). This can be understood if one considers that in a 2D slab the ZR bands are projected onto the ΓX bands. In addition, the surface state seen in ARPES appears along ΓX. The high level of agreement between the predicted and measured electronic structure is shown by superimposing the two images in the figure without rescaling. Furthermore, the measured Fermi energy matches the predicted one; however, the ARPES data are measured at room temperature, which indicates that the samples are slightly hole doped.
Continuous bands along this surface state are highlighted in orange in figure 4 (a). These bands do not follow the expected path of the surface state, thus also indicating the hybridization with the bulk. A simulation of the slab band structure parallel to ΓX is shown in figure 4 (b). In accordance with ARPES, the bands that form the surface state split apart. Calculations of the slab without SOC (see figure S4 (a) in the SI) show that the bands forming the surface state and the cone around X have different irreducible representations in the absence of SOC along ΓX, but not parallel to it (figure S4 (b)). This indicates that SOC cannot be held responsible for the hybridization. Figure S4 (c) in the SI shows the contribution of the surface atoms to the slab band structure; this supports the surface character of the additional band observed in the experiment. Figure 4 (d) shows the predicted slab band structure along XM. Surface states that lie within the bulk band gap in this part of the BZ are highlighted in orange; they appear mainly around the X point. Again, the ARPES data match the prediction well: the observed decreasing gap along this direction is also seen in the slab calculation, showing that it is caused by the surface.

In summary, we showed that ZrSiS, a stable and non-toxic material, has a very exotic electronic structure with many Dirac cones that form a diamond-shaped Fermi surface. The bands are linearly dispersed over a very large energy range, larger than in any other material reported to date. We confirmed our electronic structure calculations with ARPES measurements that are in excellent agreement with the calculated structure. We also show the first experimental realization of a template system for 2D Dirac cones protected by non-symmorphic symmetry, in excellent agreement with recent theoretical predictions.

In addition, we observe an unconventional surface state that is hybridized with bulk bands around the X point. It is uncommon for a surface state to exist within the projected bulk band structure or even hybridize with it. A possible cause might be the 2D nature of the bulk bands around X; however, further investigation into this effect is required. In contrast to compounds with a Bi square net, where a large SOC opens a large gap with parabolic dispersion, ZrSiS has a Si square net where the SOC effect is very much reduced and the linear dispersion of the bands is mostly preserved. Since no other bands interfere at the Fermi level, the unusual electronic structure of ZrSiS makes it a strong candidate for further studies into Dirac and Weyl physics, especially magnetotransport, since the Fermi energy can be tuned quite substantially while still remaining in the linear range of the bands.

METHODS

Single crystals of ZrSiS were grown in a two-step synthesis. First, a polycrystalline powder was obtained following the same procedure as in [29]. In a second step, single crystals were grown from the polycrystalline powder via I2 vapor transport at 1100 °C with a 100 °C temperature gradient. The crystals were obtained at the cold end. The published crystal structure was confirmed with single crystal x-ray diffraction and electron diffraction. The crystal used for SXRD was of extremely high quality, and an R1 value of 1.5% was obtained for the structural solution (see tables S1 and S2 in the supplemental information (SI) for more details). Single crystal x-ray diffraction data were collected on a STOE IPDS II working with graphite monochromated Mo Kα radiation. Reflections were integrated with the STOE X-Area 1.56 software, and the structure was solved and refined by least squares fitting using SHELXTL [33]. Electron microscopy was performed with a Philips CM30 ST (300 kV, LaB6 cathode). High resolution transmission microscopy (HRTEM) images and precession electron diffraction (PED) patterns were recorded with a CMOS camera (TemCam-F216, TVIPS) equipped with a NanoMEGAS spinning star to obtain PED images. The program JEMS (Stadelmann) was used to simulate diffraction patterns and HRTEM micrographs.

For ARPES measurements, crystals were cleaved and measured in ultra-high vacuum (low 10⁻¹⁰ mbar range). Low energy electron diffraction (LEED) showed that the cleavage plane was the (001) plane. ARPES spectra were recorded at room temperature with a hemispherical PHOIBOS 150 electron analyzer (energy and angular resolution are 15 meV and 0.5°, respectively). As photon source, a monochromatized He lamp that offers UV radiation at hν = 21.2 eV (He I) was used. SEM images of crystals were measured with a scanning electron microscope (SEM; Vega TS 5130 MM, Tescan) using a Si/Li detector (Oxford).

Electronic structure calculations were performed in the framework of density functional theory (DFT) using the wien2k [34] code with a full-potential linearized augmented plane-wave and local orbitals [FP-LAPW + lo] basis [35] together with the Perdew-Burke-Ernzerhof (PBE) parameterization [36] of the Generalized Gradient Approximation (GGA) as the exchange-correlation functional. The plane wave cut-off parameter RMT KMAX was set to 7, and the irreducible Brillouin zone was sampled by 1368 k-points (bulk) and by a 30x30x3 mesh of k-points (slab). Experimental lattice parameters from the single crystal diffraction studies were used in the calculations. Spin orbit coupling (SOC) was included as a second variational procedure. For the slab calculation, it was found that cleaving between sulfur atoms resulted in the closest match to the experimental observation. This cleavage plane is in agreement with HRTEM imaging and chemical intuition. The slab was constructed by stacking 5 unit cells in the c direction, gapped by a 5.3 Å vacuum.

FIG. 1. (a) Crystal structure of ZrSiS. The Si square net can be seen in blue. (b) SEM image of a typical crystal. (c) LEED pattern of a cleaved crystal showing that it cleaves perpendicular to the c axis. (d) HRTEM image of the (110) surface; inset shows a simulated HRTEM image. The focus plane is ∆f = -50 nm, close to the Scherzer focus, where atoms appear in black. Individual atoms could be identified, and the cleavage plane between sulfur atoms is visible in white. For images with different foci and their simulations see SI. (e) HRTEM image and PED pattern of the (001) surface; the square arrangement of atoms is clearly visible.

FIG. 3.
Band structure measured with ARPES.(a) Band structure along ΓX.The purple lines represent the predicted bulk bands.In addition a surface state is visible.(b) Band structure along MXM (left) and parallel to MXM (right).Along the high symmetry line the band structure is gapped (left panel) but with a much smaller gap than predicted in the bulk calculation.The gap closes and a cone forms parallel to the high symmetry line (right panel).(c) Band structure parallel to ΓX. Due to the gapping of the surface state it can be inferred that it is hybridized with the bulk cone at X.(d) Constant energy plot at the Fermi energy.(e) The lower drawing sketches the predicted Fermi surface and compares calculated bulk and slab Fermi surfaces (upper panels).Pockets that are clearly surface derived are drawn in orange.The measured Fermi surface displays the predicted slab Fermi surface well. FIG. 4 . FIG. 4. Calculated slab band structure in comparison with the measured band structure.(a) Bands along ΓX.The orange bands indicate how bands are progressing, showing the hybridization of the surface state with the bulk.(b) Bands parallel to ΓX highlighting the mixing of surface states and bulk bands belonging to the cone at X. (c) Slab band structure along ΓM, showing that there is no big change to the bulk band structure in this direction.(d) Bands along MXM, surface bands are highlighted in orange.The surface states significantly reduce the bulk gap along this direction. FIG. S1.PED of ZrSiS, simulated patterns are shown on the left and measured patterns on the right. FIG. S2.HRTEM images of the (110) plane of ZrSiS, measured on a powdered sample.Top panel is close to Sherzer focus, where atoms appear in black and gaps in white.Individual atoms are labeled.Insets show simulated HRTEM images; simulations assume a sample thickness of t = 5.01nm. FIG. 
S4.Slab band structure along ΓX (a) calculated without spin orbit coupling showing that two different irreducible representations are possible and the surface state does not mix with the bulk bands.(b) Calculated without SOC and plotted parallel to ΓX where only one irreducible representation is allowed and hence mixing of the states is allowed, even without SOC.(c) Calculated with SOC but contribution of surface atoms is plotted thick, indicating clearly that the surface state arises from these atoms even if it is hybridized with the bulk. TABLE SI . Crystallographic data and details of data collection. TABLE SII . Position coordinates and thermal displacement parameters for ZrSiS
Use of Patch Clamp Electrophysiology to Identify Off-Target Effects of Clinically Used Drugs

Most drugs have effects attributable to actions at sites other than those that are intended. In many cases these off-target effects have adverse consequences, though in some instances they may be neutral or even beneficial. Many off-target effects involve either direct or indirect actions on ion channels. Hence, electrophysiological approaches can be employed to screen drugs for effects on ion channels and thereby predict their off-target actions. The pharmaceutical industry routinely uses cellular expression systems and cloned channels to quickly screen thousands of compounds and eliminate those that have well-known adverse ion channel effects, such as inhibition of the Kv11.1 potassium channel encoded by the human Ether-à-go-go related gene (hERG). However, these methods are not well suited to predicting many other off-target effects mediated by actions on ion channels natively expressed in specific tissues. We have employed a more directed electrophysiological approach to evaluate a small number of compounds (e.g. drugs with known or predicted adverse effects) to identify ion channel targets that might explain their actions. This chapter will describe this approach in some detail and illustrate its use with some specific examples.

Introduction

Most drugs have effects attributable to actions at sites other than those that are intended. In many cases these off-target effects have adverse consequences, though in some instances they may be neutral or even beneficial. Many off-target effects involve either direct or indirect actions on ion channels. Hence, electrophysiological approaches can be employed to screen drugs for effects on ion channels and thereby predict their off-target actions.
The pharmaceutical industry routinely uses cellular expression systems and cloned channels to quickly screen thousands of compounds and eliminate those that have well-known adverse ion channel effects, such as inhibition of the Kv11.1 potassium channel encoded by the human Ether-à-go-go related gene (hERG). However, these methods are not well suited to predicting many other off-target effects mediated by actions on ion channels natively expressed in specific tissues. We have employed a more directed electrophysiological approach to evaluate a small number of compounds (e.g. drugs with known or predicted adverse effects) to identify ion channel targets that might explain their actions. This chapter will describe this approach in some detail and illustrate its use with some specific examples.

Approach

Our general approach to evaluating ion channel effects of a specific drug on a particular cell type involves the following steps:

Establish an adequate single cell physiological model to evaluate the drug of interest

While expression systems such as the human embryonic kidney (HEK) cell line or Chinese hamster ovary (CHO) cell line over-expressing individual ion channels are useful tools for initial drug screening, they may not adequately reflect the functional roles of ion channels in their native tissues. Immortalized or primary cell culture models that retain expression of the same ion channels that are natively expressed in the tissue under investigation should be considered. Some examples include neonatal cardiomyocytes for studying cardiac ion channel function (Markandeya et al., 2011), rat superior cervical ganglion (SCG) neurons for natively expressed neuronal ion channels (Kim et al., 2011; Zaika et al., 2011), and embryonic rat aortic (A7r5 and A10) cell lines for investigating vascular smooth muscle electrophysiology (Roullet et al., 1997; Brueggemann et al., 2005; Brueggemann et al., 2007).
The advantages of using cultured cells compared with freshly dissociated cells from the native tissue include their accessibility, ease of maintenance, high experimental reproducibility, and susceptibility to molecular interventions. However, there are also disadvantages in the use of cultured cells. In particular, the expression pattern of ion channels, receptors, and signaling proteins may differ between cultured cells and native tissues due to differences in proliferative phenotype, the absence of surrounding tissues in cell culture, and the developmental stage from which the cells were derived. Hence, the results obtained using cultured cells should be interpreted with caution and, whenever possible, supplemented by studies performed on freshly dispersed cells and/or by functional assays using intact tissues or live animals.

Select the patch-clamp mode (e.g. ruptured or perforated patch) for electrophysiological recording

Selection of the patch-clamp mode of recording is generally based on the known properties of the ion channels to be studied. Important considerations include their regulation by phosphatidylinositol 4,5-bisphosphate (PIP2) and soluble second messengers. The open state of many types of ion channels is known to be stabilized by membrane PIP2 (Hilgemann & Ball, 1996; Loussouarn et al., 2003; Zhang et al., 2003; Bian & McDonald, 2007; Rodriguez et al., 2010; Suh et al., 2010). With conventional (ruptured) patch-clamp recording, the levels of PIP2 decrease over time, which can cause irreversible rundown of the currents. Inclusion of Mg-ATP in the internal solution may slow rundown of the PIP2-dependent currents in excised or ruptured patch recordings (Ribalet et al., 2000), but only the use of the perforated patch configuration enables extended recording of stable whole-cell currents for tens of minutes.
Regulation of channels via the actions of soluble second messengers may be altered in the ruptured patch configuration, as cytosolic solutes may be lost by dialysis into the relatively large volume of the pipette solution. Use of the perforated patch configuration prevents dialysis of signaling molecules and loss of PIP2 from the membrane. However, the ruptured patch-clamp configuration is technically less demanding and so is often preferable if signaling mechanisms are not a concern or if the channels of interest are less dependent on PIP2 for their activity. Ruptured patch techniques are commonly used for recording currents from voltage-gated sodium channels, Cav3 (T-type) calcium channels, potassium channels of the Kv1, Kv2, Kv3 and K2P families, cystic fibrosis transmembrane receptor (CFTR)-type chloride channels, the TRPC family of non-selective cationic channels, and ORAI1 store-operated channels. The ruptured patch mode also enables faster data collection, saving the investigator the extra 15-30 min required for patch perforation in each experiment. The choice of pore-forming agent used for patch perforation is often a matter of personal preference. Pores formed by amphotericin B and nystatin in the membrane under the patch are selectively permeable to monovalent ions (such as K+, Na+, Cs+, Cl−), preserving cytosolic Ca2+ and Mg2+ concentrations and all soluble cytosolic signaling molecules (Horn & Marty, 1988; Rae et al., 1991). Use of gramicidin for patch perforation also preserves the intracellular Cl− concentration, as gramicidin pores are impermeable to Cl− (Ebihara et al., 1995). It is possible to record stable currents for several hours in perforated-patch mode from a single cell when appropriate pipette and bath solution compositions are used with continuous bath perfusion. When using the perforated-patch recording technique, attention should be paid to the value of access resistance achieved.
The value of the access resistance (or series resistance, as it is also known) will likely exceed pipette resistance by 2- to 5-fold and, when current amplitudes are in the nanoampere range, will introduce significant error into the true membrane voltage-clamped value. The amount of voltage error can be estimated by multiplying series resistance by current amplitude. If the error exceeds a few millivolts, series resistance compensation should be used.

Determine the appropriate composition of the internal (pipette) and external (bath) solutions

After choosing a physiological cell model that mimics as closely as possible cells in the intact tissue, it is logical to use intracellular and extracellular solutions with compositions similar to body fluids, at least for initial drug testing. Recipes for different extracellular physiological saline solutions (PSS), such as Krebs-Henseleit solution, Hank's balanced salt solution (HBSS), and artificial cerebrospinal fluid (CSF), are readily available in the relevant scientific literature. The pH of the external solution is typically 7.3-7.4. The composition of the internal (pipette) solution should also closely match the known cytosolic ionic composition: high in K+ (usually in the range of 135-140 mM), low in Na+ (normally 0-5 mM). The concentration of Cl− may vary depending on cell type. For many cultured cells, such as A7r5 cells, stable recordings are most easily obtained using relatively low [Cl−]in, in the range of 30-45 mM, in combination with large impermeable anions such as gluconate or aspartate to balance K+. For other cell types, including freshly dispersed smooth muscle cells, pipette solutions with 135-140 mM KCl are preferable. If the internal solution will be used in perforated patch-clamp mode, inclusion of Mg2+ and buffering of cytosolic Ca2+ is not required, as amphotericin and nystatin pores are impermeable to Ca2+ and Mg2+ (Horn & Marty, 1988; Rae et al., 1991).
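The series-resistance voltage-error estimate described above (series resistance multiplied by current amplitude) is easy to script. A minimal sketch; the 5 mV cutoff is an illustrative stand-in for the "few millivolts" rule of thumb, not a prescribed value:

```python
def voltage_error_mv(series_resistance_mohm, current_na):
    # With resistance in megaohms and current in nanoamperes,
    # Ohm's law (V = I * R) gives the error directly in millivolts.
    return series_resistance_mohm * current_na

def needs_compensation(series_resistance_mohm, current_na, threshold_mv=5.0):
    # threshold_mv is an assumed cutoff for "a few millivolts"
    return voltage_error_mv(series_resistance_mohm, current_na) > threshold_mv
```

For example, a 10 MΩ access resistance with a 1.5 nA current gives a 15 mV error, so series resistance compensation would clearly be warranted.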
For ruptured patch recording, free Mg2+ concentration should be set within the range of 1-2 mM and free Ca2+ concentration should be approximately 100 nM. To accomplish this, Ca2+ buffers such as EGTA, EDTA, or BAPTA should be included in the pipette solution. Free Ca2+ concentration will depend on the concentration of the buffers (usually 0.1-10 mM), their binding constants for Ca2+ and Mg2+, and the amounts of added Ca2+ and Mg2+. MAXCHELATOR is a series of programs freely available online (http://maxchelator.stanford.edu/) that can be used for determining the free Ca2+ concentration in the presence of Ca2+ buffers. Mg- or Na-ATP (1-5 mM) should also be included in the internal solution for ruptured patch recording. The pH of internal solutions may vary from 7.2-7.4, usually buffered with HEPES (1-10 mM). Attention should be paid to the osmolality of both internal and external solutions. The osmolality of body fluids is approximately 275-290 mOsm; the osmolality of the internal and external solutions should be measured with an osmometer, adjusted to the physiological range, and balanced (within 1 or 2 mOsm) between internal and external solutions. Use of approximately physiological external and internal solutions for patch-clamp experiments enables recording of a mix of ionic conductances for initial evaluation of drug effects on the cell type under investigation.

Use appropriate voltage clamp protocols (e.g. voltage steps or ramps) to record drug effects on total currents

Design of the voltage protocol should be based on the biophysical properties of the ion channels expressed in the cells under investigation. It is very useful for initial drug screening to select a holding voltage close to the resting membrane voltage measured or reported for that particular cell type.
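Returning to the Ca2+ buffering arithmetic discussed above: for a single 1:1 buffer, free Ca2+ follows from the binding equilibrium and can be cross-checked with a few lines of code. This is only a rough sketch that ignores Mg2+ competition, pH, and ionic-strength corrections (which MAXCHELATOR handles properly), and the EGTA Kd used below is an assumed illustrative value:

```python
from math import sqrt

def free_ca(total_ca_m, total_buffer_m, kd_m):
    """Free Ca2+ (M) for a single 1:1 buffer.

    Solves total_ca = free + buffer * free / (Kd + free),
    a quadratic in the free Ca2+ concentration."""
    b = total_buffer_m + kd_m - total_ca_m
    return (-b + sqrt(b * b + 4.0 * kd_m * total_ca_m)) / 2.0

# Adding ~0.4 mM Ca2+ to 1 mM EGTA (assumed Kd ~ 150 nM) gives
# roughly the 100 nM free Ca2+ target mentioned in the text.
target = free_ca(0.4e-3, 1.0e-3, 150e-9)
```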
A voltage protocol designed to apply a family of long test voltage steps (1-5 s) in both negative and positive directions from the resting membrane voltage allows the investigator to record a mix of both rapidly and slowly activating/inactivating currents through voltage-dependent and voltage-independent ion channels. The time between voltage steps should be sufficient for channel deactivation. The stability of the measured currents should be established before testing the effects of drugs on the currents. For example, applying the same series of voltage steps should generate approximately equal currents on successive trials in the same cell. Voltage-ramp protocols can also be used to record instantaneous (voltage-independent) or rapidly activating currents; the ramp can be applied at regular intervals to monitor the stability of the currents over time. For slowly activating currents, use of a single voltage step applied at regular intervals is generally more appropriate for time course measurements. When stable recordings of total ionic conductances are achieved, it is possible to test the effects of a drug, usually applied at varying concentrations. Drugs may affect the amplitudes of the conductances and the kinetics of their responses to the applied voltage protocols as well as their voltage-dependence of activation. The drug effects are generally time-dependent and vary with drug dose in a reproducible manner. Careful evaluation of the drug effects on total membrane currents provides important clues to the types of ionic conductances that may be affected. It is then desirable to record the drug-targeted ionic conductances in isolation. 
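The kind of step protocol described above (a holding voltage near rest, long test steps in both directions, and an interval for deactivation between steps) can be sketched as a command-waveform builder. The default holding voltage, step range, and sample rate below are illustrative assumptions, not prescriptions:

```python
def step_family(holding_mv=-74.0, test_mv=range(-94, 37, 10),
                step_s=5.0, inter_s=10.0, rate_hz=100):
    """Build a family of long test steps separated by holding intervals.

    Returns a flat list of command voltages (mV) sampled at rate_hz."""
    n_step = int(step_s * rate_hz)
    n_rest = int(inter_s * rate_hz)
    waveform = []
    for v in test_mv:
        waveform += [holding_mv] * n_rest   # allow channel deactivation
        waveform += [float(v)] * n_step     # long test step
    waveform += [holding_mv] * n_rest       # final return to holding
    return waveform
```

A digitizer or amplifier API would then play this waveform out while digitizing the evoked currents; repeating the same family on successive trials is the stability check described in the text.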
Adjust recording conditions to isolate drug-sensitive currents

To record specific currents among the mix of total cellular ionic conductances, a tailored voltage protocol should be used in combination with internal and external solutions and pharmacological approaches that are rationally chosen to enhance or maintain the current of interest while minimizing other conductances. The voltage protocol should reflect the specific biophysical properties of the channels under investigation. If the data are available, consider the voltage dependence of activation and the time constants of activation, inactivation, and deactivation of the currents. For example, store-operated currents are known to be inwardly rectifying and highly Ca2+-selective, with fast Ca2+-dependent inactivation at negative voltages (Parekh & Putney, 2005). To isolate the highly Ca2+-selective store-operated currents from other conductances, consider using an external solution containing 10-20 mM Ca2+ and replacing all monovalent ions (K+, Na+ and Cl−) with impermeant ions such as N-methyl-D-glucamine and aspartate. A voltage protocol comprised of a 0 mV holding voltage with 100 ms ramps from +100 to -100 mV can be applied every 5-20 s to record the time course of current activation in response to store depletion (often induced by dialyzing cells with EGTA- or BAPTA-containing pipette solution in ruptured patch mode, or by application of thapsigargin or cyclopiazonic acid, which block the ability of the cells to sequester Ca2+ in the endoplasmic/sarcoplasmic reticulum (Brueggemann et al., 2006)). In general, the isolation of broad classes of ionic conductances (i.e. Ca2+ currents, K+ currents, non-selective cation currents, or Cl− currents) may be achieved by using bath and pipette solutions containing ions that cannot permeate or that block the movement of ions through other classes of ion channels.
For example, to record Ca2+ conductance in isolation, Cs+ can be used to replace K+ because it blocks most if not all K+ channels and thereby minimizes contributions of outward K+ currents to the recording. Similarly, replacing Cl− with aspartate, gluconate or sulfonate will minimize contributions of Cl− conductances. It is much more difficult to isolate ion currents within the same class. This may require the use of pharmacological agents that are selective for a particular class of channels. For example, if it is desired to isolate T-type Ca2+ current from L-type Ca2+ current, specific L-type Ca2+ channel blockers like verapamil can be used. In this case, an alternative (or adjunct) approach is the use of the ruptured patch mode, which leads to rundown of L-type Ca2+ current over time; other Ca2+ conductances (e.g. T-type Ca2+ currents) that have less tendency to run down in the ruptured patch configuration may then be recorded in isolation. The use of pharmacological ion channel blockers to eliminate unwanted conductances should be employed with caution unless the specificity of the drugs has been thoroughly established. Probably most difficult is the isolation of specific K+ currents, because many different potassium channels are normally expressed in each cell. Several highly specific toxins are available for certain subfamilies of potassium channels (hongotoxin and margatoxin for Kv1.1, Kv1.2, Kv1.3 (Koschak et al., 1998), hanatoxin for Kv2 (Swartz & MacKinnon, 1995), K-dendrotoxin for homo- and heteromeric channels containing Kv1.1 (Robertson et al., 1996)). These can be used to eliminate a subtype of K+ current or to determine the contribution of that subtype to the larger mix of K+ currents.
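Determining the contribution of a toxin-sensitive subtype, as described above, amounts to a point-by-point subtraction of traces recorded before and during toxin application. A minimal sketch; the current values are hypothetical, not measured data:

```python
def toxin_sensitive_current(control_pa, toxin_pa):
    """Isolate the toxin-sensitive component by trace subtraction.

    Both traces must be recorded with the same protocol and sampling."""
    if len(control_pa) != len(toxin_pa):
        raise ValueError("traces must have identical sampling")
    return [c - t for c, t in zip(control_pa, toxin_pa)]

# hypothetical steady-state K+ currents (pA) before and during toxin
sensitive = toxin_sensitive_current([120.0, 250.0, 400.0],
                                    [100.0, 190.0, 280.0])
```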
It is important to consider that different members within a subfamily of K+ channels can combine to form functional heteromeric channels, which may vary in their sensitivities to toxins depending on the subunit composition (Tytgat et al., 1995; Plane et al., 2005). In some cases, a combination of pharmacological approaches and voltage protocols that take advantage of the unique biophysical properties of the K+ channels expressed in the cell type under investigation can effectively isolate a specific subtype of K+ conductance (see example below). Recording a specific current 'in isolation' from other currents is never fully achieved, but conditions may be established that provide a reasonable signal-to-noise ratio to evaluate contributions of a subset of ion channels. Specific pharmacological ion channel blockers or activators may be useful to confirm that the currents measured are largely attributable to a particular type of channel, but molecular knockdown approaches are often the best way to determine what fraction of the currents measured are mediated by a specific channel subtype. When conditions have been optimized for recording isolated currents, the effects of the drug on those currents can be tested.

Evaluate the actions of the drug of interest at its physiologically or clinically relevant concentrations

An appropriate dose-response range for the drug of interest should be based on consideration of physiological or clinically achieved plasma concentrations and doses used in vitro in previously published studies. Dose-dependent effects can be evaluated both under physiological ionic conditions and under recording conditions that isolate specific currents. Stable recording of currents in the absence of drug should be established by applying voltage steps or ramps at regular intervals and measuring similar current amplitudes for several minutes.
Increasing concentrations of the drug are then applied, starting at a dose that has little or no effect and increasing in 10-fold or smaller increments to at least the maximum clinical or physiological drug concentration. Be aware that repetitive drug administration or incrementally increasing doses may induce tachyphylaxis. Applying a single dose acutely to a naïve cell may provide the best assessment of the effect of that dose of the drug. To evaluate whether the presence of the drug changes the biophysical properties of the channel, such as its gating kinetics or voltage-dependence of activation, specific voltage protocols may be applied when steady-state effects of a particular dose of the drug have been achieved. For example, a tail current voltage protocol can be used to evaluate the effects of a drug on voltage-dependence of channel activation. This protocol should be applied at the end of the control recording (before drug application); two successive voltage protocols that yield similar currents establish the stability of the control recording. The same successive voltage protocols should then be repeated when measurement of the time course of current amplitude indicates that the current amplitude has reached a new plateau in the presence of the drug. To determine the reversibility of drug effects, it is important to measure the currents during drug application and during washout of the drug. It may require tens of minutes to achieve a stable reversal of a drug effect and in some cases the effects will not be reversed within a practical time frame. Reversibility, when it is achieved, provides convincing evidence that the effect measured was specifically due to the drug and not simply due to time-dependent changes such as run-up or run-down of currents. It is also important to include vehicle and time controls to assure that effects are due to the presence of the drug rather than the time of recording or the solvent in which the drug is dissolved. 
Reproducible effects of a drug on the amplitude or biophysical characteristics of a particular current in the cultured cell model may provide important clues to the drug's effect on a particular tissue. However, whenever possible, results based on cultured cells should be confirmed using freshly isolated cells from the tissue from which the cultured cells were derived. The electrophysiological characteristics of the drug-sensitive currents may suggest one or more specific ion channel subtypes as the drug targets. Molecular biological approaches may then be used to confirm the identity of the drug-sensitive ion channel.

Apply molecular biological approaches such as knock-down and overexpression as necessary to confirm an ion channel drug target

As was noted above, cultured cells are often suitable for molecular biological interventions. Knock-down of expression of specific ion channels may be achieved by treatment with short hairpin RNA (shRNA) or small interfering RNA (siRNA). Alternatively, expression of dominant-negative ion channel subunits may specifically abrogate the function of particular ion channels. These molecular constructs can be introduced into the cultured cells using transfection techniques with appropriate plasmids or by infecting the cells with viral vectors engineered to express the constructs. Inactive constructs (e.g. scrambled shRNA) should be used as a control. Biochemical techniques such as RT-PCR or Western blotting and/or immunohistochemistry are required to confirm the effectiveness of knock-down. Knock-down of expression or function of a specific ion channel can reveal how much that channel type contributes to the currents measured and whether a drug effect can be attributed to specific actions on that channel type. In electrophysiological recordings, knock-down of a specific channel type should eliminate the contribution of those channels to the currents measured.
In other functional assays, loss of the drug effect when the channel is knocked down would provide evidence that the functional effects of the drug can be specifically attributed to its actions on that channel type. Alternatively, if the effects persist even after knockdown of a particular channel, then that channel is unlikely to be the primary drug target. Another way to implicate an ion channel as a drug target is to over-express the ion channel and test the effects of the drug on the currents. This is best done in the same cellular environment known to express that type of ion channel endogenously, because the cellular environment dictates many properties of ion channels, including regulation by signaling pathways that may at times mediate or modulate drug effects. Overexpression typically results in much larger currents that can be unambiguously attributed to the overexpressed channels. If these channels are direct or indirect targets for the drug, then drug application should have effects on the currents similar to the effects observed on native currents. Potential pitfalls of these molecular biological strategies include changes in expression or function of other molecules that may compensate for the increased or decreased channel expression or otherwise alter the measured currents. It is important to keep in mind that the effects of drugs on ion currents may not be via a direct interaction with the channel itself, but may instead be mediated by other mechanisms, including activation of cellular signaling molecules whose expression may or may not be altered when channel expression levels change.

Establish a multicellular functional system or animal model for final proof of principle

To determine the physiological significance of drug effects on particular types of ion channels, the drug can be tested on in vitro (ex vivo) and/or in vivo functional models.
Examples of in vitro functional models include the isolated Langendorff heart preparation (Skrzypiec-Spring et al., 2007), muscle strips of various origins, aortic or bronchial rings, brain slices, lung slices, pressurized artery preparations, etc. These more complex experimental systems more closely mimic physiological conditions, but also introduce additional factors that may complicate interpretation of drug effects. It is important to consider whether changes in tissue function in the presence of a drug can be attributed primarily to the drug's effects on ion channels in a particular cell type; there may be multiple effects on multiple cell types within the tissue. In vivo drug testing adds a further level of complexity, but it is the ultimate test of how a drug will affect whole-animal physiology. Many different animal models have been developed and are described in the literature. To determine whether an effect of a drug in vivo can be attributed to its actions on a specific ion channel, it may be possible to compare its effects with the effects of another drug that is known to have the same or opposite effects on that ion channel.

Example: Cyclooxygenase-2 inhibitor effects on vascular smooth muscle ion channels

The following example illustrates how we have employed the approaches described above in an attempt to elucidate the mechanisms underlying differential adverse cardiovascular risk profiles among clinically used drugs of the same class. Selective cyclooxygenase-2 (COX-2) inhibitors, such as celecoxib (Celebrex®), rofecoxib (Vioxx®), and diclofenac, are non-steroidal anti-inflammatory drugs (NSAIDs) commonly used for the treatment of both acute and chronic pain. About five years after celecoxib and rofecoxib were approved for use in the United States, rofecoxib (Vioxx®) was voluntarily withdrawn from the market because of adverse cardiovascular side effects (Dajani & Islam, 2008).
The ensuing investigation of the cardiovascular side effects of this drug class revealed differential risk profiles, with celecoxib being relatively safe compared with rofecoxib and diclofenac (Cho et al., 2003; Hermann et al., 2003; Aw et al., 2005; Hinz et al., 2006; Dajani & Islam, 2008). Early reports suggested that these differences might relate to pro-hypertensive effects of COX-2 inhibition (Cho et al., 2003; Hermann et al., 2003; Aw et al., 2005; Hinz et al., 2006) that were offset by vasodilatory effects of celecoxib (Widlansky et al., 2003; Klein et al., 2007). However, the mechanisms underlying the vasodilatory effects of celecoxib remained elusive. We employed the following strategies to investigate whether celecoxib might exert its vasodilatory actions via effects on ion channels in vascular smooth muscle cells (VSMCs). Additional details of these studies were published previously.

Patch clamp mode

To investigate the vasodilator actions of celecoxib, it was important to consider two types of ion channels that are perhaps the most important in determining the contractile state of vascular smooth muscle cells: Kv7 channels, which determine the resting membrane voltage (Mackie & Byron, 2008), and L-type voltage-gated Ca2+ channels, activation of which induces Ca2+ influx, smooth muscle contraction, and vasoconstriction (Jackson, 2000). Both of these types of channels are known to be regulated by PIP2 (Suh & Hille, 2008; Suh et al., 2010). We therefore chose to use the perforated patch-clamp configuration to record currents in voltage-clamp mode (200 µg/ml amphotericin B in the internal solution was used for membrane patch perforation).

Voltage clamp protocols

We used a 5 s voltage step protocol from a -74 mV holding potential to test voltages ranging from -94 to +36 mV. After each test pulse the voltage was returned to -74 mV for 10 s to allow full deactivation before the next voltage step was applied.
This protocol enabled us to simultaneously record the current-voltage (I-V) relationship for L-type Ca 2+ channels (based on peak inward currents recorded at the beginning of the voltage steps; see inset on Fig. 1A) and for Kv7 channels (based on steady-state outward K + currents recorded at the end of the voltage steps). The evaluation of L-type currents at the beginning of the voltage steps was only possible because of the absence of rapidly-activating K + currents in the voltage range used. The long (5s) voltage steps enabled relative isolation of Kv7 currents at the end of the voltage steps because Kv7 channels do not inactivate, whereas most other K + channels do inactivate when stepped to a constant activating voltage for 5 s. Representative current traces and I-V relationships are shown on Fig. 1A and 1B. The I-V voltage protocol requires approximately 4 min to complete all the 5 s voltage steps with a 10 s interval between each step. We repeated this three times to determine that the currents were stable (the I-V curves were approximately superimposable). When the currents were stable we initiated a voltage protocol designed to record the time course of drug application. The time course voltage protocol combined 100 ms voltage ramps (from a -74 mV holding potential to +36 mV) to record the rapidly-activating Ca 2+ current (as the peak inward current) followed by 5 s voltage steps to -20 mV to record slowly-activating and non-inactivating Kv7 current (measured as the average steady-state current recorded at the end of the voltage step; Figure 1C). The time course protocol was applied every 15 s. Ca 2+ and K + currents were recorded for at least 5 min before application of celecoxib (10 μM). Celecoxib was then applied until a stable drug effect was achieved (approximately 15 min). Then the I-V voltage-step protocol was applied again (twice in succession) to record I-V relationships of the Ca 2+ and K + channels in the presence of the drug. 
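The combined time-course sweep described above (a 100 ms ramp from the -74 mV holding potential to +36 mV, followed by a 5 s step to -20 mV, repeated every 15 s) can be sketched as a single command waveform. The 1 kHz sample rate is an assumed illustrative value:

```python
def time_course_sweep(rate_hz=1000):
    """One 15 s sweep: 100 ms ramp from -74 to +36 mV, then a 5 s step
    to -20 mV, then holding voltage for the remainder of the interval."""
    hold, ramp_top, step_v = -74.0, 36.0, -20.0
    n_ramp = int(0.1 * rate_hz)
    n_step = int(5.0 * rate_hz)
    # linear ramp captures the rapidly activating Ca2+ current
    sweep = [hold + (ramp_top - hold) * i / (n_ramp - 1) for i in range(n_ramp)]
    # long step captures the slowly activating, non-inactivating Kv7 current
    sweep += [step_v] * n_step
    # pad with holding voltage so sweeps repeat every 15 s
    n_total = int(15.0 * rate_hz)
    sweep += [hold] * (n_total - len(sweep))
    return sweep
```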
The time course protocol was then re-initiated to monitor the effects of washout of celecoxib. These experiments revealed that celecoxib induced a reversible enhancement of Kv7 current and inhibition of L-type Ca 2+ current-both of these effects could potentially contribute to the vasodilatory actions of celecoxib. A, representative traces of whole-cell K + and Ca 2+ currents measured in a single A7r5 cell; i, control; ii, in the presence of 10 µM celecoxib. Inward Ca 2+ currents, activated at the beginning of the voltage steps, are shown in insets on an expanded scale for clarity. B, I-V curves, corresponding to traces in A, for steady-state K + current (filled symbols) and peak inward Ca 2+ current (open symbols) in control (circles), in the presence of 10 µM celecoxib (triangles), and after washout of celecoxib (inverted triangles). C, corresponding time course of inhibition of the peak inward Ca 2+ current and activation of K + current. Reproduced with permission from Brueggemann et al., (2009). Similar experiments were conducted to evaluate the effects of other NSAIDs (rofecoxib and diclofenac), 2,5-dimethylcelecoxib (a celecoxib analog lacking COX-2 inhibitory activity), as well as verapamil (a known inhibitor of L-type Ca 2+ channels) and flupirtine (a known activator of Kv7.2-Kv7.5 channels) . The effects of these drugs were compared with celecoxib-induced effects on L-type Ca 2+ currents and Kv7 currents. From these studies it was apparent that neither rofecoxib nor diclofenac mimicked celecoxib in its actions on either L-type Ca 2+ currents or Kv7 currents; on the other hand, 2,5-dimethylcelecoxib was indistinguishable from celecoxib in its effects . Isolation of L-type Ca 2+ currents and Kv7 currents To evaluate in more detail the actions of celecoxib on L-type Ca 2+ currents and Kv7 currents, each type of current was recorded in isolation. 
To record Ca 2+ currents in isolation, a Cs + -containing internal solution was used (for A7r5 cells, the internal solution contained (in mM): 110 Cs aspartate, 30 CsCl, 5 HEPES, 1 Cs-EGTA, pH 7.2). Isolated Ca 2+ currents were recorded with a 300 ms voltage step protocol from a -90 mV holding potential. To isolate Kv7 currents, 100 μM GdCl 3 , sufficient to block L- and T-type Ca 2+ channels and non-selective cation channels, was added to the external solution. Isolated Kv7 currents were recorded with the same 5 s voltage-step protocol used to record a mix of currents (see above). Effects of celecoxib at therapeutic concentrations Celecoxib dose-response curves for L-type Ca 2+ currents and Kv7 currents were obtained by measuring the currents during successive applications of increasing concentrations of celecoxib (ranging from 0.1 µM to 30 µM), each time waiting until the drug effect had stabilized before application of the next dose. The celecoxib concentrations selected for dose-response determinations were based on a ±10-fold range of the mean therapeutic concentrations typically achieved in the plasma of patients treated with celecoxib (1-3 µM) (Hinz et al., 2006). Our estimated IC 50 value for suppression of L-type Ca 2+ currents was 8.3 ± 1.3 µM. To extend the findings to a more physiological model system, the effects of celecoxib were also examined using freshly dispersed mesenteric artery myocytes. Celecoxib inhibited Ca 2+ currents and enhanced Kv7 currents recorded in isolation in mesenteric artery myocytes, just as had been observed in A7r5 cells. Molecular biological approaches to evaluate Kv7.5 as a target of celecoxib Kv7 currents measured in A7r5 cells had previously been attributed to Kv7.5 (KCNQ5) channel activity based on expression studies and on elimination of the currents by shRNA treatment targeting Kv7.5 (KCNQ5) mRNA transcripts (Brueggemann et al., 2007; Mani et al., 2009).
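The dose-response analysis can be illustrated with a standard Hill-type inhibition fit. Only the 8.3 µM IC 50 is taken from the text; the Hill slope of 1 and the noise-free synthetic responses are assumptions, and `scipy.optimize.curve_fit` is used for the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc_uM, ic50_uM, hill):
    """Fraction of current remaining at a given drug concentration."""
    return 1.0 / (1.0 + (conc_uM / ic50_uM) ** hill)

# Concentrations spanning the 0.1-30 uM range used in the study;
# responses are synthetic, generated from the reported IC50 of 8.3 uM
# with an assumed Hill slope of 1.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = hill_inhibition(conc, 8.3, 1.0)

(ic50_fit, hill_fit), _ = curve_fit(hill_inhibition, conc, resp, p0=[5.0, 1.0])
```

On real recordings the responses would be fractional currents measured at each stabilized dose, and confidence intervals would come from the returned covariance matrix.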
To determine whether Kv7.5 was a specific target for celecoxib, we measured the effects of celecoxib on overexpressed human Kv7.5 channels, using A7r5 cells as an expression system. Celecoxib robustly enhanced the overexpressed Kv7.5 currents. Functional assays to evaluate how ion channel targeting by celecoxib affects cell and tissue physiology As noted above, Kv7 channel activity is believed to stabilize negative resting membrane voltages in arterial myocytes and thereby oppose the activation of L-type voltage-gated Ca 2+ channels. The latter mediate Ca 2+ influx, smooth muscle contraction, and vasoconstriction. Drugs that enhance Kv7 channel activity or that directly inhibit L-type Ca 2+ channel activity would therefore be expected to reduce cytosolic Ca 2+ concentration, relax the arterial myocytes, and dilate arteries. To test the hypothesis that the effects of celecoxib on arterial smooth muscle ion channels contribute to its vasodilatory actions, three different functional assays were used: a. Arginine-vasopressin (AVP) is a vasoconstrictor hormone that has been shown to induce Ca 2+ oscillations in confluent monolayers of A7r5 cells. We therefore loaded A7r5 cells with the fluorescent Ca 2+ indicator fura-2 and examined the effects of celecoxib (10 µM) in comparison with rofecoxib (10 µM) on AVP-induced Ca 2+ oscillations. In support of our hypothesis, celecoxib opposed the actions of the vasoconstrictor hormone, essentially abolishing AVP-stimulated Ca 2+ oscillations. Rofecoxib, in contrast, had no effect (Figure 2A). Inhibition of AVP-stimulated Ca 2+ oscillations was also observed using known L-type Ca 2+ channel blockers or activators of Kv7 channels (not shown, but see (Byron, 1996; Brueggemann et al., 2007)). A, Celecoxib, but not rofecoxib, abolishes AVP-induced Ca 2+ oscillations in A7r5 cells. Confluent monolayers of fura-2-loaded A7r5 cells were treated with 25 pM AVP (arrow).
Representative traces show the absence of AVP-induced Ca 2+ oscillations with simultaneous addition of celecoxib (10 µM, middle) but not with addition of vehicle (top) or rofecoxib (10 µM, bottom). B, representative traces from rat mesenteric artery pressure myography illustrating the inability of 20 µM rofecoxib (top) and 20 µM diclofenac (bottom) to dilate arteries preconstricted with 100 pM AVP. Celecoxib (20 µM) fully dilated the same arteries when added after either rofecoxib or diclofenac. C, measurement of arteriolar blood flow in vivo using intravital microscopy reveals that AVP (100 pM) significantly constricts arterioles (top panels) and reduces blood flow (bar graph), but this effect is more than fully reversed by the addition of 10 µM celecoxib. Panels A and B reproduced with permission from Brueggemann et al. (2009). b. The constriction and dilation of arteries can be measured in vitro using pressure myography. Small segments of artery are cannulated at either end, pressurized to their normal physiological pressure, and maintained at physiological temperatures and ionic balance; arterial diameter is monitored continuously by digital image analysis while drugs are applied to the bath. We used these methods to test the ability of celecoxib to dilate pressurized mesenteric artery segments that were pre-constricted with AVP (100 pM). In support of our hypothesis, celecoxib induced concentration-dependent, endothelium-independent dilation of pre-constricted mesenteric arteries. Similar effects were obtained using known L-type Ca 2+ channel blockers or activators of Kv7 channels (not shown, but see (Henderson & Byron, 2007; Mackie et al., 2008)). The maximum dilatory effect of celecoxib was achieved at a concentration of 20 µM; neither rofecoxib nor diclofenac induced significant artery dilation at the same concentration (Figure 2B). c. Finally, it is important to evaluate the effects of the drug in an in vivo model.
We therefore examined the ability of celecoxib to increase blood flow in mesenteric arterioles of live anesthetized rats using intravital microscopy and intravenous perfusion of fluorescent microspheres. AVP (100 pM) superfused over the exposed portion of the mesenteric vasculature induced significant arteriolar constriction and reduced blood flow (determined from the velocity of fluorescent microspheres moving through the arterioles). Application of celecoxib (10 µM) in addition to AVP more than fully restored both arteriolar diameter and blood flow (Figure 2C). The combined functional assays provided strong evidence supporting the hypothesis that celecoxib, but not other NSAIDs of the same class, exerts vasodilatory effects via combined activation of Kv7 potassium channels and inhibition of L-type voltage-gated Ca 2+ channels in arterial smooth muscle cells. These results may explain the differential risk of adverse cardiovascular events in patients taking these different NSAIDs. Conclusion Carefully designed and executed electrophysiological experiments can provide important insights into the mechanisms of drug actions, including their off-target effects on specific tissues. This book is a stimulating and interesting addition to the collected works on the patch clamp technique. Patch clamping is an electrophysiological technique that measures the electric current generated by a living cell due to the movement of ions through the protein channels present in the cell membrane. The technique was developed by two German scientists, Erwin Neher and Bert Sakmann, who received the Nobel Prize in Physiology or Medicine in 1991 for this innovative work. The patch clamp technique is used to measure drug effects in a range of diseases and to elucidate disease mechanisms in animals and plants.
It is also most useful for determining the structure-function activities of compounds and drugs, and most leading pharmaceutical companies use this technique on their drugs before bringing them to clinical trials. This book deals with the understanding of endogenous mechanisms of cells and their receptors as well as the advantages of using this technique. It covers the basic principles and preparation types and also deals with the latest developments in the traditional patch clamp technique. Some chapters in this book take the technique to the next level with modulations and novel approaches. This book will be of value for students of physiology, neuroscience, cell biology and biophysics.
8,692.6
2012-03-23T00:00:00.000
[ "Medicine", "Chemistry" ]
Theoretical design of tetragonal rare-earth-free alloys with high magnetisation and high magnetic anisotropy Tetragonal alloys, such as D022-Mn3Ga, are potential candidates for rare-earth-free permanent magnets due to their high Curie temperature and uniaxial magnetic anisotropy. For high-performance permanent magnets, high saturation magnetisation is necessary. However, the saturation magnetisation of D022-Mn3Ga is small due to ferrimagnetic ordering. We investigated the possibility of developing ferromagnetic Heusler alloys with high magnetic anisotropy and saturation magnetisation using first-principles calculations. We focused on the effects of Fe substitution for Mn in D022-Mn3Ga as well as the consequent volume expansion; the ferromagnetic tetragonal XA phase is stabilized in Fe2MnGa by an 8% volume expansion. This tetragonal XA-Fe2MnGa has desirable properties for a high-performance permanent magnet, such as high magnetisation (1350 emu cc−1), perpendicular magnetic anisotropy (2.12 MJ m−3), and Curie temperature (1047 K). In addition, the substitution of Sn and increasing the Ga composition in the Fe2MnGa alloy result in volume expansion, which stabilizes the ferromagnetic tetragonal XA phase. Introduction High-performance permanent magnets are widely used in various industrial applications, including in motors for electric vehicles, power generators in wind turbines, and hard disk drives. The development of new rare-earth-free permanent magnets is important due to the cost and limited availability of rare-earth elements. Mn-Ga alloys such as L1 0 -MnGa and D0 22 -Mn 3 Ga have been attracting much attention as candidates for spintronic materials and rare-earth-free permanent magnets because of their high uniaxial magnetic anisotropy and Curie temperatures exceeding 700 K. [1][2][3] Moreover, a high saturation magnetisation is required for the application of permanent magnets.
However, the saturation magnetisation of D0 22 -Mn 3 Ga is much smaller than that of L1 0 -MnGa (845 emu cc −1 ) 4) owing to the ferrimagnetic ordering of the magnetic moments at the two Mn sublattices, 2.8 μ B and 1.6 μ B per Mn atom at the 4d and 2b sites, respectively. 1) If the magnitude of the magnetic moment at each Mn site is assumed to be the same even in the ferromagnetic state, a magnetisation of about 1.7 T can be expected for ferromagnetic Mn 3 Ga. Furthermore, Mn-Fe-Ga alloys have interesting properties, which can enable their application in spintronics and magnetic shape memory alloys. Mn 2 FeGa, which is a tetragonal Heusler alloy, has also been identified as a suitable candidate for use in spin-transfer torque devices, as the magnetisation is nearly compensated, and the high perpendicular magnetic anisotropy is maintained. 5) The Fe 2 MnGa alloy has a complex phase diagram for temperature and composition. Martensitic transformation [6][7][8][9][10] and anti-ferromagnetic to ferromagnetic phase transformation [11][12][13][14] have been reported for stoichiometric and off-stoichiometric Fe 2 MnGa. The exchange coupling between the Mn atoms in most alloys and compounds empirically shows a universal dependence on the Mn-Mn distance. 15) In this study, we theoretically investigated the possibility of realizing ferromagnetic ordering by substituting Fe for Mn in Mn 3 Ga and volume expansion. Material and methods We investigated the relationship between the ferrimagnetic and ferromagnetic formation energies and volumes for D0 22 -Mn 3 Ga, Mn 2 FeGa, and Fe 2 MnGa, by first-principles density-functional calculations using the Vienna ab-initio simulation package based on the plane-wave basis set and projector augmented wave method. 16) We adopted a generalised density gradient approximation (GGA) parameterised by Perdew, Burke and Ernzerhof (PBE) for the exchangecorrelation potential. 17) The cut-off energy of the planewave basis set was 500 eV. 
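The back-of-the-envelope conversion behind the "about 1.7 T" figure can be sketched numerically. The formula-unit volume of 50 Å 3 is an assumed round value (not given in the text; the paper reports equilibrium volumes of 46-50 Å 3 f.u. −1 for related phases), so the result is indicative only.

```python
import math

MU_B = 9.274e-24          # Bohr magneton (A m^2)
MU_0 = 4.0e-7 * math.pi   # vacuum permeability (T m/A)

def magnetisation_tesla(moment_mu_b_per_fu, volume_A3_per_fu):
    """mu_0 * M for a given moment per formula unit and f.u. volume."""
    M = moment_mu_b_per_fu * MU_B / (volume_A3_per_fu * 1e-30)  # A/m
    return MU_0 * M

# Hypothetical ferromagnetic Mn3Ga: the two 4d moments (2.8 mu_B each)
# and the 2b moment (1.6 mu_B) aligned instead of antiparallel.
# The 50 A^3 per-f.u. volume is an assumption, not a value from the text.
fm_moment = 2 * 2.8 + 1.6   # 7.2 mu_B per f.u.
mu0_M = magnetisation_tesla(fm_moment, 50.0)
print(round(mu0_M, 2))      # ~1.7 T, in line with the text's estimate
```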
A 12 × 12 × 12 k-point mesh was employed for Brillouin zone integrations. The formation energy of Mn 3-x Fe x Ga alloys is given as E f = E(Mn 3-x Fe x Ga) - [(3-x)E(Mn) + xE(Fe) + E(Ga)], where E(Mn 3-x Fe x Ga) is the total energy of the Mn 3-x Fe x Ga alloy per formula unit; E(Mn), E(Fe), and E(Ga) denote the total energies of bulk α-Mn, bcc-Fe, and α-Ga per atom, respectively. The magnetic anisotropy energy (MAE) values were estimated based on the magnetic force theorem 18) using a 24 × 24 × 24 k-point mesh. In addition, we evaluated the Heisenberg exchange coupling parameter (J ij ) between the magnetic atoms as a function of volume using the Liechtenstein formula 19) with the spin-polarized relativistic Korringa-Kohn-Rostoker code. 20) The angular momentum cut-off was set at 4 in the multiple scattering expansion. We used a 15 × 15 × 15 k-point mesh and 50 energy points on the complex energy path for the self-consistent calculation. We adopted a GGA-PBE for the exchange-correlation potential. A fine k-mesh with 34 × 34 × 34 k-points was adopted for the evaluation of the exchange coupling parameters. The Curie temperature was estimated from the exchange coupling parameters in a framework based on the mean-field approximation (MFA) for multi-sublattice systems. 21,22) Furthermore, we also investigated the stable structure and magnetic properties of the Ga-rich Fe 2 MnGa compound using a supercell approach. To simulate the off-stoichiometric compound, we constructed a 40-atom "special quasi-random structure" (SQS) 23) to model the off-stoichiometric Fe 2 MnGa compound. The SQSs used in this work were generated using the "Alloy Theoretic Automated Toolkit" package. 24) Results First, we evaluated the formation energies (E f ) of the ferromagnetic and ferrimagnetic states in the tetragonal Heusler structure.
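The formation-energy definition above can be captured in a small helper. This is a minimal sketch: the numerical values in the usage line are placeholders for illustration, not computed DFT total energies.

```python
def formation_energy(E_alloy_fu, x, E_Mn, E_Fe, E_Ga):
    """E_f of Mn(3-x)Fe(x)Ga per formula unit, referenced to the
    bulk alpha-Mn, bcc-Fe and alpha-Ga total energies per atom."""
    return E_alloy_fu - ((3.0 - x) * E_Mn + x * E_Fe + E_Ga)

# Placeholder energies in eV (not DFT results): a negative E_f would
# indicate stability against decomposition into the elemental references.
E_f = formation_energy(E_alloy_fu=-10.0, x=1.0, E_Mn=-2.0, E_Fe=-3.0, E_Ga=-1.0)
print(E_f)  # -2.0
```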
A tetragonal X 2 YZ Heusler compound can have a regular (L2 1 ) or inverse (XA) Heusler structure, as shown in Fig. 1. For D0 22 -Mn 3 Ga, the formation-energy difference ΔE f = E f (ferro) - E f (ferri), where E f (ferro) and E f (ferri) denote the formation energies in the ferromagnetic and ferrimagnetic states, respectively, is 0.39 eV f.u. −1 at the equilibrium volume of each magnetic structure. This value is considerably larger than the elastic energy, and therefore it is difficult to stabilize the ferromagnetic state of the D0 22 structure by volume modulation alone [see Fig. 2(a)]. In Mn 2 FeGa, the ΔE f values are −0.02 and 0.33 eV f.u. −1 for the L2 1 and XA structures, respectively. In the L2 1 structure, the ferromagnetic state becomes stable, but the L2 1 structure is energetically unfavourable compared with the XA structure for all the volumes studied here. A discontinuous point is seen in the energy-volume curves of the ferromagnetic L2 1 and XA phases, as shown in Figs. 2(b), 2(c), which is attributed to the transition from the low-spin to the high-spin ferromagnetic phase. The magnetisation is enhanced from 6.16 (5.83) to 7.37 (8.49) μ B f.u. −1 for the L2 1 (XA) phase at these points. For Fe 2 MnGa, the ΔE f values are 0.19 and −0.14 eV f.u. −1 for the L2 1 and XA structures, respectively; the ferromagnetic XA phase has a formation energy of 0.01 eV f.u. −1 and is more stable than the ferrimagnetic L2 1 phase. In the ferromagnetic XA phase, the magnetic moments of the A-site Fe, C-site Mn, and B-site Fe are 2.19, 2.80, and 2.30 μ B , respectively, and the magnetisation reaches 1350 emu cc −1 . The tetragonal L2 1 -Mn 2 FeGa and XA-Fe 2 MnGa, wherein the Fe atom occupies the B site, favour the ferromagnetic state over the ferrimagnetic state. Table I shows the J ij between the nearest-neighbour magnetic atoms at the A(C) site and the B site. When Mn occupies the B site, the J ij values are negative, and the ferrimagnetic spin alignment becomes stable.
Conversely, the J ij values become positive when Mn is replaced with Fe at the B site, and the ferromagnetic spin alignment becomes energetically favourable. Next, we compared the E f and magnetisation of several crystal structures of Mn 2 FeGa and Fe 2 MnGa (see Table II). For Mn 2 FeGa, the ferrimagnetic state in the tetragonal XA structure is the most energetically favourable, as mentioned in a previous theoretical report. 25) For Fe 2 MnGa, the ferrimagnetic state in the cubic L2 1 structure becomes the most energetically favourable. In the previous theoretical report, the ferromagnetic L1 2 structure is more stable than the L2 1 structure. 26) However, the E f difference between the tetragonal XA structure with ferromagnetic ordering and the cubic L2 1 structure is very small, 0.016 eV f.u. −1 . Figure 3 shows the volume dependence of E f for Fe 2 MnGa with several crystal structures. The equilibrium volumes are 46.09, 48.05, and 49.76 Å 3 f.u. −1 for the cubic L2 1 , tetragonal L2 1 , and tetragonal XA structures, respectively, which are different. The ferromagnetic XA phase is stabilized by the 8% volume expansion from the equilibrium volume of the cubic L2 1 phase. The data in Table II reveal that the E f of the ferromagnetic D0 19 phase is also close to that of the cubic L2 1 . Experimental evidence for bcc to the ferromagnetic hexagonal phase transformation upon thermal annealing has been reported. 27) Afterward, we evaluate the Curie temperature and MAE of the tetragonal XA-Fe 2 MnGa compound. The Curie temperature obtained for the XA-Fe 2 MnGa by MFA is 1047 K, which is much higher than that for D0 22 -Mn 3 Ga (730 K). This high Curie temperature is predominantly attributed to the strong ferromagnetic exchange coupling (29.5 meV) between the Fe atoms at the A and B sites (Table I). A high perpendicular MAE of 2.12 MJ m −3 is also obtained for XA-Fe 2 MnGa; however, this value is slightly smaller than that for the D0 22 -Mn 3 Ga (2.80 MJ m −3 ). 
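The multi-sublattice mean-field Curie-temperature estimate mentioned above can be sketched as the largest eigenvalue of the summed inter-sublattice coupling matrix. The 2×2 coupling matrix below is hypothetical, chosen only so that the estimate lands near the reported 1047 K; the actual J 0 sums are not given in the text.

```python
import numpy as np

K_B_MEV = 8.617e-2  # Boltzmann constant in meV/K

def curie_temperature_mfa(J0_meV):
    """Multi-sublattice MFA: k_B * T_C = (2/3) * largest eigenvalue of
    the matrix J0[i][j] = sum over neighbours of the J_ij couplings
    between sublattices i and j."""
    eigenvalues = np.linalg.eigvals(np.asarray(J0_meV, dtype=float))
    return (2.0 / 3.0) * float(np.max(eigenvalues.real)) / K_B_MEV

# Hypothetical two-sublattice coupling sums (meV), illustrative only.
J0 = [[40.0, 95.0],
      [95.0, 40.0]]
T_c = curie_temperature_mfa(J0)
```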
Additionally, we evaluated the contribution from each constituent atom to the MAE. Discussion Volume expansion is necessary for the stabilization of the tetragonal XA-Fe 2 MnGa compound. We focus on the effect of substituting the Ga atom with other typical elements having a large atomic radius. Figure 4 shows the volume dependence of E f for Fe 2 MnGa 0.75 Sn 0.25 , where 25% of the Ga atoms are replaced by Sn atoms. The equilibrium volumes of the cubic L2 1 and tetragonal XA phases expand by 4.1% and 3.6%, respectively, due to the Sn substitution. Owing to the lattice expansion, the E f of the ferromagnetic state in the tetragonal XA phase becomes lower than that of the cubic L2 1 . The magnetisation, Curie temperature, and MAE of the tetragonal XA-Fe 2 MnGa 0.75 Sn 0.25 are determined to be 1314 emu cc −1 , 1049 K, and 2.23 MJ m −3 , respectively. Thus, Fe 2 MnGa 0.75 Sn 0.25 is a good candidate for permanent magnet applications. However, the formation energy difference between the cubic L2 1 and tetragonal XA phases is not enough to stabilize the XA phase around the Curie temperature. Next, we evaluated the effect of volume expansion by increasing the Ga composition, since Ga has a larger atomic radius than Fe and Mn. First, we determined the site preference of the excess Ga atoms in the Ga-rich L2 1 - and XA-Fe 2 MnGa alloys. We calculated the formation energies of the Fe-deficient alloy with a composition of Fe 1.9 Mn 1.0 Ga 1.1 and the Mn-deficient alloy with a composition of Fe 2.0 Mn 0.9 Ga 1.1 . In the present work, we consider 2, 2, 4, and 3 site-occupation configurations for the respective alloys and structures. In the composition region of the blue circles, the tetragonal XA structure is more stable than the cubic L2 1 structure, as shown in Fig. 5(b). The difference between the formation energies of the cubic L2 1 and tetragonal XA structures is 0.10 eV f.u. −1
with a composition of Fe 1.5 Mn 1.0 Ga 1.5 in the composition range investigated, and this is enough to stabilize the XA phase around the Curie temperature. Next, we determined the formation energy difference between the disordered B2 phase and the ordered XA phase for the Fe 1.5 Mn 1.0 Ga 1.5 alloy; the estimated value is 0.11 eV f.u. −1 . Further, we estimated the composition dependence of the magnetisation, MAE, and Curie temperature of the Fe 2-x Mn 1-y Ga 1+x+y alloys with the tetragonal XA structure, as shown in Fig. 6. The magnetisation decreases upon replacing the magnetic Fe and Mn atoms with non-magnetic Ga atoms. In contrast, K u is maintained at a high value when Fe is replaced with Ga, since the excess Ga atoms replace the B-site Fe atoms, which do not contribute to the MAE; in addition, the tetragonal distortion (c/a) is enhanced with increasing Ga composition. Substituting Ga for Fe is thus efficient for stabilizing the tetragonal XA phase and obtaining a high K u . For the Fe 1.5 Mn 1.0 Ga 1.5 alloy, B s = 1.3 T, K u = 2.3 MJ m −3 , and T c = 850 K, so large magnetisation, magnetic anisotropy, and Curie temperature are expected. Conclusions We theoretically investigated the stabilization of the ferromagnetic state in tetragonal Heusler alloys. We focused on the effects of Fe substitution for Mn and lattice expansion in the Mn 3 Ga alloy. When the volume of Fe 2 MnGa is expanded by about 8%, the ferromagnetic tetragonal XA phase becomes stable. The tetragonal XA-Fe 2 MnGa alloy can be applied in permanent magnets because of its high saturation magnetisation, perpendicular magnetic anisotropy, and Curie temperature. Sn substitution at the Ga sites results in volume expansion, so that the ferromagnetic tetragonal XA phase is stabilized.
Finally, we examined the stability of the XA phase and magnetic properties of Fe 2 MnGa in relation to the Ga composition and found that Fe 1.5 Mn 1.0 Ga 1.5 is a good material that can be utilized as a permanent magnet.
3,175.2
2020-05-01T00:00:00.000
[ "Materials Science" ]
Generalized diffusion effects on Maxwell nanofluid stagnation point flow over a stretchable sheet with slip conditions and chemical reaction The aim of this article is to investigate the heat and mass diffusion (Cattaneo–Christov model) of upper-convected Maxwell nanomaterials passing a linearly stretched surface (slip surface) near the stagnation point region. Generalized Fourier's and Fick's laws are employed to investigate the heat and mass diffusion phenomena. Using similarity transformations, the governing PDEs are rendered into ODEs along with boundary conditions. The boundary value problem is solved numerically using the RK-4 method along with a shooting technique (Cash and Karp). The effects of the embedded parameters, namely the fluid relaxation parameter, Hartmann number, Brownian motion parameter, thermophoresis parameter, thermal relaxation parameter, Lewis number, chemical reaction parameter, concentration relaxation parameter, and slip parameter, on the velocity, temperature, and concentration distributions are illustrated through graphs and discussed. The skin friction coefficient is computed numerically, and its values are presented through graphs and a table. A comparison is presented in the last section, showing good agreement with the existing literature. Introduction Nanotechnology is of great interest in the manufacturing, aerospace, and medical industries. The term nanofluid was coined by Choi [1] in 1995, designating fluids that contain solid nanoparticles of 1-100 nm size dispersed in base fluids, namely ethylene glycol, water, toluene, oil, etc. Nanoparticles such as copper, silica, aluminum, and titanium tend to improve the thermal conductivity and convective heat transfer rate of liquids. The impact of variable viscosity on the flow of a non-Newtonian material with convective conditions over a porous medium was investigated by Rundora et al. [2].
Babu and Sandeep [3] discussed the numerical solution for MHD nanomaterials over a surface of variable thickness along with thermophoresis and Brownian motion effects. Hsiao [4] presented the numerical solution of magnetohydrodynamic micropolar fluid flow with added nanomaterials toward a stretching sheet with viscous dissipation. Mahdavi et al. [5] illustrated the slip velocity along with a multiphase approach for nanofluids. Xun et al. [6] obtained the numerical solution of bioconvection heat flow of a nanofluid over a rotating plate with temperature-based viscosity. Khan et al. [7] numerically analyzed heat and mass diffusion in a Jeffery nanofluid passing an inclined stretching surface. Lebon and Machrafi [8] analyzed the two-phase change in Maxwell nanofluid flow along with a thermodynamic description. Ansari et al. [9] presented a comprehensive analysis to calculate the relative viscosity of nanofluids. Khan et al. [10] considered the chemical reaction in Carreau-Yasuda nanomaterials over a nonlinear stretching surface. Magnetohydrodynamic (MHD) heat and mass transfer flow of a Maxwell fluid over a continuous stretching surface has great significance in several engineering applications such as melts, aerodynamic extrusion of plastic sheets, geothermal extraction, and purification of molten metals. Numerous researchers have shown great interest in and evaluated the transport phenomena of magnetohydrodynamics. Zhao et al. [11] solved the differential equations describing MHD Maxwell fluid flow in a permeable sheet, considering the Dufour and Soret effects. Hsiao [12] investigated the combined effects of thermal extraction on MHD Maxwell fluid over a stretching surface with viscous dissipation and energy conversion. Ghasemi and Siavashi [13] demonstrated Cu-water MHD nanofluid flow in a square permeable surface with entropy generation. Nourazar et al.
[14] illustrated the heat transfer in the flow of a single-phase nanofluid toward a stretching cylinder with magnetic field effects. Dogonchi and Ganji [15] addressed unsteady squeezed MHD nanofluid flow between two parallel plates with solar radiation. Hayat et al. [16] investigated heat and mass diffusion for stagnation point flow toward a linear stretching surface along with a magnetic field. Sayyed et al. [17] investigated the analytical solution of MHD Newtonian fluid flow over a wedge occupied in a permeable sheet. Representative analyses of MHD flow can be seen in Refs. [18][19][20]. The Maxwell model is a subclass of rate-type fluids; it accounts for stress relaxation and has therefore become popular. This model also eliminates the complicating behavior of shear-dependent viscosity and is thus useful for focusing exclusively on the impact of a fluid's elasticity on the characteristics of its boundary layer. Nadeem et al. [21] deliberated on the numerical study of heat transfer of Maxwell nanofluid flow over a linear stretching sheet. Reddy et al. [22] studied the approximate solution of magnetohydrodynamic Maxwell nanofluid flow over an exponentially stretching surface. Liu [23] indicated the 2D flow of a fractional Maxwell fluid over a surface of variable thickness; the solution of the differential equations was obtained numerically by the L 1 technique. Yang et al. [24] considered a fractional Maxwell fluid through a rectangular microchannel. Inspired by the above studies, the current study illustrates MHD Maxwell nanofluid flow over a linearly stretched sheet near the stagnation point with slip boundary conditions. Fourier's and Fick's laws in generalized form are presented in the constitutive relations. The nonlinear ODEs are deduced from the nonlinear PDEs by similarity transformation. The solutions are obtained via the shooting method (Cash and Karp). The different physical parameters involved are examined for the velocity, concentration, and temperature fields.
Mathematical formulation Let us consider the two-dimensional, laminar, steady heat and mass transfer flow of an electrically conducting Maxwell nanofluid past a linearly stretched surface, with the x-axis along the sheet, the y-axis normal to it, and the stagnation point at the origin (as illustrated in Fig. 1). The free-stream velocity is u e (x) = ax and the velocity with which the sheet is stretched is u w (x) = cx, where a and c are positive constants. The temperature at the surface is maintained at T w , with T ∞ far away from the plate; in a similar manner, the nanoparticle volume fractions are C w at the wall and C ∞ in the free stream. An external magnetic field H 0 is applied normal to the sheet. Under the above assumptions, the governing equations follow, where ρ is the density, μ e is the magnetic permeability, σ is the electrical conductivity, λ is the Maxwell fluid relaxation parameter, and ν is the kinematic viscosity. Due to the hydrostatic and magnetic pressure gradients, the forces are in equilibrium. Fig. 1 Geometry of the problem. The generalized forms of Fourier's and Fick's laws, in the Cattaneo-Christov formulation, take the following form: assuming ∇·q = 0 and ∇·J = 0, and for the steady state ∂q/∂t = 0 and ∂J/∂t = 0, the reduced equations are obtained. In component form, the energy and concentration equations, Eqs. (7) and (8), follow, where T w (x, y) is the temperature at the wall, C w (x, y) is the concentration at the wall, T and C are the temperature and concentration of the fluid, respectively, C p is the specific heat, and T ∞ and C ∞ are the free-stream temperature and concentration. The temperature of the sheet is T w = T ∞ + bx; for a heated surface b > 0, so T w > T ∞ , and for a cooled surface b < 0 and T w < T ∞ , where b is a constant and D T is the thermophoresis diffusivity. A chemically reactive species equation is included as well. The skin friction coefficient C f is defined in terms of the wall shear stress τ w , where Re x = u e x/ν is the local Reynolds number.
The solution requires three initial guesses, for f''(0), θ'(0), and φ'(0). Here the step size and convergence criterion are chosen to be 0.001 and 10 −6 (in all cases). Results and discussion The main effort of this work is to examine the influence of the magnetic field on stagnation point Maxwell nanofluid flow due to a linear stretching surface with slip conditions. The governing differential Eqs. (12)-(15), along with the corresponding boundary conditions (16), are solved numerically by applying the shooting procedure (Cash and Karp). Figure 3 depicts the effect of the slip parameter k on the velocity profile: increasing k significantly enhances the velocity profile. Figure 4 illustrates the effect of the fluid relaxation parameter m on the velocity profile; the velocity of the fluid reduces as the fluid relaxation parameter m is increased. Figure 5 represents the change in the temperature distribution for distinct values of N t ; enhancing N t also increases the temperature distribution. Figure 6 depicts the effect of the Brownian motion parameter N b on the temperature distribution: increasing N b enhances the mass diffusivity, which leads to an increase in temperature. Figure 7 shows the behavior of the Prandtl number Pr on the temperature profile; the temperature profile reduces with rising values of Pr. Figure 8 presents the variation of the temperature profile for distinct values of the thermal relaxation parameter t ; by enhancing t , fluid particles require more time to heat the boundary layer region, and as a result the temperature profile reduces. Figure 9 displays the effect of the concentration relaxation parameter c on the concentration distribution; by increasing c , the concentration profile reduces. Figure 10 Figure 11 represents the influence of the Lewis number Le on the nanoparticle concentration profile.
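The shooting procedure described above can be illustrated on a simpler stand-in problem. The sketch below applies the same idea (guess the missing initial slope, integrate, and drive the far-field mismatch to zero) to the classical Blasius equation f''' + ½ f f'' = 0 rather than the paper's coupled Maxwell nanofluid system, and it uses SciPy's default RK45 integrator with a bracketing root finder in place of the Cash-Karp scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    """y = (f, f', f'') for the Blasius equation f''' + 0.5*f*f'' = 0."""
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def far_field_mismatch(s, eta_inf=10.0):
    """Residual in the condition f'(inf) = 1 when shooting with f''(0) = s."""
    sol = solve_ivp(blasius_rhs, (0.0, eta_inf), [0.0, 0.0, s],
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Bracket the missing curvature f''(0) and solve for the root.
s_star = brentq(far_field_mismatch, 0.1, 1.0)
print(round(s_star, 4))  # ~0.3321, the well-known Blasius value
```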
It is found that higher values of the Lewis number Le lead to a reduction in the mass diffusivity, so the concentration profile reduces. It enhances on increasing the slip parameter k but decreases on increasing the fluid relaxation parameter m. Table 1 shows that the friction factor rises due to an increase in the Hartmann number Ha and the fluid relaxation parameter m, and the opposite behavior is noticed for the slip parameter k. The achieved results are in Table 2. Table 3 is sketched for a comparative investigation with Hsiao for various values of Ha and k. Table 1: Computational results of C_f Re_x^{1/2}. Table 2: Comparison of (f''(0) + m(f'(0)f''(0) + f(0)f'''(0))) with the previous literature. It increases for large values of the slip parameter k, but the opposing behavior is noticed for the fluid relaxation parameter m.
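The shooting procedure described above (guess the missing wall slopes, integrate outward, correct the guesses until the far-field conditions hold) can be sketched in miniature. Since the paper's coupled Maxwell-nanofluid equations are not reproduced explicitly here, the classical Blasius boundary-layer problem is used as a stand-in; the grid and tolerance below are coarser than the paper's quoted 0.001 step and 10^-6 criterion.

```python
# A minimal sketch of the shooting procedure described above.  As a
# stand-in for the coupled nanofluid ODEs (not given explicitly here),
# we solve the classical Blasius boundary-layer problem
#     f''' + 0.5 * f * f'' = 0,  f(0) = f'(0) = 0,  f'(inf) = 1,
# guessing the missing slope f''(0) and refining it by bisection.

def rk4_step(rhs, y, h):
    """One classical 4th-order Runge-Kutta step for a first-order system."""
    k1 = rhs(y)
    k2 = rhs([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = rhs([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = rhs([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def blasius_rhs(y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def fprime_far(slope_guess, eta_max=8.0, h=0.01):
    """Integrate outward with f''(0) = slope_guess; return f'(eta_max)."""
    y = [0.0, 0.0, slope_guess]
    for _ in range(int(eta_max / h)):
        y = rk4_step(blasius_rhs, y, h)
    return y[1]

def shoot(lo=0.1, hi=1.0, tol=1e-6):
    """Bisect on f''(0) until the far-field condition f'(inf) = 1 holds."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fprime_far(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s = shoot()
print(s)  # close to the classical Blasius value f''(0) ~ 0.332
```

For the coupled momentum, energy, and concentration equations of the paper, the same guess-integrate-correct loop runs over three guessed slopes (f''(0), θ'(0), φ'(0)) instead of one.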
2,325.4
2019-02-21T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
$\alpha_sv^2$ corrections to $\eta_c$ and $\chi_{cJ}$ production recoiled with a photon at $e^+e^-$ colliders We consider the production of the $\eta_c$ and $\chi_{cJ}$ states recoiled with a photon up to $\mathcal{O}(\alpha_s v^2)$ at BESIII and the B-factories within the framework of NRQCD factorization. With these corrections, we revisit the numerical calculations of the cross sections for the $\eta_c(nS)$ and $\chi_{cJ}(mP)$ states. We argue that the search for $XYZ$ states with even charge conjugation, such as $X(3872)$, $X(3940)$, $X(4160)$, and $X(4350)$, recoiled with a photon at BESIII may help clarify the nature of these states. For completeness, the production of charmonium with even charge conjugation recoiled with a photon at B factories is also discussed. Introduction Non-relativistic quantum chromodynamics (NRQCD) is a rigorous and successful effective field theory that describes heavy quarkonium decay and production [1]. The color-octet mechanism (COM) is proposed within NRQCD. The infrared divergences in the decay widths of P-wave [2,3] and D-wave [4-6] heavy quarkonia have been absorbed into the NRQCD matrix elements with the COM applied, and an infrared-safe decay rate can be obtained. However, experimental measurements over the last decade at e+e- colliders and at hadron colliders reveal large discrepancies with leading-order (LO) calculations. O(α_s v^2) corrections to the decays of h_c, h_b, and η_b are studied in Refs. [63-65]. In fact, corrections at higher orders (e.g., O(α_s v^2), O(v^4)) have been considered in many processes and contribute considerable effects. However, some drawbacks of fixed-order calculations involve the convergence of higher-order corrections and the question of to which order the expansion should be carried within NRQCD. These problems can be clarified by performing more higher-order calculations. More information about NRQCD can be found in Ref. [66] and related papers.
The production of quarkonium states with even charge conjugation recoiled against a hard photon in e+e- annihilation at the B factories and BESIII is a very interesting process. The production of double charmonium at B factories [7,8] aids in identifying charmonium or charmonium-like states with even charge conjugation recoiling against J/ψ and ψ(2S). The η_c, η_c(2S), χ_c0, X(3940) (decaying into D D̄*), and X(4160) (decaying into D* D̄*) have been observed in double charmonium production at B factories, but the χ_c1 and χ_c2 states are yet to be determined in production associated with J/ψ at B factories. The LO calculation for heavy quarkonium with even charge conjugation recoiled with a hard photon in e+e- annihilation at the B factories and BESIII is a pure QED process [67,68]. The one-loop calculations have been computed and analyzed [69-73], and the NLO relativistic corrections have been computed as well [70,72]. Quarkonium states with even charge conjugation are associated with the XYZ particles [74-76]. The well-known XYZ particle X(3872) [77] is supposed to be the χ'_c1 state or a mixture of this state with other structures in some views [49,78]. Recently, X(3872) has been observed in the photon-recoiled process with a statistical significance of 6.4σ at BESIII [79]. X(3915) (X(3945) or Y(3940)) and Z(3930) are assigned as the χ_c0(2P) and χ_c2(2P) states by the PDG (Particle Data Group) [80]. However, this identification has been called into question [81]. The experimental results for states with even charge conjugation have elicited theoretical interest in the nature of charmonium-like states. The non-perturbative effects are strong because the BESIII energy region is close to the charmonium thresholds. Hence, the applicability of NRQCD is speculative within this region. However, some NRQCD-based calculations exhibit high compatibility with the data.
In this paper, the photon-recoiled η_c and χ_cJ production is studied following our previous work [72]. We calculate the cross sections up to the order O(α_s v^2) within NRQCD. This study verifies the applicability of NRQCD at the threshold and helps determine the XYZ particles related to η_c(nS) and χ_cJ(nP). The paper is organized as follows. Sec. 2 introduces the framework of the calculations, especially the method of expanding the amplitudes up to O(α_s v^2). Sec. 3 presents the amplitude expansion and discusses the cross sections for the η_c and χ_cJ processes. Sec. 4 gives the numerical results up to O(α_s v^2). Finally, Sec. 5 presents a summary. The framework of the calculation This section introduces the calculation method for the O(α_s v^2) amplitude expansion for the process e+e- → γ* → H(η_c, χ_cJ) + γ. The momenta of the final states are denoted as H(p) and γ(k). The cross section can be obtained and applied to express the amplitudes via expansions. Kinematics In an arbitrary frame of the charmonium, the momenta of the charm and the anti-charm quarks can be expressed by the meson momentum and their relative momentum, (2.1) The momenta p and q are orthogonal, i.e., p·q = 0. In the meson rest frame, they can be written as p = (2E_q, 0) and q = (0, q). We calculate the amplitudes up to the order O(α_s v^2) using a conventional method. In this method, the rest energy E_q = √(m_c^2 + q^2) of the charm/anti-charm quark is expanded around the charm mass. The momenta of the final-state particles depend on E_q. For instance, the four-momenta of the particles in γ*(Q) → H(p) + γ(k) in the center-of-mass system can be written as follows: Given the expression for E_q, the four-momenta can be expanded in terms of q^2/m_c^2. For instance, the momenta of the final meson and the photon, denoted by p and k, are expanded as the following expression. Therefore, the momenta with subscripts (0) or (2) are independent of q^2.
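The expansion of the rest energy around the charm mass can be checked numerically. The sketch below compares E_q = √(m_c² + q²) with its truncated series m_c(1 + q²/2m_c² − q⁴/8m_c⁴); the values of m_c and |q| are illustrative.

```python
import math

# Check the non-relativistic expansion
#   E_q = sqrt(m_c^2 + q^2) ~ m_c * (1 + q^2/(2 m_c^2) - q^4/(8 m_c^4) + ...)
# used to expand the final-state momenta in powers of q^2/m_c^2.
# The numbers below are illustrative (m_c and |q| in GeV).

def E_exact(m_c, q):
    return math.sqrt(m_c**2 + q**2)

def E_series(m_c, q, order=2):
    """Truncate at q^2 (order=1) or q^4 (order=2)."""
    x = q**2 / m_c**2
    terms = [1.0, x / 2.0, -x**2 / 8.0]
    return m_c * sum(terms[:order + 1])

m_c, q = 1.5, 0.5   # v = q/m_c ~ 0.33, a typical charmonium value
exact = E_exact(m_c, q)
approx = E_series(m_c, q)
print(exact, approx)  # residual is O(q^6/m_c^5)
```

Truncating at q² reproduces the O(v²) pieces of the momentum expansion; the q⁴ term illustrates why contributions beyond O(v²) are dropped in this counting.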
The scalar products of (p_(0), p_(2), k_(0), and k_(2)) can be solved in a special frame. For instance, in the center-of-mass system, the relation k_(2) = -p_(2) can be obtained to reduce the number of independent momenta; all three non-zero products are then calculated as follows: Studies on the O(α_s v^2) corrections to the decay processes of charmonium with massless final states [63-65,82] introduce a factor E_q/m_c to all external momenta. In our method, these momenta can be expanded correspondingly; this indicates the compatibility of our method with the published one. For the P-wave states, the spin and orbital vectors must also be expanded. Furthermore, they couple onto the total angular momentum J states (J = 0, 1, 2) with the relation presented as follows: The polarization is summed over all directions of the vector for the total angular momentum: where Π can be expanded in terms of q: The second term vanishes in the rest frame of the meson, which is consistent with the independence of the polarization vectors from q^2 in this frame. Amplitudes expansion The amplitude of e+e- → γH(η_c, χ_cJ) can be written as [30] M(e+e- → γH) = L_α M^α(γ* → γH), (2.10) where the leptonic part L_α is independent of q. We only consider the hadronic matrix element M^α(γ* → γH) in the NRQCD frame. The Feynman diagrams are shown in Fig. 1. The amplitude can be written as [30]: where the factor √(2M_H) originates from the relativistic normalization, d_n is the short-distance coefficient that can be obtained by matching with the full QCD calculations on the intermediate c c̄ production, and ⟨H|O_n^H|0⟩ represents the NRQCD long-distance matrix elements, which are extracted from experimental data or determined by potential-model or lattice calculations. The present study concentrates on the corrections up to the order O(α_s v^2) within the color-singlet framework.
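The statement that the q-dependent term of the polarization sum vanishes in the rest frame can be verified directly: there, the massive-vector polarization sum reduces to Π^{αβ} = Σ_λ ε^α ε*^β = −g^{αβ} + p^α p^β/M². The sketch below is a generic numerical check, not tied to the paper's specific states.

```python
# Numerical check of the massive-vector polarization sum in the rest frame:
#   sum_lambda eps^alpha eps*^beta = -g^{alpha beta} + p^alpha p^beta / M^2
# with metric g = diag(+1, -1, -1, -1) and p = (M, 0, 0, 0).

M = 3.5  # an illustrative meson mass in GeV

g = [[1.0 if a == b else 0.0 for b in range(4)] for a in range(4)]
for i in (1, 2, 3):
    g[i][i] = -1.0

p = [M, 0.0, 0.0, 0.0]
polarizations = [          # three real basis polarizations in the rest frame
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

lhs = [[sum(e[a] * e[b] for e in polarizations) for b in range(4)]
       for a in range(4)]
rhs = [[-g[a][b] + p[a] * p[b] / M**2 for b in range(4)]
       for a in range(4)]

ok = all(abs(lhs[a][b] - rhs[a][b]) < 1e-12
         for a in range(4) for b in range(4))
print(ok)  # True
```

In a boosted frame the same identity holds with boosted polarization vectors, which is where the q-dependent piece of the expansion of Π enters.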
The expansion is given as follows: The short-distance coefficients are obtained from the matching between the pQCD and the NRQCD calculations on the c c̄ production, where M_s and M_t represent the amplitudes with the c c̄ pair coupling to the spin-singlet and spin-triplet polarizations, respectively. The above NRQCD operators O and P are defined, respectively, as follows: where the Pauli spinors ψ and χ describe the quark annihilation and the anti-quark creation, respectively, and P involves the gauge-covariant derivative operator. The term d^(self.) originates from the one-loop self-energy corrections to the NRQCD matrix elements [1,36,63,64] in the MS-bar scheme; µ_r is the renormalization scale. This expression is satisfied for all ^1S_J states. Therefore, d^(self.) contributes to the amplitude expansion at O(α_s v^2). The factor √(2N_c) 2E_q in Eq. (2.13) originates from the perturbative calculations of the LO Q Q̄ NRQCD matrix elements. The extra factor |q| arises from the derivative operator in the P-wave NRQCD operator P. The covariant projection method is adopted to calculate the full QCD amplitudes as, The color-singlet projection operator is defined as π_1 = 1/√N_c. The spin-singlet and spin-triplet projection operators are given as follows, where P_00 and P_1s_z stand for the spin-singlet and spin-triplet states, respectively. These operators can be expanded up to the q^2/m_c^2 order using Eqs. (2.1), (2.2), and (2.4). According to the matching expression, Eq. (2.13), the short-distance coefficients are calculated by, where M^(0) and M^(α_s) are defined by the Born and one-loop amplitudes, respectively. The following replacements are applied to carry out the expansion in the Lorentz vector q in the amplitude expressions: for S-wave states, and for P-wave states. One-loop computation The one-loop Feynman diagrams are shown in Fig. 1. The dimensional regularization scheme is selected here. The ultraviolet divergences in the one-loop amplitude are canceled by the counterterms.
The infrared divergences at the α_s order in the one-loop amplitude are also canceled by the counterterm amplitude, and the additional infrared divergences at the order α_s v^2 are canceled by the one-loop self-energy contribution to the NRQCD matrix elements in Eqs. (2.15) and (2.16). The real corrections need not be included for these exclusive processes. We apply the method in Ref. [83] to reduce the tensor integration. The relativistic expansion is performed before dealing with the loop integrand. The on-mass-shell (OS) renormalization scheme is adopted, and in this scheme the renormalization constants are chosen as in Eq. (2.22), where N_ǫ(m_Q) has been previously defined and the renormalization scale µ_r is canceled between the loop and counterterm diagrams up to the order O(α_s v^2). In the OS scheme, the diagrams for the external-leg correction are not included. In our calculations, the 't Hooft-Veltman (HV) regularization scheme [94,95] is adopted, in which γ_5 is defined as in Eq. (2.23). The traces involving more than four Dirac γ-matrices with a γ_5 are evaluated recursively by the Mathematica programs of West [96]. Our strategy for handling γ_5 is the same as that in Ref. [63]. In the HV scheme, the Ward identities may be violated in one-loop calculations, such as for the axial current (the Adler-Bell-Jackiw anomalies), which arise from the symmetry breaking of the γ_5 definition in D dimensions, Eq. (2.23). In our case, the γ* → γη_c process, γ_5 appears outside of the one-loop integrals, so the amplitudes satisfy the Ward identities, as seen in the short-distance results given in Eq. (2.24) in the next section. More discussions of the γ_5 scheme and the anomalous Ward identities can be found in Refs. [63,91,92,94-99]. We use the FeynArts [84] package to generate Feynman diagrams and amplitudes, and the FeynCalc [85,86] package and our self-written Mathematica package to handle the amplitudes and the phase-space integrand.
Matching results for η_c This subsection presents the matching results for the short-distance coefficients for η_c. The final matching results of the coefficients are given in the Appendix, where r ≡ 4m_c^2/s and s is the square of the beam (center-of-mass) energy. The coefficients are given as follows 1 : Figure 1. The typical Born, loop, and counterterm Feynman diagrams. There are two diagrams for the Born amplitude, six diagrams for the counterterm amplitude, and eight for the one-loop amplitude, including two self-energy diagrams, four triangle diagrams, and two box diagrams. where ǫ_Q and ǫ_k represent the polarization vectors of the initial virtual photon and the final photon, respectively. The coefficients in Eq. (2.24) are provided in the high-energy region. In the limit r → 0, the asymptotic behavior of these coefficients can be obtained. The lowest order of the coefficients is O(r); the higher-order contributions are omitted, and the reduced equations are given as follows: The terms of ǫ_2 disappear in the expressions because ǫ_2 is suppressed by a factor of r relative to ǫ_1. The asymptotic behavior of d^(α_s) is consistent with that in Ref. [70]. The asymptotic behavior of the coefficients for r → ∞, corresponding to the process η_c → 2γ, is mentioned in Ref. [70] and given as follows 2 : Note that the NLO short-distance coefficients in v^2 are therefore given as above, if we disregard the contribution of the relativistic normalization in Eq. (2.12), which contributes a factor of 1/4. Matching results for χ_cJ This subsection presents the matching results for the short-distance coefficients for χ_cJ. Similar to the η_c case, the short-distance coefficients at the orders v^2 and α_s v^2 for χ_cJ are also written in two parts. All the coefficients are given as follows: The asymptotic behavior in the limit r → 0 is also considered.
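With the definition r ≡ 4m_c²/s introduced above and the charm mass m_c = 1.5 GeV quoted later in the text, r can be evaluated across the relevant energy regions; the sampled √s values below (BESIII-like 4-5 GeV, B-factory-like 10.58 GeV) are illustrative.

```python
# r = 4 m_c^2 / s, with s the squared center-of-mass energy.
# m_c = 1.5 GeV as quoted in the text; the sampled sqrt(s) values are
# illustrative choices for the BESIII and B-factory regions.

def r_of(sqrt_s, m_c=1.5):
    return 4.0 * m_c**2 / sqrt_s**2

for sqrt_s in (4.0, 4.5, 5.0, 10.58):
    print(sqrt_s, r_of(sqrt_s))
```

This makes the two limits concrete: the B-factory region sits at small r (the r → 0 asymptotics), while the BESIII region sits at large r, near the low-energy threshold where the r → ∞ (two-photon-decay-like) behavior becomes relevant.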
Since ǫ_4 is of higher order than ǫ_3 in r, the coefficients are given as follows: Cross section The cross sections of the process e+e- → γH are related to the squared amplitudes of the process γ* → γH, which means summing the NRQCD amplitudes M over the final-state color and polarization and averaging over those of the initial states. The differential two-body phase space in D dimensions can be solved as follows, where M_H ≈ 2E_q has been chosen. This expression implies that the two-body phase space contributes another factor of -r/(1-r) to the v^2-order cross section. This factor is linearly divergent near the low-energy threshold. The results for the short-distance amplitudes were obtained in the last section. The cross sections for the η_c and χ_cJ states can then be obtained as follows, where σ̂^(0) is the LO short-distance cross section and the matrix element ⟨v^2⟩ is defined as follows: η_c The LO short-distance cross section for η_c is given by: (3.5) Fig. 2 shows the ratios c_10, c_02, and c_11 for r ranging from 0 to 0.5, i.e., from the high-energy to the low-energy region, for η_c production. The O(α_s v^2) correction is suppressed by α_s v^2 and contributes negligibly to the total cross section at r = 0.5. Tab. 1 presents the asymptotic behaviors of the ratios near the threshold, including the ratio c_12. Tab. 3 lists the corresponding coefficients for the decay process η_c → γγ. The O(α_s v^2) contribution slightly affects the decay rate, although our numerically calculated value is slightly larger than that from Ref. [63]. However, the O(α_s v^2) contribution can re-determine the elements ⟨v^2⟩ for the color-singlet S-wave states. χ_cJ The LO short-distance cross section for χ_cJ is calculated as Table 3. The asymptotic behaviors of the ratios in the limit r → ∞. The ratios are defined in Eq. (3.3). These results correspond to the ratios of the two-photon decay rates for η_c, χ_c0, and χ_c2. χ_c1 → 2γ is forbidden; therefore, the ratios are not given for it.
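The structure of the expansion σ = σ̂^(0)[1 + α_s c_10 + (c_02 + α_s c_12)⟨v²⟩] used for these ratios can be sketched numerically. All inputs below (σ̂^(0), the c coefficients, α_s, ⟨v²⟩) are hypothetical placeholders, not the paper's fitted values.

```python
# Sketch of assembling the cross section from the ratio coefficients,
#   sigma = sigma0 * (1 + alpha_s*c10 + (c02 + alpha_s*c12) * v2),
# following the expansion used for the ratio plots.  All numerical
# inputs below are hypothetical placeholders.

def sigma_expanded(sigma0, alpha_s, v2, c10, c02, c12):
    return sigma0 * (1.0 + alpha_s * c10 + (c02 + alpha_s * c12) * v2)

sigma0 = 100.0                   # LO short-distance cross section (fb), assumed
alpha_s, v2 = 0.25, 0.25         # typical charmonium values quoted in the text
c10, c02, c12 = -0.5, -1.0, 2.0  # placeholder ratio coefficients

total = sigma_expanded(sigma0, alpha_s, v2, c10, c02, c12)
rel_asv2 = alpha_s * c12 * v2    # relative size of the O(alpha_s v^2) piece
print(total, rel_asv2)
```

The product α_s·c_12·⟨v²⟩ makes explicit why the O(α_s v²) piece is doubly suppressed away from the threshold, where c_12 stays of order one.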
(3.6) For χ_c0, the ratios may diverge at r = 1/3, because the LO short-distance coefficient reaches zero at this point, as seen in Eq. (3.6). Thus, we change Eq. (3.3) into the following formula to define the ratios: (3.7) The redefined ratios are shown in Fig. 3, and these ratios are proportional to the relative short-distance cross sections. By a rough estimation, the LO cross sections are diluted by the sum of the O(α_s) and O(v^2) corrections, as shown in the figure. Furthermore, the O(α_s v^2) terms contribute additional negative corrections. Thus, the total cross sections for the χ_c0 process may be small. The ratios for the χ_c1 and χ_c2 processes are shown in Fig. 4 and Fig. 5, respectively, including the low-energy region (0.3 < r < 0.5). The ratios corresponding to the two-photon decays of χ_c0 and χ_c2 are also given in Tab. 3. For a rough estimation, we take α_s and ⟨v^2⟩ in the range 0.2 ∼ 0.3; the O(α_s v^2) corrections then contribute 10% ∼ 20% to the LO decay rate for χ_c0 → 2γ or χ_c2 → 2γ. These O(α_s v^2) corrections may also significantly affect the fitting of the element ⟨v^2⟩ for χ_cJ. Numerical results and discussion In this section, we revisit the numerical calculations of the cross sections. In our numerical calculation, the total cross sections strongly depend on the input parameters (e.g., the mass of the charm quark, the long-distance matrix elements, and the strong-coupling constant). The relativistic matrix elements can hardly be determined. In the subsequent calculations for the η_c(1S) and χ_cJ(1P) processes, we select the fine-structure constant α = 1/137 and the charm quark mass m_c = 1.5 ± 0.1 GeV. The LO long-distance matrix elements are obtained from the radial wave functions at the origin in the potential-model calculations [88] with the corresponding replacements. In the last step, we ignore the O(v^2) term to simplify the input parameters. The results markedly depend on the selection of the wave functions at the origin. Studies in Refs.
[69,73] have adopted two sets of wave functions at the origin with large gaps between them. We re-estimate the wave functions at the origin by averaging the two sets, with the uncertainties given in Tab. 4. The wave functions at the origin for the 4S and 3P states are estimated as in Ref. [72]. Table 4. The wave functions at the origin [88]. The two sets represent the calculations from the Cornell potential and the B-T potential. "Re-est" values are averaged from the two sets of functions, with the uncertainties. In the BESIII energy region, the corrections from the phase space are significant for the cross sections. However, the contributions cannot be fully determined because of the non-perturbative effects. In previous works, two different strategies were used to remedy the non-perturbative effects from the phase-space integrand: an extra factor is introduced in Ref. [73], and the charm quark mass is set to half of the meson mass in Ref. [72]. Furthermore, as stated in Refs. [32,89,90], the v^2 corrections from the phase space, which are related to the terms in the short-distance cross-section expansion that differ from those in the sub-amplitude expansion, could be resummed to all orders in v^2 by the 'shape functions' method. In this paper, we calculate the contributions from the phase space simply via the expansion of Eq. (3.2). Therefore, we analyze the cross sections with and without the phase-space corrections for comparative and referential purposes. Tab. 5 presents the total cross sections up to the α_s v^2 order and the corresponding uncertainties for the η_c process, and Fig. 6 presents the corresponding cross sections in the BESIII energy region. The uncertainties for the total cross sections come from the uncertainties of m_c, α_s, ⟨v^2⟩, and the wave functions at the origin. The phase space reduces the numerical results by a factor of 25% ∼ 10% and enhances the uncertainties by a factor of 35% ∼ 25% in the BESIII energy region of 4 ∼ 5 GeV.
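The "re-estimation" described above, averaging two potential-model determinations and assigning the spread as the uncertainty, can be sketched as follows. The two input values are hypothetical stand-ins, since Tab. 4's numbers are not reproduced here, and the half-spread convention is one simple reading of "averaged ... with the uncertainties".

```python
# Averaging two potential-model values of a wave function at the origin:
# mean as the central value, half the spread as the uncertainty.
# The inputs are hypothetical, not the values from Tab. 4.

def re_estimate(v_cornell, v_bt):
    central = 0.5 * (v_cornell + v_bt)
    uncertainty = 0.5 * abs(v_cornell - v_bt)
    return central, uncertainty

central, unc = re_estimate(0.81, 1.45)   # GeV^3, made-up inputs
print(f"{central:.2f} +/- {unc:.2f}")
```

Because the two potential models differ substantially, the resulting uncertainty is large, which is one reason the wave-function input dominates the error budget of the cross-section tables.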
The O(α_s v^2) corrections contribute negligibly to the η_c process. Numerical simulations reveal that these corrections are approximately one-eighth and one-tenth of the O(α_s) contributions in the energy regions of the B-factories and BESIII, respectively. (Table 5 excerpt: 93 ± 70 ± 6 ± 9, 14 ± 11 ± 1 ± 2, 12 ± 9 ± 1 ± 2; 4S WP: 36 ± 27 ± 2 ± 37, 13 ± 10 ± 1 ± 3, 11 ± 8 ± 1 ± 2.) Tab. 6 presents the total cross sections up to the α_s v^2 order for the χ_c0 process, with the uncertainties. The positive O(α_s) corrections and the negative O(v^2) corrections cancel each other in the BESIII energy region [72], but the O(α_s v^2) parts also contribute negative corrections, which decrease the LO cross sections significantly, even to negative values. Moreover, the uncertainties are too large compared with the central values to give reliable predictions for the χ_c0 processes in the BESIII energy region. Table 6. The total cross sections in fb up to the α_s v^2 order of e+e- → χ_c0(nP) + γ with n = 1, 2, 3 in the BESIII and B-factory energy regions. WP and OP indicate considering or ignoring the phase-space contributions, respectively. The uncertainties in each cell come from the uncertainties of the wave functions at the origin, α_s, ⟨v^2⟩, and the charm quark mass m_c, in turns. For the excited states, we select the charm quark mass as half of the meson mass in the calculations; therefore there are no m_c uncertainties. The mass of χ_c0(nP) is selected as 3.918 GeV and 4.131 GeV for n = 2, 3, respectively [74,80]. Tab. 7 and Tab. 8 present the total cross sections up to the α_s v^2 order for the χ_c1 and χ_c2 processes, respectively, with the uncertainties. In the BESIII energy region, they exhibit similar trends. In addition, Fig. 7 and Fig. 8 show that the total cross sections for χ_c2 decrease slightly faster than those for χ_c1 as the energy increases. The O(α_s v^2) contributions act in the same direction as the O(v^2) ones in this region, as discussed in Sec. 3.
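The tables quote the uncertainties separately "in turns" (wave functions, α_s, ⟨v²⟩, m_c). One common way to quote a single error is to combine them in quadrature, assuming the sources are independent; that combination rule is our assumption here, not something stated in the paper.

```python
import math

# Combine separately quoted uncertainties in quadrature, assuming
# independence.  The example uses the "93 +/- 70 +/- 6 +/- 9" style
# entry from the eta_c table; independence is our assumption.

def combine_in_quadrature(*errors):
    return math.sqrt(sum(e**2 for e in errors))

central = 93.0
total_err = combine_in_quadrature(70.0, 6.0, 9.0)
print(f"{central} +/- {total_err:.1f} fb")
```

The combined error is dominated by the wave-function term, consistent with the text's remark that the uncertainties can be too large to give reliable predictions.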
The phase-space corrections reduce the total cross sections by a factor of 10% ∼ 20% in the BESIII energy region for both processes, and the corresponding uncertainties decrease markedly. From the tables, the χ_c1 and χ_c2 states should be found in the BESIII energy region even if the lower bound of the numerical values is adopted for the cross sections. For the higher η_c(nS) and χ_cJ(nP) states, the masses of these states are extremely close to the BESIII beam energy, and NRQCD factorization breaks down near the endpoint. In our previous work, Ref. [72], the charm quark mass is set to half of the meson mass, but in Refs. [69,73] a different strategy is used to remedy the phase-space integrand near the threshold: an additional unitary factor is introduced, and the charm quark mass is set to about 1.5 GeV. Table 7. The total cross sections in fb up to the α_s v^2 order of e+e- → χ_c1(nP) + γ with n = 1, 2, 3 in the BESIII and B-factory energy regions. WP and OP indicate considering or ignoring the phase-space contributions, respectively. The uncertainties in each cell come from the uncertainties of the wave functions at the origin, α_s, ⟨v^2⟩, and the charm quark mass m_c, in turns. For the excited states, we select the charm quark mass as half of the meson mass in the calculations; therefore there are no m_c uncertainties. The mass of χ_c1(nP) is selected as 3.901 GeV and 4.178 GeV for n = 2, 3, respectively [74,80]. (Table 7 excerpt: 10.0 ± 2.9 ± 0.7 ± 1.4, 7.9 ± 2.3 ± 0.5 ± 1.1; 3P OP: 1061 ± 214 ± 119 ± 234, 7.6 ± 1.5 ± 0.5 ± 0.8, 5.9 ± 1.2 ± 0.4 ± 0.6; 3P WP: 786 ± 159 ± 88 ± 97, 7.3 ± 1.5 ± 0.5 ± 0.9, 5.7 ± 1.1 ± 0.4 ± 0.8.) Unfortunately, they obtain significantly different cross sections for the production of these near-threshold particles for the excited P-wave states. We retain the strategy of our previous work and set the quark mass to half of the meson mass. The results are shown in Tabs. 5, 6, 7, and 8 and in Figs. 9, 10, 11, and 12.
The numerical cross sections for the η_c(2S) state increase compared with those at O(α_s + v^2), whereas the cross sections for the excited P-wave states are lower than the previous O(α_s + v^2) results. For the η_c(3S) state, the numerical values still provide a reference for BESIII to determine the state. For the excited P-wave states, although the cross sections come down compared with the previous O(α_s + v^2) results, the numerical values can still be referred to by BESIII to find these states. As discussed in our previous works, the results for the η_c(mS) and χ_cJ(nP) states are helpful to clarify the nature of the XYZ particles with even charge conjugation, such as X(3872), X(3940), X(4160), and X(4350). Taking the X(3872) state as an example, we considered it as a mixture with a χ_c1(2P) component [72]; therefore, the cross sections for X(3872) are determined by, where k = Z^{X(3872)}_{cc̄} × Br[X(3872) → J/ψπ+π-]. Br[X(3872) → J/ψπ+π-] is the branching fraction for the X(3872) decay to J/ψπ+π-, and Z^{X(3872)}_{cc̄} is the probability of the χ_c1(2P) component in X(3872); k = 0.018 ± 0.04 [49,78]. With the results up to O(α_s v^2), we revisit the cross sections for X(3872), shown in Fig. 13, in which we also give the total cross sections. Table 8. The total cross sections in fb up to the α_s v^2 order of e+e- → χ_c2(nP) + γ with n = 1, 2, 3 in the BESIII and B-factory energy regions. WP and OP indicate considering or ignoring the phase-space contributions, respectively. The uncertainties in each cell come from the uncertainties of the wave functions at the origin, α_s, ⟨v^2⟩, and the charm quark mass m_c, in turns. For the excited states, we select the charm quark mass as half of the meson mass in the calculations; therefore there are no m_c uncertainties.
he mass of χ c2 (nP ) is selected as 3.927GeV and 4.208GeV for n = 2, 3 respectively [74,80] From the figure, the cross sections for the predictions of X(3872) may be smaller than the experiment data, but one still can not jump to conclusions for the nature of the X(3872) and the more data are required. Summary In this study, we extend our previous works on the production of charmonia with even charge conjugation in the processes e + e − → η c (nS)(χ cJ (mP )) + γ up to the O(α s v 2 ) corrections. The results indicate that these corrections exhibit a logarithmic singularity of ln(1− r), which is not observed in the O(α s ) corrections near the threshold. The O(α s v 2 ) corrections also contribute to the total cross-sections near the threshold and are important to the diphoton decay for the χ c0 and χ c2 states. We revisit the numerical calculations to the cross -sections for the η c (nS) and χ cJ (mP ) states using the results for the O(α s v 2 ) corrections. Appendix In this section, we give the matching results of the short-distance coefficients for η c process. The Lorentz invariance determines the amplitude should have the form of Therefore the coefficients in v (0) must be like A (0) ǫ 1 . The O(v 2 ) coefficients obtained in proceed of derivate the amplitude will be like A (v 2 ) ǫ 1 + B (v 2 ) ǫ 2 . ǫ 1 and ǫ 2 have defined as Eq.(2.25). Therefore we write the O(v 2 ) short-distance coefficients into a plus of three parts as seen in the following results. The third term is just to cancel the O(v 2 ) contributions from the relativistic normalization factor in Eq.(2.12). And we omit the imaginary parts in the coefficients in O(α s ) and O(α s v 2 ), which do't contribute to cross sections at order of O(α s v 2 ). Figure 2. The relative ratios for the corrections in the order of α s , v 2 , and α s v 2 to the LO cross section for η c production recoiled with a hard photon as a function of r. 
The ratios c_10, c_02, and c_11 are defined by the expression σ = σ̂^(0) [1 + α_s c_10 + (c_02 + α_s c_12)⟨v^2⟩] ⟨0|O^H|0⟩. Figure 11. The cross sections of the χ_c2(2P) process in the BESIII energy region. The uncertainties for the total cross sections come from the uncertainties of α_s, ⟨v^2⟩, and the wave functions at the origin. Figure 13. The cross sections of the X(3872) process in the BESIII energy region, taking X(3872) as a mixture with a χ_c1(2P) component. The uncertainties for the total cross sections come from the uncertainties of m_c, α_s, ⟨v^2⟩, and the wave functions at the origin. "With RES.CONT" means considering the contributions from both the continuum and the resonance.
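The X(3872) relation quoted earlier, σ[e+e- → γX(3872)] = k · σ[e+e- → γχ_c1(2P)] with k = Z·Br, can be sketched numerically. The central value k = 0.018 is taken from the text, while the χ_c1(2P) cross-section input below is a placeholder, not a result of the paper.

```python
# sigma(X(3872)) = k * sigma(chi_c1(2P)), with
#   k = Z^{X(3872)}_{ccbar} * Br[X(3872) -> J/psi pi+ pi-].
# k = 0.018 is the central value quoted in the text; the chi_c1(2P)
# cross section below is a hypothetical placeholder.

def sigma_x3872(sigma_chic1_2p, k=0.018):
    return k * sigma_chic1_2p

sigma_chic1_2p = 100.0   # fb, hypothetical
print(sigma_x3872(sigma_chic1_2p))
```

The small value of k means the predicted X(3872) rate is roughly two orders of magnitude below the underlying χ_c1(2P) cross section, which is why the comparison with the BESIII data remains inconclusive.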
7,096.6
2014-07-14T00:00:00.000
[ "Physics" ]